
Planet Python

Last update: January 01, 2026 09:43 PM UTC

January 01, 2026


Python Morsels

Implicit string concatenation

Python automatically concatenates adjacent string literals thanks to implicit string concatenation. This feature can sometimes lead to bugs.

Table of contents

  1. Strings next to each other
  2. Implicit string concatenation
  3. Implicit line continuations with implicit string concatenation
  4. Concatenating lines of text
  5. Bugs caused by implicit string concatenation
  6. Linter rules for implicit string concatenation
  7. Implicit string concatenation is a bug and a feature

Strings next to each other

Take a look at this line of Python code:

>>> print("Hello" "world!")

It looks kind of like we're passing multiple arguments to the built-in print function.

But we're not:

>>> print("Hello" "world!")
Helloworld!

If we pass multiple arguments to print, Python will put spaces between those values when printing:

>>> print("Hello", "world!")
Hello world!

But that's not what Python was doing.

Our code from before didn't have commas to separate the arguments (note the missing comma between "Hello" and "world!"):

>>> print("Hello" "world!")
Helloworld!

How is that possible? This seems like it should have resulted in a SyntaxError!
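The answer is a feature called implicit string concatenation, which we'll unpack next. As a quick preview of both the feature and the footgun it enables:

>>> greeting = "Hello " "world!"
>>> greeting
'Hello world!'
>>> colors = ["red", "green" "blue"]  # A missing comma, not three items
>>> colors
['red', 'greenblue']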

Implicit string concatenation

A string literal is the …

Read the full article: https://www.pythonmorsels.com/implicit-string-concatenation/

January 01, 2026 05:43 PM UTC


Seth Michael Larson

Cutting spritesheets like cookies with Python & Pillow 🍪

Happy new year! 🎉 For an upcoming project on the blog requiring many video-game sprites I've created a small tool (“sugarcookie”) using the always-lovely Python image-processing library Pillow. This tool takes a spritesheet, a list of mask colors, and a minimum size, then cuts the spritesheet into its component sprites.

I'm sure this could be implemented more efficiently, or with a friendly command line interface, but for my own purposes (~10 spritesheets) this worked just fine. Feel free to use, share, and improve. The script is available as a GitHub gist, but also included below.

Source code for sugarcookie

#!/usr/bin/env python
# /// script
# requires-python = ">=3.13"
# dependencies = [
#   "Pillow",
#   "tqdm"
# ]
# ///
# License: MIT
# Copyright 2025, Seth Larson

import os.path
import math
from PIL import Image
import tqdm

# Parameters
spritesheet = ""  # Path to spritesheet.
masks = set()  # Set of 3-tuples for RGB mask colors.
min_dim = 10  # Min and max dimensions in pixels.
max_dim = 260

img = Image.open(spritesheet)
if img.mode == "RGB":  # Ensure an alpha channel.
    alpha = Image.new("L", img.size, 255)
    img.putalpha(alpha)

output_prefix = os.path.splitext(os.path.basename(spritesheet))[0]
data = img.getdata()
visited = set()
shapes = set()
reroll_shapes = set()


def getpixel(x, y) -> tuple[int, int, int, int]:
    return data[x + (img.width * y)]


def make_2n(value: int) -> int:
    return 2 ** int(math.ceil(math.log2(value)))


with tqdm.tqdm(
    desc="Cutting cookies",
    total=int(img.width * img.height),
    unit="pixels",
) as t:
    for x in range(img.width):
        for y in range(img.height):
            xy = (x, y)
            if xy in visited:
                continue
            inshape = set()
            candidates = {(x, y)}

            def add_candidates(cx, cy):
                # Queue the 4-connected neighbors of (cx, cy) for the flood fill.
                global candidates
                candidates |= {(cx - 1, cy), (cx + 1, cy), (cx, cy - 1), (cx, cy + 1)}

            while candidates:
                cx, cy = candidates.pop()
                if (
                    (cx, cy) in visited
                    or cx < 0
                    or cx >= img.width
                    or cy < 0
                    or cy >= img.height
                    or abs(cx - x) > max_dim
                    or abs(cy - y) > max_dim
                ):
                    continue
                visited.add((cx, cy))
                rgba = r, g, b, a = getpixel(cx, cy)
                if a == 0 or (r, g, b) in masks:
                    continue
                else:
                    inshape.add((cx, cy))
                    add_candidates(cx, cy)
            if inshape:
                shapes.add(tuple(inshape))
        t.update(img.height)

max_width = 0
max_height = 0
shapes_and_offsets = []
for shape in sorted(shapes):
    min_x = img.width + 2
    min_y = img.height + 2
    max_x = -1
    max_y = -1
    for x, y in shape:
        max_x = max(x, max_x)
        max_y = max(y, max_y)
        min_x = min(x, min_x)
        min_y = min(y, min_y)
    width = max_x - min_x + 1
    height = max_y - min_y + 1

    # Too small! We have to reroll this
    # potentially into another shape.
    if width < min_dim or height < min_dim:
        reroll_shapes.add(shape)
        continue

    max_width = max(max_width, width)
    max_height = max(max_height, height)
    shapes_and_offsets.append((shape, (width, height), (min_x, min_y)))

# Make them powers of two!
max_width = make_2n(max_width)
max_height = make_2n(max_height)

sprite_number = 0
with tqdm.tqdm(
    desc="Baking cookies",
    total=len(shapes_and_offsets),
    unit="sprites"
) as t:
    for shape, (width, height), (offset_x, offset_y) in shapes_and_offsets:
        new_img = Image.new(mode="RGBA", size=(max_width, max_height))
        margin_x = (max_width - width) // 2
        margin_y = (max_height - height) // 2
        for rx in range(max_width):
            for ry in range(max_height):
                x = rx + offset_x
                y = ry + offset_y
                if (x, y) not in shape:
                    continue
                new_img.putpixel((rx + margin_x, ry + margin_y), getpixel(x, y))
        new_img.save(f"images/{output_prefix}-{sprite_number}.png")
        sprite_number += 1
        t.update(1)

When using the tool you may find yourself needing to add additional masking across elements, such as the original spritesheet curator's name, in order for the cutting process to work perfectly. This script also doesn't work great for sprites which aren't contiguous within their bounding box. Implementing reroll_shapes is left as an exercise for the reader; it's a feature I didn't end up needing for my own project. Let me know if you implement this and send me a patch!



Thanks for keeping RSS alive! ♥

January 01, 2026 12:00 AM UTC

December 31, 2025


The Python Coding Stack

Mulled Wine, Mince Pies, and More Python

I’ve been having too much mulled wine. And wine of the standard type. And aperitifs before meals and digestifs after them…the occasional caffè corretto, too. You get the picture…

No wonder I can’t remember what articles I wrote this year here at The Python Coding Stack. So make sure you adjust your expectations for this end-of-year review post.

Parties and Gatherings

And there’s another thing I can never remember, especially at this time of year when large-ish gatherings are more common. How many people are needed in a group to have a probability greater than 50% that two people share a birthday? This could be an ice-breaker in some awkward gatherings, but only if you’re with a geeky crowd. Although the analytical proof is cool, writing Python code to explore this problem is just as fun. Here’s my article from February exploring the Birthday Paradox:
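If you want a taste before reading the article, here's a minimal Monte Carlo sketch (my quick illustration, not the article's own code) that estimates the probability for a few group sizes; around 23 people is where it crosses 50%:

import random

def shared_birthday(group_size, trials=10_000):
    """Estimate the probability that two people in the group share a birthday."""
    hits = 0
    for _ in range(trials):
        birthdays = [random.randint(1, 365) for _ in range(group_size)]
        if len(set(birthdays)) < group_size:  # A repeat means a shared birthday.
            hits += 1
    return hits / trials

for size in (10, 23, 30, 50):
    print(size, round(shared_birthday(size), 2))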

This post also explores some tools from the itertools module. Iteration in Python is different from its implementation in many other languages. And the itertools module provides several tools to iterate in a Pythonic way. Later in the year, I explored more of these tools in The itertools Series. Here’s the first post, exploring Yteria’s adventures in a world a bit similar to ours, yet different:

Here’s the whole series following Yteria’s other adventures and the itertools module:

Christmas Decorations

And something else you can’t avoid at this time of year is all the Christmas decorations you’ll find everywhere. Christmas trees, flashing lights, street displays, and…

…Python has its own decorations. You can adorn functions with Python’s equivalent of tinsel and angels:

This post is the most-read post on The Python Coding Stack in 2025. It also has a follow-up post that explores more:

Python’s decorators don’t necessarily make functions pretty—they make them more versatile. However, Python’s f-strings are there to make displayed outputs look pretty. And what if you want your own custom fancy f-string format specifiers?

Endless Visits to Coffee Shops

I spend a lot of time in coffee shops over the holidays. It’s a great place to meet people for a quick catch-up. And to drink coffee. Coffee featured a few times in articles this year here on The Python Coding Stack.

One of these coffee-themed posts followed Alex’s experience with opening his new coffee shop and explored parameters and arguments in Python functions:

Another one narrates one of my trips to a coffee shop and how it helped me really understand the difference between == and is in Python—equality and identity:
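If you want the one-minute version before reading it (my quick sketch, not the article's example):

a = [1, 2, 3]
b = [1, 2, 3]
c = a

print(a == b)  # True: equal values (equality)
print(a is b)  # False: two distinct objects (identity)
print(a is c)  # True: two names for the same object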

Board Games

Who doesn’t play board games over the holidays? We certainly do. And that means we need a dictionary present to resolve Scrabble-related disagreements. This year, we also played Boggle, another word-based game. So, the dictionary had to work overtime.

And dictionaries work overtime in Python, too. They’re one of the most important built-in data structures. Here’s a question: Are Python dictionaries ordered? The answer is more nuanced than you might think at first:
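The short version, which the article unpacks properly: since Python 3.7, dicts are guaranteed to preserve insertion order, which is not the same as being sorted:

d = {"scrabble": 8, "boggle": 4, "chess": 0}
d["monopoly"] = 2
print(list(d))  # ['scrabble', 'boggle', 'chess', 'monopoly'] -- insertion order, not alphabetical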

And to understand how Python dictionaries work, it’s best to understand hashability:

This article is part of The Club, the special area on The Python Coding Stack for premium members. The Club launched this year and includes more articles, an exclusive forum, videos, and more… This premium content is in addition to the free articles, which will always remain a key part of The Python Coding Stack. To make sure you don’t miss a thing here on The Python Coding Stack, join The Club by becoming a premium subscriber:

Subscribe now

And it’s not just dictionaries that play an important role in Python. Indeed, in Python, we often prefer to focus on what a data type can do rather than what it is. Here’s another short post in The Club on this topic:

Unwanted Gifts?

Did you receive gifts you don’t need or want? Or perhaps, you have received the same gift more than once? Python can help, too. Let’s start by removing duplicate presents from the Christmas tree list:

And what about the used wrapping paper and food packaging? You can recycle some of it. But some must end up in the garbage. Python has its own trash bin, too:

Magic

This time of year can feel magical. And maybe it’s for this reason that TV stations here keep showing the Harry Potter films during the Christmas holiday season. I’m a Harry Potter fan, and I’ve written Harry Potter-themed posts and series in the past. And there was one this year, too:

One thing that’s not magic in Python is its behind-the-scenes operations. Python’s special methods deal with this, and once you know the trick, it’s no longer magic. Here’s a post that explores some of these special methods:

Queueing in The Cold

I avoided queueing in the cold this year, but I’ve done this so many times in past Christmas holidays. Queueing for a skating rink or for a Christmas fair. Queueing to get mulled wine from a street stall. If you could skip the queue, would you?

And if it’s cold, you’ll need to zip your jacket well. Python’s zipping and unzipping also feature in this year’s posts:

End-of-Year Reflections

Let me spare you all my Python-related stuff—the courses, articles, updates here on The Stack, and all that. Instead, my news for 2025 was my return to an old interest: track and field athletics. I even started a new Substack to document my adventures in track and field:

And I’ve written some posts with a track-and-field theme, too. Here’s one of these:

But the end of the year is also a time for reflecting on one’s life, past and future. Recently, a Python object has done just that:

Looking forward to a great new year in the Python world and here on The Python Coding Stack. Wishing you all a Happy New Year!


Support The Python Coding Stack


Image by iPicture from Pixabay


Join The Club, the exclusive area for paid subscribers for more Python posts, videos, a members’ forum, and more.

Subscribe now


For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!

Also, are you interested in technical writing? Would you like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.

And you can find out more about me at stephengruppetta.com

December 31, 2025 09:12 PM UTC


Django Weblog

DSF member of the month - Clifford Gama

For December 2025, we welcome Clifford Gama as our DSF member of the month! ⭐

Clifford contributed to Django core with more than 5 PRs merged in a few months! He is part of the Triage and Review Team. He has been a DSF member since October 2024.

You can learn more about Clifford by visiting Clifford's website and his GitHub Profile.

Let’s spend some time getting to know Clifford better!

Can you tell us a little about yourself (hobbies, education, etc)

I'm Clifford. I hold a Bachelor's degree in Mechanical Engineering from the University of Zimbabwe.

How did you start using Django?

During my first year in college, I was also exploring open online courses on edX, and I came across CS50's introduction to web development. After watching the introductory lecture -- which introduced me to git and GitHub -- I discovered Django's excellent documentation and got started on the polls tutorial. The docs were so comprehensive and helpful I never felt the need to return to CS50. (I generally prefer comprehensive first-hand, written learning material over summaries and videos.)

At the time, I had already experimented with Flask, but I guess mainly because I didn't know SQL and because Flask didn't have an ORM, I never quite picked it up. With Django I felt like I was taking a learning fast-track where I'd learn everything I needed in one go!

And that's how I started using Django.

What projects are you working on now?

At the moment, I’ve been focusing on improving my core skills in preparation for remote work, so I haven’t been starting new projects because of that.

That said, I’ve been working on a client project involving generating large, image-heavy PDFs with WeasyPrint, where I’ve been investigating performance bottlenecks and ways to speed up generation time, which was previously around 30 minutes 😱.

What are you learning about these days?

I’ve been reading Boost Your Git DX by Adam Johnson and learning how to boost my Git and shell developer experience, which has been a great read. Aside from that, inspired by some blogs and talks by Haki Benita, I am also learning about software design and performance. Additionally, I am working on improving my general fluency in Python.

What other framework do you know and if there is anything you would like to have in Django if you had magical powers?

I am not familiar with any other frameworks, but if I had magic powers I'd add production-grade static-file serving in Django.

Django libraries are your favorite (core or 3rd party)?

The ORM, Wagtail and Django's admin.

What are the top three things in Django that you like?

How did you start contributing to Django?

I started contributing to Django in August last year, which is when I discovered the community, which was a real game changer for me. Python was my first course at university, and I loved it because it was creative and there was no limit to what I could build with it.

Whenever I saw a problem in another course that could be solved programmatically, I jumped at it. My proudest project from that time was building an NxN matrix determinant calculator after learning about recursion and spotting the opportunity in an algebra class.

After COVID lockdown, I gave programming up for a while. With more time on my hands, I found myself prioritizing programming over core courses, so I took a break. Last year, I returned to it when I faced a problem that I could only solve with Django. My goal was simply to build an app quickly and go back to being a non-programmer, but along the way I thought I found a bug in Django, filed a ticket, and ended up writing a documentation PR. That’s when I really discovered the Django community.

What attracted me most was that contributions are held to high standards, but experienced developers are always ready to help you reach them. Contributing was collaborative, pushing everyone to do their best. It was a learning opportunity too good to pass up.

How did you join the Triage and Review team?

Around the time I contributed my first PR, I started looking at open tickets to find more to work on and keep learning.

Sometimes a ticket was awaiting triage, in which case the first step was to triage it before assigning it to myself and working on it; and sometimes the ticket I wanted was already taken, in which case I'd look at the PR if available. Reviewing a PR can be a faster way to learn about a particular part of the codebase, because someone has already done most of the investigative work, so I reviewed PRs as well.

After a while I got an invitation from Sarah Boyce, one of the fellows, to join the team. I didn't even know that I could join before I got the invitation, so I was thrilled!

How is the work going so far?

It’s been rewarding. I’ve gained familiarity with the Django codebase and real experience collaborating with others, which already exceeds what I expected when I started contributing.

One unexpected highlight was forming a friendship through one of the first PRs I reviewed.

SiHyun Lee and I are now both part of the triage and review team, and I’m grateful for that connection.

What are your hobbies or what do you do when you’re not working?

My main hobby is storytelling in a broad sense. In fact, it was a key reason I returned to programming after a long break. I enjoy discovering enduring stories from different cultures, times, and media—ranging from the deeply personal and literary to the distant and philosophical. I recently watched two Japanese classics and found I quite love them. I wrote about one of the films on my blog, and I also get to practice my Japanese, which I’ve been learning on Duolingo for about two years. I also enjoy playing speed chess.

Do you have any suggestions for people who would like to start triage and review tickets and PRs?

If there’s an issue you care about, or one that touches a part of the codebase you’re familiar with or curious about, jump in. Tickets aren’t always available to work on, but reviews always are, and they’re open to everyone. Reviewing helps PRs move faster, including your own if you have any open, sharpens your understanding of a component, and often clarifies the problem itself.

As Simon Charette puts it:

“Triaging issues and spending time understanding them is often more valuable than landing code itself, as it strengthens our common understanding of the problem and allows us to build a consistent experience across the diverse interfaces Django provides.”

And you can put it on your CV!

Is there anything else you’d like to say?

I’m grateful to everyone who contributes to making every part of Django what it is. I’m particularly thankful to whoever nominated me to be the DSF Member of the month.

I am optimistic about the future of Django. Django 6.1 is already shaping up with new features, and there are new projects like Django Bolt coming up.

Happy new year 🎊!


Thank you for doing the interview, Clifford, and happy new year to the Django community 💚!

December 31, 2025 08:42 PM UTC


"Michael Kennedy's Thoughts on Technology"

Python Numbers Every Programmer Should Know

There are numbers every Python programmer should know. For example, how fast or slow is it to add an item to a list in Python? What about opening a file? Is that less than a millisecond? Is there something that makes that slower than you might have guessed? If you have a performance-sensitive algorithm, which data structure should you use? How much memory does a floating-point number use? What about a single character or the empty string? How fast is FastAPI compared to Django?

I wanted to take a moment and write down performance numbers specifically focused on Python developers. Below you will find an extensive table of such values. They are grouped by category. And I provided a couple of graphs for the more significant analysis below the table.

Source code for the benchmarks

This article is posted without any code. I encourage you to dig into the benchmarks. The code is available on GitHub at:

https://github.com/mikeckennedy/python-numbers-everyone-should-know

📊 System Information

The benchmarks were run on the system described in this table. While yours may be faster or slower, the most important thing to consider is the relative comparisons.

Property Value
Python Version CPython 3.14.2
Hardware Mac Mini M4 Pro
Platform macOS Tahoe (26.2)
Processor ARM
CPU Cores 14 physical / 14 logical
RAM 24 GB
Timestamp 2025-12-30

Acknowledgments

Inspired by Latency Numbers Every Programmer Should Know and similar resources.


Python numbers you should know

More analysis and graphs by category below the table.

Category Operation Time Memory
💾 Memory Empty Python process 15.73 MB
Empty string 41 bytes
100-char string 141 bytes
Small int (0-256) 28 bytes
Large int 28 bytes
Float 24 bytes
Empty list 56 bytes
List with 1,000 ints 35.2 KB
List with 1,000 floats 32.1 KB
Empty dict 64 bytes
Dict with 1,000 items 63.4 KB
Empty set 216 bytes
Set with 1,000 items 59.6 KB
Regular class instance (5 attrs) 694 bytes
__slots__ class instance (5 attrs) 212 bytes
List of 1,000 regular class instances 165.2 KB
List of 1,000 __slots__ class instances 79.1 KB
dataclass instance 694 bytes
namedtuple instance 228 bytes
⚙️ Basic Ops Add two integers 19.0 ns (52.7M ops/sec)
Add two floats 18.4 ns (54.4M ops/sec)
String concatenation (small) 39.1 ns (25.6M ops/sec)
f-string formatting 64.9 ns (15.4M ops/sec)
.format() 103 ns (9.7M ops/sec)
% formatting 89.8 ns (11.1M ops/sec)
List append 28.7 ns (34.8M ops/sec)
List comprehension (1,000 items) 9.45 μs (105.8k ops/sec)
Equivalent for-loop (1,000 items) 11.9 μs (83.9k ops/sec)
📦 Collections Dict lookup by key 21.9 ns (45.7M ops/sec)
Set membership check 19.0 ns (52.7M ops/sec)
List index access 17.6 ns (56.8M ops/sec)
List membership check (1,000 items) 3.85 μs (259.6k ops/sec)
len() on list 18.8 ns (53.3M ops/sec)
Iterate 1,000-item list 7.87 μs (127.0k ops/sec)
Iterate 1,000-item dict 8.74 μs (114.5k ops/sec)
sum() of 1,000 ints 1.87 μs (534.8k ops/sec)
🏷️ Attributes Read from regular class 14.1 ns (70.9M ops/sec)
Write to regular class 15.7 ns (63.6M ops/sec)
Read from __slots__ class 14.1 ns (70.7M ops/sec)
Write to __slots__ class 16.4 ns (60.8M ops/sec)
Read from @property 19.0 ns (52.8M ops/sec)
getattr() 13.8 ns (72.7M ops/sec)
hasattr() 23.8 ns (41.9M ops/sec)
📄 JSON json.dumps() (simple) 708 ns (1.4M ops/sec)
json.loads() (simple) 714 ns (1.4M ops/sec)
json.dumps() (complex) 2.65 μs (376.8k ops/sec)
json.loads() (complex) 2.22 μs (449.9k ops/sec)
orjson.dumps() (complex) 310 ns (3.2M ops/sec)
orjson.loads() (complex) 839 ns (1.2M ops/sec)
ujson.dumps() (complex) 1.64 μs (611.2k ops/sec)
msgspec encode (complex) 445 ns (2.2M ops/sec)
Pydantic model_dump_json() 1.54 μs (647.8k ops/sec)
Pydantic model_validate_json() 2.99 μs (334.7k ops/sec)
🌐 Web Frameworks Flask (return JSON) 16.5 μs (60.7k req/sec)
Django (return JSON) 18.1 μs (55.4k req/sec)
FastAPI (return JSON) 8.63 μs (115.9k req/sec)
Starlette (return JSON) 8.01 μs (124.8k req/sec)
Litestar (return JSON) 8.19 μs (122.1k req/sec)
📁 File I/O Open and close file 9.05 μs (110.5k ops/sec)
Read 1KB file 10.0 μs (99.5k ops/sec)
Write 1KB file 35.1 μs (28.5k ops/sec)
Write 1MB file 207 μs (4.8k ops/sec)
pickle.dumps() 1.30 μs (769.6k ops/sec)
pickle.loads() 1.44 μs (695.2k ops/sec)
🗄️ Database SQLite insert (JSON blob) 192 μs (5.2k ops/sec)
SQLite select by PK 3.57 μs (280.3k ops/sec)
SQLite update one field 5.22 μs (191.7k ops/sec)
diskcache set 23.9 μs (41.8k ops/sec)
diskcache get 4.25 μs (235.5k ops/sec)
MongoDB insert_one 119 μs (8.4k ops/sec)
MongoDB find_one by _id 121 μs (8.2k ops/sec)
MongoDB find_one by nested field 124 μs (8.1k ops/sec)
📞 Functions Empty function call 22.4 ns (44.6M ops/sec)
Function with 5 args 24.0 ns (41.7M ops/sec)
Method call 23.3 ns (42.9M ops/sec)
Lambda call 19.7 ns (50.9M ops/sec)
try/except (no exception) 21.5 ns (46.5M ops/sec)
try/except (exception raised) 139 ns (7.2M ops/sec)
isinstance() check 18.3 ns (54.7M ops/sec)
⏱️ Async Create coroutine object 47.0 ns (21.3M ops/sec)
run_until_complete(empty) 27.6 μs (36.2k ops/sec)
asyncio.sleep(0) 39.4 μs (25.4k ops/sec)
gather() 10 coroutines 55.0 μs (18.2k ops/sec)
create_task() + await 52.8 μs (18.9k ops/sec)
async with (context manager) 29.5 μs (33.9k ops/sec)

Memory Costs

Understanding how much memory different Python objects consume.

An empty Python process uses 15.73 MB


Strings

The rule of thumb for strings: the core string object takes 41 bytes, and each additional (ASCII) character adds 1 byte.

String Size
Empty string "" 41 bytes
1-char string "a" 42 bytes
100-char string 141 bytes
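You can spot-check these sizes yourself with the standard library's sys.getsizeof; the exact values vary across CPython versions and platforms, so expect small differences from the table:

import sys

for s in ["", "a", "x" * 100]:
    print(f"{len(s):>3} chars: {sys.getsizeof(s)} bytes")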


Numbers

Numbers are surprisingly large in Python. Because they derive from CPython’s PyObject and are subject to reference counting for garbage collection, they exceed the typical mental model many of us have:

Type Size
Small int (0-256, cached) 28 bytes
Large int (1000) 28 bytes
Very large int (10**100) 72 bytes
Float 24 bytes


Collections

Collections are amazing in Python. Dynamically growing lists. Ultra high-perf dictionaries and sets. Here is the empty and “full” overhead of each.

Collection Empty 1,000 items
List (ints) 56 bytes 35.2 KB
List (floats) 56 bytes 32.1 KB
Dict 64 bytes 63.4 KB
Set 216 bytes 59.6 KB


Classes and Instances

Slots are an interesting addition to Python classes. They remove the entire concept of a __dict__ for tracking fields and other values. Even for a single instance, slots classes are significantly smaller (212 bytes vs 694 bytes for 5 attributes). If you are holding a large number of them in memory for a list or cache, the memory savings of a slots class becomes very dramatic - over 2x less memory usage. Luckily for most use-cases, just adding a slots entry saves a ton of memory with minimal effort.

Type Empty 5 attributes
Regular class 344 bytes 694 bytes
__slots__ class 32 bytes 212 bytes
dataclass 694 bytes
@dataclass(slots=True) 212 bytes
namedtuple 228 bytes

Aggregate Memory Usage (1,000 instances):

Type Total Memory
List of 1,000 regular class instances 165.2 KB
List of 1,000 __slots__ class instances 79.1 KB
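Here's a minimal sketch (not the benchmark code from the repo) for measuring the difference yourself. Note that sys.getsizeof on a regular instance misses the separate __dict__ that holds its attributes, so we add it in:

import sys

class Regular:
    def __init__(self):
        self.a = self.b = self.c = self.d = self.e = 0

class Slotted:
    __slots__ = ("a", "b", "c", "d", "e")

    def __init__(self):
        self.a = self.b = self.c = self.d = self.e = 0

r, s = Regular(), Slotted()
print(sys.getsizeof(r) + sys.getsizeof(r.__dict__))  # Instance plus its __dict__.
print(sys.getsizeof(s))  # Slotted instances have no __dict__ at all.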


Basic Operations

The cost of fundamental Python operations: Way slower than C/C++/C# but still quite fast. I added a brief comparison to C# to the source repo.

Arithmetic

Operation Time
Add two integers 19.0 ns (52.7M ops/sec)
Add two floats 18.4 ns (54.4M ops/sec)
Multiply two integers 19.4 ns (51.6M ops/sec)


String Operations

String operations in Python are fast as well. f-strings are the fastest formatting style, while even the slowest style is still measured in just nanoseconds.

Operation Time
Concatenation (+) 39.1 ns (25.6M ops/sec)
f-string 64.9 ns (15.4M ops/sec)
.format() 103 ns (9.7M ops/sec)
% formatting 89.8 ns (11.1M ops/sec)


List Operations

List operations are very fast in Python. Adding a single item usually requires 28ns. Said another way, you can do 35M appends per second. This is unless the list has to expand using something like a doubling algorithm. You can see this in the ops/sec for 1,000 items.

Surprisingly, list comprehensions are 26% faster than the equivalent for loops with append statements.

Operation Time
list.append() 28.7 ns (34.8M ops/sec)
List comprehension (1,000 items) 9.45 μs (105.8k ops/sec)
Equivalent for-loop (1,000 items) 11.9 μs (83.9k ops/sec)
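To reproduce the comprehension-vs-loop comparison yourself, here's a quick timeit sketch (absolute numbers will differ on your machine):

import timeit

comp = "[i * 2 for i in range(1000)]"
loop = """
result = []
for i in range(1000):
    result.append(i * 2)
"""

print(timeit.timeit(comp, number=10_000))  # List comprehension.
print(timeit.timeit(loop, number=10_000))  # Equivalent for-loop with append.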


Collection Access and Iteration

How fast can you get data out of Python’s built-in collections? Here is a dramatic example of how much faster the correct data structure is. item in set or item in dict is 200x faster than item in list for just 1,000 items!

The graph below is non-linear in the x-axis.

Access by Key/Index

Operation Time
Dict lookup by key 21.9 ns (45.7M ops/sec)
Set membership (in) 19.0 ns (52.7M ops/sec)
List index access 17.6 ns (56.8M ops/sec)
List membership (in, 1,000 items) 3.85 μs (259.6k ops/sec)
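A quick way to see the gap yourself (my sketch, not the repo's benchmark code):

import timeit

setup = """
items_list = list(range(1000))
items_set = set(items_list)
"""

print(timeit.timeit("999 in items_list", setup=setup, number=100_000))  # O(n) scan.
print(timeit.timeit("999 in items_set", setup=setup, number=100_000))   # O(1) hash lookup.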


Length

len() is very fast. Maybe we don’t have to optimize it out of the test condition on a while loop looping 100 times after all.

Collection len() time
List (1,000 items) 18.8 ns (53.3M ops/sec)
Dict (1,000 items) 17.6 ns (56.9M ops/sec)
Set (1,000 items) 18.0 ns (55.5M ops/sec)

Iteration

Operation Time
Iterate 1,000-item list 7.87 μs (127.0k ops/sec)
Iterate 1,000-item dict (keys) 8.74 μs (114.5k ops/sec)
sum() of 1,000 integers 1.87 μs (534.8k ops/sec)

Class and Object Attributes

The cost of reading and writing attributes, and how __slots__ changes things. Slots saves over 2x the memory usage on large collections, with virtually identical attribute access speed.

Attribute Access

Operation Regular Class __slots__ Class
Read attribute 14.1 ns (70.9M ops/sec) 14.1 ns (70.7M ops/sec)
Write attribute 15.7 ns (63.6M ops/sec) 16.4 ns (60.8M ops/sec)


Other Attribute Operations

Operation Time
Read @property 19.0 ns (52.8M ops/sec)
getattr(obj, 'attr') 13.8 ns (72.7M ops/sec)
hasattr(obj, 'attr') 23.8 ns (41.9M ops/sec)

JSON and Serialization

Comparing standard library JSON with optimized alternatives. orjson handles more data types and is over 8x faster than standard lib json for complex objects. Impressive!

Serialization (dumps)

Library Simple Object Complex Object
json (stdlib) 708 ns (1.4M ops/sec) 2.65 μs (376.8k ops/sec)
orjson 60.9 ns (16.4M ops/sec) 310 ns (3.2M ops/sec)
ujson 264 ns (3.8M ops/sec) 1.64 μs (611.2k ops/sec)
msgspec 92.3 ns (10.8M ops/sec) 445 ns (2.2M ops/sec)
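For many workloads orjson is close to a drop-in replacement, with one difference worth knowing up front: orjson.dumps() returns bytes rather than str. A quick sanity check (my sketch, assuming orjson is installed):

import json

import orjson

doc = {"id": 1, "tags": ["python", "fast"], "nested": {"ok": True}}

assert json.loads(json.dumps(doc)) == orjson.loads(orjson.dumps(doc))
print(type(json.dumps(doc)))    # <class 'str'>
print(type(orjson.dumps(doc)))  # <class 'bytes'>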


Deserialization (loads)

Library Simple Object Complex Object
json (stdlib) 714 ns (1.4M ops/sec) 2.22 μs (449.9k ops/sec)
orjson 106 ns (9.4M ops/sec) 839 ns (1.2M ops/sec)
ujson 268 ns (3.7M ops/sec) 1.46 μs (682.8k ops/sec)
msgspec 101 ns (9.9M ops/sec) 850 ns (1.2M ops/sec)

Pydantic

Operation Time
model_dump_json() 1.54 μs (647.8k ops/sec)
model_validate_json() 2.99 μs (334.7k ops/sec)
model_dump() (to dict) 1.71 μs (585.2k ops/sec)
model_validate() (from dict) 2.30 μs (435.5k ops/sec)

Web Frameworks

Returning a simple JSON response. Benchmarked with wrk against localhost, running 4 workers in Granian. Each framework returns the same JSON payload from a minimal endpoint. No database access or anything of that sort. This measures just how much overhead/perf we get from each framework itself; the code we write within those view methods is largely the same.

Results

Framework Time per Request (Throughput) Latency (p99)
Flask 16.5 μs (60.7k req/sec) 20.85 ms (48.0 ops/sec)
Django 18.1 μs (55.4k req/sec) 170.3 ms (5.9 ops/sec)
FastAPI 8.63 μs (115.9k req/sec) 1.530 ms (653.6 ops/sec)
Starlette 8.01 μs (124.8k req/sec) 930 μs (1.1k ops/sec)
Litestar 8.19 μs (122.1k req/sec) 1.010 ms (990.1 ops/sec)


File I/O

Reading and writing files of various sizes. Note that the graph is non-linear in the y-axis.

Basic Operations

Operation Time
Open and close (no read) 9.05 μs (110.5k ops/sec)
Read 1KB file 10.0 μs (99.5k ops/sec)
Read 1MB file 33.6 μs (29.8k ops/sec)
Write 1KB file 35.1 μs (28.5k ops/sec)
Write 1MB file 207 μs (4.8k ops/sec)


Pickle vs JSON to Disk

Operation Time
pickle.dumps() (complex obj) 1.30 μs (769.6k ops/sec)
pickle.loads() (complex obj) 1.44 μs (695.2k ops/sec)
json.dumps() (complex obj) 2.72 μs (367.1k ops/sec)
json.loads() (complex obj) 2.35 μs (425.9k ops/sec)

Database and Persistence

Comparing SQLite, diskcache, and MongoDB using the same complex object.

Test Object

user_data = {
    "id": 12345,
    "username": "alice_dev",
    "email": "alice@example.com",
    "profile": {
        "bio": "Software engineer who loves Python",
        "location": "Portland, OR",
        "website": "https://alice.dev",
        "joined": "2020-03-15T08:30:00Z"
    },
    "posts": [
        {"id": 1, "title": "First Post", "tags": ["python", "tutorial"], "views": 1520},
        {"id": 2, "title": "Second Post", "tags": ["rust", "wasm"], "views": 843},
        {"id": 3, "title": "Third Post", "tags": ["python", "async"], "views": 2341},
    ],
    "settings": {
        "theme": "dark",
        "notifications": True,
        "email_frequency": "weekly"
    }
}

SQLite (JSON blob approach)

Operation Time
Insert one object 192 μs (5.2k ops/sec)
Select by primary key 3.57 μs (280.3k ops/sec)
Update one field 5.22 μs (191.7k ops/sec)
Delete 191 μs (5.2k ops/sec)
Select with json_extract() 4.27 μs (234.2k ops/sec)

diskcache

Operation Time
cache.set(key, obj) 23.9 μs (41.8k ops/sec)
cache.get(key) 4.25 μs (235.5k ops/sec)
cache.delete(key) 51.9 μs (19.3k ops/sec)
Check key exists 1.91 μs (523.2k ops/sec)

MongoDB

Operation Time
insert_one() 119 μs (8.4k ops/sec)
find_one() by _id 121 μs (8.2k ops/sec)
find_one() by nested field 124 μs (8.1k ops/sec)
update_one() 115 μs (8.7k ops/sec)
delete_one() 30.4 ns (32.9M ops/sec)

Comparison Table

Operation SQLite diskcache MongoDB
Write one object 192 μs (5.2k ops/sec) 23.9 μs (41.8k ops/sec) 119 μs (8.4k ops/sec)
Read by key/id 3.57 μs (280.3k ops/sec) 4.25 μs (235.5k ops/sec) 121 μs (8.2k ops/sec)
Read by nested field 4.27 μs (234.2k ops/sec) N/A 124 μs (8.1k ops/sec)
Update one field 5.22 μs (191.7k ops/sec) 23.9 μs (41.8k ops/sec) 115 μs (8.7k ops/sec)
Delete 191 μs (5.2k ops/sec) 51.9 μs (19.3k ops/sec) 30.4 ns (32.9M ops/sec)

Note: MongoDB is a victim of network access versus in-process access.


Function and Call Overhead

The hidden cost of function calls, exceptions, and async.

Function Calls

Operation Time
Empty function call 22.4 ns (44.6M ops/sec)
Function with 5 arguments 24.0 ns (41.7M ops/sec)
Method call on object 23.3 ns (42.9M ops/sec)
Lambda call 19.7 ns (50.9M ops/sec)
Built-in function (len()) 17.1 ns (58.4M ops/sec)

Exceptions

Operation Time
try/except (no exception raised) 21.5 ns (46.5M ops/sec)
try/except (exception raised) 139 ns (7.2M ops/sec)
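To see this asymmetry on your own machine, here's a small timeit sketch:

import timeit

no_raise = """
try:
    x = 1
except ValueError:
    pass
"""

with_raise = """
try:
    raise ValueError
except ValueError:
    pass
"""

print(timeit.timeit(no_raise, number=1_000_000))    # Cheap: nothing raised.
print(timeit.timeit(with_raise, number=1_000_000))  # ~6-7x slower, per the table above.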

Type Checking

Operation Time
isinstance() 18.3 ns (54.7M ops/sec)
type() == type 21.8 ns (46.0M ops/sec)

Async Overhead

The cost of async machinery.

Coroutine Creation

Operation Time
Create coroutine object (no await) 47.0 ns (21.3M ops/sec)
Create coroutine (with return value) 45.3 ns (22.1M ops/sec)

Running Coroutines

Operation Time
run_until_complete(empty) 27.6 μs (36.2k ops/sec)
run_until_complete(return value) 26.6 μs (37.5k ops/sec)
Run nested await 28.9 μs (34.6k ops/sec)
Run 3 sequential awaits 27.9 μs (35.8k ops/sec)

asyncio.sleep()

Operation Time
asyncio.sleep(0) 39.4 μs (25.4k ops/sec)
Coroutine with sleep(0) 41.8 μs (23.9k ops/sec)

asyncio.gather()

Operation Time
gather() 5 coroutines 49.7 μs (20.1k ops/sec)
gather() 10 coroutines 55.0 μs (18.2k ops/sec)
gather() 100 coroutines 155 μs (6.5k ops/sec)

Task Creation

Operation Time
create_task() + await 52.8 μs (18.9k ops/sec)
Create 10 tasks + gather 85.5 μs (11.7k ops/sec)

Async Context Managers & Iteration

Operation Time
async with (context manager) 29.5 μs (33.9k ops/sec)
async for (5 items) 30.0 μs (33.3k ops/sec)
async for (100 items) 36.4 μs (27.5k ops/sec)

Sync vs Async Comparison

Operation Time
Sync function call 20.3 ns (49.2M ops/sec)
Async equivalent (run_until_complete) 28.2 μs (35.5k ops/sec)

Methodology

Benchmarking Approach

Environment

Code Repository

All benchmark code available at: https://github.com/mikeckennedy/python-numbers-everyone-should-know


Key Takeaways

  1. Memory overhead: Python objects have significant memory overhead - even an empty list is 56 bytes
  2. Dict/set speed: Dictionary and set lookups are extremely fast (O(1) average case) compared to list membership checks (O(n))
  3. JSON performance: Alternative JSON libraries like orjson and msgspec are 3-8x faster than stdlib json
  4. Async overhead: Creating and awaiting coroutines has measurable overhead - only use async when you need concurrency
  5. __slots__ tradeoff: __slots__ saves significant memory (over 2x for collections) with virtually no performance impact

Last updated: 2026-01-01


December 31, 2025 07:49 PM UTC


Zero to Mastery

[December 2025] Python Monthly Newsletter 🐍

73rd issue of Andrei's Python Monthly: A big change is coming. Read the full newsletter to get up-to-date with everything you need to know from last month.

December 31, 2025 10:00 AM UTC

December 30, 2025


Paolo Melchiorre

Looking Back at Python Pescara 2025

A personal retrospective on Python Pescara in 2025: events, people, and moments that shaped a growing local community, reflecting on continuity, experimentation, and how a small group connected to the wider Python ecosystem.

December 30, 2025 11:00 PM UTC


PyCoder’s Weekly

Issue #715: Top 5 of 2025, LlamaIndex, Python 3.15 Speed, and More (Dec. 30, 2025)

#715 – DECEMBER 30, 2025
View in Browser »


Welcome to the end-of-2025 PyCoder’s newsletter. In addition to your regular content, this week we have included the top 5 most clicked articles of the year.

Thanks for continuing to be with us at PyCoder’s Weekly. I’m sure 2026 will be just as wild. Speaking of 2026, if you come across something cool next year, an article or a project you think deserves some notice, send it to us and it might end up in a future issue.

Happy Pythoning!

— The PyCoder’s Weekly Team
    Christopher Trudeau, Curator
    Dan Bader, Editor


#1: The Inner Workings of Python Dataclasses Explained

Discover how Python dataclasses work internally! Learn how to use __annotations__ and exec() to make our own dataclass decorator!
JACOB PADILLA

#2: Going Beyond requirements.txt With pylock.toml

What is the best way to record the Python dependencies for the reproducibility of your projects? What advantages will lock files provide for those projects? This week on the show, we welcome back Python Core Developer Brett Cannon to discuss his journey to bring PEP 751 and the pylock.toml file format to the community.
REAL PYTHON podcast

#3: Django vs. FastAPI, an Honest Comparison

David has worked with Django for a long time, but recently has done some deeper coding with FastAPI. As a result, he’s able to provide a good contrast between the libraries and why/when you might choose one over the other.
DAVID DAHAN

#4: How to Use Loguru for Simpler Python Logging

Learn how to use Loguru to implement better logging in your Python applications quickly and with less configuration. Spend more time debugging effectively with cleaner, more informative logs.
REAL PYTHON

#5: Narwhals: Unified DataFrame Functions

Narwhals is a lightweight compatibility layer between DataFrame libraries. You can use it as a common interface to write reproducible and maintainable data science code that supports pandas, Polars, DuckDB, PySpark, PyArrow, and more.
MARCO GORELLI

Articles & Tutorials

What Actually Makes You Senior

This opinion piece argues that there is one skill that separates senior engineers from everyone else. It isn’t technical. It’s the ability to take ambiguous problems and make them concrete. Associated HN Discussion
MATHEUS LIMA

Jingle Bells (Batman Smells)

Not Python in the least, but a little bit of seasonal fun. Ever wonder about all the variations on the Jingle Bells schoolyard version? Well, there’s a chart for that. Associated HN Discussion
LORE AND ORDURE

Write Python You Won’t Hate in Six Months

Real Python’s live courses are back for 2026. Python for Beginners builds fundamentals correctly from the start. Intermediate Python Deep Dive covers decorators, OOP done right, and Python’s object model. Both include live instruction, hands-on projects, and expert feedback. Learn more at realpython.com/live →
REAL PYTHON sponsor

Top 10 Python Frameworks for IoT

Explore the top 10 Python frameworks for Internet of Things (IoT) development that help with scalable device communication, data processing, and real-time system control.
SAMRADNI

LlamaIndex in Python: A RAG Guide With Examples

Learn how to set up LlamaIndex, choose an LLM, load your data, build and persist an index, and run queries to get grounded, reliable answers with examples.
REAL PYTHON

Quiz: LlamaIndex in Python: A RAG Guide With Examples

REAL PYTHON

Reading User Input From the Keyboard With Python

Master taking user input in Python to build interactive terminal apps with clear prompts, solid error handling, and smooth multi-step flows.
REAL PYTHON course

Python 3.15’s Interpreter Potentially 15% Faster

Python 3.15’s interpreter for Windows x86-64 should hopefully be 15% faster, based on results from the tail-calling interpreter implementation.
KEN JIN

SOLID Design Principles to Improve Object-Oriented Code

Learn how to apply SOLID design principles in Python and build maintainable, reusable, and testable object-oriented code.
REAL PYTHON

Quiz: SOLID Design Principles to Improve Object-Oriented Code

REAL PYTHON

Projects & Code

daffy: DataFrame Validation Decorators

GITHUB.COM/VERTTI • Shared by Janne Sinivirta

x-ray: A Tool to Detect Whether a PDF Has a Bad Redaction

GITHUB.COM/FREELAWPROJECT

sqlit: TUI for SQL Databases

GITHUB.COM/MAXTEABAG

svc-infra: Bundled FastAPI Infrastructure

GITHUB.COM/NFRAXLAB

django-crontask: Cron-Style Django Task Framework

GITHUB.COM/CODINGJOE

Events

Canberra Python Meetup

January 1, 2026
MEETUP.COM

Sydney Python User Group (SyPy)

January 1, 2026
SYPY.ORG

Melbourne Python Users Group, Australia

January 5, 2026
J.MP

PyBodensee Monthly Meetup

January 5, 2026
PYBODENSEE.COM

Weekly Real Python Office Hours Q&A (Virtual)

January 7, 2026
REALPYTHON.COM


Happy Pythoning!
This was PyCoder’s Weekly Issue #715.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

December 30, 2025 07:30 PM UTC


Programiz

Python List

In this tutorial, we will learn about Python lists (creating lists, changing list items, removing items, and other list operations) with the help of examples.

December 30, 2025 04:31 AM UTC

December 29, 2025


Paolo Melchiorre

Django On The Med: A Contributor Sprint Retrospective

A personal retrospective on Django On The Med, three months later. From the first idea to the actual contributor sprint, and how a simple format based on focused mornings and open afternoons created unexpected value for people and the Django open source community.

December 29, 2025 11:00 PM UTC


Hugo van Kemenade

Replacing python-dateutil to remove six

The dateutil library is a popular and powerful Python library for dealing with dates and times.

However, it still supports Python 2.7 by depending on the six compatibility shim, which I’d prefer not to install on Python 3.10 and higher.

Here’s how I replaced three uses of its relativedelta in a couple of CLIs that didn’t really need to use it.

One #

norwegianblue was using it to calculate six months from now:

import datetime as dt

from dateutil.relativedelta import relativedelta

now = dt.datetime.now(dt.timezone.utc)
# datetime.datetime(2025, 12, 29, 15, 59, 44, 518240, tzinfo=datetime.timezone.utc)
six_months_from_now = now + relativedelta(months=+6)
# datetime.datetime(2026, 6, 29, 15, 59, 44, 518240, tzinfo=datetime.timezone.utc)

But we don’t need to be so precise here, and 180 days is good enough, using the standard library’s datetime.timedelta:

import datetime as dt

now = dt.datetime.now(dt.timezone.utc)
# datetime.datetime(2025, 12, 29, 15, 59, 44, 518240, tzinfo=datetime.timezone.utc)
six_months_from_now = now + dt.timedelta(days=180)
# datetime.datetime(2026, 6, 27, 15, 59, 44, 518240, tzinfo=datetime.timezone.utc)

Two #

pypistats was using it to get the last day of a month:

import datetime as dt

from dateutil.relativedelta import relativedelta

year, month = 2025, 12
first = dt.date(year, month, 1)
# datetime.date(2025, 12, 1)
last = first + relativedelta(months=1) - relativedelta(days=1)
# datetime.date(2025, 12, 31)

Instead, we can use the stdlib’s calendar.monthrange:

import calendar
import datetime as dt

year, month = 2025, 12
last_day = calendar.monthrange(year, month)[1]
# 31
last = dt.date(year, month, last_day)
# datetime.date(2025, 12, 31)

Three #

Finally, to get last month as a yyyy-mm string:

import datetime as dt

from dateutil.relativedelta import relativedelta

today = dt.date.today()
# datetime.date(2025, 12, 29)
d = today - relativedelta(months=1)
# datetime.date(2025, 11, 29)
d.isoformat()[:7]
# '2025-11'

Instead:

import datetime as dt

today = dt.date.today()
# datetime.date(2025, 12, 29)
if today.month == 1:
    year, month = today.year - 1, 12
else:
    year, month = today.year, today.month - 1
# year, month == 2025, 11
f"{year}-{month:02d}"
# '2025-11'

Goodbye six, and we also get slightly quicker install, import and run times.

Bonus #

I recommend Adam Johnson’s tip to import datetime as dt to avoid the ambiguity of which datetime is the module and which is the class.


Header photo: Ver Sacrum calendar by Alfred Roller

December 29, 2025 04:53 PM UTC


Talk Python to Me

#532: 2025 Python Year in Review

Python in 2025 is in a delightfully refreshing place: the GIL's days are numbered, packaging is getting sharper tools, and the type checkers are multiplying like gremlins snacking after midnight. On this episode, we have an amazing panel to give us a range of perspectives on what mattered in 2025 in Python. We have Barry Warsaw, Brett Cannon, Gregory Kapfhammer, Jodie Burchell, Reuven Lerner, and Thomas Wouters on to give us their thoughts.

Episode sponsors

Seer: AI Debugging, Code TALKPYTHON: https://talkpython.fm/seer-code-review
Talk Python Courses: https://talkpython.fm/training

Links from the show

Python Software Foundation (PSF): https://www.python.org/psf/
PEP 810: Explicit lazy imports: https://peps.python.org/pep-0810/
PEP 779: Free-threaded Python is officially supported: https://peps.python.org/pep-0779/
PEP 723: Inline script metadata: https://peps.python.org/pep-0723/
PyCharm: https://www.jetbrains.com/pycharm/
JetBrains: https://www.jetbrains.com/company/
Visual Studio Code: https://code.visualstudio.com/
pandas: https://pandas.pydata.org/
PydanticAI: https://ai.pydantic.dev/
OpenAI API docs: https://platform.openai.com/docs/
uv: https://docs.astral.sh/uv/
Hatch: https://github.com/pypa/hatch
PDM: https://pdm-project.org/latest/
Poetry: https://python-poetry.org/
Project Jupyter: https://jupyter.org/
JupyterLite: https://jupyterlite.readthedocs.io/en/latest/
PEP 690: Lazy Imports: https://peps.python.org/pep-0690/
PyTorch: https://pytorch.org/
Python concurrent.futures: https://docs.python.org/3/library/concurrent.futures.html
Python Package Index (PyPI): https://pypi.org/
EuroPython: https://tickets.europython.eu/
TensorFlow: https://www.tensorflow.org/
Keras: https://keras.io/
PyCon US: https://us.pycon.org/
NumFOCUS: https://numfocus.org/
Python discussion forum: https://discuss.python.org/
Language Server Protocol: https://microsoft.github.io/language-server-protocol/
mypy: https://mypy-lang.org/
Pyright: https://github.com/microsoft/pyright
Pylance: https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance
Pyrefly: https://github.com/facebook/pyrefly
ty: https://github.com/astral-sh/ty
Zuban: https://docs.zubanls.com/
Jedi: https://jedi.readthedocs.io/en/latest/
GitHub: https://github.com/
PyOhio: https://www.pyohio.org/

Watch this episode on YouTube: https://www.youtube.com/watch?v=PfRCbeOrUd8
Episode #532 deep-dive: https://talkpython.fm/episodes/show/532/2025-python-year-in-review#takeaways-anchor
Episode transcripts: https://talkpython.fm/episodes/transcript/532/2025-python-year-in-review

Theme Song: Developer Rap, 🥁 Served in a Flask 🎸: https://talkpython.fm/flasksong

Don't be a stranger: YouTube https://talkpython.fm/youtube · Bluesky @talkpython.fm · Mastodon @talkpython@fosstodon.org · X @talkpython
Michael: Bluesky @mkennedy.codes · Mastodon @mkennedy@fosstodon.org · X @mkennedy

December 29, 2025 08:00 AM UTC


Seth Michael Larson

Nintendo GameCube and Switch “Wrapped” 2025 🎮🎁

This is my last blog post for 2025 💜 Thanks for reading, see you in 2026!

One of my goals for 2025 was to play more games! I've been collecting play activity for my Nintendo Switch, Switch 2, and my Nintendo GameCube. I've published a combined SQLite database of this data for 2025, with games, play sessions, and more. Feel free to dig into this data yourself; I've included some queries and my own thoughts, too.

Here are the questions I answered with this data:

What were my favorite games this year?

Before we get too deep into quantitative analysis, let's start with the games I enjoyed the most and defined this year for me.

My favorite game for the GameCube in 2025 is Pikmin 2. The Pikmin franchise has always held a close place in my heart being a lover of plants, nature, and dandori.

One of my major video-gaming projects in 2025 was to gather every unique treasure in Pikmin 2. I called this the “International Treasure Hoard” following a similar naming convention to the “National PokéDex”. This project required buying both the Japanese and PAL versions of Pikmin 2, which I was excited to add to my collection.

The project was inspired by a YouTube short from a Pikmin content creator I enjoy named “JessicaIn3D”. The video shows how there are three different treasure hoards in Pikmin 2, one per region, meaning it's impossible to collect every treasure in a single game.


“You Can't Collect Every Treasure In Pikmin 2” by JessicaIn3D

The project took 4 months to complete, running from June to October. I published a script which analyzed Pikmin 2 save file data using this documentation on the save file format. From here the HTML table in the project page could be automatically updated as I made more progress. I published regular updates to the project page as the project progressed, too.


Pikmin 2 for the GameCube (NTSC-J, NTSC-M, and PAL) and the Switch.

My favorite game for the Switch family of consoles in 2025 is Kirby Air Riders. This game is pure chaotic fun with a heavy dose of nostalgia for me. I was one of the very few that played the original game on the Nintendo GameCube 22 years ago. I still can't believe this gem of a game only sold 750,000 units in the United States. It's amazing to see what is essentially the game my brother and I dreamed of as a sequel: taking everything great about the original release and adding online play, a more expansive world, and near-infinite customization and unlockables.

This game is fast-paced and fits into a busy life easily: a single play session of City Trail or Air Ride mode lasts less than 7 minutes from start to finish. I'll frequently pick this game up to play a few rounds between work and whatever the plans of the evening are. Each round is different, and you can pursue whichever strategy you prefer (combat, speed, gliding, legendary-vehicle hunting) and expect to have a fighting chance in the end.


Kirby Air Ride for the GameCube (NTSC-J and NTSC-M) and Kirby Air Riders for the Switch 2.

Which game system did I play the most?

Even with the Switch and Switch 2 bundled into one category I played the GameCube more in 2025. This was a big year for me and GameCube: I finally modded a platinum cube and my childhood indigo cube with the FlippyDrive and ETH2GC. I've got a lot more in store for the GameCube next year, too.

System Duration
GameCube 41h 55m
Switch 35h 45m
SQLite query
SELECT game_system, SUM(duration) AS d
FROM sessions
WHERE STRFTIME('%Y', date) = '2025'
GROUP BY game_system
ORDER BY d DESC;

Here's the same data stylized like the GitHub contributor graph. Blue squares represent days when I played the GameCube and red squares days when I played the Switch or Switch 2, starting in June 2025:

I also played Sonic Adventure on a newly purchased SEGA Dreamcast for the first time in 2025. Unfortunately I don't have a way to track play data for the Dreamcast (yet?), but my experience with the Dreamcast and Sonic Adventure will likely get its own short blog post eventually. Stay tuned.

Which games did I play the most?

I played 9 unique titles this year (including region and platform variants), but which ones did I play the most?

Game Duration
Pikmin 2 27h 11m
Animal Crossing 16h 47m
Kirby Air Riders 16h 15m
Mario Kart World 10h 25m
Super Mario Odyssey 4h 45m
Pikmin 4 1h 5m
Overcooked! 2 45m
Kirby's Airride 15m
Sonic Origins 10m
SQLite query
SELECT game_name, SUM(duration) AS d
FROM sessions
WHERE STRFTIME('%Y', date) = '2025'
GROUP BY game_name
ORDER BY d DESC;

That's a lot of Pikmin 2, huh? This year I collected all 245 unique treasures across the three regions of Pikmin 2 (JP, US, and PAL), including a 100% complete save file for the Japanese region. This is the first time I collected all treasures for a single Pikmin 2 play-through.

We can break down how much time was spent in each region and system for Pikmin 2:

System Region Duration
GameCube US 9h 24m
GameCube JP 9h 17m
GameCube PAL 6h 9m
Switch US 2h 20m
SQLite query
SELECT game_system, game_region, SUM(duration) AS d
FROM sessions
WHERE STRFTIME('%Y', date) = '2025'
AND game_name = 'Pikmin 2'
GROUP BY game_system, game_region
ORDER BY d DESC;

You can see I even started playing the Switch edition of Pikmin 2 but bailed on that idea pretty early. Playing through the same game 3 times in a year was already enough :) The US and JP versions were ~9 hours each with PAL receiving less play time. This is due to PAL only having 10 unique treasures, so I was able to speed-run most of the game.

Which games did I play most per session?

This query sorta indicates “binge-ability”: when I did play a title, how long was the play session on average? Super Mario Odyssey just barely took the top spot here, but the two Switch 2 titles I own were close behind.

Name Duration
Super Mario Odyssey 57m
Mario Kart World 56m
Kirby Air Riders 51m
Pikmin 2 49m
Animal Crossing 47m
Overcooked! 2 45m
Pikmin 4 32m
Kirby's Airride 15m
Sonic Origins 5m
SQLite query
SELECT game_name, SUM(duration)/COUNT(DISTINCT date) AS d
FROM sessions
WHERE STRFTIME('%Y', date) = '2025'
GROUP BY game_name
ORDER BY d DESC;

When did I start and stop playing each game?

I only have enough time to focus on one game at a time, so there is a pretty linear history of which game is top-of-mind for me at any one time. From this query we can construct a linear history:

I still want to return to Super Mario Odyssey; I was having a great time with the game! It's just that Kirby Air Riders came out and stole my attention.

Game First played Last played
Pikmin 2 2025-06-01 2025-10-06
Mario Kart World 2025-07-20 2025-11-17
Animal Crossing 2025-07-29 2025-09-08
Sonic Origins 2025-08-11 2025-08-25
Super Mario Odyssey 2025-10-13 2025-10-21
Kirby Air Riders 2025-11-07 2025-12-21
Pikmin 4 2025-11-10 2025-11-12
SQLite query
SELECT game_name,
  MIN(date) AS fp,
  MAX(date)
FROM sessions
WHERE STRFTIME('%Y', date) = '2025'
GROUP BY game_name
ORDER BY fp ASC;

Which game was I most consistently playing?

We can take the span between each game's first and last play dates (the “Span” column below) and use it as a divisor for the number of unique days that game was played. This gives a sense of how often I was playing a game within the time span that I was “active” for it:

Game % Days Played Span
Pikmin 4 100% 2 2
Super Mario Odyssey 63% 5 8
Animal Crossing 51% 21 41
Kirby Air Riders 43% 19 44
Pikmin 2 26% 33 127
Sonic Origins 14% 2 14
Mario Kart World 9% 11 120
SQLite query
SELECT game_name,
  COUNT(DISTINCT date) AS played,
  (
    STRFTIME('%j', MAX(date))
    - STRFTIME('%j', MIN(date))
  ) AS days
FROM sessions
WHERE STRFTIME('%Y', date) = '2025'
GROUP BY game_name
ORDER BY MIN(date) ASC;

If we look at total year gaming “saturation” for 2025 and June-onwards (214 days):

Days Played % Days (2025) % Days (>=June)
89 24% 42%
SQLite query
SELECT COUNT(DISTINCT date)
FROM sessions
WHERE STRFTIME('%Y', date) = '2025';

When did I play games?

Looking at the year, I didn't start playing games on either system this year until June. That lines up with me receiving my GameCube FlippyDrives which I had pre-ordered in 2024. After installing these modifications to my GameCube I began playing games more regularly again :)

Month Duration
June 10h 4m
July 9h 40m
August 18h 26m
September 7h 22m
October 10h 0m
November 15h 5m
December 7h 0m
SQLite query
SELECT STRFTIME('%m', date) AS m, SUM(duration)
FROM sessions
WHERE STRFTIME('%Y', date) = '2025'
GROUP BY m
ORDER BY m ASC;

August was the month with the most play! This was due entirely to playing Animal Crossing Deluxe (~16 hours), a mod by Cuyler for Animal Crossing on the GameCube. Animal Crossing feels best when you play for short sessions each day, which is why I was playing so often.

Game Duration
Animal Crossing 15h 41m
Mario Kart World 2h 15m
Pikmin 2 19m
Sonic Origins 10m
SQLite query
SELECT game_name, SUM(duration)
FROM sessions
WHERE STRFTIME('%Y-%m', date) = '2025-08'
GROUP BY game_name;

Which day of the week did I play most?

Unsurprisingly, weekends tend to be the days with the longest average play sessions. Sunday just barely takes the highest average play duration per day. Wednesday, Thursday, and Friday have the lowest play activity, as these days are reserved for board-game night, seeing family, and date night respectively :)

Day Duration Days Average
Sun 16h 16m 15 1h 5m
Mon 13h 52m 17 48m
Tues 14h 9m 16 53m
Wed 11h 17m 15 45m
Thur 6h 35m 9 43m
Fri 5h 45m 8 43m
Sat 9h 42m 9 1h 4m
SQLite query
SELECT STRFTIME('%w', date) AS day_of_week,
  SUM(duration),
  COUNT(DISTINCT date),
  SUM(duration)/COUNT(DISTINCT date)
FROM sessions
WHERE STRFTIME('%Y', date) = '2025'
GROUP BY day_of_week
ORDER BY day_of_week ASC;


Thanks for keeping RSS alive! ♥

December 29, 2025 12:00 AM UTC

December 28, 2025


Mark Dufour

A (biased) Pure Python Performance Comparison

The following is a performance comparison of several (pure) Python implementations, for a large part of the Shed Skin example set. I left out some of the examples that would result in an unfair comparison (mostly because of randomization), or that were too interactive to measure easily. Obviously this comparison is very biased, and probably unfair in some way to the other projects (though I've tried to be fair, for example by letting PyPy stabilize before measuring).

This first plot shows the speedups versus CPython 3.10, for CPython 3.14, Nuitka, PyPy, and Shed Skin.

Shed Skin is able to speed up the examples by an average factor of about 29 times (not percent, times :)), while PyPy is able to speed up the examples by an average factor of about 16 times. Given that PyPy has its hands tied behind its back trying to support unrestricted Python code, and was not optimized specifically for these examples (that I am aware of), that is actually still quite an impressive result.

As for the few cases where PyPy performs better than Shed Skin, this appears to be mainly because of PyPy being able to optimize away heap allocations for short-lived objects (in many cases, custom Vector(x,y,z) instances). In a few cases also, the STL unordered_map that Shed Skin uses to implement dictionaries appears to perform poorly compared to more modern implementations. Of course it is possible for Shed Skin to improve in these areas with future releases.

Note that some of the examples can run even faster with Shed Skin by providing --nowrap/--nobounds options, which disable wrap-around/bounds-checking respectively. I'm not sure if PyPy has any options to make it run faster, at the cost of certain features (in the distant past there was talk of RPython - does that still exist?).

As the CPython 3.14 and Nuitka results are a bit hard to see in the above plot, here is the same plot but with a logarithmic y-scale:

CPython 3.14 is about 60% faster on average for these examples than CPython 3.10, which to me is actually very promising for the future. While Nuitka outperforms CPython 3.10 by about 30% on average, unfortunately it cannot match the improvements in CPython since.

If there are any CS students out there who would like to help improve Shed Skin, please let me know. I think especially memory optimizations (where PyPy still seems to have an edge) would be a great topic for a Master's Thesis!

December 28, 2025 04:31 AM UTC

December 26, 2025


"Michael Kennedy's Thoughts on Technology"

DevOps Python Supply Chain Security

In my last article, “Python Supply Chain Security Made Easy”, I talked about how to automate pip-audit so you don’t accidentally ship malicious Python packages to production. While there was defense in depth with uv’s delayed installs, there wasn’t much safety beyond that for developers themselves on their machines.

This follow-up fixes that, so even dev machines stay safe.

Defending your dev machine

My recommendation: instead of installing directly into a local virtual environment and then running pip-audit, create a dedicated Docker image meant for testing dependencies with pip-audit in isolation.

Our workflow can go like this.

First, we update your local dependencies file:

uv pip compile requirements.piptools --output-file requirements.txt --exclude-newer "1 week"

This will update the requirements.txt file (or tweak the command to update your uv.lock file instead), but it doesn’t install anything.

Second, run a command that uses this new requirements file inside a temporary Docker container to install the requirements and run pip-audit on them.

Third, only if that pip-audit test succeeds, install the updated requirements into your local venv.

uv pip install -r requirements.txt
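
If you'd rather not run the three steps by hand, here's a minimal Python sketch of the gating. It assumes the Docker-based audit below is available as a script on your PATH named pip-audit-proj (the alias defined later in this article; note that subprocess doesn't see shell aliases, so you'd need a small wrapper script):

import subprocess

def run(cmd: list[str]) -> None:
    # check=True aborts on the first failing step, so nothing is
    # installed locally unless the audit passes.
    subprocess.run(cmd, check=True)

# 1. Update the requirements file without installing anything.
run(["uv", "pip", "compile", "requirements.piptools",
     "--output-file", "requirements.txt"])
# 2. Install and audit the dependencies inside the throwaway container.
run(["pip-audit-proj"])
# 3. Only reached if the audit succeeded.
run(["uv", "pip", "install", "-r", "requirements.txt"])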

The pip-audit docker image

What do we use for our Docker testing image? There are of course a million ways to do this. Here’s one optimized for building Python packages that deeply leverages uv’s and pip-audit’s caching to make subsequent runs much, much faster.

Create a Dockerfile with this content:

# Image for installing python packages with uv and testing with pip-audit
# Saved as Dockerfile
FROM ubuntu:latest

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get autoremove -y

RUN apt-get -y install curl

# Dependencies for building Python packages
RUN apt-get install -y gcc 
RUN apt-get install -y build-essential
RUN apt-get install -y clang
RUN apt-get install -y openssl 
RUN apt-get install -y checkinstall 
RUN apt-get install -y libgdbm-dev 
RUN apt-get install -y libc6-dev
RUN apt-get install -y libtool
RUN apt-get install -y zlib1g-dev
RUN apt-get install -y libffi-dev
RUN apt-get install -y libxslt1-dev

ENV PATH=/venv/bin:$PATH
ENV PATH=/root/.cargo/bin:$PATH
ENV PATH=/root/.local/bin/:$PATH
ENV UV_LINK_MODE=copy

# Install uv
RUN curl -LsSf https://astral.sh/uv/install.sh | sh

# set up a virtual env to use for temp dependencies in isolation.
RUN --mount=type=cache,target=/root/.cache uv venv --python 3.14 /venv
# test that uv is working
RUN uv --version 

WORKDIR "/"

# Install pip-audit
RUN --mount=type=cache,target=/root/.cache uv pip install --python /venv/bin/python3 pip-audit

This installs a bunch of Linux libraries used for edge-case builds of Python packages. It takes a moment, but you only need to build the image once. Then you’ll run it again and again. If you want to use a newer version of Python later, change the version in uv venv --python 3.14 /venv. Even then on rebuilds, the apt-get steps are reused from cache.

Next you build with a fixed tag so you can create aliases to run using this image:

# In the same folder as the Dockerfile above.
docker build -t pipauditdocker .

Finally, we need to run the container with a few bells and whistles. Add caching via a volume so subsequent runs are very fast: -v pip-audit-cache:/root/.cache. And map a volume so whatever working directory you are in will find the local requirements.txt: -v \"\$(pwd)/requirements.txt:/workspace/requirements.txt:ro\"

Here is the alias to add to your .bashrc or .zshrc accomplishing this:

alias pip-audit-proj="echo '🐳 Launching isolated test env in Docker...' && \
docker run --rm \
  -v pip-audit-cache:/root/.cache \
  -v \"\$(pwd)/requirements.txt:/workspace/requirements.txt:ro\" \
  pipauditdocker \
  /bin/bash -c \"echo '📦 Installing requirements from /workspace/requirements.txt...' && \
    uv pip install --quiet -r /workspace/requirements.txt && \
    echo '🔍 Running pip-audit security scan...' && \
    /venv/bin/pip-audit \
      --ignore-vuln CVE-2025-53000 \
      --ignore-vuln PYSEC-2023-242 \
      --skip-editable\""

That’s it! Once you reload your shell, all you have to do is type pip-audit-proj when you’re in the root of the project that contains your requirements.txt file. You should see something like this below. Slow the first time, fast afterwards.

Protecting Docker in production too

Let’s handle one more situation while we are at it. You’re running your Python app IN Docker. Part of the Docker build configures the image and installs your dependencies. We can add a pip-audit check there too:

# Dockerfile for your app (different than validation image above)
# All the steps to copy your app over and configure the image ...

# After creating a venv in /venv and copying your requirements.txt to /app

# Check for any sketchy packages.
# We are using mount rather than a volume because
# we want to cache build time activity, not runtime activity.

RUN --mount=type=cache,target=/root/.cache uv pip install --python /venv/bin/python3 --upgrade pip-audit
RUN --mount=type=cache,target=/root/.cache /venv/bin/pip-audit --ignore-vuln CVE-2025-53000 --ignore-vuln PYSEC-2023-242 --skip-editable

# ENTRYPOINT ... for your app

Conclusion

There you have it. Two birds, one Docker stone. Our first Dockerfile built a reusable Docker image named pipauditdocker to run isolated tests against a requirements file. This second one demonstrates how we can make our docker/docker compose build completely fail if there is a bad dependency, saving us from letting it slip into production.

Cheers
Michael

December 26, 2025 11:53 PM UTC


Seth Michael Larson

Getting started with Playdate on Ubuntu 🟨

Trina got me a Playdate for Christmas this year! I've always been intrigued by this console, as it is highly constrained in terms of resolution and color depth (400x240, 2 colors), but it also provides many helpful resources for game development, such as a software development kit (SDK) and a simulator to quickly test games during development.

I first discovered software programming as an amateur game developer using BYOND, so “returning to my roots” and doing some game development feels like a fun and fulfilling diversion from the current direction software is taking. Plus, I now have a reason to learn a new programming language: Lua!


Running software on the Playdate!

Getting started with Playdate on Ubuntu

Here's what I did to quickly get started with a Playdate development environment on my Ubuntu 24.04 laptop:

That's it, your Playdate development environment should be ready to use.

“Hello, world” on the Playdate

Within source/main.lua put the following Lua code:

import "CoreLibs/graphics"
import "CoreLibs/ui"

-- Localizing commonly used globals
local pd <const> = playdate
local gfx <const> = playdate.graphics

function playdate.update()
    gfx.drawTextAligned(
      "Hello, world",
      200, 30, kTextAlignment.center
    )
end

Try building and running this with the simulator (Ctrl+Shift+B). You should see our "Hello, world" message in the simulator.

Running “Hello, world” on real hardware

Next is getting your game running on an actual Playdate console. Connect the Playdate to your computer using the USB cable and make sure the console is awake.

Start your game in the simulator (Ctrl+Shift+B) and then once the simulator starts select Device > Upload Game to Device in the menus or use the hotkey Ctrl+U.

Uploading the game to the Playdate console takes a few seconds, so be patient. The console will show a message like “Sharing DATA segment with USB. Will reboot when ejected”. You can select the "Home" button in the Playdate console menu to stop the game.

Making a network request

One of my initial hesitations with buying a Playdate was that it didn't originally ship with network connectivity within games, despite supporting Wi-Fi. This is no longer the case, as this year Playdate OS 2.7 added support for HTTP and TCP networking.

So immediately after my "Hello world" game, I wanted to try this new feature. I created the following small application that sends an HTTP request after pressing the A button:

import "CoreLibs/graphics"
import "CoreLibs/ui"

local pd <const> = playdate
local gfx <const> = playdate.graphics
local net <const> = playdate.network

local networkEnabled = false

function networkHttpRequest()
    local host = "sethmlarson.dev"
    local port = 443
    local useHttps = true
    local req = net.http.new(
      host, port, useHttps, "Making an HTTP request"
    )
    local path = "/"
    local headers = {}
    req:get(path, headers)
end

function networkEnabledCallback(err)
    networkEnabled = true
end

function init()
    net.setEnabled(true, networkEnabledCallback)
end

function playdate.update()
    gfx.clear()
    if networkEnabled then
        gfx.drawTextAligned(
          "Network enabled",
          200, 30, kTextAlignment.center
        )
        if pd.buttonJustPressed(pd.kButtonA) then
            networkHttpRequest()
        end
    else
        gfx.drawTextAligned(
          "Network disabled",
          200, 30, kTextAlignment.center
        )
    end
end

init()

First I tried running this program with a local HTTP server on localhost:8080 with useHttps set to false and was able to capture this HTTP request using Wireshark:

0000   47 45 54 20 2f 20 48 54 54 50 2f 31 2e 31 0d 0a   GET / HTTP/1.1..
0010   48 6f 73 74 3a 20 6c 6f 63 61 6c 68 6f 73 74 0d   Host: localhost.
0020   0a 55 73 65 72 2d 41 67 65 6e 74 3a 20 50 6c 61   .User-Agent: Pla
0030   79 64 61 74 65 2f 53 69 6d 0d 0a 43 6f 6e 6e 65   ydate/Sim..Conne
0040   63 74 69 6f 6e 3a 20 63 6c 6f 73 65 0d 0a 0d 0a   ction: close....

So we can see that Playdate HTTP requests are quite minimal, only sending a Host, User-Agent and Connection: close header by default. Keep-Alive and other headers can be optionally configured. The User-Agent for the Playdate simulator was Playdate/Sim.
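
If you want to replicate the local test, any HTTP server would do; here's the quick standard-library option I'd reach for in Python:

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Throwaway server on localhost:8080 to capture the Playdate's request.
HTTPServer(("127.0.0.1", 8080), SimpleHTTPRequestHandler).serve_forever()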

I then tested on real hardware and targeting my own website: sethmlarson.dev:443 with useHttps set to true. This resulted in the same request being sent, with a User-Agent of Playdate/3.0.2. There's no doubt lots of experimentation ahead for what's possible with a networked Playdate. That's all for now, happy cranking!



Thanks for keeping RSS alive! ♥

December 26, 2025 12:00 AM UTC

December 25, 2025


Seth Michael Larson

Blind Carbon Copy (BCC) for SMS

Have you ever wanted the power of email Blind Carbon Copy (BCC), but for SMS? I've wanted this functionality myself for parties and organizing, specifically without needing to use a third-party service. This script automates the difficult parts of drafting and sending a text message to many recipients with SMS URLs and QR codes.

Draft your message, choose your recipients, and then scan-and-send all the QR codes until you're done. Save your command for later to follow-up in different groups.

Source code

Copy-and-paste the following source code into a file named sms-bcc, make the file executable (chmod a+x sms-bcc) and you're ready to start using the script. Requires Python and the qrcode package (pip install qrcode) to run. This script is licensed MIT.

Source code for sms-bcc script
#!/usr/bin/env python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#   "qrcode"
# ]
# ///
# License: MIT
# Copyright 2025, Seth Larson

import argparse
import pathlib
import re
import sys
import urllib.parse

from qrcode.console_scripts import main as qrcode_main

__version__ = "2025.12.26"


def sms_url(recipients: list[str], message: str, mobile_os: str | None = None) -> str:
    """
    Generate an SMS URL from a list of recipients and a message.
    """
    if len(recipients) > 1 and mobile_os is None:
        raise ValueError("Mobile OS required for multi-recipient messages")
    if not recipients:
        raise ValueError("Recipients required")
    message_encoded = urllib.parse.quote(message)
    if mobile_os is None or mobile_os == "android":
        return f"sms:{','.join(recipients)}?body={message_encoded}"
    else:  # mobile_os == "ios"
        return f"sms://open?addresses={','.join(recipients)}&body={message_encoded}"


def parse_contacts(contacts_data: str) -> dict[str, str]:
    """
    Parses a vCard file. Assumes that each contact
    has a full name and telephone number.
    """
    vcard_fn_re = re.compile(r"^FN:(.+)$", re.MULTILINE)
    vcard_tel_re = re.compile(r"^(?:item[0-9]\.)?TEL[^:]*:([ \.\(\)+0-9\-]+)$", re.MULTILINE)
    names_to_tel = {}
    for vcard in contacts_data.split("BEGIN:VCARD"):
        if not (
            (match_fn := vcard_fn_re.search(vcard))
            and (match_tel := vcard_tel_re.search(vcard))
        ):
            continue
        tel = re.sub(r"[^0-9]", "", match_tel.group(1))
        names_to_tel[match_fn.group(1)] = tel

    return names_to_tel


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Blind Carbon Copy (BCC) for SMS"
    )
    parser.add_argument(
        "--contacts",
        type=str,
        required=True,
        help="Path to contacts file in the vCard format",
    )
    parser.add_argument(
        "--recipients",
        type=str,
        nargs="+",
        required=True,
        help="List of recipients pulled from contacts",
    )
    parser.add_argument(
        "--always-recipients",
        type=str,
        nargs="+",
        required=False,
        help="List of recipients to include in every recipient group",
    )
    parser.add_argument(
        "--message",
        type=str,
        required=True,
        help="Message to send",
    )
    parser.add_argument(
        "--mobile-os",
        type=str,
        choices=["ios", "android"],
        required=False,
        default="ios",
        help="Mobile OS, only required for multi-recipient messages",
    )
    args = parser.parse_args(sys.argv[1:])

    contacts_data = pathlib.Path(args.contacts).read_text()
    names_to_tel = parse_contacts(contacts_data)

    message_data = pathlib.Path(args.message).read_text()
    list_of_recipients = [
        [r.strip() for r in recipients.split(",")] for recipients in args.recipients
    ]
    always_recipients = list(args.always_recipients or ())
    if (mobile_os := args.mobile_os) not in (None, "android", "ios"):
        raise ValueError("--mobile-os must be one of 'android' or 'ios'")

    def clear_terminal() -> None:
        print(chr(27) + "[2J")

    for recipients in list_of_recipients:
        recipients.extend(always_recipients)

        # Figure out which telephone numbers to include
        # and exclude. Can be numbers or names.
        recipient_tels = {}
        for recipient in recipients:
            # Last character is a number, probably a telephone number.
            if recipient[-1].isdigit():
                recipient_tels[recipient] = recipient
                continue
            for name, tel in names_to_tel.items():
                if recipient in name:
                    recipient_tels[name] = tel

        # Remove names filtered via '-Name'.
        for recipient in recipients:
            if recipient.startswith("-"):
                recipient_tels = {
                    name: tel
                    for name, tel in recipient_tels.items()
                    if recipient[1:] not in name
                }

        clear_terminal()
        qrcode_data = sms_url(
            sorted(set(recipient_tels.values())), message_data, mobile_os
        )
        qrcode_main(["--error-correction=L", qrcode_data])
        input(
            f"\n\nSending to: {', '.join(sorted(recipient_tels.keys()))}\nScan, send, and press enter to continue."
        )

    clear_terminal()
    print(f"Done sending {len(list_of_recipients)} messages")
    return 0


if __name__ == "__main__":
    sys.exit(main())

How to use

Export your contacts from your phone to a vCard file (.vcf). For iPhones this is done within the contacts app: long-press-and-hold “All Contacts” and select “Export”. This will create a .vcf file that you can transfer to your computer.

Now run the sms-bcc script with --contacts for the .vcf file, draft a message in a file and pass with the --message option, and choose your recipients by their name with the --recipients option.

./sms-bcc \
  --contacts contacts.vcf \
  --recipients Alex,Bob Charlie \
  --message ./message.txt

This will draft the message to two groups: "You, Alex, and Bob" and "You and Charlie". Note how spaces delimit groups of recipients and commas (,) delimit recipient names within a group.

After running this script, a series of QR codes using the sms:// URL scheme will be generated one after another. Scan the QR code to load the message and recipient into your phone, then you can optionally send the message or skip, then press Enter to generate the next QR code.

The --recipients option uses a simple string-contains operation, so I recommend having full names in your contacts to avoid excessive duplicates. You can pass a name with a leading hyphen/minus (-) character to exclude a name from the list of recipients. The below invocation will match people named "Alex" without matching "Alexander":

./sms-bcc --recipients Alex,-Alexander

If you have a spouse or partner that you want to include in every recipient group, use the --always-recipients option:

./sms-bcc \
  --contacts contacts.vcf \
  --recipients Bob Charlie,Dave \
  --always-recipients Alex \
  --message ./message.txt

This will draft the message for "You, Alex, and Bob" and "You, Alex, Charlie, and Dave".

🎄 Merry Christmas and happy organizing! 🎄

Changelog



Thanks for keeping RSS alive! ♥

December 25, 2025 12:00 AM UTC

December 24, 2025


Real Python

LlamaIndex in Python: A RAG Guide With Examples

Discover how to use LlamaIndex with practical examples. This framework helps you build retrieval-augmented generation (RAG) apps using Python. LlamaIndex lets you load your data and documents, create and persist searchable indexes, and query an LLM using your data as context.

In this tutorial, you’ll learn the basics of installing the package, setting AI providers, spinning up a query engine, and running synchronous or asynchronous queries against remote or local models.

By the end of this tutorial, you’ll understand that:

  • You use LlamaIndex to connect your data to LLMs, allowing you to build AI agents, workflows, query engines, and chat engines.
  • You can perform RAG with LlamaIndex to retrieve relevant context at query time, helping the LLM generate grounded answers and minimize hallucinations.

You’ll start by preparing your environment and installing LlamaIndex. From there, you’ll learn how to load your own files, build and save an index, choose different AI providers, and run targeted queries over your data through a query engine.

Get Your Code: Click here to download the free sample code that shows you how to use LlamaIndex in Python.

Take the Quiz: Test your knowledge with our interactive “LlamaIndex in Python: A RAG Guide With Examples” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

LlamaIndex in Python: A RAG Guide With Examples

Take this Python LlamaIndex quiz to test your understanding of index persistence, reloading, and performance gains in RAG applications.

Start Using LlamaIndex

Training or fine-tuning an AI model—like a large language model (LLM)—on your own data can be a complex and resource-intensive process. Instead of modifying the model itself, you can rely on a pattern called retrieval-augmented generation (RAG).

RAG is a technique where the system, at query time, first retrieves relevant external documents or data and then passes them to the LLM as additional context. The model uses this context as a source of truth when generating its answer, which typically makes the response more accurate, up to date, and on topic.

Note: RAG can help reduce hallucinations and prevent models from giving wrong answers. However, recent LLMs are much better at admitting when they don’t know something rather than making up an answer.

This technique also allows LLMs to provide answers to questions that they wouldn’t have been able to answer otherwise—for example, questions about your internal company information, email history, and similar private data.

LlamaIndex is a Python framework that enables you to build AI-powered apps capable of performing RAG. It helps you feed LLMs with your own data through indexing and retrieval tools. Next, you’ll learn the basics of installing, setting up, and using LlamaIndex in your Python projects.

Install and Set Up LlamaIndex

Before installing LlamaIndex, you should create and activate a Python virtual environment. Refer to Python Virtual Environments: A Primer for detailed instructions on how to do this.

Once you have the virtual environment ready, you can install LlamaIndex from the Python Package Index (PyPI):

Shell
(.venv) $ python -m pip install llama-index

This command downloads the framework from PyPI and installs it in your current Python environment. In practice, llama-index is a core starter bundle of packages containing the following:

  • llama-index-core
  • llama-index-llms-openai
  • llama-index-embeddings-openai
  • llama-index-readers-file

As you can see, OpenAI is the default LLM provider for LlamaIndex. In this tutorial, you’ll rely on this default setting, so after installation, you must set up an environment variable called OPENAI_API_KEY that points to a valid OpenAI API key:

Windows PowerShell
(.venv) PS> $ENV:OPENAI_API_KEY = "<your-api-key-here>"
Shell
(.venv) $ export OPENAI_API_KEY="<your-api-key-here>"

With this command, you make the API key accessible under the environment variable OPENAI_API_KEY in your current terminal session. Note that you’ll lose it when you close your terminal. To persist this variable, add the export command to your shell’s configuration file—typically ~/.bashrc or ~/.zshrc on Linux and macOS—or use the System Properties dialog on Windows.

LlamaIndex also supports many other LLMs. For a complete list of models, visit the Available LLM integrations section in the official documentation.

Run a Quick LlamaIndex RAG Example
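
As a rough taste of what such an example looks like, here's a minimal sketch (my own, not the article's code), assuming a ./data folder of documents and the OPENAI_API_KEY variable set as above:

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load documents from a local folder (assumed to exist).
documents = SimpleDirectoryReader("data").load_data()

# Build an in-memory vector index and wrap it in a query engine.
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Relevant chunks are retrieved and passed to the LLM as context.
print(query_engine.query("What are these documents about?"))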

Read the full article at https://realpython.com/llamaindex-examples/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

December 24, 2025 02:00 PM UTC

Quiz: LlamaIndex in Python: A RAG Guide With Examples

In this quiz, you’ll test your understanding of the LlamaIndex in Python: A RAG Guide With Examples tutorial.

By working through this quiz, you’ll revisit how to create and persist an index to disk, review how to reload it, and see why persistence improves performance, lowers costs, saves time, and keeps results consistent.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

December 24, 2025 12:00 PM UTC

December 23, 2025


PyCoder’s Weekly

Issue #714: Narwhals, Selenium, Testing Conundrum, and More (Dec. 23, 2025)

#714 – DECEMBER 23, 2025
View in Browser »

The PyCoder’s Weekly Logo


Writing DataFrame-Agnostic Python Code With Narwhals

If you’re a Python library developer looking to write DataFrame-agnostic code, this tutorial will show how the Narwhals library could give you a solution.
REAL PYTHON

Eliminate Flaky Tests in Selenium

Learn why time.sleep() and WebDriverWait aren’t enough when testing with Selenium and what to do about race conditions caused by UI state changes.
DHIRAJ DAS • Shared by Dhiraj

Which Code Review Tool Catches the Most Python Bugs


We benchmarked the leading AI code review tools against 118 real-world runtime bugs from 45 open-source repos, across 8 languages. Macroscope dominated—catching more bugs with fewer false positives—especially in Python. Check out our benchmark results →
MACROSCOPE sponsor

A Testing Conundrum

Ned presents a useful class that is hard to test thoroughly and his failed attempt to use Hypothesis to do it.
NED BATCHELDER

Python 3.15.0 Alpha 3 Released

CPYTHON DEV BLOG

DjangoCon US Chicago 2026 Call for Proposals

DJANGOCON US

Django Software Foundation Fundraiser

DJANGO SOFTWARE FOUNDATION

Articles & Tutorials

Moving Towards Spec-Driven Development

What are the advantages of spec-driven development compared to vibe coding with an LLM? Are these recent trends a move toward declarative programming? This week on the show, Marc Brooker, VP and Distinguished Engineer at AWS, joins us to discuss specification-driven development and Kiro.
REAL PYTHON podcast

Tap Compare Testing for Service Migration

A common pattern used when migrating from one system to another at scale is “tap compare” or “shadow testing”. This approach involves copying and splitting traffic to ensure good behavior before switching things over.
REDOWAN DELOWAR

Real Python Opens New Live Course Cohorts for 2026

Real Python is enrolling new cohorts for two instructor-led courses: Python for Beginners: Code with Confidence for those just starting out, and Intermediate Python Deep Dive for developers ready to master advanced patterns and OOP. Both feature hands-on projects, expert feedback, and certificates of completion.
REAL PYTHON sponsor

Talk Python in Production

A guest host for Talk Python interviews Michael Kennedy (Talk Python’s creator) about his new book “Talk Python in Production” which talks about the tools and techniques used to host Talk Python and its related sites.
TALK PYTHON podcast

Deliver Code You Have Proven to Work

This opinion piece by Simon talks about what it means to be a responsible developer in the age of AI tooling. In short: you’re still responsible for checking the code works regardless of who/what wrote it.
SIMON WILLISON

What’s New in PySpark 4.0

Discover PySpark 4.0’s game-changing features: 3x faster Arrow UDFs, native Plotly visualization, and dynamic schema UDTFs for flexible data transformations.
CODECUT.AI • Shared by Khuyen Tran

What’s New in Python 3.15

Python 3.15 is actively in development and they’ve already started creating the “What’s new” document. Learn about what is coming in next year’s release.
PYTHON.ORG

Embrace Whitespace

Well placed spaces and line breaks can greatly improve the readability of your Python code. Read on to learn how to write more readable Python.
TREY HUNNER

How to Build the Python Skills That Get You Hired

Build a focused learning plan that helps you identify essential Python skills, assess your strengths, and practice effectively to progress.
REAL PYTHON

Exploring Asynchronous Iterators and Iterables

Learn to build async iterators and iterables in Python to handle async operations efficiently and write cleaner, faster code.
REAL PYTHON course

How I Write Django Views

Kevin talks about why he uses Django’s base View class instead of generic class-based views or function-based ones.
KEVIN RENSKERS

Inline SVGs in Jupyter Notebooks

This quick TIL article shows how to inline SVGs in Jupyter notebooks in two simple steps.
RODRIGO GIRÃO SERRÃO

Projects & Code

snob: A Picky Test Selector

GITHUB.COM/ALEXPASMANTIER • Shared by alex pasmantier

qcrawl: Fast Async Web Crawling & Scraping Framework

GITHUB.COM/CRAWLCORE

JustHTML: Pure Python HTML5 Parser

GITHUB.COM/EMILSTENSTROM • Shared by Emil Stenström

PyArabic: Arabic Language and Text Library

GITHUB.COM/ALMUBARMIJ

Django LiveView: Framework for Realtime SPAs

GITHUB.COM/DJANGO-LIVEVIEW

Events

PyDelhi User Group Meetup

December 27, 2025
MEETUP.COM

Python Sheffield

December 30, 2025
GOOGLE.COM

Python Southwest Florida (PySWFL)

December 31, 2025
MEETUP.COM


Happy Pythoning!
This was PyCoder’s Weekly Issue #714.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

December 23, 2025 07:30 PM UTC


Reuven Lerner

Reuven’s 2025 in review

Can you believe that 2025 is almost over? It was full of big events for me, and yet it also whizzed past at breakneck speed.

And so, before we start 2026, I want to share a whole bunch of updates on what I’ve done over the last 12 months — and where I plan to go in 2026, as well.

LernerPython

The biggest thing for me this year was the new LernerPython site. This site supports functionality that was previously impossible — and because it’s mine, it also allows me to fix problems and customize things more easily. I look forward to extending and customizing it even more in the coming months. Thanks to everyone who sent me bug reports about the site and course content during this transition period.

Among other things, the site automatically integrates with our private Discord server, which is our hub for not only questions and discussions, but also calendar invites to live Zoom sessions. It’s also where I save recordings from our Zoom meetings.

The site is also integrated with Bamboo Weekly, ensuring that LernerPython+data members get a complimentary subscription without the need for manual intervention.

In 2025, I held live office hours on Zoom nearly every month for Python subscribers, and separate office hours nearly every month for Pandas subscribers. I really enjoy those sessions! Keep bringing your questions, thoughts, and stories.

I also held special, members-only lectures just about every month. These ranged in topic from the Unix shell to Marimo to dataclasses to concurrency. Thanks to those of you who attended, and especially those who suggested lecture topics. Recordings from these sessions are in the “meeting recordings” channels on Discord.

This year marked my start as a preferred partner with the Python Institute, a certification agency for Python. Members of LernerPython get discounts on their exams, making it easier (I hope!) to get good jobs in the Python world. In 2026, I plan to start a special monthly session of office hours to help you prepare for these exams.

With the new LernerPython.com now ready, I’ll record some new courses in 2026, as well as re-record some of the older, existing ones.

I’ll also bump up visibility of my Personal Python Coaching program, for people who just want an hour of my time for strategy, code review, or a clearer understanding of Python, Git, and Pandas topics.

Intensive training — PythonDAB and HOPPy

My best, longest, and most intensive course is PythonDAB, the Python Data Analytics Bootcamp. Over four months, participants learn Python, Git, and Pandas, meeting twice each week, solving exercises, and digging into the nuts and bolts of Python and Pandas. Cohort 8 started earlier this month, and the sessions are (as always) full of insightful questions and comments. I expect to open PythonDAB 9 in late May or early June of 2026 — and if you think it’s a good fit for you, I hope that you’ll apply, or at least ask me about it!

This year marked the start of a new class: HOPPy, Hands-on Projects in Python. HOPPy is about learning through doing, building a project that’s meaningful to you — but within the general theme of that specific HOPPy cohort. People created some amazing applications, from a communications system for health clinics to a personal blood-pressure monitor to a bank status summarizer.

HOPPy is open (for an additional fee) to LernerPython members, and is included in the price for PythonDAB participants. I will be running 4-5 HOPPy cohorts in 2026, including one in January about data dashboards. More info is coming soon — but if you’ve always wanted to learn more in a mentored environment, and as a bonus add a new product to your personal portfolio, then HOPPy is just what you’re looking for.

Corporate training

I gave a good deal of training classes at companies in 2025, including at Apple, Arm, Cisco, Intel, and Sandisk. (I also gave a number of online classes for O’Reilly’s platform.) These range from “Python for non-programmers” to intro and advanced Python, to intro Pandas, to my “Python Practice Workshop” and “Pandas Practice Workshop” one-day courses.

If your team wants to level up its Python skills, let’s chat! I’d love to hear more about your team’s needs, and what kind of custom class would work best for you.

A number of companies also joined LernerPython using my team membership feature, allowing a manager to control the allocation of seats.

Conferences

I can’t get enough of Python conferences, which combine serious learning with friendly people. This year, I attended a number of conferences in person:

I also spoke at a number of online user groups, meetups, and regional conferences, including Python Ho (in Ghana) and the Marimo user community.

If you run a user group or conference, and would like to have me speak, please don’t hesitate to reach out!

I’ve already signed the contract to sponsor PyCon US 2026 in Long Beach, and I’ve submitted several talk and tutorial proposals. I hope to see you there!

Books

When I finished Pandas Workout last year, I wasn’t sure if I really wanted to write another book. So of course, I found myself working on two books this year:

Newsletters

As you might know, I publish three weekly newsletters:

This year, I published a new, free e-mail course about uv, called “uv crash course,” taken from some recent editions of Better Developers. You can check it out at https://uvCrashCourse.com .

If you’re enjoying one or more of my newsletters, please tell others about them and encourage them to subscribe! 

And if there are specific topics you would like me to cover? I’m always happy to hear from readers.

YouTube and social media

I’ve been especially active on YouTube this year, at https://YouTube.com/reuvenlerner, with about 60 new videos published about Python, Pandas, Git, Jupyter, and Marimo.

My most recent addition is a new playlist about Pandas 3. I’m adding new videos every day, and hope to get a good collection in place before Pandas 3 is released in the near future.

I also put the entire “Python for non-programmers” course (15 videos) and “Ace Python Interviews” course (50 videos) on my YouTube channel.

I’ve mainly been posting to Bluesky and LinkedIn, but I’ll often mirror postings to X (aka Twitter), Threads, and Fosstodon.

My blog has taken a back seat to other channels over the last few years, but I did find some reasons to post in 2025. Among my more interesting postings:

Podcasts

I believe that I only appeared on two podcasts this year — and both were episodes of Talk Python! I appeared on episode 503 in April, about PyArrow and Pandas (https://talkpython.fm/episodes/show/503/the-pyarrow-revolution), and more recently appeared on a panel discussion reviewing the year in Python news (https://www.youtube.com/watch?v=PfRCbeOrUd8) .

Several personal notes, and a request

The last two years have been difficult in Israel. I’m relieved that the war with Hamas (and related conflicts with Hezbollah, Yemen, and Iran) are largely over. And I hope that we can now work to bring about peace, prosperity, freedom, and coexistence between Israelis and our neighbors, most especially the Palestinians.

The missile alerts and attacks, which regularly woke us up for the better part of two years, and which caused untold death, injury, and destruction, were one of the more terrifying periods I’ve ever lived through. Of course, I know that things were also bad for many Palestinian civilians.

My family donates to Israeli organizations that promote the rule of law, democracy, religious pluralism, and peacemaking with our neighbors — and while it’s easy to give up hope that things will improve, I refuse to do so. We can and should try to make a difference in the world, even if it’s just a small one.

I appreciate the very large, warm outpouring of care and support that I received throughout the last two years from so many of you. It really means a lot.

Beyond Israel, I’ve been watching developments in the US with concern. In particular, it’s quite upsetting to see the wholescale destruction of science, engineering, and medical research in the US. As a regular consumer of US government data (for Bamboo Weekly), the degree to which that data is no longer considered the most reliable and nonpartisan in the world is a grave disappointment — and a professional frustration.

If you’re reading this, then the Trump administration’s policies have affected you, too: The Python Software Foundation recently turned down a $1.5 million grant for increased Python security. That’s because the grant required the PSF give up its efforts to make Python available to everyone, no matter who they are. 

If you’ve gotten $100 of value out of Python in the last year, then I ask that you join the PSF as a paid member. If even 5 percent of Python users were to join the PSF, that would reduce or eliminate Python’s dependence on any one government or organization, and allow it to concentrate on its goals. Joining the PSF also gives you the right to vote in annual elections, which means choosing the people who will set Python’s priorities over the coming years.

Thanks again for your subscriptions, support, friendly notes, and bug reports. I look forward to a new year of learning even more new things, of meeting more interesting, smart people, of serving your learning needs, and of helping make our world just a bit friendlier, closer, and more peaceful.

Best wishes for a great 2026!

Reuven

The post Reuven’s 2025 in review appeared first on Reuven Lerner.

December 23, 2025 02:44 PM UTC


Real Python

Reading User Input From the Keyboard With Python

You may often want to make your Python programs more interactive by responding dynamically to input from the user. Learning how to read user input from the keyboard unlocks exciting possibilities and can make your code far more useful.

The ability to gather input from the keyboard with Python allows you to build programs that can respond uniquely based on the preferences, decisions, or data provided by different users. By fetching input and assigning it to variables, your code can react to adjustable conditions rather than just executing static logic flows. This personalizes programs to individual users.

The input() function is the simplest way to get keyboard data from the user in Python. When called, it asks the user for input with a prompt that you specify, and it waits for the user to type a response and press the Enter key before continuing. This response string is returned by input() so you can save it to a variable or use it directly.
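
For example, here's a two-line program that reacts to what the user types:

# Prompt, wait for Enter, and capture the response as a string.
name = input("What's your name? ")
print(f"Hello, {name}!")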

Using only Python, you can start building interactive programs that accept customizable data from the user right within the terminal. Taking user input is an essential skill that unlocks more dynamic Python coding and allows you to elevate simple scripts into personalized applications.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

December 23, 2025 02:00 PM UTC


Hugo van Kemenade

And now for something completely different

Starting in 2019, Python 3.8 and 3.9 release manager Łukasz Langa added a new section to the release notes called “And now for something completely different” with a sketch transcript from Monty Python.

For Python 3.10 and 3.11, the next release manager Pablo Galindo Salgado continued the section but included astrophysics facts.

For Python 3.12, the next RM Thomas Wouters shared poems (and took a break for 3.13).

And for Python 3.14, I’m doing all things π, pie and [mag]pie.

Here’s a collection of my different things for the first year (and a bit) of Python 3.14.

alpha 1 #

2024-10-15

π (or pi) is a mathematical constant, approximately 3.14, for the ratio of a circle’s circumference to its diameter. It is an irrational number, which means it cannot be written as a simple fraction of two integers. When written as a decimal, its digits go on forever without ever repeating a pattern.

Here’s 76 digits of π:

3.141592653589793238462643383279502884197169399375105820974944592307816406286
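
(If you'd like to reproduce these digits in Python, the third-party mpmath library, assuming you have it installed, can print π to arbitrary precision:)

from mpmath import mp

mp.dps = 76   # significant digits: the leading 3 plus 75 decimals
print(mp.pi)  # 3.14159265358979323846...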

Piphilology is the creation of mnemonics to help remember digits of π.

In a pi-poem, or “piem”, the number of letters in each word equals the corresponding digit. This covers 9 digits, 3.14159265:

How I wish I could recollect pi easily today!

One of the most well-known covers 15 digits, 3.14159265358979:

How I want a drink, alcoholic of course, after the heavy chapters involving quantum mechanics!

Here’s a 35-word piem in the shape of a circle, 3.1415926535897932384626433832795728:

It’s a fact A ratio immutable Of circle round and width, Produces geometry’s deepest conundrum. For as the numerals stay random, No repeat lets out its presence, Yet it forever stretches forth. Nothing to eternity.
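
Since this is a Python planet, here's a tiny sketch that decodes a piem by counting only the letters in each word (mapping 10-letter words to the digit 0 is a common convention I've assumed):

def piem_to_digits(piem: str) -> str:
    # Count letters per word, ignoring punctuation; 10 letters -> 0.
    counts = (sum(ch.isalpha() for ch in word) for word in piem.split())
    return "".join(str(n % 10) for n in counts)

print(piem_to_digits("How I wish I could recollect pi easily today!"))
# 314159265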

The Guinness World Record for memorising the most digits is held by Rajveer Meena, who recited 70,000 digits blindfold in 2015. The unofficial record is held by Akira Haraguchi who recited 100,000 digits in 2006.

alpha 2 #

2024-11-19

Ludolph van Ceulen (1540-1610) was a fencing and mathematics teacher in Leiden, Netherlands, and spent around 25 years calculating π (or pi), using essentially the same methods Archimedes employed some seventeen hundred years earlier.

Archimedes estimated π by calculating the circumferences of polygons that fit just inside and outside of a circle, reasoning the circumference of the circle lies between these two values. Archimedes went up to polygons with 96 sides, for a value between 3.1408 and 3.1428, which is accurate to two decimal places.
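
Here's a small Python illustration of that bracketing (it cheats by using math.pi to construct the angles, so it demonstrates the bounds rather than deriving them):

import math

def polygon_pi_bounds(sides: int) -> tuple[float, float]:
    # Half-perimeters of regular polygons inscribed in and
    # circumscribed around a unit circle bracket pi.
    inscribed = sides * math.sin(math.pi / sides)
    circumscribed = sides * math.tan(math.pi / sides)
    return inscribed, circumscribed

print(polygon_pi_bounds(96))  # (3.1410..., 3.1427...)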

Van Ceulen used a polygon with half a billion sides. He published a 20-decimal value in his 1596 book Vanden Circkel (“On the Circle”), and later expanded it to 35 decimals:

3.14159265358979323846264338327950288

Van Ceulen’s 20 digits is more than enough precision for any conceivable practical purpose. For example, even if a printed circle was perfect down to the atomic scale, the thermal vibrations of the molecules of ink would make most of those digits physically meaningless. NASA Jet Propulsion Laboratory’s highest-accuracy calculations, for interplanetary navigation, use 15 decimals: 3.141592653589793.

At Van Ceulen’s request, his upper and lower bounds for π were engraved on his tombstone in Leiden. The tombstone was eventually lost but restored in 2000. In the Netherlands and Germany, π is sometimes referred to as the “Ludolphine number”, after Van Ceulen.

alpha 3 #

2024-12-17

A mince pie is a small, round covered tart filled with “mincemeat”, usually eaten during the Christmas season – the UK consumes some 800 million each Christmas. Mincemeat is a mixture of things like apple, dried fruits, candied peel and spices, and originally would have contained meat chopped small, but rarely nowadays. They are often served warm with brandy butter.

According to the Oxford English Dictionary, the earliest mention of Christmas mince pies is by Thomas Dekker, writing in the aftermath of the 1603 London plague, in Newes from Graues-end: Sent to Nobody (1604):

Ten thousand in London swore to feast their neighbors with nothing but plum-porredge, and mince-pyes all Christmas.

Here’s a meaty recipe from Rare and Excellent Receipts, Experienc’d and Taught by Mrs Mary Tillinghast and now Printed for the Use of her Scholars Only (1678):

XV. How to make Mince-pies.

To every pound of Meat, take two pound of beef Suet, a pound of Corrants, and a quarter of an Ounce of Cinnamon, one Nutmeg, a little beaten Mace, some beaten Colves, a little Sack & Rose-water, two large Pippins, some Orange and Lemon peel cut very thin, and shred very small, a few beaten Carraway-seeds, if you love them the Juyce of half a Lemon squez’d into this quantity of meat; for Sugar, sweeten it to your relish; then mix all these together and fill your Pie. The best meat for Pies is Neats-Tongues, or a leg of Veal; you may make them of a leg of Mutton if you please; the meat must be parboyl’d if you do not spend it presently; but if it be for present use, you may do it raw, and the Pies will be the better.

alpha 4 #

2025-01-14

In Python, you can use Greek letters as constants. For example:

from math import pi as π

def circumference(radius: float) -> float:
    return 2 * π * radius

print(circumference(6378.137))  # 40075.016685578485

alpha 5 #

2025-02-11

2025-01-29 marked the start of a new lunar year, the Year of the Snake :snake: (and the Year of Python?).

For centuries, π was often approximated as 3 in China. Some time between the years 1 and 5 CE, astronomer, librarian, mathematician and politician Liu Xin (劉歆) calculated π as 3.154.

Around 130 CE, mathematician, astronomer, and geographer Zhang Heng (張衡, 78–139) compared the celestial circle with the diameter of the earth as 736:232 to get 3.1724. He also came up with a formula for the ratio between a cube and inscribed sphere as 8:5, implying the ratio of a square’s area to an inscribed circle is √8:√5. From this, he calculated π as √10 (~3.162).

Third century mathematician Liu Hui (刘徽) came up with an algorithm for calculating π iteratively: calculate the area of a polygon inscribed in a circle, then as the number of sides of the polygon is increased, the area becomes closer to that of the circle, from which you can approximate π.

This algorithm is similar to the method used by Archimedes in the 3rd century BCE and Ludolph van Ceulen in the 16th century CE (see 3.14.0a2 release notes), but Archimedes only went up to a 96-sided polygon (96-gon). Liu Hui went up to a 192-gon to approximate π as 157/50 (3.14) and later a 3072-gon for 3.14159.

Liu Hui wrote a commentary on the book The Nine Chapters on the Mathematical Art, which included his π approximations.

In the fifth century, astronomer, inventor, mathematician, politician, and writer Zu Chongzhi (祖沖之, 429–500) used Liu Hui’s algorithm to inscribe a 12,288-gon to compute π between 3.1415926 and 3.1415927, correct to seven decimal places. This was more accurate than Hellenistic calculations and wouldn’t be improved upon for 900 years.

Happy Year of the Snake!

alpha 6 #

2025-03-14

March 14 is celebrated as pi day, because 3.14 is an approximation of π. The day is observed by eating pies (savoury and/or sweet) and celebrating π. The first pi day was organised by physicist and tinkerer Larry Shaw of the San Francisco Exploratorium in 1988. It is also the International Day of Mathematics and Albert Einstein’s birthday. Let’s all eat some pie, recite some π, install and test some py, and wish a happy birthday to Albert, Loren and all the other pi day children!

alpha 7 #

2025-04-08

On Saturday, 5th April, 3.141592653589793 months of the year had elapsed.

beta 1 #

2025-05-07

The mathematical constant pi is represented by the Greek letter π and represents the ratio of a circle’s circumference to its diameter. The first person to use π as a symbol for this ratio was Welsh self-taught mathematician William Jones in 1706. He was a farmer’s son born in Llanfihangel Tre’r Beirdd on Anglesey (Ynys Môn) in 1675 and only received a basic education at a local charity school. However, the owner of his parents’ farm noticed his mathematical ability and arranged for him to move to London to work in a bank.

By age 20, he served at sea in the Royal Navy, teaching sailors mathematics and helping with the ship’s navigation. On return to London seven years later, he became a maths teacher in coffee houses and a private tutor. In 1706, Jones published Synopsis Palmariorum Matheseos which used the symbol π for the ratio of a circle’s circumference to diameter (hunt for it on pages 243 and 263 or here). Jones was also the first person to realise π is an irrational number, meaning it can be written as a decimal number that goes on forever but cannot be written as a fraction of two integers.

But why π? It’s thought Jones used the Greek letter π because it’s the first letter in perimetron or perimeter. Jones was the first to use π for our familiar ratio, but wasn’t the first to use it as part of a ratio. William Oughtred, in his 1631 Clavis Mathematicae (The Key of Mathematics), used π/δ to represent what we now call pi. His π was the circumference, not the ratio of circumference to diameter. James Gregory, in his 1668 Geometriae Pars Universalis (The Universal Part of Geometry), used π/ρ instead, where ρ is the radius, making the ratio 6.28… or τ. After Jones, Leonhard Euler used π for 6.28…, and also p for 3.14…, before settling on and popularising π for the famous ratio.

beta 2

2025-05-26

In 1897, the State of Indiana almost passed a bill defining π as 3.2.

Of course, it’s not that simple.

Edwin J. Goodwin, M.D., claimed to have come up with a solution to an ancient geometrical problem called squaring the circle, first proposed in Greek mathematics. It involves trying to draw a circle and a square with the same area, using only a compass and a straight edge. It turns out to be impossible because π is transcendental (and this had been proved just 13 years earlier by Ferdinand von Lindemann), but Goodwin fudged things so the value of π was 3.2 (his writings have included at least nine different values of π: including 4, 3.236, 3.232, 3.2325… and even 9.2376…).

Goodwin had copyrighted his proof and offered it to the State of Indiana to use in their educational textbooks without paying royalties, provided they endorsed it. And so Indiana Bill No. 246 was introduced to the House on 18th January 1897. It was not understood and was initially referred to the House Committee on Canals, also called the Committee on Swamp Lands. They then referred it to the Committee on Education, who duly recommended on 2nd February that “said bill do pass”. It passed its second reading on the 5th, and the education chair moved that they suspend the constitutional rule that required bills to be read on three separate days. This passed 72-0, and the bill itself passed 67-0.

The bill was referred to the Senate on 10th February, had its first reading on the 11th, and was referred to the Committee on Temperance, whose chair on the 12th recommended “that said bill do pass”.

A mathematics professor, Clarence Abiathar Waldo, happened to be in the State Capitol on the day the House passed the bill and walked in during the debate to hear an ex-teacher argue:

The case is perfectly simple. If we pass this bill which establishes a new and correct value for pi, the author offers to our state without cost the use of his discovery and its free publication in our school text books, while everyone else must pay him a royalty.

Waldo ensured the senators were “properly coached”; and on the 12th, during the second reading, after an unsuccessful attempt to amend the bill, it was postponed indefinitely. But not before the senators had some fun.

The Indiana News reported on the 13th:

…the bill was brought up and made fun of. The Senators made bad puns about it, ridiculed it and laughed over it. The fun lasted half an hour. Senator Hubbell said that it was not meet for the Senate, which was costing the State $250 a day, to waste its time in such frivolity. He said that in reading the leading newspapers of Chicago and the East, he found that the Indiana State Legislature had laid itself open to ridicule by the action already taken on the bill. He thought consideration of such a proposition was not dignified or worthy of the Senate. He moved the indefinite postponement of the bill, and the motion carried.

beta 3

2025-06-17

If you’re heading out to sea, remember the Maritime Approximation:

π mph = e knots
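
It checks out surprisingly well in Python (a nautical mile is 1.852 km, so 1 knot ≈ 1.150779 mph):

from math import e, pi

MPH_PER_KNOT = 1.150779   # statute miles per hour, per knot
print(e * MPH_PER_KNOT)   # 3.1282... ≈ π
print(pi / e)             # 1.1557... ≈ the true conversion factor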

beta 4

2025-07-08

All this talk of π and yet some say π is wrong. Tau Day (June 28th, 6/28 in the US) celebrates τ as the “true circle constant”, as the ratio of a circle’s circumference to its radius, C/r = 6.283185… The Tau Manifesto declares π “a confusing and unnatural choice for the circle constant”, in part because “2π occurs with astonishing frequency throughout mathematics”.

If you wish to embrace τ, the good news is that PEP 628 added math.tau to Python 3.6 in 2016:

When working with radians, it is trivial to convert any given fraction of a circle to a value in radians in terms of tau. A quarter circle is tau/4, a half circle is tau/2, seven 25ths is 7*tau/25, etc. In contrast with the equivalent expressions in terms of pi (pi/2, pi, 14*pi/25), the unnecessary and needlessly confusing multiplication by two is gone.
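
For example:

from math import pi, tau

print(tau == 2 * pi)  # True
print(tau / 4)        # a quarter circle in radians: 1.5707963267948966
print(7 * tau / 25)   # seven 25ths of a circle, no doubling required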

release candidate 1

2025-07-22

Today, 22nd July, is Pi Approximation Day, because 22/7 is a common approximation of π and closer to π than 3.14.

22/7 is a Diophantine approximation, named after Diophantus of Alexandria (3rd century CE), which is a way of estimating a real number as a ratio of two integers. 22/7 has been known since antiquity; Archimedes (3rd century BCE) wrote the first known proof that 22/7 overestimates π by comparing a 96-sided polygon to the circle it circumscribes.

Another approximation is 355/113. In Chinese mathematics, 22/7 and 355/113 are respectively known as Yuelü (约率; yuēlǜ; “approximate ratio”) and Milü (密率; mìlǜ; “close ratio”).
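
Python’s fractions module makes it easy to see just how good these approximations are:

from fractions import Fraction
from math import pi

for approx in (Fraction(22, 7), Fraction(355, 113)):
    error = abs(float(approx) - pi)
    print(f'{approx} = {float(approx)} (error {error:.2e})')

# 22/7 = 3.142857142857143 (error 1.26e-03)
# 355/113 = 3.1415929203539825 (error 2.67e-07)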

Happy Pi Approximation Day!

release candidate 2

2025-08-14

The magpie, Pica pica in Latin, is a black and white bird in the crow family, known for its chattering call.

The first-known use in English is from a 1589 poem, where magpie is spelled “magpy” and cuckoo is “cookow”:

Th[e]y fly to wood like breeding hauke, And leave old neighbours loue, They pearch themselves in syluane lodge, And soare in th’ aire aboue. There : magpy teacheth them to chat, And cookow soone doth hit them pat.

The name comes from Mag, short for Margery or Margaret (compare robin redbreast, jenny wren, and its corvid relative jackdaw); and pie, a magpie or other bird with black and white (or pied) plumage. The sea-pie (1552) is the oystercatcher, while the grey pie (1678) and murdering pie (1688) are the great grey shrike. Other birds include the yellow and black pie, red-billed pie, wandering tree-pie, and river pie. The rain-pie, wood-pie and French pie are woodpeckers.

Pie on its own dates to before 1225, and comes from the Latin name for the bird, pica.

release candidate 3

2025-09-18

According to Pablo Galindo Salgado at PyCon Greece:

There are things that are supercool indeed, like for instance, this is one of the results that I’m more proud about. This equation over here, which you don’t need to understand, you don’t need to be scared about, but this equation here tells what is the maximum time that it takes for a ray of light to fall into a black hole. And as you can see the math is quite complicated but the answer is quite simple: it’s 2π times the mass of the black hole. So if you normalise by the mass of the black hole, the answer is 2π. And because there is nothing specific about your election of things in this formula, this formula is universal. It means it doesn’t depend on anything other than nature itself. Which means that you can use this as a definition of π. This is a valid alternative definition of the number π. It’s literally half the maximum time it takes to fall into a black hole, which is kind of crazy. So next time someone asks you what π means you can just drop this thing and impress them quite a lot. Maybe Hugo could use this information to put it into the release notes of πthon [yes, I can, thank you!].

3.14.0 (final)

2025-10-07

Edgar Allan Poe died on 7th October 1849.

As we all recall from 3.14.0a1, piphilology is the creation of mnemonics to help memorise the digits of π, and the number of letters in each word in a pi-poem (or “piem”) successively correspond to the digits of π.

In 1995, Mike Keith, an American mathematician and author of constrained writing, retold Poe’s The Raven as a 740-word piem. Here are the first two stanzas of Near A Raven:

Poe, E. Near a Raven

Midnights so dreary, tired and weary. Silently pondering volumes extolling all by-now obsolete lore. During my rather long nap - the weirdest tap! An ominous vibrating sound disturbing my chamber’s antedoor. “This”, I whispered quietly, “I ignore”.

Perfectly, the intellect remembers: the ghostly fires, a glittering ember. Inflamed by lightning’s outbursts, windows cast penumbras upon this floor. Sorrowful, as one mistreated, unhappy thoughts I heeded: That inimitable lesson in elegance - Lenore - Is delighting, exciting…nevermore.
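
If you’d like to check a piem against the digits programmatically, here’s a small helper using one common convention (strip punctuation, count letters, and map ten-letter words to 0):

def piem_digits(piem: str) -> str:
    words = (word.strip('.,!?;:"()’-') for word in piem.split())
    return ''.join(str(len(word) % 10) for word in words if word)

print(piem_digits('Poe, E. Near a Raven Midnights so dreary, tired and weary.'))
# 31415926535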

3.14.1

2025-12-02

Seki Takakazu (関 孝和; c. March 1642 – December 5, 1708) was a Japanese mathematician and samurai who laid the foundations of Japanese mathematics, later known as wasan (和算, from wa (“Japanese”) and san (“calculation”)).

Seki was a contemporary of Isaac Newton and Gottfried Leibniz but worked independently. He created a new algebraic system, worked on infinitesimal calculus, and is credited with the discovery of Bernoulli numbers (before Bernoulli’s birth).

Seki also calculated π to 11 decimal places using a polygon with 131,072 sides inscribed within a circle, using an acceleration method now known as Aitken’s delta-squared process, which was rediscovered by Alexander Aitken in 1926.
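
Here’s a rough sketch of the idea in Python, accelerating inscribed-polygon half-perimeters (an illustration of Δ² extrapolation, not Seki’s actual computation):

from math import sqrt

def half_perimeters(doublings: int):
    # Half-perimeters of polygons inscribed in a unit circle,
    # starting from a hexagon and doubling the side count each step.
    sides, length = 6, 1.0
    for _ in range(doublings + 1):
        yield sides * length / 2
        length = sqrt(2 - sqrt(4 - length ** 2))
        sides *= 2

def aitken(x0: float, x1: float, x2: float) -> float:
    # Aitken's Δ² process: extrapolate from three successive terms.
    return x2 - (x2 - x1) ** 2 / (x2 - 2 * x1 + x0)

p = list(half_perimeters(6))        # up to a 384-gon
print(p[-1])                        # 384-gon alone: 3.141557...
print(aitken(p[-3], p[-2], p[-1]))  # accelerated: 3.14159265...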


Header photo: A scan of Seki Takakazu’s posthumous Katsuyō Sanpō (1712) showing calculations of π.

December 23, 2025 01:03 PM UTC


Real Python

Quiz: Recursion in Python: An Introduction

In this quiz, you’ll test your understanding of Recursion in Python.

By working through this quiz, you’ll revisit what recursion is, how base and recursive cases work, when recursion is a good fit for a problem, and when an iterative approach fits.
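
As a quick refresher, a recursive function needs a base case that stops the recursion and a recursive case that moves toward it:

def countdown(n: int) -> None:
    if n <= 0:     # base case: stop recursing
        print('Liftoff!')
    else:          # recursive case: move toward the base case
        print(n)
        countdown(n - 1)

countdown(3)  # prints 3, 2, 1, Liftoff!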



December 23, 2025 12:00 PM UTC


"Michael Kennedy's Thoughts on Technology"

Python Supply Chain Security Made Easy

Maybe you’ve heard that hackers have been trying to take advantage of open source software to inject code into your machine and, in the worst case, even into the machines of the people who consume your libraries or applications. In this quick post, I’ll show you how to integrate Python’s “official” package scanning technology directly into your continuous integration and your project’s unit tests. pip-audit is maintained in part by Trail of Bits with support from Google, and it’s part of the PyPA organization.

Why this matters

Here are five recent, high-danger PyPI supply chain attacks where “pip install” can turn into “pip install a backdoor.” Afterwards, we talk about how to scan for these attacks and prevent them from reaching your users.

Compromised ultralytics releases delivered the XMRig coinminer

What happened: A malicious version (8.3.41) of the widely-used ultralytics package was published to PyPI, containing code that downloaded the XMRig coinminer. Follow-on versions also carried the malicious downloader, and the writeup attributes the initial compromise to a GitHub Actions script injection, plus later abuse consistent with a stolen PyPI API token. Source: ReversingLabs

Campaign of fake packages stealing cloud access tokens, 14,100+ downloads before removal

What happened: Researchers reported multiple bogus PyPI libraries (including “time-related utilities”) designed to exfiltrate cloud access tokens, with the campaign exceeding 14,100 downloads before takedown. If those tokens are real, this can turn into cloud account takeover. Source: The Hacker News

Typosquatting and name-confusion targeting colorama, with remote control and data theft payloads

What happened: A campaign uploaded lookalike package names to PyPI to catch developers intending to install colorama, with payloads described as enabling persistent remote access/remote control plus harvesting and exfiltration of sensitive data. High danger mainly because colorama is popular and typos happen. Source: Checkmarx

PyPI credential-phishing led to real account compromise and malicious releases of a legit project (num2words)

What happened: PyPI reported an email phishing campaign using a lookalike domain; 4 accounts were successfully phished, attacker-generated API tokens were revoked, and malicious releases of num2words were uploaded then removed. This is the “steal maintainer creds, ship malware via trusted package name” playbook. Source: Python Package Index Blog

SilentSync RAT delivered via malicious PyPI packages (sisaws, secmeasure)

What happened: Zscaler documented malicious packages (including typosquatting) that deliver a Python-based remote access trojan (RAT) with command execution, file exfiltration, screen capture, and browser data theft (credentials, cookies, etc.). Source: Zscaler

Integrating pip-audit

Those are definitely scary situations. I’m sure you’ve heard about typosquatting and how annoying that can be. Caution will save you there. Where caution will not save you is when a legitimate package has its supply chain taken over. Often it looks like this: a package you use depends on another package whose maintainer was phished, and now everything that uses that library carries the vulnerability forward.

Enter pip-audit.

pip-audit is great because you can just run it on the command line. It will check everything in your virtual environment or requirements files against PyPA’s official vulnerability database and tell you if anything has a known issue or malicious release.
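
For example, either of these will do it (the -r flag audits a requirements file instead of the currently active environment):

pip-audit
pip-audit -r requirements.txt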

You could even set up a GitHub Action to do so, and I wouldn’t recommend against that at all. But it’s also valuable to make this check happen on developers’ machines. That’s a simple two-step process:

  1. Add pip-audit to your project’s development dependencies or install it globally with uv tool install pip-audit.
  2. Create a unit test that simply shells out to execute pip-audit and fails the test if an issue is found.

Part one’s easy. Part two takes a little bit more work. That’s okay, because I got it for you. Just download the file here and drop it in your pytest test directory:

test_pypi_security_audit.py

Here’s a small segment to give you a sense of what’s involved.

import subprocess
import sys
from pathlib import Path

import pytest


def test_pip_audit_no_vulnerabilities():
    # Setup: assumes this test file lives in a tests/ directory
    # one level below the project root (adjust to your layout).
    project_root = Path(__file__).resolve().parent.parent

    # Run pip-audit with JSON output for easier parsing
    try:
        result = subprocess.run(
            [
                sys.executable,
                '-m',
                'pip_audit',
                '--format=json',
                '--progress-spinner=off',
                '--ignore-vuln',
                'CVE-2025-53000',  # example of skipping an irrelevant CVE
                '--skip-editable',  # don't audit your own package in dev
            ],
            cwd=project_root,
            capture_output=True,
            text=True,
            timeout=120,  # 2 minute timeout
        )
    except subprocess.TimeoutExpired:
        pytest.fail('pip-audit command timed out after 120 seconds')
    except FileNotFoundError:
        pytest.fail('pip-audit not installed or not accessible')

    # ... the full file parses result.stdout as JSON and fails
    # the test if any vulnerabilities are reported.

That’s it! When anything runs your unit test, whether that’s continuous integration, a git hook, or just a developer testing their code, you’ll also run a pip-audit audit of your project.

Let others find out

Now, pip-audit tests whether a malicious package has already been installed, in which case, for that poor developer or machine, it may be too late. If it’s CI, who cares? But one other really nice feature you can combine with this is uv’s ability to put a delay on upgrading your dependencies.

Many developers, myself included, typically run some kind of command that pins their versions. Periodically we also run a command that looks for newer libraries and updates the pinned versions so we’re using the latest code. This way you upgrade in a stair-step manner, at the times you actually intend to change versions.

This works great. However, what if the malicious version of a package is released five minutes before you run this command? You’re getting it installed. Pretty soon the community will find out that something is afoot, report it, and it will be yanked from PyPI, but here bad timing got you hacked.

While it’s not a guaranteed solution, defense in depth suggests waiting a few days before installing a freshly released package. But you don’t want to review packages manually one by one, do you? For example, for Talk Python Training, we have over 200 packages for that website. It would be an immense hassle to verify the dates of each one and manually pick the versions.

No need! We can just add a simple delay to our uv command:

uv pip compile requirements.piptools --upgrade --output-file requirements.txt --exclude-newer "1 week"

In particular, notice --exclude-newer "1 week". The exact duration isn’t the important thing. It’s about building a small delay into your workflow so issues have time to be reported. You can read about the full feature here. This way, we only incorporate packages that have survived in public on PyPI for at least one week.

Part 2

Be sure to check out the follow-up post DevOps Python Supply Chain Security for even more tips.

Hope this helps. Stay safe out there.

December 23, 2025 12:16 AM UTC