Planet Python
Last update: January 04, 2026 04:43 PM UTC
January 04, 2026
EuroPython
Humans of EuroPython: Marina Moro López
EuroPython wouldn’t exist if it weren’t for all the volunteers who put in countless hours to organize it. Whether it’s contracting the venue, selecting and confirming talks & workshops or coordinating with speakers, hundreds of hours of loving work have been put into making each edition the best one yet.
Read our latest interview with Marina Moro López, a member of the EuroPython 2025 Programme Team and a former EuroPython speaker.
Thank you for contributing to the conference programme, Marina!
Marina Moro López, member of the Programme Team at EuroPython 2025
EP: What first inspired you to volunteer for EuroPython? And which edition of the conference was it?
I volunteered at EuroPython 2025 because I was a speaker at the 2024 edition and fell in love with the event, so I wanted to do my bit to help keep it amazing.
EP: What was your primary role as a volunteer, and what did a typical day look like for you?
I was involved in reviewing talks and putting together the schedule, as well as contacting keynote speakers and organizing the open spaces. A typical day was filled with Excel spreadsheets and emails :)
EP: Could you share your favorite memory from contributing to EuroPython?
Honestly, the day the program team got together at the event. We shared an intimate moment exclusively for ourselves after all the hard work we had done, seeing how it was paying off.
EP: Is there anything that surprised you about the volunteer experience?
It may seem that organizing such a large event can be chaotic at times with so many people involved, so I was surprised to see that this wasn’t the case at all and that, in the end, we were all one big team.
EP: How has contributing to EuroPython impacted your own career or learning journey?
Without a doubt, an event like EuroPython gives you top communication and organizational skills. Also, in my particular case, and maybe this is a little silly, I am super proud to say that I did my first PR ever!
EP: What’s one misconception about conference volunteering you’d like to clear up?
Even if you don’t have a tech background (like me), if you want to help, that’s reason enough to participate. You don’t need anything else.
EP: If you were to invite someone else, what do you think are the top 3 reasons to join the EuroPython organizing team?
Seeing how an event like this is created from the inside is incredible, plus the team is lovely, and you’ll learn a lot because you’ll be surrounded by top people from the community.
EP: Thank you for your work, Marina!
January 03, 2026
Hugo van Kemenade
Localising xkcd
I gave a lightning talk at a bunch of conferences in 2025 about some of the exciting new things coming in Python 3.14, including template strings.
One thing we can use t-strings for is to prevent SQL injection. The user gives you an untrusted t-string, and you can sanitise it before using it in a safer way.
I illustrated this with xkcd 327, titled “Exploits of a Mom”, but commonly known as “Little Bobby Tables”.
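For readers who haven’t seen the underlying fix, here is the classic mitigation using today’s standard-library sqlite3 parameter binding; t-strings aim to make this kind of sanitisation composable for libraries. (The table and variable names below are my own illustration, not from the talk.)

```python
import sqlite3

# In-memory database with a table for Little Bobby's school.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

# Untrusted input, straight out of the comic:
name = "Robert'); DROP TABLE students;--"

# Unsafe would be: conn.execute(f"INSERT INTO students VALUES ('{name}')")
# Safe: let the driver bind the value as data, never as SQL.
conn.execute("INSERT INTO students VALUES (?)", (name,))

rows = conn.execute("SELECT name FROM students").fetchall()
print(rows[0][0])  # the hostile string comes back intact, as plain data
```

The table survives because the hostile string is only ever treated as a value, never parsed as SQL.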
I localised most of the slides for the PyCon I was at, including this comic. Here they are!
PyCon Italia
May, Bologna
PyCon Greece
August, Athens
PyCon Estonia
October, Tallinn
PyCon Finland
October, Jyväskylä
PyCon Sweden
October, Stockholm
Thanks
Thanks to Randall Munroe for licensing the comic under a Creative Commons Attribution-NonCommercial 2.5 License. These adaptations are therefore licensed the same way.
Finally, here are links for 2026; I recommend them all:
- PyCon Italia, 27-30 May: the CFP is open until 6th January
- PyCon Estonia, 8-9 October
- PyCon Greece, 12-13 October
- PyCon Sweden, TBA
- PyCon Finland, TBA
January 02, 2026
Real Python
The Real Python Podcast – Episode #278: PyCoder's Weekly 2025 Top Articles & Hidden Gems
PyCoder's Weekly included over 1,500 links to articles, blog posts, tutorials, and projects in 2025. Christopher Trudeau is back on the show this week to help wrap up everything by sharing some highlights and uncovering a few hidden gems from the pile.
Glyph Lefkowitz
The Next Thing Will Not Be Big
The dawning of a new year is an opportune moment to contemplate what has transpired in the old year, and consider what is likely to happen in the new one.
Today, I’d like to contemplate that contemplation itself.
The 20th century was an era characterized by rapidly accelerating change in technology and industry, creating shorter and shorter cultural cycles of changes in lifestyles. Thus far, the 21st century seems to be following that trend, at least in its recently concluded first quarter.
The first half of the twentieth century saw the massive disruption caused by electrification, radio, motion pictures, and then television.
In 1971, Intel poured gasoline on that fire by releasing the 4004, a microchip generally recognized as the first general-purpose microprocessor. Popular innovations rapidly followed: the computerized cash register, the personal computer, credit cards, cellular phones, text messaging, the Internet, the web, online games, mass surveillance, app stores, social media.
These innovations have arrived faster than those of previous generations, but they have also crossed a crucial threshold: that of the human lifespan.
While the entire second millennium A.D. has been characterized by a gradually accelerating rate of technological and social change — the printing press and the industrial revolution were no slouches, in terms of changing society, and those predate the 20th century — most of those changes had the benefit of unfolding throughout the course of a generation or so.
Which means that any individual person in any given century up to the 20th might remember one major world-altering social shift within their lifetime, not five to ten of them. The diversity of human experience is vast, but most people would not expect that the defining technology of their lifetime was merely the latest in a progression of predictable civilization-shattering marvels.
Along with each of these successive generations of technology, we minted a new generation of industry titans. Westinghouse, Carnegie, Sarnoff, Edison, Ford, Hughes, Gates, Jobs, Zuckerberg, Musk. Not just individual rich people, but entire new classes of rich people that did not exist before. “Radio DJ”, “Movie Star”, “Rock Star”, “Dot Com Founder”, were all new paths to wealth opened (and closed) by specific technologies. While most of these people did come from at least some level of generational wealth, they no longer came from a literal hereditary aristocracy.
To describe this new feeling of constant acceleration, a new phrase was coined: “The Next Big Thing”. In addition to denoting that some Thing was coming and that it would be Big (i.e.: that it would change a lot about our lives), this phrase also carries the strong implication that such a Thing would be a product. Not a development in social relationships or a shift in cultural values, but some new and amazing form of conveying salted meat paste or what-have-you, that would make whatever lucky tinkerer who stumbled into it into a billionaire — along with any friends and family lucky enough to believe in their vision and get in on the ground floor with an investment.
In the latter part of the 20th century, our entire model of capital allocation shifted to account for this widespread belief. No longer were mega-businesses built by bank loans, stock issuances, and reinvestment of profit, the new model was “Venture Capital”. Venture capital is a model of capital allocation explicitly predicated on the idea that carefully considering each bet on a likely-to-succeed business and reducing one’s risk was a waste of time, because the return on the equity from the Next Big Thing would be so disproportionately huge — 10x, 100x, 1000x – that one could afford to make at least 10 bad bets for each good one, and still come out ahead.
The biggest risk was in missing the deal, not in giving a bunch of money to a scam. Thus, value investing and focus on fundamentals have been broadly disregarded in favor of the pursuit of the Next Big Thing.
If Americans of the twentieth century were temporarily embarrassed millionaires, those of the twenty-first are all temporarily embarrassed FAANG CEOs.
The predicament that this tendency leaves us in today is that the world is increasingly run by generations — GenX and Millennials — with the shared experience that the computer industry, either hardware or software, would produce some radical innovation every few years. We assume that to be true.
But all things change, even change itself, and that industry is beginning to slow down. Physically, transistor density is starting to brush up against physical limits. Economically, most people are drowning in more compute power than they know what to do with anyway. Users already have most of what they need from the Internet.
The big new feature in every operating system is a bunch of useless junk that nobody really wants and that is seeing remarkably little uptake. Social media and smartphones changed the world, true, but… those are both innovations from 2008. They’re just not new any more.
So we are all — collectively, culturally — looking for the Next Big Thing, and we keep not finding it.
It wasn’t 3D printing. It wasn’t crowdfunding. It wasn’t smart watches. It wasn’t VR. It wasn’t the Metaverse, it wasn’t Bitcoin, it wasn’t NFTs¹.
It’s also not AI, but this is why so many people assume that it will be AI. Because it’s got to be something, right? If it’s got to be something then AI is as good a guess as anything else right now.
The fact is, our lifetimes have been an extreme anomaly. Things like the Internet used to come along every thousand years or so, and while we might expect that the pace will stay a bit higher than that, it is not reasonable to expect that something new like “personal computers” or “the Internet”³ will arrive again.
We are not going to get rich by getting in on the ground floor of the next Apple or the next Google because the next Apple and the next Google are Apple and Google. The industry is maturing. Software technology, computer technology, and internet technology are all maturing.
There Will Be Next Things
Research and development is happening in all fields all the time. Amazing new developments quietly and regularly occur in pharmaceuticals and in materials science. But these are not predictable. They do not inhabit the public consciousness until they’ve already happened, and they are rarely so profound and transformative that they change everybody’s life.
There will even be new things in the computer industry, both software and hardware. Foldable phones do address a real problem (I wish the screen were even bigger but I don’t want to carry around such a big device), and would probably be more popular if they got the costs under control. One day somebody’s going to crack the problem of volumetric displays, probably. Some VR product will probably, eventually, hit a more realistic price/performance ratio where the niche will expand at least a little more.
Maybe there will even be something genuinely useful, which is recognizably adjacent to the current “AI” fad, but if it is, it will be some new development that we haven’t seen yet. If current AI technology were sufficient to drive some interesting product, it would already be doing it, not using marketing disguised as science to conceal diminishing returns on current investments.
But They Will Not Be Big
The impulse to find the One Big Thing that will dominate the next five years is a fool’s errand. Incremental gains are diminishing across the board. The markets for time and attention² are largely saturated. There’s no need for another streaming service if 100% of your leisure time is already committed to TikTok, YouTube, and Netflix; famously, Netflix has considered sleep its primary competitor for close to a decade, since years before the pandemic.
Those rare tech markets which aren’t saturated are suffering from pedestrian economic problems like wealth inequality, not technological bottlenecks.
For example, the thing preventing the development of a robot that can do your laundry and your dishes without your input is not necessarily that we couldn’t build something like that, but that most households just can’t afford it without wage growth catching up to productivity growth. It doesn’t make sense for anyone to commit to the substantial R&D investment that such a thing would take, if the market doesn’t exist because the average worker isn’t paid enough to afford it on top of all the other tech which is already required to exist in society.
The projected income from the tiny, wealthy sliver of the population who could pay for the hardware cannot justify an investment in the software beyond a fake version remotely operated by workers in the global south, made possible only by Internet wage arbitrage, i.e. a more palatable, modern version of indentured servitude.
Even if we were to accept the premise of an actually-“AI” version of this, that is still just a wish that ChatGPT could somehow improve enough behind the scenes to replace that worker, not any substantive investment in a novel, proprietary-to-the-chores-robot software system which could reliably perform specific functions.
What, Then?
The expectation for, and lack of, a “big thing” is a big problem. There are others who could describe its economic, political, and financial dimensions better than I can. So then let me speak to my expertise and my audience: open source software developers.
When I began my own involvement with open source, a big part of the draw for me was participating in a low-cost (to the corporate developer) but high-value (to society at large) positive externality. None of my employers would ever have cared about many of the applications for which Twisted forms a core bit of infrastructure; nor would I have been able to predict those applications’ existence. Yet, it is nice to have contributed to their development, even a little bit.
However, it’s not actually a positive externality if the public at large can’t directly benefit from it.
When real world-changing, disruptive developments are occurring, the bean-counters are not watching positive externalities too closely. As we discovered with many of the other benefits that temporarily accrued to labor in the tech economy, Open Source that is usable by individuals and small companies may have been a ZIRP. If you know you’re gonna make a billion dollars you’re not going to worry about giving away a few hundred thousand here and there.
When gains are smaller and harder to realize, and margins are starting to get squeezed, it’s harder to justify the investment in vaguely good vibes.
But this, itself, is not a call to action. I doubt very much that anyone reading this can do anything about the macroeconomic reality of higher interest rates. The technological reality of “development is happening slower” is inherently something that you can’t change on purpose.
However, what we can do is to be aware of this trend in our own work.
Fight Scale Creep
It seems to me that more and more open source infrastructure projects are tools for hyper-scale application development, only relevant to massive cloud companies. This is just a subjective assessment on my part — I’m not sure what tools even exist today to measure this empirically — but I remember a big part of the open source community when I was younger being things like Inkscape, Themes.Org and Slashdot, not React, Docker Hub and Hacker News.
This is not to say that the hobbyist world no longer exists. There is of course a ton of stuff going on with Raspberry Pi, Home Assistant, OwnCloud, and so on. If anything there’s a bit of a resurgence of self-hosting. But the interests of self-hosters and corporate developers are growing apart; there seems to be far less of a beneficial overflow from corporate infrastructure projects into these enthusiast or prosumer communities.
This is the concrete call to action: if you are employed in any capacity as an open source maintainer, dedicate more energy to medium- or small-scale open source projects.
If your assumption is that you will eventually reach a hyper-scale inflection point, then mimicking Facebook and Netflix is likely to be a good idea. However, if we can all admit to ourselves that we’re not going to achieve a trillion-dollar valuation and a hundred thousand engineer headcount, we can begin to consider ways to make our Next Thing a bit smaller, and to accommodate the world as it is rather than as we wish it would be.
Be Prepared to Scale Down
Here are some design guidelines you might consider, for just about any open source project, particularly infrastructure ones:
- Don’t assume that your software can sustain an arbitrarily large fixed overhead because “you just pay that cost once” and you’re going to be running a billion instances so it will always amortize; maybe you’re only going to be running ten.
- Remember that such fixed overhead includes not just CPU, RAM, and filesystem storage, but also the learning curve for developers. Front-loading a massive amount of conceptual complexity to accommodate the problems of hyper-scalers is a common mistake. Try to smooth out these complexities and introduce them only when necessary.
- Test your code on edge devices. This means supporting Windows and macOS, and even Android and iOS. If you want your tool to help empower individual users, you will need to meet them where they are, which is not on an EC2 instance.
- This includes considering Desktop Linux as a platform, as opposed to Server Linux as a platform: while the two certainly have plenty in common, they are also distinct in some details. Consider the highly specific example of secret storage: if you are writing something that intends to live in a cloud environment, and you need to configure it with a secret, you will probably want to provide it via a text file or an environment variable. By contrast, if you want this same code to run on a desktop system, your users will expect you to support the Secret Service. This will likely only require a few lines of code to accommodate, but it is a massive difference to the user experience.
- Don’t rely on LLMs remaining cheap or free. If you have LLM-related features⁴, make sure that they are sufficiently severable from the rest of your offering that if ChatGPT starts costing $1000 a month, your tool doesn’t break completely. Similarly, do not require that your users have easy access to half a terabyte of VRAM and a rack full of 5090s in order to run a local model.
Even if you were going to scale up to infinity, the ability to scale down and consider smaller deployments means that you can run more comfortably on, for example, a developer’s laptop. So even if you can’t convince your employer that this is where the economy and the future of technology in our lifetimes is going, it can be easy enough to justify this sort of design shift, particularly as individual choices. Make your onboarding cheaper, your development feedback loops tighter, and your systems generally more resilient to economic headwinds.
So, please design your open source libraries, applications, and services to run on smaller devices, with less complexity. It will be worth your time as well as your users’.
But if you can fix the whole wealth inequality thing, do that first.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!
1. These sorts of lists are pretty funny reads, in retrospect. ↩
2. Which is to say, “distraction”. ↩
3. ... or even their lesser-but-still-profound aftershocks like “Social Media”, “Smartphones”, or “On-Demand Streaming Video” ... secondary manifestations of the underlying innovation of a packet-switched global digital network ... ↩
4. My preference would of course be that you just didn’t have such features at all, but perhaps even if you agree with me, you are part of an organization with some mandate to implement LLM stuff. Just try not to wrap the chain of this anchor all the way around your code’s neck. ↩
Seth Michael Larson
New ROM dumping tool for SNES & Super Famicom from Epilogue
Just heard the news from the WULFF Den Podcast that Epilogue has released pre-orders for the next ROM backup tool in their “Operator” series for the Super NES (SNES) and Super Famicom called the “SN Operator”. The SN Operator pre-order costs $60 USD plus shipping.
This is great news for collectors and for people interested in owning and playing SNES and Super Famicom games without a subscription service, and for less than the price of a console, which currently hovers around $120 USD on eBay. If there are only one or two games you're interested in playing and ownership isn't a huge deal for you, Nintendo Switch Online offers a substantial SNES and Super Famicom library for $20/year.
Most importantly, these devices provide legal pathways for enthusiasts to archive and play their aging collections on newer storage media and devices. Emulation does not equate to piracy, and ROM dumpers are an important tool for legal emulation and archiving. I've previously written about using the GB Operator from Epilogue with Ubuntu to successfully archive my Game Boy, Game Boy Color, and Game Boy Advance ROMs and save files.
I personally won't be buying this product as my collection doesn't have any SNES games. Is there a game in particular that I missed out on from this era? Let me know.
Their website also features a teaser for another upcoming Operator! The Wulff brothers guessed that this would likely be an N64 Operator, especially given the N64's prominence with the new Analogue 3D console. An N64 Operator similarly wouldn't be super useful for me; I'm holding out hope for a SEGA Genesis/Mega Drive Operator in the future.
Thanks for keeping RSS alive! ♥
January 01, 2026
Python Morsels
Implicit string concatenation
Python automatically concatenates adjacent string literals thanks to implicit string concatenation. This feature can sometimes lead to bugs.
Strings next to each other
Take a look at this line of Python code:
>>> print("Hello" "world!")
It looks kind of like we're passing multiple arguments to the built-in print function.
But we're not:
>>> print("Hello" "world!")
Helloworld!
If we pass multiple arguments to print, Python will put spaces between those values when printing:
>>> print("Hello", "world!")
Hello world!
But that's not what Python was doing.
Our code from before didn't have commas to separate the arguments (note the missing comma between "Hello" and "world!"):
>>> print("Hello" "world!")
Helloworld!
How is that possible?
This seems like it should have resulted in a SyntaxError!
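This silent joining is also a classic source of bugs when a comma goes missing in a list of strings:

```python
colors = [
    "red",
    "green"   # missing comma: "green" and "blue" are silently joined
    "blue",
]
print(colors)      # ['red', 'greenblue']
print(len(colors)) # 2, not the 3 items we meant to write
```

No error is raised; the list simply has one fewer element than intended.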
Implicit string concatenation
A string literal is the …
Read the full article: https://www.pythonmorsels.com/implicit-string-concatenation/
Seth Michael Larson
Cutting spritesheets like cookies with Python & Pillow 🍪
Happy new year! 🎉 For an upcoming project on the blog requiring many video-game sprites I've created a small tool (“sugarcookie”) using the always-lovely Python image-processing library Pillow. This tool takes a spritesheet, a list of mask colors, and a minimum size, and then cuts the spritesheet into its component sprites.
I'm sure this could be implemented more efficiently, or with a friendlier command line interface, but for my own purposes (~10 spritesheets) this worked just fine. Feel free to use, share, and improve. The script is available as a GitHub gist, but is also included below.
Source code for sugarcookie
#!/usr/bin/env python
# /// script
# requires-python = ">=3.13"
# dependencies = [
#     "Pillow",
#     "tqdm",
# ]
# ///
# License: MIT
# Copyright 2025, Seth Larson
import os.path
import math

from PIL import Image
import tqdm

# Parameters
spritesheet = ""  # Path to spritesheet.
masks = set()  # Set of 3-tuples for RGB.
min_dim = 10  # Min and max dimensions in pixels.
max_dim = 260

img = Image.open(spritesheet)
if img.mode == "RGB":  # Ensure an alpha channel.
    alpha = Image.new("L", img.size, 255)
    img.putalpha(alpha)
output_prefix = os.path.splitext(os.path.basename(spritesheet))[0]

data = img.getdata()
visited = set()
shapes = set()
reroll_shapes = set()

def getpixel(x, y) -> tuple[int, int, int, int]:
    return data[x + (img.width * y)]

def make_2n(value: int) -> int:
    return 2 ** int(math.ceil(math.log2(value)))

with tqdm.tqdm(
    desc="Cutting cookies",
    total=int(img.width * img.height),
    unit="pixels",
) as t:
    for x in range(img.width):
        for y in range(img.height):
            xy = (x, y)
            if xy in visited:
                continue
            inshape = set()
            candidates = {(x, y)}

            def add_candidates(cx, cy):
                global candidates
                candidates |= {(cx - 1, cy), (cx + 1, cy), (cx, cy - 1), (cx, cy + 1)}

            while candidates:
                cx, cy = candidates.pop()
                if (
                    (cx, cy) in visited
                    or cx < 0
                    or cx >= img.width
                    or cy < 0
                    or cy >= img.height
                    or abs(cx - x) > max_dim
                    or abs(cy - y) > max_dim
                ):
                    continue
                visited.add((cx, cy))
                r, g, b, a = getpixel(cx, cy)
                if a == 0 or (r, g, b) in masks:
                    continue
                else:
                    inshape.add((cx, cy))
                    add_candidates(cx, cy)
            if inshape:
                shapes.add(tuple(inshape))
        t.update(img.height)

max_width = 0
max_height = 0
shapes_and_offsets = []
for shape in sorted(shapes):
    min_x = img.width + 2
    min_y = img.height + 2
    max_x = -1
    max_y = -1
    for x, y in shape:
        max_x = max(x, max_x)
        max_y = max(y, max_y)
        min_x = min(x, min_x)
        min_y = min(y, min_y)
    width = max_x - min_x + 1
    height = max_y - min_y + 1
    # Too small! We have to reroll this
    # potentially into another shape.
    if width < min_dim or height < min_dim:
        reroll_shapes.add(shape)
        continue
    max_width = max(max_width, width)
    max_height = max(max_height, height)
    shapes_and_offsets.append((shape, (width, height), (min_x, min_y)))

# Make them powers of two!
max_width = make_2n(max_width)
max_height = make_2n(max_height)

sprite_number = 0
with tqdm.tqdm(
    desc="Baking cookies",
    total=len(shapes_and_offsets),
    unit="sprites",
) as t:
    for shape, (width, height), (offset_x, offset_y) in shapes_and_offsets:
        new_img = Image.new(mode="RGBA", size=(max_width, max_height))
        margin_x = (max_width - width) // 2
        margin_y = (max_height - height) // 2
        for rx in range(max_width):
            for ry in range(max_height):
                x = rx + offset_x
                y = ry + offset_y
                if (x, y) not in shape:
                    continue
                new_img.putpixel((rx + margin_x, ry + margin_y), getpixel(x, y))
        new_img.save(f"images/{output_prefix}-{sprite_number}.png")
        sprite_number += 1
        t.update(1)
When using the tool you may find yourself needing to add additional masking across elements, such as the original spritesheet curator's name, in order for the cutting process to work perfectly. This script also doesn't work great for sprites which aren't contiguous across their bounding box.
There's an exercise left to the reader to implement reroll_shapes, a feature I didn't end up needing for my own project. Let me know if you implement this and send me a patch!
Thanks for keeping RSS alive! ♥
December 31, 2025
The Python Coding Stack
Mulled Wine, Mince Pies, and More Python
I’ve been having too much mulled wine. And wine of the standard type. And aperitifs before meals and digestifs after them…the occasional caffè corretto, too. You get the picture…
No wonder I can’t remember what articles I wrote this year here at The Python Coding Stack. So make sure you adjust your expectations for this end-of-year review post.
Parties and Gatherings
And there’s another thing I can never remember, especially at this time of year when large-ish gatherings are more common. How many people are needed in a group to have a probability greater than 50% that two people share a birthday? This could be an ice-breaker in some awkward gatherings, but only if you’re with a geeky crowd. Although the analytical proof is cool, writing Python code to explore this problem is just as fun. Here’s my article from February exploring the Birthday Paradox:
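As a flavour of the simulation approach, here's a quick sketch (the function name and trial count are my own, not from the article):

```python
import random

def shared_birthday_probability(n: int, trials: int = 20_000) -> float:
    """Estimate P(at least two of n people share a birthday) by simulation."""
    hits = sum(
        # A set drops duplicates, so a shorter set means a shared birthday.
        len({random.randrange(365) for _ in range(n)}) < n
        for _ in range(trials)
    )
    return hits / trials

# The analytical answer: 23 people push the probability just past 50%.
print(shared_birthday_probability(23))  # roughly 0.507
```

Twenty-three people is all it takes, which is why the result is called a paradox.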
This post also explores some tools from the itertools module. Iteration in Python is different from its implementation in many other languages. And the itertools module provides several tools to iterate in a Pythonic way. Later in the year, I explored more of these tools in The itertools Series. Here’s the first post, exploring Yteria’s adventures in a world a bit similar to ours, yet different:
Here’s the whole series following Yteria’s other adventures and the itertools module:
Christmas Decorations
And something else you can’t avoid at this time of year is all the Christmas decorations you’ll find everywhere. Christmas trees, flashing lights, street displays, and…
…Python has its own decorations. You can adorn functions with Python’s equivalent of tinsel and angels:
This post is the most-read post on The Python Coding Stack in 2025. It also has a follow-up post that explores more:
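If decorators are new to you, the tinsel-and-angels adornment looks roughly like this (a toy example of mine, not taken from the article):

```python
import functools

def announce(func):
    """Wrap a function so each call is announced before it runs."""
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}...")
        return func(*args, **kwargs)
    return wrapper

@announce
def greet(name):
    return f"Hello, {name}!"

print(greet("Python"))  # prints the announcement line, then the greeting
```

The function keeps doing its job; the decorator drapes extra behaviour around it.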
Python’s decorators don’t necessarily make functions pretty—they make them more versatile. However, Python’s f-strings are there to make displayed outputs look pretty. And what if you want your own custom fancy f-string format specifiers?
Endless Visits to Coffee Shops
I spend a lot of time in coffee shops over the holidays. It’s a great place to meet people for a quick catch-up. And to drink coffee. Coffee featured a few times in articles this year here on The Python Coding Stack.
One of these coffee-themed posts followed Alex’s experience with opening his new coffee shop and explored parameters and arguments in Python functions:
Another one narrates one of my trips to a coffee shop and how it helped me really understand the difference between == and is in Python—equality and identity:
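The whole distinction fits in a few lines:

```python
a = [1, 2, 3]
b = [1, 2, 3]

print(a == b)  # True: the two lists hold equal values
print(a is b)  # False: they are two distinct objects in memory
print(a is a)  # True: identity means "the very same object"
```

Equality compares contents; identity compares the objects themselves.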
Board Games
Who doesn’t play board games over the holidays? We certainly do. And that means we need a dictionary at hand to resolve Scrabble-related disagreements. This year, we also played Boggle, another word-based game. So, the dictionary had to work overtime.
And dictionaries work overtime in Python, too. They’re one of the most important built-in data structures. Here’s a question: Are Python dictionaries ordered? The answer is more nuanced than you might think at first:
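Part of the nuance: since Python 3.7 the language guarantees insertion order, which is not the same as sorted order:

```python
d = {}
d["banana"] = 3
d["apple"] = 1
d["cherry"] = 2

# Keys come back in the order they were inserted, not alphabetically.
print(list(d))  # ['banana', 'apple', 'cherry']
```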
And to understand how Python dictionaries work, it’s best to understand hashability:
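The short version: dictionary keys must be hashable, which rules out mutable built-ins like lists:

```python
# Tuples are immutable and hashable, so they make fine keys.
prices = {("latte", "large"): 4.5}
print(prices[("latte", "large")])  # 4.5

try:
    prices[["latte", "large"]] = 4.5  # lists are mutable, hence unhashable
except TypeError as exc:
    print(exc)  # unhashable type: 'list'
```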
This article is part of The Club, the special area on The Python Coding Stack for premium members. The Club launched this year and includes more articles, an exclusive forum, videos, and more… This premium content is in addition to the free articles, which will always remain a key part of The Python Coding Stack. To make sure you don’t miss a thing here on The Python Coding Stack, join The Club by becoming a premium subscriber:
And it’s not just dictionaries that play an important role in Python. Indeed, in Python, we often prefer to focus on what a data type can do rather than what it is. Here’s another short post in The Club on this topic:
Unwanted Gifts?
Did you receive gifts you don’t need or want? Or perhaps, you have received the same gift more than once? Python can help, too. Let’s start by removing duplicate presents from the Christmas tree list:
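A common idiom for de-duplicating while keeping the order in which the gifts were listed:

```python
gifts = ["socks", "mug", "socks", "book", "mug"]

# dict.fromkeys keeps only the first occurrence of each key, in order.
unique_gifts = list(dict.fromkeys(gifts))
print(unique_gifts)  # ['socks', 'mug', 'book']
```

Using a plain set would also drop duplicates, but without preserving order.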
And what about the used wrapping paper and food packaging? You can recycle some of it. But some must end up in the garbage. Python has its own trash bin, too:
Magic
This time of year can feel magical. And maybe it’s for this reason that TV stations here keep showing the Harry Potter films during the Christmas holiday season. I’m a Harry Potter fan, and I’ve written Harry Potter-themed posts and series in the past. And there was one this year, too:
One thing that’s not magic in Python is its behind-the-scenes operations. Python’s special methods deal with this, and once you know the trick, it’s no longer magic. Here’s a post that explores some of these special methods:
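For instance, implementing a couple of special methods lets a plain class plug into built-in syntax (the Deck class here is my own toy example):

```python
class Deck:
    def __init__(self, cards):
        self._cards = list(cards)

    def __len__(self):
        return len(self._cards)    # powers len(deck)

    def __getitem__(self, index):
        return self._cards[index]  # powers deck[i] and iteration

deck = Deck(["ace", "king", "queen"])
print(len(deck))  # 3
print(deck[0])    # ace
```

Once you spot the dunder methods behind the syntax, the trick stops being magic.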
Queueing in The Cold
I avoided queueing in the cold this year, but I’ve done this so many times in past Christmas holidays. Queueing for a skating rink or for a Christmas fair. Queueing to get mulled wine from a street stall. If you could skip the queue, would you?
And if it’s cold, you’ll need to zip your jacket well. Python’s zipping and unzipping also feature in this year’s posts:
End-of-Year Reflections
Let me spare you all my Python-related stuff—the courses, articles, updates here on The Stack, and all that. Instead, my news for 2025 was my return to an old interest: track and field athletics. I even started a new Substack to document my adventures in track and field:
And I’ve written some posts with a track-and-field theme, too. Here’s one of these:
But the end of the year is also a time for reflecting on one’s life, past and future. Recently, a Python object has done just that:
Looking forward to a great new year in the Python world and here on The Python Coding Stack. Wishing you all a Happy New Year!
Image by iPicture from Pixabay
Join The Club, the exclusive area for paid subscribers for more Python posts, videos, a members’ forum, and more.
For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!
Also, are you interested in technical writing? Would you like to make your writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.
And you can find out more about me at stephengruppetta.com
December 31, 2025 09:12 PM UTC
Django Weblog
DSF member of the month - Clifford Gama
For December 2025, we welcome Clifford Gama as our DSF member of the month! ⭐
Clifford has contributed to Django core with more than 5 PRs merged in a few months! He is part of the Triage and Review Team and has been a DSF member since October 2024.
You can learn more about Clifford by visiting Clifford's website and his GitHub Profile.
Let’s spend some time getting to know Clifford better!
Can you tell us a little about yourself (hobbies, education, etc)
I'm Clifford. I hold a Bachelor's degree in Mechanical Engineering from the University of Zimbabwe.
How did you start using Django?
During my first year in college, I was also exploring open online courses on EDx and I came across CS50's introduction to web development. After watching the introductory lecture -- which introduced me to git and GitHub -- I discovered Django's excellent documentation and got started on the polls tutorial. The docs were so comprehensive and helpful I never felt the need to return to CS50. (I generally prefer comprehensive first-hand, written learning material over summaries and videos.)
At the time, I had already experimented with flask, but I guess mainly because I didn't know SQL and because flask didn't have an ORM, I never quite picked it up. With Django I felt like I was taking a learning fast-track where I'd learn everything I needed in one go!
And that's how I started using Django.
What projects are you working on now?
At the moment, I’ve been focusing on improving my core skills in preparation for remote work, so I haven’t started any new projects.
That said, I’ve been working on a client project involving generating large, image-heavy PDFs with WeasyPrint, where I’ve been investigating performance bottlenecks and ways to speed up generation time, which was previously around 30 minutes 😱.
What are you learning about these days?
I’ve been reading Boost Your Git DX by Adam Johnson and learning how to boost my Git and shell developer experience, which has been a great read. Aside from that, inspired by some blogs and talks by Haki Benita, I am also learning about software design and performance. Additionally, I am working on improving my general fluency in Python.
What other frameworks do you know, and is there anything you would like to have in Django if you had magical powers?
I am not familiar with any other frameworks, but if I had magic powers I'd add production-grade static-file serving in Django.
Which Django libraries are your favorite (core or 3rd party)?
The ORM, Wagtail and Django's admin.
What are the top three things in Django that you like?
- The community
- The documentation
- Djangonaut Space and the way new contributors are welcomed
How did you start contributing to Django?
I started contributing to Django in August last year, which is when I discovered the community, and that was a real game changer for me. Python was my first course at university, and I loved it because it was creative and there was no limit to what I could build with it.
Whenever I saw a problem in another course that could be solved programmatically, I jumped at it. My proudest project from that time was building an NxN matrix determinant calculator after learning about recursion and spotting the opportunity in an algebra class.
After COVID lockdown, I gave programming up for a while. With more time on my hands, I found myself prioritizing programming over core courses, so I took a break. Last year, I returned to it when I faced a problem that I could only solve with Django. My goal was simply to build an app quickly and go back to being a non-programmer, but along the way I thought I found a bug in Django, filed a ticket, and ended up writing a documentation PR. That’s when I really discovered the Django community.
What attracted me most was that contributions are held to high standards, but experienced developers are always ready to help you reach them. Contributing was collaborative, pushing everyone to do their best. It was a learning opportunity too good to pass up.
How did you join the Triage and Review team?
Around the time I contributed my first PR, I started looking at open tickets to find more to work on and to keep learning.
Sometimes a ticket was awaiting triage, in which case the first step was to triage it before assigning it to myself and working on it. Sometimes the ticket I wanted was already taken, in which case I'd look at the PR if one was available. Reviewing a PR can be a faster way to learn about a particular part of the codebase, because someone has already done most of the investigative work, so I reviewed PRs as well.
After a while I got an invitation from Sarah Boyce, one of the fellows, to join the team. I didn't even know that I could join before I got the invitation, so I was thrilled!
How is the work going so far?
It’s been rewarding. I’ve gained familiarity with the Django codebase and real experience collaborating with others, which already exceeds what I expected when I started contributing.
One unexpected highlight was forming a friendship through one of the first PRs I reviewed.
SiHyun Lee and I are now both part of the triage and review team, and I’m grateful for that connection.
What are your hobbies or what do you do when you’re not working?
My main hobby is storytelling in a broad sense. In fact, it was a key reason I returned to programming after a long break. I enjoy discovering enduring stories from different cultures, times, and media—ranging from the deeply personal and literary to the distant and philosophical. I recently watched two Japanese classics and found I quite love them. I wrote about one of the films on my blog, and I also get to practice my Japanese, which I’ve been learning on Duolingo for about two years. I also enjoy playing speed chess.
Do you have any suggestions for people who would like to start triage and review tickets and PRs?
If there’s an issue you care about, or one that touches a part of the codebase you’re familiar with or curious about, jump in. Tickets aren’t always available to work on, but reviews always are, and they’re open to everyone. Reviewing helps PRs move faster, including your own if you have any open, sharpens your understanding of a component, and often clarifies the problem itself.
As Simon Charette puts it:
“Triaging issues and spending time understanding them is often more valuable than landing code itself, as it strengthens our common understanding of the problem and allows us to build a consistent experience across the diverse interfaces Django provides.”
And you can put it on your CV!
Is there anything else you’d like to say?
I’m grateful to everyone who contributes to making every part of Django what it is. I’m particularly thankful to whoever nominated me to be the DSF Member of the month.
I am optimistic about the future of Django. Django 6.1 is already shaping up with new features, and there are new projects like Django Bolt coming up.
Happy new year 🎊!
Thank you for doing the interview, Clifford, and happy new year to the Django community 💚!
December 31, 2025 08:42 PM UTC
"Michael Kennedy's Thoughts on Technology"
Python Numbers Every Programmer Should Know
There are numbers every Python programmer should know. For example, how fast or slow is it to add an item to a list in Python? What about opening a file? Is that less than a millisecond? Is there something that makes that slower than you might have guessed? If you have a performance sensitive algorithm, which data structure should you use? How much memory does a floating point number use? What about a single character or the empty string? How fast is FastAPI compared to Django?
I wanted to take a moment and write down performance numbers specifically focused on Python developers. Below you will find an extensive table of such values. They are grouped by category. And I provided a couple of graphs for the more significant analysis below the table.
Acknowledgements: Inspired by Latency Numbers Every Programmer Should Know and similar resources.
Source code for the benchmarks
This article is posted without any code. I encourage you to dig into the benchmarks. The code is available on GitHub at:
https://github.com/mikeckennedy/python-numbers-everyone-should-know
📊 System Information
The benchmarks were run on the system described in this table. While yours may be faster or slower, the most important thing to consider is the relative comparisons.
| Property | Value |
|---|---|
| Python Version | CPython 3.14.2 |
| Hardware | Mac Mini M4 Pro |
| Platform | macOS Tahoe (26.2) |
| Processor | ARM |
| CPU Cores | 14 physical / 14 logical |
| RAM | 24 GB |
| Timestamp | 2025-12-30 |
TL;DR: Python Numbers
This first version is a quick “pyramid” of growing time/size for common Python ops. There is much more detail below.
Python Operation Latency Numbers (the pyramid)
| Operation | Time (ns) | Readable | Relative cost |
|---|---|---|---|
| Attribute read (`obj.x`) | 14 ns | | |
| Dict key lookup | 22 ns | | 1.5x attr |
| Function call (empty) | 22 ns | | |
| List append | 29 ns | | 2x attr |
| f-string formatting | 65 ns | | 3x function |
| Exception raised + caught | 140 ns | | 10x attr |
| `orjson.dumps()` complex object | 310 ns | 0.3 μs | |
| `json.loads()` simple object | 714 ns | 0.7 μs | 2x orjson |
| `sum()` 1,000 integers | 1,900 ns | 1.9 μs | 3x json |
| SQLite SELECT by primary key | 3,600 ns | 3.6 μs | 5x json |
| Iterate 1,000-item list | 7,900 ns | 7.9 μs | 2x SQLite read |
| Open and close file | 9,100 ns | 9.1 μs | 2x SQLite read |
| asyncio `run_until_complete` (empty) | 28,000 ns | 28 μs | 3x file open |
| Write 1KB file | 35,000 ns | 35 μs | 4x file open |
| MongoDB `find_one()` by `_id` | 121,000 ns | 121 μs | 3x write 1KB |
| SQLite INSERT (with commit) | 192,000 ns | 192 μs | 5x write 1KB |
| Write 1MB file | 207,000 ns | 207 μs | 6x write 1KB |
| `import json` | 2,900,000 ns | 3 ms | 15x write 1MB |
| `import asyncio` | 17,700,000 ns | 18 ms | 6x import json |
| `import fastapi` | 104,000,000 ns | 104 ms | 6x import asyncio |
Python Memory Numbers (the pyramid)
| Object | Size | Relative cost |
|---|---|---|
| Float | 24 bytes | |
| Small int (cached -5 to 256) | 28 bytes | |
| Empty string | 41 bytes | |
| Empty list | 56 bytes | 2x int |
| Empty dict | 64 bytes | 2x int |
| Empty set | 216 bytes | 8x int |
| `__slots__` class (5 attrs) | 212 bytes | 8x int |
| Regular class (5 attrs) | 694 bytes | 25x int |
| List of 1,000 ints | 36,856 bytes (36 KB) | |
| Dict of 1,000 items | 92,924 bytes (91 KB) | |
| List of 1,000 `__slots__` instances | 220,856 bytes (216 KB) | |
| List of 1,000 regular instances | 309,066 bytes (302 KB) | 1.4x slots list |
| Empty Python process | 16,000,000 bytes (16 MB) | |
Python numbers you should know (detailed version)
Here is a deeper table comparing many more details.
| Category | Operation | Time | Memory |
|---|---|---|---|
| 💾 Memory | Empty Python process | — | 15.77 MB |
| | Empty string | — | 41 bytes |
| | 100-char string | — | 141 bytes |
| | Small int (-5 to 256) | — | 28 bytes |
| | Large int | — | 28 bytes |
| | Float | — | 24 bytes |
| | Empty list | — | 56 bytes |
| | List with 1,000 ints | — | 36.0 KB |
| | List with 1,000 floats | — | 32.1 KB |
| | Empty dict | — | 64 bytes |
| | Dict with 1,000 items | — | 90.7 KB |
| | Empty set | — | 216 bytes |
| | Set with 1,000 items | — | 59.6 KB |
| | Regular class instance (5 attrs) | — | 694 bytes |
| | `__slots__` class instance (5 attrs) | — | 212 bytes |
| | List of 1,000 regular class instances | — | 301.8 KB |
| | List of 1,000 `__slots__` class instances | — | 215.7 KB |
| | dataclass instance | — | 694 bytes |
| | namedtuple instance | — | 228 bytes |
| ⚙️ Basic Ops | Add two integers | 19.0 ns (52.7M ops/sec) | — |
| | Add two floats | 18.4 ns (54.4M ops/sec) | — |
| | String concatenation (small) | 39.1 ns (25.6M ops/sec) | — |
| | f-string formatting | 64.9 ns (15.4M ops/sec) | — |
| | `.format()` | 103 ns (9.7M ops/sec) | — |
| | `%` formatting | 89.8 ns (11.1M ops/sec) | — |
| | List append | 28.7 ns (34.8M ops/sec) | — |
| | List comprehension (1,000 items) | 9.45 μs (105.8k ops/sec) | — |
| | Equivalent for-loop (1,000 items) | 11.9 μs (83.9k ops/sec) | — |
| 📦 Collections | Dict lookup by key | 21.9 ns (45.7M ops/sec) | — |
| | Set membership check | 19.0 ns (52.7M ops/sec) | — |
| | List index access | 17.6 ns (56.8M ops/sec) | — |
| | List membership check (1,000 items) | 3.85 μs (259.6k ops/sec) | — |
| | `len()` on list | 18.8 ns (53.3M ops/sec) | — |
| | Iterate 1,000-item list | 7.87 μs (127.0k ops/sec) | — |
| | Iterate 1,000-item dict | 8.74 μs (114.5k ops/sec) | — |
| | `sum()` of 1,000 ints | 1.87 μs (534.8k ops/sec) | — |
| 🏷️ Attributes | Read from regular class | 14.1 ns (70.9M ops/sec) | — |
| | Write to regular class | 15.7 ns (63.6M ops/sec) | — |
| | Read from `__slots__` class | 14.1 ns (70.7M ops/sec) | — |
| | Write to `__slots__` class | 16.4 ns (60.8M ops/sec) | — |
| | Read from `@property` | 19.0 ns (52.8M ops/sec) | — |
| | `getattr()` | 13.8 ns (72.7M ops/sec) | — |
| | `hasattr()` | 23.8 ns (41.9M ops/sec) | — |
| 📄 JSON | `json.dumps()` (simple) | 708 ns (1.4M ops/sec) | — |
| | `json.loads()` (simple) | 714 ns (1.4M ops/sec) | — |
| | `json.dumps()` (complex) | 2.65 μs (376.8k ops/sec) | — |
| | `json.loads()` (complex) | 2.22 μs (449.9k ops/sec) | — |
| | `orjson.dumps()` (complex) | 310 ns (3.2M ops/sec) | — |
| | `orjson.loads()` (complex) | 839 ns (1.2M ops/sec) | — |
| | `ujson.dumps()` (complex) | 1.64 μs (611.2k ops/sec) | — |
| | msgspec encode (complex) | 445 ns (2.2M ops/sec) | — |
| | Pydantic `model_dump_json()` | 1.54 μs (647.8k ops/sec) | — |
| | Pydantic `model_validate_json()` | 2.99 μs (334.7k ops/sec) | — |
| 🌐 Web Frameworks | Flask (return JSON) | 16.5 μs (60.7k req/sec) | — |
| | Django (return JSON) | 18.1 μs (55.4k req/sec) | — |
| | FastAPI (return JSON) | 8.63 μs (115.9k req/sec) | — |
| | Starlette (return JSON) | 8.01 μs (124.8k req/sec) | — |
| | Litestar (return JSON) | 8.19 μs (122.1k req/sec) | — |
| 📁 File I/O | Open and close file | 9.05 μs (110.5k ops/sec) | — |
| | Read 1KB file | 10.0 μs (99.5k ops/sec) | — |
| | Write 1KB file | 35.1 μs (28.5k ops/sec) | — |
| | Write 1MB file | 207 μs (4.8k ops/sec) | — |
| | `pickle.dumps()` | 1.30 μs (769.6k ops/sec) | — |
| | `pickle.loads()` | 1.44 μs (695.2k ops/sec) | — |
| 🗄️ Database | SQLite insert (JSON blob) | 192 μs (5.2k ops/sec) | — |
| | SQLite select by PK | 3.57 μs (280.3k ops/sec) | — |
| | SQLite update one field | 5.22 μs (191.7k ops/sec) | — |
| | diskcache set | 23.9 μs (41.8k ops/sec) | — |
| | diskcache get | 4.25 μs (235.5k ops/sec) | — |
| | MongoDB insert_one | 119 μs (8.4k ops/sec) | — |
| | MongoDB find_one by _id | 121 μs (8.2k ops/sec) | — |
| | MongoDB find_one by nested field | 124 μs (8.1k ops/sec) | — |
| 📞 Functions | Empty function call | 22.4 ns (44.6M ops/sec) | — |
| | Function with 5 args | 24.0 ns (41.7M ops/sec) | — |
| | Method call | 23.3 ns (42.9M ops/sec) | — |
| | Lambda call | 19.7 ns (50.9M ops/sec) | — |
| | try/except (no exception) | 21.5 ns (46.5M ops/sec) | — |
| | try/except (exception raised) | 139 ns (7.2M ops/sec) | — |
| | `isinstance()` check | 18.3 ns (54.7M ops/sec) | — |
| ⏱️ Async | Create coroutine object | 47.0 ns (21.3M ops/sec) | — |
| | `run_until_complete(empty)` | 27.6 μs (36.2k ops/sec) | — |
| | `asyncio.sleep(0)` | 39.4 μs (25.4k ops/sec) | — |
| | `gather()` 10 coroutines | 55.0 μs (18.2k ops/sec) | — |
| | `create_task()` + await | 52.8 μs (18.9k ops/sec) | — |
| | `async with` (context manager) | 29.5 μs (33.9k ops/sec) | — |
Memory Costs
Understanding how much memory different Python objects consume.
An empty Python process uses 15.77 MB
Strings
The rule of thumb for ASCII strings is the core string object takes 41 bytes, with each additional character adding 1 byte. Note: Python uses different internal representations based on content—strings with Latin-1 characters use 1 byte/char, those with most Unicode use 2 bytes/char, and strings with emoji or rare characters use 4 bytes/char.
| String | Size |
|---|---|
| Empty string `""` | 41 bytes |
| 1-char string `"a"` | 42 bytes |
| 100-char string | 141 bytes |
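You can check this rule of thumb on your own interpreter with `sys.getsizeof`. The base size varies slightly across CPython versions, but the one-byte-per-ASCII-character increment holds:

```python
import sys

base = sys.getsizeof("")  # empty-string overhead (41 bytes on CPython 3.14)
print(base)
print(sys.getsizeof("a") - base)        # 1 extra byte per ASCII char
print(sys.getsizeof("a" * 100) - base)  # 100 extra bytes for 100 chars
```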

Numbers
Numbers are surprisingly large in Python. Because they derive from CPython’s PyObject and carry reference-counting overhead for garbage collection, they far exceed the typical mental model many of us have:
- 2 bytes = short int
- 4 bytes = long int
- etc.
| Type | Size |
|---|---|
| Small int (-5 to 256, cached) | 28 bytes |
| Large int (1000) | 28 bytes |
| Very large int (10**100) | 72 bytes |
| Float | 24 bytes |
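A quick way to see this on your own machine (exact byte counts are CPython- and platform-specific):

```python
import sys

# Every int is a full heap object with a header, not a bare machine word.
print(sys.getsizeof(1))        # small int
print(sys.getsizeof(10**6))    # still one internal 30-bit digit: same size
print(sys.getsizeof(10**100))  # many digits: size grows with magnitude
print(sys.getsizeof(1.0))      # floats are a fixed-size box around a C double
```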

Collections
Collections are amazing in Python. Dynamically growing lists. Ultra high-perf dictionaries and sets. Here is the empty and “full” overhead of each.
| Collection | Empty | 1,000 items |
|---|---|---|
| List (ints) | 56 bytes | 36.0 KB |
| List (floats) | 56 bytes | 32.1 KB |
| Dict | 64 bytes | 90.7 KB |
| Set | 216 bytes | 59.6 KB |
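Keep in mind that `sys.getsizeof` is shallow: it reports the container’s own footprint, not the objects it references. A sketch of measuring both:

```python
import sys

print(sys.getsizeof([]))      # empty list
print(sys.getsizeof({}))      # empty dict
print(sys.getsizeof(set()))   # empty set (pre-sized hash table)

nums = list(range(1_000))
shallow = sys.getsizeof(nums)                         # just the list object
deep = shallow + sum(sys.getsizeof(n) for n in nums)  # list plus its ints
print(shallow, deep)
```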

Classes and Instances
Slots are an interesting addition to Python classes. They remove the entire concept of a __dict__ for tracking fields and other values. Even for a single instance, slots classes are significantly smaller (212 bytes vs 694 bytes for 5 attributes). If you are holding a large number of them in memory for a list or cache, the memory savings of a slots class becomes meaningful - about 30% less memory usage. Luckily for most use-cases, just adding a slots entry saves memory with minimal effort.
| Type | Empty | 5 attributes |
|---|---|---|
| Regular class | 344 bytes | 694 bytes |
| `__slots__` class | 32 bytes | 212 bytes |
| dataclass | — | 694 bytes |
| `@dataclass(slots=True)` | — | 212 bytes |
| namedtuple | — | 228 bytes |
Aggregate Memory Usage (1,000 instances):
| Type | Total Memory |
|---|---|
| List of 1,000 regular class instances | 301.8 KB |
| List of 1,000 `__slots__` class instances | 215.7 KB |
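A minimal sketch of how such per-instance numbers can be measured (the class names are illustrative; note that a regular instance’s `__dict__` must be counted too):

```python
import sys

class Regular:
    def __init__(self):
        self.a = self.b = self.c = self.d = self.e = 0

class Slotted:
    __slots__ = ("a", "b", "c", "d", "e")
    def __init__(self):
        self.a = self.b = self.c = self.d = self.e = 0

r, s = Regular(), Slotted()
# Regular instances store attributes in a per-instance __dict__;
# slotted instances store them in fixed slots on the object itself.
regular_size = sys.getsizeof(r) + sys.getsizeof(r.__dict__)
slotted_size = sys.getsizeof(s)
print(regular_size, slotted_size)
```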

Basic Operations
The cost of fundamental Python operations: Way slower than C/C++/C# but still quite fast. I added a brief comparison to C# to the source repo.
Arithmetic
| Operation | Time |
|---|---|
| Add two integers | 19.0 ns (52.7M ops/sec) |
| Add two floats | 18.4 ns (54.4M ops/sec) |
| Multiply two integers | 19.4 ns (51.6M ops/sec) |

String Operations
String operations in Python are fast as well. Among template-based formatting styles, f-strings are the fastest. Simple concatenation (+) is faster still for combining a couple strings, but f-strings scale better and are more readable. Even the slowest formatting style is still measured in just nanoseconds.
| Operation | Time |
|---|---|
| Concatenation (`+`) | 39.1 ns (25.6M ops/sec) |
| f-string | 64.9 ns (15.4M ops/sec) |
| `.format()` | 103 ns (9.7M ops/sec) |
| `%` formatting | 89.8 ns (11.1M ops/sec) |
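You can reproduce the comparison with `timeit`; absolute numbers will differ by machine, but the ordering is usually stable:

```python
from timeit import timeit

name, count = "Python", 3
styles = {
    "f-string": lambda: f"{name} x {count}",
    ".format()": lambda: "{} x {}".format(name, count),
    "% formatting": lambda: "%s x %d" % (name, count),
}
# All three build the identical string; only the formatting machinery differs.
for label, fn in styles.items():
    print(f"{label}: {timeit(fn, number=100_000):.4f}s")
```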

List Operations
List operations are very fast in Python. Adding a single item takes about 29 ns; said another way, you can do roughly 35M appends per second. The exception is when the list has to grow, which triggers an over-allocating resize behind the scenes, and you can see that cost reflected in the ops/sec for 1,000 items.
Surprisingly, list comprehensions are 26% faster than the equivalent for loops with append statements.
| Operation | Time |
|---|---|
| `list.append()` | 28.7 ns (34.8M ops/sec) |
| List comprehension (1,000 items) | 9.45 μs (105.8k ops/sec) |
| Equivalent for-loop (1,000 items) | 11.9 μs (83.9k ops/sec) |
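A sketch of the comparison; the comprehension avoids re-looking-up the `append` method on every iteration:

```python
from timeit import timeit

def with_loop(n=1_000):
    out = []
    for i in range(n):
        out.append(i * 2)  # attribute lookup + method call each time
    return out

def with_comprehension(n=1_000):
    return [i * 2 for i in range(n)]  # specialized bytecode, no append lookup

print(timeit(with_loop, number=5_000))
print(timeit(with_comprehension, number=5_000))
```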

Collection Access and Iteration
How fast can you get data out of Python’s built-in collections? Here is a dramatic example of how much faster the correct data structure is: `item in set` or `item in dict` is 200x faster than `item in list` for just 1,000 items! This difference comes from algorithmic complexity: sets and dicts use O(1) hash lookups, while lists require O(n) linear scans, and this gap grows with collection size.
The graph below is non-linear in the x-axis.
Access by Key/Index
| Operation | Time |
|---|---|
| Dict lookup by key | 21.9 ns (45.7M ops/sec) |
| Set membership (`in`) | 19.0 ns (52.7M ops/sec) |
| List index access | 17.6 ns (56.8M ops/sec) |
| List membership (`in`, 1,000 items) | 3.85 μs (259.6k ops/sec) |
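The gap is easy to demonstrate yourself; putting the target at the end of the list forces a full scan:

```python
from timeit import timeit

items = list(range(1_000))
as_set = set(items)
target = 999  # worst case for the list: scan all 1,000 elements

list_time = timeit(lambda: target in items, number=20_000)   # O(n) scan
set_time = timeit(lambda: target in as_set, number=20_000)   # O(1) hash lookup
print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")
```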

Length
len() is very fast. Maybe we don’t have to optimize it out of the test condition on a while loop looping 100 times after all.
| Collection | len() time |
|---|---|
| List (1,000 items) | 18.8 ns (53.3M ops/sec) |
| Dict (1,000 items) | 17.6 ns (56.9M ops/sec) |
| Set (1,000 items) | 18.0 ns (55.5M ops/sec) |
Iteration
| Operation | Time |
|---|---|
| Iterate 1,000-item list | 7.87 μs (127.0k ops/sec) |
| Iterate 1,000-item dict (keys) | 8.74 μs (114.5k ops/sec) |
| `sum()` of 1,000 integers | 1.87 μs (534.8k ops/sec) |
Class and Object Attributes
The cost of reading and writing attributes, and how __slots__ changes things. Slots saves ~30% memory on large collections, with virtually identical attribute access speed.
Attribute Access
| Operation | Regular Class | __slots__ Class |
|---|---|---|
| Read attribute | 14.1 ns (70.9M ops/sec) | 14.1 ns (70.7M ops/sec) |
| Write attribute | 15.7 ns (63.6M ops/sec) | 16.4 ns (60.8M ops/sec) |

Other Attribute Operations
| Operation | Time |
|---|---|
| Read `@property` | 19.0 ns (52.8M ops/sec) |
| `getattr(obj, 'attr')` | 13.8 ns (72.7M ops/sec) |
| `hasattr(obj, 'attr')` | 23.8 ns (41.9M ops/sec) |
JSON and Serialization
Comparing standard library JSON with optimized alternatives. orjson handles more data types and is over 8x faster than standard lib json for complex objects. Impressive!
Serialization (dumps)
| Library | Simple Object | Complex Object |
|---|---|---|
| `json` (stdlib) | 708 ns (1.4M ops/sec) | 2.65 μs (376.8k ops/sec) |
| `orjson` | 60.9 ns (16.4M ops/sec) | 310 ns (3.2M ops/sec) |
| `ujson` | 264 ns (3.8M ops/sec) | 1.64 μs (611.2k ops/sec) |
| `msgspec` | 92.3 ns (10.8M ops/sec) | 445 ns (2.2M ops/sec) |
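Even with the standard library alone you can see how object complexity drives cost (the sample objects below are made up for illustration, not the article’s benchmark payloads):

```python
import json
from timeit import timeit

simple = {"id": 1, "name": "alice"}
complex_obj = {
    "users": [{"id": i, "tags": ["a", "b"], "score": i * 7} for i in range(20)]
}

# dumps walks the whole object graph, so cost grows with structure size.
print(timeit(lambda: json.dumps(simple), number=50_000))
print(timeit(lambda: json.dumps(complex_obj), number=50_000))
```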

Deserialization (loads)
| Library | Simple Object | Complex Object |
|---|---|---|
| `json` (stdlib) | 714 ns (1.4M ops/sec) | 2.22 μs (449.9k ops/sec) |
| `orjson` | 106 ns (9.4M ops/sec) | 839 ns (1.2M ops/sec) |
| `ujson` | 268 ns (3.7M ops/sec) | 1.46 μs (682.8k ops/sec) |
| `msgspec` | 101 ns (9.9M ops/sec) | 850 ns (1.2M ops/sec) |
Pydantic
| Operation | Time |
|---|---|
| `model_dump_json()` | 1.54 μs (647.8k ops/sec) |
| `model_validate_json()` | 2.99 μs (334.7k ops/sec) |
| `model_dump()` (to dict) | 1.71 μs (585.2k ops/sec) |
| `model_validate()` (from dict) | 2.30 μs (435.5k ops/sec) |
Web Frameworks
Returning a simple JSON response, benchmarked with wrk against localhost running 4 workers in Granian. Each framework returns the same JSON payload from a minimal endpoint, with no database access or similar work. This measures just how much overhead each framework itself adds; the code we write inside those view methods is largely the same.
Results
| Framework | Time per request | Latency (p99) |
|---|---|---|
| Flask | 16.5 μs (60.7k req/sec) | 20.85 ms (48.0 ops/sec) |
| Django | 18.1 μs (55.4k req/sec) | 170.3 ms (5.9 ops/sec) |
| FastAPI | 8.63 μs (115.9k req/sec) | 1.530 ms (653.6 ops/sec) |
| Starlette | 8.01 μs (124.8k req/sec) | 930 μs (1.1k ops/sec) |
| Litestar | 8.19 μs (122.1k req/sec) | 1.010 ms (990.1 ops/sec) |

File I/O
Reading and writing files of various sizes. Note that the graph is non-linear in the y-axis.
Basic Operations
| Operation | Time |
|---|---|
| Open and close (no read) | 9.05 μs (110.5k ops/sec) |
| Read 1KB file | 10.0 μs (99.5k ops/sec) |
| Read 1MB file | 33.6 μs (29.8k ops/sec) |
| Write 1KB file | 35.1 μs (28.5k ops/sec) |
| Write 1MB file | 207 μs (4.8k ops/sec) |
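A minimal sketch of timing a small write yourself (using a temp directory so it cleans up after itself):

```python
import os
import tempfile
from time import perf_counter_ns

payload = b"x" * 1024  # 1 KB

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "bench.bin")
    start = perf_counter_ns()
    with open(path, "wb") as f:  # open, write, flush, close
        f.write(payload)
    write_ns = perf_counter_ns() - start
    with open(path, "rb") as f:
        data = f.read()

print(f"write 1KB: {write_ns} ns")
```

A single measurement like this is noisy; the article’s numbers come from repeated runs.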

Pickle vs JSON (Serialization)
For more serialization options including orjson, msgspec, and pydantic, see JSON and Serialization above.
| Operation | Time |
|---|---|
| `pickle.dumps()` (complex obj) | 1.30 μs (769.6k ops/sec) |
| `pickle.loads()` (complex obj) | 1.44 μs (695.2k ops/sec) |
| `json.dumps()` (complex obj) | 2.72 μs (367.1k ops/sec) |
| `json.loads()` (complex obj) | 2.35 μs (425.9k ops/sec) |
Database and Persistence
Comparing SQLite, diskcache, and MongoDB using the same complex object.
Test Object
```python
user_data = {
    "id": 12345,
    "username": "alice_dev",
    "email": "alice@example.com",
    "profile": {
        "bio": "Software engineer who loves Python",
        "location": "Portland, OR",
        "website": "https://alice.dev",
        "joined": "2020-03-15T08:30:00Z"
    },
    "posts": [
        {"id": 1, "title": "First Post", "tags": ["python", "tutorial"], "views": 1520},
        {"id": 2, "title": "Second Post", "tags": ["rust", "wasm"], "views": 843},
        {"id": 3, "title": "Third Post", "tags": ["python", "async"], "views": 2341},
    ],
    "settings": {
        "theme": "dark",
        "notifications": True,
        "email_frequency": "weekly"
    }
}
```
SQLite (JSON blob approach)
| Operation | Time |
|---|---|
| Insert one object | 192 μs (5.2k ops/sec) |
| Select by primary key | 3.57 μs (280.3k ops/sec) |
| Update one field | 5.22 μs (191.7k ops/sec) |
| Delete | 191 μs (5.2k ops/sec) |
| Select with `json_extract()` | 4.27 μs (234.2k ops/sec) |
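A sketch of the JSON-blob approach with an in-memory database (table and column names here are illustrative, not necessarily the benchmark’s actual schema):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, data TEXT)")

doc = {"username": "alice_dev", "settings": {"theme": "dark"}}
conn.execute("INSERT INTO users (id, data) VALUES (?, ?)", (1, json.dumps(doc)))
conn.commit()  # for an on-disk database, the commit dominates insert time

row = conn.execute("SELECT data FROM users WHERE id = ?", (1,)).fetchone()
print(json.loads(row[0])["username"])

# json_extract() reads inside the blob without deserializing in Python
theme = conn.execute(
    "SELECT json_extract(data, '$.settings.theme') FROM users WHERE id = 1"
).fetchone()[0]
print(theme)
```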
diskcache
| Operation | Time |
|---|---|
| `cache.set(key, obj)` | 23.9 μs (41.8k ops/sec) |
| `cache.get(key)` | 4.25 μs (235.5k ops/sec) |
| `cache.delete(key)` | 51.9 μs (19.3k ops/sec) |
| Check key exists | 1.91 μs (523.2k ops/sec) |
MongoDB
| Operation | Time |
|---|---|
| `insert_one()` | 119 μs (8.4k ops/sec) |
| `find_one()` by `_id` | 121 μs (8.2k ops/sec) |
| `find_one()` by nested field | 124 μs (8.1k ops/sec) |
| `update_one()` | 115 μs (8.7k ops/sec) |
| `delete_one()` | 30.4 ns (32.9M ops/sec) |
Comparison Table
| Operation | SQLite | diskcache | MongoDB |
|---|---|---|---|
| Write one object | 192 μs (5.2k ops/sec) | 23.9 μs (41.8k ops/sec) | 119 μs (8.4k ops/sec) |
| Read by key/id | 3.57 μs (280.3k ops/sec) | 4.25 μs (235.5k ops/sec) | 121 μs (8.2k ops/sec) |
| Read by nested field | 4.27 μs (234.2k ops/sec) | N/A | 124 μs (8.1k ops/sec) |
| Update one field | 5.22 μs (191.7k ops/sec) | 23.9 μs (41.8k ops/sec) | 115 μs (8.7k ops/sec) |
| Delete | 191 μs (5.2k ops/sec) | 51.9 μs (19.3k ops/sec) | 30.4 ns (32.9M ops/sec) |
Note: MongoDB pays the cost of network access (even on localhost) versus the in-process access of SQLite and diskcache.

Function and Call Overhead
The hidden cost of function calls, exceptions, and async.
Function Calls
| Operation | Time |
|---|---|
| Empty function call | 22.4 ns (44.6M ops/sec) |
| Function with 5 arguments | 24.0 ns (41.7M ops/sec) |
| Method call on object | 23.3 ns (42.9M ops/sec) |
| Lambda call | 19.7 ns (50.9M ops/sec) |
| Built-in function (`len()`) | 17.1 ns (58.4M ops/sec) |
Exceptions
| Operation | Time |
|---|---|
| `try/except` (no exception raised) | 21.5 ns (46.5M ops/sec) |
| `try/except` (exception raised) | 139 ns (7.2M ops/sec) |
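The asymmetry is easy to reproduce: entering a `try` block is nearly free, while actually raising builds an exception object and a traceback:

```python
from timeit import timeit

def no_raise():
    try:
        return 1
    except ValueError:
        return 0

def does_raise():
    try:
        raise ValueError("boom")  # object + traceback construction each call
    except ValueError:
        return 0

clean = timeit(no_raise, number=200_000)
raised = timeit(does_raise, number=200_000)
print(f"no raise: {clean:.4f}s  raised: {raised:.4f}s")
```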
Type Checking
| Operation | Time |
|---|---|
| `isinstance()` | 18.3 ns (54.7M ops/sec) |
| `type() == type` | 21.8 ns (46.0M ops/sec) |
Async Overhead
The cost of async machinery.
Coroutine Creation
| Operation | Time |
|---|---|
| Create coroutine object (no await) | 47.0 ns (21.3M ops/sec) |
| Create coroutine (with return value) | 45.3 ns (22.1M ops/sec) |
Running Coroutines
| Operation | Time |
|---|---|
| `run_until_complete(empty)` | 27.6 μs (36.2k ops/sec) |
| `run_until_complete(return value)` | 26.6 μs (37.5k ops/sec) |
| Run nested await | 28.9 μs (34.6k ops/sec) |
| Run 3 sequential awaits | 27.9 μs (35.8k ops/sec) |
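A sketch of comparing the sync and async paths. Note that `asyncio.run()` used here spins up and tears down a fresh event loop on every call, so it includes even more machinery than reusing a loop with `run_until_complete`:

```python
import asyncio
from timeit import timeit

async def empty_coro():
    return 42

def sync_equivalent():
    return 42

# The async path pays for loop setup/teardown plus coroutine scheduling.
async_time = timeit(lambda: asyncio.run(empty_coro()), number=200)
sync_time = timeit(sync_equivalent, number=200)
print(f"async: {async_time:.4f}s  sync: {sync_time:.6f}s")
```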
asyncio.sleep()
Note: asyncio.sleep(0) is a special case in Python’s event loop—it yields control but schedules an immediate callback, making it faster than typical sleeps but not representative of general event loop overhead.
| Operation | Time |
|---|---|
| `asyncio.sleep(0)` | 39.4 μs (25.4k ops/sec) |
| Coroutine with `sleep(0)` | 41.8 μs (23.9k ops/sec) |
asyncio.gather()
| Operation | Time |
|---|---|
| `gather()` 5 coroutines | 49.7 μs (20.1k ops/sec) |
| `gather()` 10 coroutines | 55.0 μs (18.2k ops/sec) |
| `gather()` 100 coroutines | 155 μs (6.5k ops/sec) |
Task Creation
| Operation | Time |
|---|---|
| `create_task()` + await | 52.8 μs (18.9k ops/sec) |
| Create 10 tasks + gather | 85.5 μs (11.7k ops/sec) |
Async Context Managers & Iteration
| Operation | Time |
|---|---|
| `async with` (context manager) | 29.5 μs (33.9k ops/sec) |
| `async for` (5 items) | 30.0 μs (33.3k ops/sec) |
| `async for` (100 items) | 36.4 μs (27.5k ops/sec) |
Sync vs Async Comparison
| Operation | Time |
|---|---|
| Sync function call | 20.3 ns (49.2M ops/sec) |
| Async equivalent (`run_until_complete`) | 28.2 μs (35.5k ops/sec) |
Methodology
Benchmarking Approach
- All benchmarks run multiple times, with untimed warmup runs
- Timing uses `timeit` or `perf_counter_ns` as appropriate
- Memory measured with `sys.getsizeof()` and `tracemalloc`
- Results are the median of N runs
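A minimal version of this methodology (warmup, repeat, take the median) might look like:

```python
from statistics import median
from time import perf_counter_ns

def bench_ns(fn, runs=5, number=50_000):
    """Median ns-per-call over several runs, after one untimed warmup."""
    fn()  # warmup: trigger caches and bytecode specialization
    samples = []
    for _ in range(runs):
        start = perf_counter_ns()
        for _ in range(number):
            fn()
        samples.append((perf_counter_ns() - start) / number)
    return median(samples)

d = {"a": 1}
print(f"dict lookup: {bench_ns(lambda: d['a']):.1f} ns/call")
```

The median resists outliers (GC pauses, OS scheduling) better than the mean, which is why benchmark harnesses prefer it.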
Environment
- OS: macOS 26.2
- Python: 3.14.2 (CPython)
- CPU: ARM - 14 cores (14 logical)
- RAM: 24.0 GB
Code Repository
All benchmark code available at: https://github.com/mikeckennedy/python-numbers-everyone-should-know
Key Takeaways
- Memory overhead: Python objects have significant memory overhead - even an empty list is 56 bytes
- Dict/set speed: Dictionary and set lookups are extremely fast (O(1) average case) compared to list membership checks (O(n))
- JSON performance: Alternative JSON libraries like `orjson` and `msgspec` are 3-8x faster than stdlib `json`
- Async overhead: Creating and awaiting coroutines has measurable overhead - only use async when you need concurrency
- `__slots__` tradeoff: `__slots__` saves memory (~30% for collections of instances) with virtually no performance impact
Last updated: 2026-01-01
December 31, 2025 07:49 PM UTC
Zero to Mastery
[December 2025] Python Monthly Newsletter 🐍
73rd issue of Andrei's Python Monthly: A big change is coming. Read the full newsletter to get up-to-date with everything you need to know from last month.
December 31, 2025 10:00 AM UTC
December 30, 2025
Paolo Melchiorre
Looking Back at Python Pescara 2025
A personal retrospective on Python Pescara in 2025: events, people, and moments that shaped a growing local community, reflecting on continuity, experimentation, and how a small group connected to the wider Python ecosystem.
December 30, 2025 11:00 PM UTC
PyCoder’s Weekly
Issue #715: Top 5 of 2025, LlamaIndex, Python 3.15 Speed, and More (Dec. 30, 2025)
#715 – DECEMBER 30, 2025
View in Browser »
Welcome to the end of 2025 PyCoders newsletter. In addition to your regular content, this week we have included the top 5 most clicked articles of the year.
Thanks for continuing to be with us at PyCoder’s Weekly. I’m sure 2026 will be just as wild. Speaking of 2026, if you come across something cool next year, an article or a project you think deserves some notice, send it to us and it might end up in a future issue.
Happy Pythoning!
— The PyCoder’s Weekly Team
Christopher Trudeau, Curator
Dan Bader, Editor
#1: The Inner Workings of Python Dataclasses Explained
Discover how Python dataclasses work internally! Learn how to use __annotations__ and exec() to make your own dataclass decorator!
JACOB PADILLA
#2: Going Beyond requirements.txt With pylock.toml
What is the best way to record the Python dependencies for the reproducibility of your projects? What advantages will lock files provide for those projects? This week on the show, we welcome back Python Core Developer Brett Cannon to discuss his journey to bring PEP 751 and the pylock.toml file format to the community.
REAL PYTHON podcast
#3: Django vs. FastAPI, an Honest Comparison
David has worked with Django for a long time, but recently has done some deeper coding with FastAPI. As a result, he’s able to provide a good contrast between the libraries and why/when you might choose one over the other.
DAVID DAHAN
#4: How to Use Loguru for Simpler Python Logging
Learn how to use Loguru to implement better logging in your Python applications quickly and with less configuration. Spend more time debugging effectively with cleaner, more informative logs.
REAL PYTHON
#5: Narwhals: Unified DataFrame Functions
Narwhals is a lightweight compatibility layer between DataFrame libraries. You can use it as a common interface to write reproducible and maintainable data science code which supports pandas, Polars, DuckDB, PySpark, PyArrow, and more.
MARCO GORELLI
Articles & Tutorials
What Actually Makes You Senior
This opinion piece argues that there is one skill that separates senior engineers from everyone else. It isn’t technical. It’s the ability to take ambiguous problems and make them concrete. Associated HN Discussion
MATHEUS LIMA
Jingle Bells (Batman Smells)
Not Python in the least, but a little bit of seasonal fun. Ever wonder about all the variations on the Jingle Bells schoolyard version? Well, there’s a chart for that. Associated HN Discussion
LORE AND ORDURE
Write Python You Won’t Hate in Six Months
Real Python’s live courses are back for 2026. Python for Beginners builds fundamentals correctly from the start. Intermediate Python Deep Dive covers decorators, OOP done right, and Python’s object model. Both include live instruction, hands-on projects, and expert feedback. Learn more at realpython.com/live →
REAL PYTHON sponsor
Top 10 Python Frameworks for IoT
Explore the top 10 Python frameworks for Internet of Things (IoT) development that help with scalable device communication, data processing, and real-time system control.
SAMRADNI
LlamaIndex in Python: A RAG Guide With Examples
Learn how to set up LlamaIndex, choose an LLM, load your data, build and persist an index, and run queries to get grounded, reliable answers with examples.
REAL PYTHON
Reading User Input From the Keyboard With Python
Master taking user input in Python to build interactive terminal apps with clear prompts, solid error handling, and smooth multi-step flows.
REAL PYTHON course
Python 3.15’s Interpreter Potentially 15% Faster
Python 3.15’s interpreter for Windows x86-64 should hopefully be 15% faster, based on results from the tail-calling interpreter implementation.
KEN JIN
SOLID Design Principles to Improve Object-Oriented Code
Learn how to apply SOLID design principles in Python and build maintainable, reusable, and testable object-oriented code.
REAL PYTHON
Projects & Code
daffy: DataFrame Validation Decorators
GITHUB.COM/VERTTI • Shared by Janne Sinivirta
Events
Canberra Python Meetup
January 1, 2026
MEETUP.COM
Sydney Python User Group (SyPy)
January 1, 2026
SYPY.ORG
Melbourne Python Users Group, Australia
January 5, 2026
J.MP
PyBodensee Monthly Meetup
January 5, 2026
PYBODENSEE.COM
Weekly Real Python Office Hours Q&A (Virtual)
January 7, 2026
REALPYTHON.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #715.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
December 30, 2025 07:30 PM UTC
Programiz
Python List
In this tutorial, we will learn about Python lists (creating lists, changing list items, removing items, and other list operations) with the help of examples.
December 30, 2025 04:31 AM UTC
December 29, 2025
Paolo Melchiorre
Django On The Med: A Contributor Sprint Retrospective
A personal retrospective on Django On The Med, three months later. From the first idea to the actual contributor sprint, and how a simple format based on focused mornings and open afternoons created unexpected value for people and the Django open source community.
December 29, 2025 11:00 PM UTC
Hugo van Kemenade
Replacing python-dateutil to remove six
The dateutil library is a popular and powerful Python library for dealing with dates and times.
However, it still supports Python 2.7 by depending on the six compatibility shim, and I’d prefer not to install that for Python 3.10 and higher.
Here’s how I replaced three uses of its
relativedelta in a
couple of CLIs that didn’t really need to use it.
One #
norwegianblue was using it to calculate six months from now:
import datetime as dt
from dateutil.relativedelta import relativedelta
now = dt.datetime.now(dt.timezone.utc)
# datetime.datetime(2025, 12, 29, 15, 59, 44, 518240, tzinfo=datetime.timezone.utc)
six_months_from_now = now + relativedelta(months=+6)
# datetime.datetime(2026, 6, 29, 15, 59, 44, 518240, tzinfo=datetime.timezone.utc)
But we don’t need to be so precise here, and 180 days is good enough, using the standard
library’s
datetime.timedelta:
import datetime as dt
now = dt.datetime.now(dt.timezone.utc)
# datetime.datetime(2025, 12, 29, 15, 59, 44, 518240, tzinfo=datetime.timezone.utc)
six_months_from_now = now + dt.timedelta(days=180)
# datetime.datetime(2026, 6, 27, 15, 59, 44, 518240, tzinfo=datetime.timezone.utc)
Two #
pypistats was using it to get the last day of a month:
import datetime as dt
from dateutil.relativedelta import relativedelta
first = dt.date(year, month, 1)
# datetime.date(2025, 12, 1)
last = first + relativedelta(months=1) - relativedelta(days=1)
# datetime.date(2025, 12, 31)
Instead, we can use the stdlib’s
calendar.monthrange:
import calendar
import datetime as dt
last_day = calendar.monthrange(year, month)[1]
# 31
last = dt.date(year, month, last_day)
# datetime.date(2025, 12, 31)
Three #
Finally, to get last month as a yyyy-mm string:
import datetime as dt
from dateutil.relativedelta import relativedelta
today = dt.date.today()
# datetime.date(2025, 12, 29)
d = today - relativedelta(months=1)
# datetime.date(2025, 11, 29)
d.isoformat()[:7]
# '2025-11'
Instead:
import datetime as dt
today = dt.date.today()
# datetime.date(2025, 12, 29)
if today.month == 1:
    year, month = today.year - 1, 12
else:
    year, month = today.year, today.month - 1
# 2025, 11
f"{year}-{month:02d}"
# '2025-11'
Goodbye six, and we also get slightly quicker install, import and run times.
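If a project does need real month arithmetic rather than the manual decrement above, a small stdlib-only helper can mimic relativedelta's day-clamping behavior (e.g. Jan 31 + 1 month → Feb 28/29). This is just a sketch, and `add_months` is my own name for a hypothetical helper, not something from the libraries discussed:

```python
import calendar
import datetime as dt

def add_months(date: dt.date, months: int) -> dt.date:
    # Shift by whole months, clamping the day to the target month's length,
    # which matches relativedelta's clamping for month arithmetic.
    total = date.year * 12 + (date.month - 1) + months
    year, month = divmod(total, 12)
    month += 1
    day = min(date.day, calendar.monthrange(year, month)[1])
    return dt.date(year, month, day)

print(add_months(dt.date(2025, 12, 29), -1))  # 2025-11-29
print(add_months(dt.date(2025, 1, 31), 1))    # 2025-02-28
```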
Bonus #
I recommend
Adam Johnson’s tip
to import datetime as dt to avoid the ambiguity of which datetime is the module and
which is the class.
Header photo: Ver Sacrum calendar by Alfred Roller
December 29, 2025 04:53 PM UTC
Talk Python to Me
#532: 2025 Python Year in Review
Python in 2025 is in a delightfully refreshing place: the GIL's days are numbered, packaging is getting sharper tools, and the type checkers are multiplying like gremlins snacking after midnight. On this episode, we have an amazing panel to give us a range of perspectives on what matter in 2025 in Python. We have Barry Warsaw, Brett Cannon, Gregory Kapfhammer, Jodie Burchell, Reuven Lerner, and Thomas Wouters on to give us their thoughts.<br/> <br/> <strong>Episode sponsors</strong><br/> <br/> <a href='https://talkpython.fm/seer-code-review'>Seer: AI Debugging, Code TALKPYTHON</a><br> <a href='https://talkpython.fm/training'>Talk Python Courses</a><br/> <br/> <h2 class="links-heading mb-4">Links from the show</h2> <div><strong>Python Software Foundation (PSF)</strong>: <a href="https://www.python.org/psf/?featured_on=talkpython" target="_blank" >www.python.org</a><br/> <strong>PEP 810: Explicit lazy imports</strong>: <a href="https://peps.python.org/pep-0810/?featured_on=talkpython" target="_blank" >peps.python.org</a><br/> <strong>PEP 779: Free-threaded Python is officially supported</strong>: <a href="https://peps.python.org/pep-0779/?featured_on=talkpython" target="_blank" >peps.python.org</a><br/> <strong>PEP 723: Inline script metadata</strong>: <a href="https://peps.python.org/pep-0723/?featured_on=talkpython" target="_blank" >peps.python.org</a><br/> <strong>PyCharm</strong>: <a href="https://www.jetbrains.com/pycharm/?featured_on=talkpython" target="_blank" >www.jetbrains.com</a><br/> <strong>JetBrains</strong>: <a href="https://www.jetbrains.com/company/?featured_on=talkpython" target="_blank" >www.jetbrains.com</a><br/> <strong>Visual Studio Code</strong>: <a href="https://code.visualstudio.com/?featured_on=talkpython" target="_blank" >code.visualstudio.com</a><br/> <strong>pandas</strong>: <a href="https://pandas.pydata.org/?featured_on=talkpython" target="_blank" >pandas.pydata.org</a><br/> <strong>PydanticAI</strong>: <a 
href="https://ai.pydantic.dev/?featured_on=talkpython" target="_blank" >ai.pydantic.dev</a><br/> <strong>OpenAI API docs</strong>: <a href="https://platform.openai.com/docs/?featured_on=talkpython" target="_blank" >platform.openai.com</a><br/> <strong>uv</strong>: <a href="https://docs.astral.sh/uv/?featured_on=talkpython" target="_blank" >docs.astral.sh</a><br/> <strong>Hatch</strong>: <a href="https://github.com/pypa/hatch?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>PDM</strong>: <a href="https://pdm-project.org/latest/?featured_on=talkpython" target="_blank" >pdm-project.org</a><br/> <strong>Poetry</strong>: <a href="https://python-poetry.org/?featured_on=talkpython" target="_blank" >python-poetry.org</a><br/> <strong>Project Jupyter</strong>: <a href="https://jupyter.org/?featured_on=talkpython" target="_blank" >jupyter.org</a><br/> <strong>JupyterLite</strong>: <a href="https://jupyterlite.readthedocs.io/en/latest/?featured_on=talkpython" target="_blank" >jupyterlite.readthedocs.io</a><br/> <strong>PEP 690: Lazy Imports</strong>: <a href="https://peps.python.org/pep-0690/?featured_on=talkpython" target="_blank" >peps.python.org</a><br/> <strong>PyTorch</strong>: <a href="https://pytorch.org/?featured_on=talkpython" target="_blank" >pytorch.org</a><br/> <strong>Python concurrent.futures</strong>: <a href="https://docs.python.org/3/library/concurrent.futures.html?featured_on=talkpython" target="_blank" >docs.python.org</a><br/> <strong>Python Package Index (PyPI)</strong>: <a href="https://pypi.org/?featured_on=talkpython" target="_blank" >pypi.org</a><br/> <strong>EuroPython</strong>: <a href="https://tickets.europython.eu/?featured_on=talkpython" target="_blank" >tickets.europython.eu</a><br/> <strong>TensorFlow</strong>: <a href="https://www.tensorflow.org/?featured_on=talkpython" target="_blank" >www.tensorflow.org</a><br/> <strong>Keras</strong>: <a href="https://keras.io/?featured_on=talkpython" target="_blank" >keras.io</a><br/> 
<strong>PyCon US</strong>: <a href="https://us.pycon.org/?featured_on=talkpython" target="_blank" >us.pycon.org</a><br/> <strong>NumFOCUS</strong>: <a href="https://numfocus.org/?featured_on=talkpython" target="_blank" >numfocus.org</a><br/> <strong>Python discussion forum (discuss.python.org)</strong>: <a href="https://discuss.python.org/?featured_on=talkpython" target="_blank" >discuss.python.org</a><br/> <strong>Language Server Protocol</strong>: <a href="https://microsoft.github.io/language-server-protocol/?featured_on=talkpython" target="_blank" >microsoft.github.io</a><br/> <strong>mypy</strong>: <a href="https://mypy-lang.org/?featured_on=talkpython" target="_blank" >mypy-lang.org</a><br/> <strong>Pyright</strong>: <a href="https://github.com/microsoft/pyright?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>Pylance</strong>: <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance&featured_on=talkpython" target="_blank" >marketplace.visualstudio.com</a><br/> <strong>Pyrefly</strong>: <a href="https://github.com/facebook/pyrefly?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>ty</strong>: <a href="https://github.com/astral-sh/ty?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>Zuban</strong>: <a href="https://docs.zubanls.com/?featured_on=talkpython" target="_blank" >docs.zubanls.com</a><br/> <strong>Jedi</strong>: <a href="https://jedi.readthedocs.io/en/latest/?featured_on=talkpython" target="_blank" >jedi.readthedocs.io</a><br/> <strong>GitHub</strong>: <a href="https://github.com/?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>PyOhio</strong>: <a href="https://www.pyohio.org/?featured_on=talkpython" target="_blank" >www.pyohio.org</a><br/> <br/> <strong>Watch this episode on YouTube</strong>: <a href="https://www.youtube.com/watch?v=PfRCbeOrUd8" target="_blank" >youtube.com</a><br/> <strong>Episode #532 deep-dive</strong>: <a 
href="https://talkpython.fm/episodes/show/532/2025-python-year-in-review#takeaways-anchor" target="_blank" >talkpython.fm/532</a><br/> <strong>Episode transcripts</strong>: <a href="https://talkpython.fm/episodes/transcript/532/2025-python-year-in-review" target="_blank" >talkpython.fm</a><br/> <br/> <strong>Theme Song: Developer Rap</strong><br/> <strong>🥁 Served in a Flask 🎸</strong>: <a href="https://talkpython.fm/flasksong" target="_blank" >talkpython.fm/flasksong</a><br/> <br/> <strong>---== Don't be a stranger ==---</strong><br/> <strong>YouTube</strong>: <a href="https://talkpython.fm/youtube" target="_blank" ><i class="fa-brands fa-youtube"></i> youtube.com/@talkpython</a><br/> <br/> <strong>Bluesky</strong>: <a href="https://bsky.app/profile/talkpython.fm" target="_blank" >@talkpython.fm</a><br/> <strong>Mastodon</strong>: <a href="https://fosstodon.org/web/@talkpython" target="_blank" ><i class="fa-brands fa-mastodon"></i> @talkpython@fosstodon.org</a><br/> <strong>X.com</strong>: <a href="https://x.com/talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @talkpython</a><br/> <br/> <strong>Michael on Bluesky</strong>: <a href="https://bsky.app/profile/mkennedy.codes?featured_on=talkpython" target="_blank" >@mkennedy.codes</a><br/> <strong>Michael on Mastodon</strong>: <a href="https://fosstodon.org/web/@mkennedy" target="_blank" ><i class="fa-brands fa-mastodon"></i> @mkennedy@fosstodon.org</a><br/> <strong>Michael on X.com</strong>: <a href="https://x.com/mkennedy?featured_on=talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @mkennedy</a><br/></div>
December 29, 2025 08:00 AM UTC
Seth Michael Larson
Nintendo GameCube and Switch “Wrapped” 2025 🎮🎁
This is my last blog post for 2025 💜 Thanks for reading, see you in 2026!
One of my goals for 2025 was to play more games! I've been collecting play activity for my Nintendo Switch, Switch 2, and my Nintendo GameCube. I've published a combined SQLite database with this data for 2025 with games, play sessions, and more. Feel free to dig into this data yourself, I've included some queries and my own thoughts, too.
Here are the questions I answered with this data:
- What were my favorite games this year?
- Which game system did I play the most?
- Which games did I play the most?
- Which games did I play most per-session?
- When did I start and stop playing each game?
- Which game was I most consistently playing?
- When did I play games?
- Which day of the week did I play most?
What were my favorite games this year?
Before we get too deep into quantitative analysis, let's start with the games I enjoyed the most and defined this year for me.
My favorite game for the GameCube in 2025 is Pikmin 2. The Pikmin franchise has always held a close place in my heart being a lover of plants, nature, and dandori.
One of my major video-gaming projects in 2025 was to gather every unique treasure in Pikmin 2. I called this the “International Treasure Hoard” following a similar naming to the “National PokéDex”. This project required buying both the Japanese and PAL versions of Pikmin 2 which I was excited to add to my collection.
The project was inspired by a YouTube short from a Pikmin content creator I enjoy named “JessicaIn3D”. The video shows how there are three different treasure hoards in Pikmin 2, one per region, meaning it's impossible to collect every treasure in a single game.
“You Can't Collect Every Treasure In Pikmin 2” by JessicaIn3D
The project took 4 months to complete, running from June to October. I published a script which analyzed Pikmin 2 save file data using this documentation on the save file format. From here the HTML table in the project page could be automatically updated as I made more progress. I published regular updates to the project page as the project progressed, too.
Pikmin 2 for the GameCube (NTSC-J, NTSC-M, and PAL) and the Switch.
My favorite game for the Switch family of consoles in 2025 is Kirby Air Riders. This game is pure chaotic fun with a heavy dose of nostalgia for me. I was one of the very few that played the original game on the Nintendo GameCube 22 years ago. I still can't believe this gem of a game only sold 750,000 units in the United States. It's amazing to see what is essentially the game my brother and I dreamed of as a sequel: taking everything great about the original release and adding online play, a more expansive world, and near-infinite customization and unlockables.
This game is fast-paced and fits into a busy life easily: a single play session of City Trail or Air Ride mode lasts less than 7 minutes from start to finish. I'll frequently pick this game up to play a few rounds between work and whatever the plans of the evening are. Each round is different, and you can pursue whichever strategy you prefer (combat, speed, gliding, legendary-vehicle hunting) and expect to have a fighting chance in the end.
Kirby Air Ride for the GameCube (NTSC-J and NTSC-M) and Kirby Air Riders for the Switch 2.
Which game system did I play the most?
Even with the Switch and Switch 2 bundled into one category I played the GameCube more in 2025. This was a big year for me and GameCube: I finally modded a platinum cube and my childhood indigo cube with the FlippyDrive and ETH2GC. I've got a lot more in store for the GameCube next year, too.
| System | Duration |
|---|---|
| GameCube | 41h 55m |
| Switch | 35h 45m |
SQLite query: SELECT game_system, SUM(duration) AS d FROM sessions WHERE STRFTIME('%Y', date) = '2025' GROUP BY game_system ORDER BY d DESC;
Here's the same data stylized like the GitHub contributor graph. Blue squares represent days when I played GameCube and red squares I played the Switch or Switch 2, starting in June 2025:
I also played Sonic Adventure on a newly-purchased SEGA Dreamcast for the first time in 2025. Unfortunately I don't have a way to track play data for the Dreamcast (yet?), but my experience with the Dreamcast and Sonic Adventure will likely get its own short blog post eventually. Stay tuned.
Which games did I play the most?
I played 9 unique titles this year (including region and platform variants), but which ones did I play the most?
| Game | Duration |
|---|---|
| Pikmin 2 | 27h 11m |
| Animal Crossing | 16h 47m |
| Kirby Air Riders | 16h 15m |
| Mario Kart World | 10h 25m |
| Super Mario Odyssey | 4h 45m |
| Pikmin 4 | 1h 5m |
| Overcooked! 2 | 45m |
| Kirby's Airride | 15m |
| Sonic Origins | 10m |
SQLite query: SELECT game_name, SUM(duration) AS d FROM sessions WHERE STRFTIME('%Y', date) = '2025' GROUP BY game_name ORDER BY d DESC;
That's a lot of Pikmin 2, huh? This year I collected all 245 unique treasures across the three regions of Pikmin 2 (JP, US, and PAL), including a 100% complete save file for the Japanese region. This is the first time I collected all treasures for a single Pikmin 2 play-through.
We can break down how much time was spent in each region and system for Pikmin 2:
| System | Region | Duration |
|---|---|---|
| GameCube | US | 9h 24m |
| GameCube | JP | 9h 17m |
| GameCube | PAL | 6h 9m |
| Switch | US | 2h 20m |
SQLite query: SELECT game_system, game_region, SUM(duration) AS d FROM sessions WHERE STRFTIME('%Y', date) = '2025' AND game_name = 'Pikmin 2' GROUP BY game_system, game_region ORDER BY d DESC;
You can see I even started playing the Switch edition of Pikmin 2 but bailed on that idea pretty early. Playing through the same game 3 times in a year was already enough :) The US and JP versions were ~9 hours each, with PAL receiving less play time. This is due to PAL only having 10 unique treasures, so I was able to speed-run most of the game.
Which games did I play most per session?
This query sorta indicates “binge-ability”, when I did play a title how long was that play session on average? Super Mario Odyssey just barely took the top spot here, but the two Switch 2 titles I own were close behind.
| Name | Duration |
|---|---|
| Super Mario Odyssey | 57m |
| Mario Kart World | 56m |
| Kirby Air Riders | 51m |
| Pikmin 2 | 49m |
| Animal Crossing | 47m |
| Overcooked! 2 | 45m |
| Pikmin 4 | 32m |
| Kirby's Airride | 15m |
| Sonic Origins | 5m |
SQLite query: SELECT game_name, SUM(duration)/COUNT(DISTINCT date) AS d FROM sessions WHERE STRFTIME('%Y', date) = '2025' GROUP BY game_name ORDER BY d DESC;
When did I start and stop playing each game?
I only have enough time to focus on one game at a time, so there is a pretty linear history of which game is top-of-mind for me at any one time. From this query we can construct a linear history:
- Pikmin 2 (June→Oct)
- Animal Crossing (July→Aug)
- Super Mario Odyssey (Oct)
- Pikmin 4 (Nov, “Decor Pikmin”)
- Mario Kart World (July→Nov)
- Kirby Air Riders (Nov→Dec)
I still want to return to Super Mario Odyssey, I was having a great time with the game! It's just that Kirby Air Riders came out and stole my attention.
| Game | First played | Last played |
|---|---|---|
| Pikmin 2 | 2025-06-01 | 2025-10-06 |
| Mario Kart World | 2025-07-20 | 2025-11-17 |
| Animal Crossing | 2025-07-29 | 2025-09-08 |
| Sonic Origins | 2025-08-11 | 2025-08-25 |
| Super Mario Odyssey | 2025-10-13 | 2025-10-21 |
| Kirby Air Riders | 2025-11-07 | 2025-12-21 |
| Pikmin 4 | 2025-11-10 | 2025-11-12 |
SQLite query: SELECT game_name, MIN(date) AS fp, MAX(date) AS lp FROM sessions WHERE STRFTIME('%Y', date) = '2025' GROUP BY game_name ORDER BY fp ASC;
Which game was I most consistently playing?
We can take the data from the “Days” column above and use that as a divisor for the number of unique days each game was played. This will give a sense of how often I was playing a game within the time span that I was “active” for a game:
| Game | % | Days Played | Span |
|---|---|---|---|
| Pikmin 4 | 100% | 2 | 2 |
| Super Mario Odyssey | 63% | 5 | 8 |
| Animal Crossing | 51% | 21 | 41 |
| Kirby Air Riders | 43% | 19 | 44 |
| Pikmin 2 | 26% | 33 | 127 |
| Sonic Origins | 14% | 2 | 14 |
| Mario Kart World | 9% | 11 | 120 |
SQLite query: SELECT game_name, COUNT(DISTINCT date) AS played, (STRFTIME('%j', MAX(date)) - STRFTIME('%j', MIN(date))) AS days FROM sessions WHERE STRFTIME('%Y', date) = '2025' GROUP BY game_name ORDER BY MIN(date) ASC;
If we look at total year gaming “saturation” for 2025 and June-onwards (214 days):
| Days Played | % Days (2025) | % Days (>=June) |
|---|---|---|
| 89 | 24% | 42% |
SQLite query: SELECT COUNT(DISTINCT date) FROM sessions WHERE STRFTIME('%Y', date) = '2025';
When did I play games?
Looking at the year, I didn't start playing games on either system this year until June. That lines up with me receiving my GameCube FlippyDrives which I had pre-ordered in 2024. After installing these modifications to my GameCube I began playing games more regularly again :)
| Month | Duration |
|---|---|
| June | 10h 4m |
| July | 9h 40m |
| August | 18h 26m |
| September | 7h 22m |
| October | 10h 0m |
| November | 15h 5m |
| December | 7h 0m |
SQLite query: SELECT STRFTIME('%m', date) AS m, SUM(duration) FROM sessions WHERE STRFTIME('%Y', date) = '2025' GROUP BY m ORDER BY m ASC;
August was the month with the most play! This was due entirely to playing Animal Crossing Deluxe (~16 hours), a mod by Cuyler for Animal Crossing on the GameCube. Animal Crossing feels the best when you play for short sessions each day, which is why I was playing so often.
| Game | Duration |
|---|---|
| Animal Crossing | 15h 41m |
| Mario Kart World | 2h 15m |
| Pikmin 2 | 19m |
| Sonic Origins | 10m |
SQLite query: SELECT game_name, SUM(duration) FROM sessions WHERE STRFTIME('%Y-%m', date) = '2025-08' GROUP BY game_name;
Which day of the week did I play most?
Unsurprisingly, weekends tend to be the days on average with the longest play sessions. Sunday just barely takes the highest average play duration per day. Wednesday, Thursday, and Friday have the lowest play activity as these days are reserved for board-game night, seeing family, and date night respectively :)
| Day | Duration | Days | Average |
|---|---|---|---|
| Sun | 16h 16m | 15 | 1h 5m |
| Mon | 13h 52m | 17 | 48m |
| Tues | 14h 9m | 16 | 53m |
| Wed | 11h 17m | 15 | 45m |
| Thur | 6h 35m | 9 | 43m |
| Fri | 5h 45m | 8 | 43m |
| Sat | 9h 42m | 9 | 1h 4m |
SQLite query: SELECT STRFTIME('%w', date) AS day_of_week, SUM(duration), COUNT(DISTINCT date), SUM(duration)/COUNT(DISTINCT date) FROM sessions WHERE STRFTIME('%Y', date) = '2025' GROUP BY day_of_week ORDER BY day_of_week ASC;
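Queries like the ones above can be tried out against a toy in-memory database before downloading the full dataset. The `sessions` schema here (ISO-text dates, durations in minutes) is my assumption inferred from the queries; the published database may differ:

```python
import sqlite3

# Build a tiny stand-in for the published sessions database.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE sessions "
    "(game_name TEXT, game_system TEXT, date TEXT, duration INTEGER)"
)
con.executemany(
    "INSERT INTO sessions VALUES (?, ?, ?, ?)",
    [
        ("Pikmin 2", "GameCube", "2025-06-01", 65),
        ("Pikmin 2", "GameCube", "2025-06-08", 40),
        ("Kirby Air Riders", "Switch", "2025-11-07", 50),
    ],
)

# Same shape as the "which games did I play the most" query.
rows = con.execute(
    "SELECT game_name, SUM(duration) AS d FROM sessions "
    "WHERE STRFTIME('%Y', date) = '2025' "
    "GROUP BY game_name ORDER BY d DESC"
).fetchall()
print(rows)  # [('Pikmin 2', 105), ('Kirby Air Riders', 50)]
```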
Thanks for keeping RSS alive! ♥
December 29, 2025 12:00 AM UTC
December 28, 2025
Mark Dufour
A (biased) Pure Python Performance Comparison
The following is a performance comparison of several (pure) Python implementations, for a large part of the Shed Skin example set. I left out some of the examples that would result in an unfair comparison (mostly because of randomization), or that were too interactive to easily measure. Obviously this comparison is very biased, and probably unfair in some way to the other projects (though I've tried to be fair, for example by letting PyPy stabilize before measuring).
This first plot shows the speedups versus CPython 3.10, for CPython 3.14, Nuitka, PyPy and Shed Skin.
Shed Skin is able to speed up the examples by an average factor of about 29 times (not percent, times :)), while PyPy is able to speed up the examples by an average factor of about 16 times. Given that PyPy has its hands tied behind its back trying to support unrestricted Python code, and was not optimized specifically for these examples (that I am aware of), that is actually still quite an impressive result.
As for the few cases where PyPy performs better than Shed Skin, this appears to be mainly because of PyPy being able to optimize away heap allocations for short-lived objects (in many cases, custom Vector(x,y,z) instances). In a few cases also, the STL unordered_map that Shed Skin uses to implement dictionaries appears to perform poorly compared to more modern implementations. Of course it is possible for Shed Skin to improve in these areas with future releases.
Note that some of the examples can run even faster with Shed Skin by providing --nowrap/--nobounds options, which disable wrap-around/bounds-checking respectively. I'm not sure if PyPy has any options to make it run faster, at the cost of certain features (in the distant past there was talk of RPython - does that still exist?).
As the CPython 3.14 and Nuitka results are a bit hard to see in the above plot, here is the same plot but with a logarithmic y-scale:
CPython 3.14 is about 60% faster on average for these examples than CPython 3.10, which to me is actually very promising for the future. While Nuitka outperforms CPython 3.10 by about 30% on average, unfortunately it cannot match the improvements in CPython since.
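An "average speedup factor" over a benchmark set is usually computed as a geometric mean, since ratios multiply rather than add; the post doesn't say which mean it used, so this is only one reasonable convention. The timings below are hypothetical placeholders, not numbers from the comparison:

```python
import math

# Hypothetical (baseline_seconds, optimized_seconds) per example;
# real numbers would come from timing the Shed Skin example set.
timings = {
    "nbody":    (12.0, 0.4),
    "sudoku":   (8.0,  0.3),
    "raytrace": (20.0, 0.7),
}

# Per-example speedup factor: baseline time divided by optimized time.
speedups = {name: base / opt for name, (base, opt) in timings.items()}

# Geometric mean: exp of the mean of the logs of the ratios.
geo_mean = math.exp(sum(math.log(s) for s in speedups.values()) / len(speedups))
print({k: round(v, 1) for k, v in speedups.items()}, round(geo_mean, 1))
```

The geometric mean keeps one outlier benchmark from dominating the average the way an arithmetic mean of ratios would.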
If there are any CS students out there who would like to help improve Shed Skin, please let me know. I think especially memory optimizations (where PyPy still seems to have an edge) would be a great topic for a Master's Thesis!
December 28, 2025 04:31 AM UTC
December 26, 2025
"Michael Kennedy's Thoughts on Technology"
DevOps Python Supply Chain Security

In my last article, “Python Supply Chain Security Made Easy” I talked about how to automate pip-audit so you don’t accidentally ship malicious Python packages to production. While there was defense in depth with uv’s delayed installs, there wasn’t much safety beyond that for developers themselves on their machines.
This follow-up fixes that so even dev machines stay safe.
Defending your dev machine
My recommendation is instead of installing directly into a local virtual environment and then running pip-audit, create a dedicated Docker image meant for testing dependencies with pip-audit in isolation.
Our workflow can go like this.
First, we update your local dependencies file:
uv pip compile requirements.piptools --output-file requirements.txt --exclude-newer "1 week"
This will update the requirements.txt file (or tweak the command to update your uv.lock file instead), but it doesn’t install anything.
Second, run a command that uses this new requirements file inside a temporary Docker container to install the requirements and run pip-audit on them.
Third, only if that pip-audit test succeeds, install the updated requirements into your local venv.
uv pip install -r requirements.txt
The pip-audit docker image
What do we use for our Docker testing image? There are of course a million ways to do this. Here’s one optimized for building Python packages that deeply leverages uv’s and pip-audit’s caching to make subsequent runs much, much faster.
Create a Dockerfile with this content:
# Image for installing python packages with uv and testing with pip-audit
# Saved as Dockerfile
FROM ubuntu:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get autoremove -y
RUN apt-get -y install curl
# Dependencies for building Python packages
RUN apt-get install -y gcc
RUN apt-get install -y build-essential
RUN apt-get install -y clang
RUN apt-get install -y openssl
RUN apt-get install -y checkinstall
RUN apt-get install -y libgdbm-dev
RUN apt-get install -y libc6-dev
RUN apt-get install -y libtool
RUN apt-get install -y zlib1g-dev
RUN apt-get install -y libffi-dev
RUN apt-get install -y libxslt1-dev
ENV PATH=/venv/bin:$PATH
ENV PATH=/root/.cargo/bin:$PATH
ENV PATH=/root/.local/bin/:$PATH
ENV UV_LINK_MODE=copy
# Install uv
RUN curl -LsSf https://astral.sh/uv/install.sh | sh
# set up a virtual env to use for temp dependencies in isolation.
RUN --mount=type=cache,target=/root/.cache uv venv --python 3.14 /venv
# test that uv is working
RUN uv --version
WORKDIR "/"
# Install pip-audit
RUN --mount=type=cache,target=/root/.cache uv pip install --python /venv/bin/python3 pip-audit
This installs a bunch of Linux libraries used for edge-case builds of Python packages. It takes a moment, but you only need to build the image once. Then you’ll run it again and again. If you want to use a newer version of Python later, change the version in uv venv --python 3.14 /venv. Even then on rebuilds, the apt-get steps are reused from cache.
Next you build with a fixed tag so you can create aliases to run using this image:
# In the same folder as the Dockerfile above.
docker build -t pipauditdocker .
Finally, we need to run the container with a few bells and whistles. Add caching via a volume so subsequent runs are very fast: -v pip-audit-cache:/root/.cache. And map a volume so whatever working directory you are in will find the local requirements.txt: -v \"\$(pwd)/requirements.txt:/workspace/requirements.txt:ro\"
Here is the alias to add to your .bashrc or .zshrc accomplishing this:
alias pip-audit-proj="echo '🐳 Launching isolated test env in Docker...' && \
docker run --rm \
-v pip-audit-cache:/root/.cache \
-v \"\$(pwd)/requirements.txt:/workspace/requirements.txt:ro\" \
pipauditdocker \
/bin/bash -c \"echo '📦 Installing requirements from /workspace/requirements.txt...' && \
uv pip install --quiet --python /venv/bin/python3 -r /workspace/requirements.txt && \
echo '🔍 Running pip-audit security scan...' && \
/venv/bin/pip-audit \
--ignore-vuln CVE-2025-53000 \
--ignore-vuln PYSEC-2023-242 \
--skip-editable\""
That’s it! Once you reload your shell, all you have to do is type pip-audit-proj when you’re in the root of a project that contains your requirements.txt file. You should see the install and audit output scroll by: slow the first time, fast afterwards.

Protecting Docker in production too
Let’s handle one more situation while we are at it. You’re running your Python app IN Docker. Part of the Docker build configures the image and installs your dependencies. We can add a pip-audit check there too:
# Dockerfile for your app (different than validation image above)
# All the steps to copy your app over and configure the image ...
# After creating a venv in /venv and copying your requirements.txt to /app
# Check for any sketchy packages.
# We are using mount rather than a volume because
# we want to cache build time activity, not runtime activity.
RUN --mount=type=cache,target=/root/.cache uv pip install --python /venv/bin/python3 --upgrade pip-audit
RUN --mount=type=cache,target=/root/.cache /venv/bin/pip-audit --ignore-vuln CVE-2025-53000 --ignore-vuln PYSEC-2023-242 --skip-editable
# ENTRYPOINT ... for your app
Conclusion
There you have it: two birds, one Docker stone. Our first Dockerfile built a reusable image named pipauditdocker for running isolated tests against a requirements file. The second one demonstrates how to make a docker/docker compose build fail outright if there is a bad dependency, saving us from letting one slip into production.
Cheers
Michael
December 26, 2025 11:53 PM UTC
Seth Michael Larson
Getting started with Playdate on Ubuntu 🟨
Trina got me a Playdate for Christmas this year! I've always been intrigued by this console, as it is highly constrained in terms of pixel and color-depth (400x240, 2 colors), but also provides many helpful resources for game development such as a software development kit (SDK) and a simulator to quickly test games during development.
I first discovered software programming as an amateur game developer using BYOND, so “returning to my roots” and doing some game development feels like a fun and fulfilling diversion from the current direction software is taking. Plus, I now have a reason to learn a new programming language: Lua!
Running software on the Playdate!
Getting started with Playdate on Ubuntu
Here's what I did to quickly get started with a Playdate development environment on my Ubuntu 24.04 laptop:
- Unbox the Playdate and start charging the console so it's charged enough for the next steps involving the console.
- Create a Playdate account.
- Download the SDK. For Linux you need to extract it to your desired directory (I chose ~/PlaydateSDK) and run the setup script (sudo ~/PlaydateSDK/setup.sh).
- Add the SDK bin to PATH and PLAYDATE_SDK_PATH environment variables in your ~/.bashrc.
- Start the simulator (PlaydateSimulator) and register the simulator to your Playdate account when prompted.
- Turn on the console and play the startup tutorial. Connect to Wi-Fi and let the console update.
- When prompted by the console, register the console to your Playdate account.
- Download and install VSCode. I used the .deb installer for Ubuntu.
- Disable all AI features in VSCode by adding "chat.disableAIFeatures": true to your settings.json.
- Copy the .vscode directory from this Playdate template project. The author of this template, SquidGod, has multiple video guides about Playdate development.
- Select "Extensions" in VSCode and install the "Lua" and "Playdate Debug" extensions.
- Create two directories: source and builds. Within the source directory create a file called main.lua. This file will be the entry-point into your Playdate application.
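The PATH and PLAYDATE_SDK_PATH step above can be sketched as the following ~/.bashrc lines (the SDK location is my choice of ~/PlaydateSDK; adjust to wherever you extracted it):

```shell
# Tell Playdate tooling where the SDK lives; assumes it was extracted to ~/PlaydateSDK
export PLAYDATE_SDK_PATH="$HOME/PlaydateSDK"
# Put pdc, pdutil, and PlaydateSimulator on PATH
export PATH="$PLAYDATE_SDK_PATH/bin:$PATH"
```

Reload your shell (or source ~/.bashrc) for the variables to take effect.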
That's it, your Playdate development environment should be ready to use.
“Hello, world” on the Playdate
Within source/main.lua put the following Lua code:
import "CoreLibs/graphics"
import "CoreLibs/ui"
-- Localizing commonly used globals
local pd <const> = playdate
local gfx <const> = playdate.graphics
function playdate.update()
    gfx.drawTextAligned(
        "Hello, world",
        200, 30, kTextAlignment.center
    )
end
Try building and running this in the simulator with Ctrl+Shift+B.
You should see our "Hello, world" message in the simulator.
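The VSCode task isn't required, either: the SDK ships a command-line compiler, pdc, that builds a runnable .pdx bundle directly. A sketch, assuming the SDK's bin directory is on PATH, and guarded so it's a harmless no-op on machines without the SDK:

```shell
# Compile source/ into a .pdx bundle, then open it in the simulator.
# Guarded with command -v so the script does nothing if the SDK isn't installed.
if command -v pdc >/dev/null 2>&1; then
    pdc source builds/HelloWorld.pdx
    PlaydateSimulator builds/HelloWorld.pdx
else
    echo "Playdate SDK not on PATH; skipping build"
fi
```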
Running “Hello, world” on real hardware
Next is getting your game running on an actual Playdate console. Connect the Playdate to your computer using the USB cable and make sure the console is awake.
Start your game in the simulator (Ctrl+Shift+B)
and then once the simulator starts select Device > Upload Game to Device
in the menus or use the hotkey Ctrl+U.
Uploading the game to the Playdate console takes a few seconds, so be patient. The console will show a message like “Sharing DATA segment with USB. Will reboot when ejected”. You can select the "Home" button in the Playdate console menu to stop the game.
Making a network request
One of my initial hesitations with buying a Playdate was that it didn't originally ship with network connectivity within games, despite supporting Wi-Fi. This is no longer the case, as this year Playdate OS 2.7 added support for HTTP and TCP networking.
So immediately after my "Hello world" game, I wanted to try this new feature.
I created the following small application that sends an HTTP request
after pressing the A button:
import "CoreLibs/graphics"
import "CoreLibs/ui"
-- Localizing commonly used globals
local pd <const> = playdate
local gfx <const> = playdate.graphics
local net <const> = playdate.network
local networkEnabled = false
function networkHttpRequest()
    local host = "sethmlarson.dev"
    local port = 443
    local useHttps = true
    local req = net.http.new(
        host, port, useHttps, "Making an HTTP request"
    )
    local path = "/"
    local headers = {}
    req:get(path, headers)
end
function networkEnabledCallback(err)
    networkEnabled = true
end
function init()
    net.setEnabled(true, networkEnabledCallback)
end
function playdate.update()
    gfx.clear()
    if networkEnabled then
        gfx.drawTextAligned(
            "Network enabled",
            200, 30, kTextAlignment.center
        )
        if pd.buttonJustPressed(pd.kButtonA) then
            networkHttpRequest()
        end
    else
        gfx.drawTextAligned(
            "Network disabled",
            200, 30, kTextAlignment.center
        )
    end
end
init()
init()
First I tried running this program with a local HTTP server
on localhost:8080 with useHttps set to false and
was able to capture this HTTP request using Wireshark:
0000 47 45 54 20 2f 20 48 54 54 50 2f 31 2e 31 0d 0a GET / HTTP/1.1..
0010 48 6f 73 74 3a 20 6c 6f 63 61 6c 68 6f 73 74 0d Host: localhost.
0020 0a 55 73 65 72 2d 41 67 65 6e 74 3a 20 50 6c 61 .User-Agent: Pla
0030 79 64 61 74 65 2f 53 69 6d 0d 0a 43 6f 6e 6e 65 ydate/Sim..Conne
0040 63 74 69 6f 6e 3a 20 63 6c 6f 73 65 0d 0a 0d 0a ction: close....
So we can see that Playdate HTTP requests are quite minimal, only sending
a Host, User-Agent and Connection: close header by default.
Keep-Alive and other headers can be optionally configured.
The User-Agent for the Playdate simulator was Playdate/Sim.
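To reproduce the local test yourself, any HTTP server on the development machine will do; here's a sketch using Python's stdlib server on port 8080, matching the setup above. (Bind to 0.0.0.0 instead of 127.0.0.1 if you want a real console on your Wi-Fi to reach it.)

```shell
# Serve the current directory over HTTP on port 8080 in the background,
# make one request to confirm it is up, then shut the server down.
python3 -m http.server 8080 --bind 127.0.0.1 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8080/)
echo "server answered with HTTP $STATUS"
kill "$SERVER_PID"
```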
I then tested on real hardware, targeting my own website sethmlarson.dev:443
with useHttps set to true. This resulted in the same request being sent,
with a User-Agent of Playdate/3.0.2.
There's no doubt lots of experimentation ahead for what's possible with
a networked Playdate. That's all for now, happy cranking!
Thanks for keeping RSS alive! ♥