Planet Python
Last update: January 04, 2026 01:44 PM UTC
January 04, 2026
EuroPython
Humans of EuroPython: Marina Moro López
EuroPython wouldn’t exist if it weren’t for all the volunteers who put in countless hours to organize it. Whether it’s contracting the venue, selecting and confirming talks & workshops or coordinating with speakers, hundreds of hours of loving work have been put into making
January 03, 2026
Hugo van Kemenade
Localising xkcd
I gave a lightning talk at a bunch of conferences in 2025 about some of the exciting new things coming in Python 3.14, including template strings.
One thing we can use t-strings for is to prevent SQL injection. The user gives you an untrusted T-string, and you can sanitise it, before using it in a safer way.
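The t-string syntax itself needs Python 3.14, but the safe endpoint that sanitised t-string handling should compile down to is the classic parameterized query. Here is a minimal sketch of that pattern (my illustration, not code from the talk), using `sqlite3` and the famous Bobby Tables payload:

```python
import sqlite3

# A sketch of the *safe* pattern that sanitised t-string handling should
# produce: SQL text and untrusted values travel separately.
# (Illustrative code, not from the talk; t-strings need Python 3.14+.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

name = "Robert'); DROP TABLE students;--"  # the Bobby Tables payload

# Parameterized query: the driver never interpolates `name` into the SQL.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))

row = conn.execute("SELECT name FROM students").fetchone()
print(row[0])  # the hostile string is stored harmlessly as data
```

The table survives because the payload is bound as data, never parsed as SQL.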
I illustrated this with xkcd 327, titled “Exploits of a Mom”, but commonly known as “Little Bobby Tables”.
I localised most of the slides for each PyCon I attended, including this comic. Here they are!

PyCon Italia #
May, Bologna
PyCon Greece #
August, Athens
PyCon Estonia #
October, Tallinn
PyCon Finland #
October, Jyväskylä
PyCon Sweden #
October, Stockholm
Thanks #
Thanks to Randall Munroe for licensing the comic under a Creative Commons Attribution-NonCommercial 2.5 License. These adaptations are therefore licensed the same way.
Finally, here are links for 2026, and I recommend them all:
- PyCon Italia, 27-30 May: the CFP is open until 6th January
- PyCon Estonia, 8-9 October
- PyCon Greece, 12-13 October
- PyCon Sweden, TBA
- PyCon Finland, TBA
January 02, 2026
Real Python
The Real Python Podcast – Episode #278: PyCoder's Weekly 2025 Top Articles & Hidden Gems
PyCoder's Weekly included over 1,500 links to articles, blog posts, tutorials, and projects in 2025. Christopher Trudeau is back on the show this week to help wrap up everything by sharing some highlights and uncovering a few hidden gems from the pile.
Glyph Lefkowitz
The Next Thing Will Not Be Big
Disruption, too, will be disrupted.
Seth Michael Larson
New ROM dumping tool for SNES & Super Famicom from Epilogue
January 01, 2026
Python Morsels
Implicit string concatenation
Python automatically concatenates adjacent string literals thanks to implicit string concatenation. This feature can sometimes lead to bugs.
Strings next to each other
Take a look at this line of Python code:
>>> print("Hello" "world!")
It looks kind of like we're passing multiple arguments to the built-in print function.
But we're not:
>>> print("Hello" "world!")
Helloworld!
If we pass multiple arguments to print, Python will put spaces between those values when printing:
>>> print("Hello", "world!")
Hello world!
But that's not what Python was doing.
Our code from before didn't have commas to separate the arguments (note the missing comma between "Hello" and "world!"):
>>> print("Hello" "world!")
Helloworld!
How is that possible?
This seems like it should have resulted in a SyntaxError!
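Before the answer, here's a sketch (mine, not the article's) of the classic bug this feature enables, plus the case where it's genuinely useful:

```python
# A classic bug enabled by implicit concatenation: a missing comma in a
# list of strings silently merges two items instead of raising an error.
colors = [
    "red",
    "green"   # <-- missing comma!
    "blue",
]
print(len(colors))  # 2, not 3
print(colors[1])    # greenblue

# The same mechanism is handy on purpose, for splitting long strings:
message = (
    "This is one long sentence "
    "split across two source lines."
)
print(message)
```

Linters such as flake8 can flag the accidental form, since the intentional and buggy versions look identical to the parser.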
Implicit string concatenation
A string literal is the …
Read the full article: https://www.pythonmorsels.com/implicit-string-concatenation/
Seth Michael Larson
Cutting spritesheets like cookies with Python & Pillow 🍪
December 31, 2025
The Python Coding Stack
Mulled Wine, Mince Pies, and More Python
2025 in review here on The Python Coding Stack
December 31, 2025 09:12 PM UTC
Django Weblog
DSF member of the month - Clifford Gama
For December 2025, we welcome Clifford Gama as our DSF member of the month! ⭐
Clifford contributed to Django core with more than 5 PRs merged in a few months! He is part of the Triage and Review Team. He has been a DSF member since October 2024.
You can learn more about Clifford by visiting Clifford's website and his GitHub Profile.
Let’s spend some time getting to know Clifford better!
Can you tell us a little about yourself (hobbies, education, etc)
I'm Clifford. I hold a Bachelor's degree in Mechanical Engineering from the University of Zimbabwe.
How did you start using Django?
During my first year in college, I was also exploring open online courses on edX and I came across CS50's introduction to web development. After watching the introductory lecture -- which introduced me to git and GitHub -- I discovered Django's excellent documentation and got started on the polls tutorial. The docs were so comprehensive and helpful I never felt the need to return to CS50. (I generally prefer comprehensive first-hand, written learning material over summaries and videos.)
At the time, I had already experimented with flask, but I guess mainly because I didn't know SQL and because flask didn't have an ORM, I never quite picked it up. With Django I felt like I was taking a learning fast-track where I'd learn everything I needed in one go!
And that's how I started using Django.
What projects are you working on now?
At the moment, I’ve been focusing on improving my core skills in preparation for remote work, so I haven’t been starting new projects because of that.
That said, I’ve been working on a client project involving generating large, image-heavy PDFs with WeasyPrint, where I’ve been investigating performance bottlenecks and ways to speed up generation time, which was previously around 30 minutes 😱.
What are you learning about these days?
I’ve been reading Boost Your Git DX by Adam Johnson and learning how to boost my Git and shell developer experience, which has been a great read. Aside from that, inspired by some blogs and talks by Haki Benita, I am also learning about software design and performance. Additionally, I am working on improving my general fluency in Python.
What other framework do you know and if there is anything you would like to have in Django if you had magical powers?
I am not familiar with any other frameworks, but if I had magic powers I'd add production-grade static-file serving in Django.
Which Django libraries are your favorite (core or 3rd party)?
The ORM, Wagtail and Django's admin.
What are the top three things in Django that you like?
- The community
- The documentation
- Djangonaut Space and the way new contributors are welcomed
How did you start contributing to Django?
I started contributing to Django in August last year, when I discovered the community; that was a real game changer for me. Python was my first course at university, and I loved it because it was creative and there was no limit to what I could build with it.
Whenever I saw a problem in another course that could be solved programmatically, I jumped at it. My proudest project from that time was building an NxN matrix determinant calculator after learning about recursion and spotting the opportunity in an algebra class.
After COVID lockdown, I gave programming up for a while. With more time on my hands, I found myself prioritizing programming over core courses, so I took a break. Last year, I returned to it when I faced a problem that I could only solve with Django. My goal was simply to build an app quickly and go back to being a non-programmer, but along the way I thought I found a bug in Django, filed a ticket, and ended up writing a documentation PR. That’s when I really discovered the Django community.
What attracted me most was that contributions are held to high standards, but experienced developers are always ready to help you reach them. Contributing was collaborative, pushing everyone to do their best. It was a learning opportunity too good to pass up.
How did you join the Triage and Review team?
Around the time I contributed my first PR, I started looking at open tickets to find more to work on and keep learning.
Sometimes a ticket was awaiting triage, in which case the first step was to triage it before assigning it to myself and working on it. Other times the ticket I wanted was already taken, in which case I'd look at its PR if one was available. Reviewing a PR can be a faster way to learn about a particular part of the codebase, because someone has already done most of the investigative work, so I reviewed PRs as well.
After a while I got an invitation from Sarah Boyce, one of the fellows, to join the team. I didn't even know that I could join before I got the invitation, so I was thrilled!
How is the work going so far?
It’s been rewarding. I’ve gained familiarity with the Django codebase and real experience collaborating with others, which already exceeds what I expected when I started contributing.
One unexpected highlight was forming a friendship through one of the first PRs I reviewed.
SiHyun Lee and I are now both part of the triage and review team, and I’m grateful for that connection.
What are your hobbies or what do you do when you’re not working?
My main hobby is storytelling in a broad sense. In fact, it was a key reason I returned to programming after a long break. I enjoy discovering enduring stories from different cultures, times, and media—ranging from the deeply personal and literary to the distant and philosophical. I recently watched two Japanese classics and found I quite love them. I wrote about one of the films on my blog, and I also get to practice my Japanese, which I’ve been learning on Duolingo for about two years. I also enjoy playing speed chess.
Do you have any suggestions for people who would like to start triage and review tickets and PRs?
If there’s an issue you care about, or one that touches a part of the codebase you’re familiar with or curious about, jump in. Tickets aren’t always available to work on, but reviews always are, and they’re open to everyone. Reviewing helps PRs move faster, including your own if you have any open, sharpens your understanding of a component, and often clarifies the problem itself.
As Simon Charette puts it:
“Triaging issues and spending time understanding them is often more valuable than landing code itself, as it strengthens our common understanding of the problem and allows us to build a consistent experience across the diverse interfaces Django provides.”
And you can put it on your CV!
Is there anything else you’d like to say?
I’m grateful to everyone who contributes to making every part of Django what it is. I’m particularly thankful to whoever nominated me to be the DSF Member of the month.
I am optimistic about the future of Django. Django 6.1 is already shaping up with new features, and there are new projects like Django Bolt coming up.
Happy new year 🎊!
Thank you for doing the interview, Clifford and happy new year to the Django community 💚!
December 31, 2025 08:42 PM UTC
"Michael Kennedy's Thoughts on Technology"
Python Numbers Every Programmer Should Know
There are numbers every Python programmer should know. For example, how fast or slow is it to add an item to a list in Python? What about opening a file? Is that less than a millisecond? Is there something that makes that slower than you might have guessed? If you have a performance sensitive algorithm, which data structure should you use? How much memory does a floating point number use? What about a single character or the empty string? How fast is FastAPI compared to Django?
I wanted to take a moment and write down performance numbers specifically focused on Python developers. Below you will find an extensive table of such values. They are grouped by category. And I provided a couple of graphs for the more significant analysis below the table.
Acknowledgements: Inspired by Latency Numbers Every Programmer Should Know and similar resources.
Source code for the benchmarks
This article is posted without any code. I encourage you to dig into the benchmarks. The code is available on GitHub at:
https://github.com/mikeckennedy/python-numbers-everyone-should-know
📊 System Information
The benchmarks were run on the system described in this table. Yours may be faster or slower, but the relative comparisons are what matter most.
| Property | Value |
|---|---|
| Python Version | CPython 3.14.2 |
| Hardware | Mac Mini M4 Pro |
| Platform | macOS Tahoe (26.2) |
| Processor | ARM |
| CPU Cores | 14 physical / 14 logical |
| RAM | 24 GB |
| Timestamp | 2025-12-30 |
TL;DR: Python Numbers
This first version is a quick “pyramid” of growing time/size for common Python ops. There is much more detail below.
Python Operation Latency Numbers (the pyramid)
| Operation | Time | Relative |
|---|---|---|
| Attribute read (`obj.x`) | 14 ns | — |
| Dict key lookup | 22 ns | 1.5x attr |
| Function call (empty) | 22 ns | — |
| List append | 29 ns | 2x attr |
| f-string formatting | 65 ns | 3x function |
| Exception raised + caught | 140 ns | 10x attr |
| `orjson.dumps()` complex object | 310 ns (0.3 μs) | — |
| `json.loads()` simple object | 714 ns (0.7 μs) | 2x orjson |
| `sum()` 1,000 integers | 1,900 ns (1.9 μs) | 3x json |
| SQLite SELECT by primary key | 3,600 ns (3.6 μs) | 5x json |
| Iterate 1,000-item list | 7,900 ns (7.9 μs) | 2x SQLite read |
| Open and close file | 9,100 ns (9.1 μs) | 2x SQLite read |
| asyncio `run_until_complete` (empty) | 28,000 ns (28 μs) | 3x file open |
| Write 1KB file | 35,000 ns (35 μs) | 4x file open |
| MongoDB `find_one()` by `_id` | 121,000 ns (121 μs) | 3x write 1KB |
| SQLite INSERT (with commit) | 192,000 ns (192 μs) | 5x write 1KB |
| Write 1MB file | 207,000 ns (207 μs) | 6x write 1KB |
| `import json` | 2,900,000 ns (3 ms) | 15x write 1MB |
| `import asyncio` | 17,700,000 ns (18 ms) | 6x import json |
| `import fastapi` | 104,000,000 ns (104 ms) | 6x import asyncio |
Python Memory Numbers (the pyramid)
| Object | Size | Relative |
|---|---|---|
| Float | 24 bytes | — |
| Small int (cached -5 to 256) | 28 bytes | — |
| Empty string | 41 bytes | — |
| Empty list | 56 bytes | 2x int |
| Empty dict | 64 bytes | 2x int |
| Empty set | 216 bytes | 8x int |
| `__slots__` class (5 attrs) | 212 bytes | 8x int |
| Regular class (5 attrs) | 694 bytes | 25x int |
| List of 1,000 ints | 36,856 bytes (36 KB) | — |
| Dict of 1,000 items | 92,924 bytes (91 KB) | — |
| List of 1,000 `__slots__` instances | 220,856 bytes (216 KB) | — |
| List of 1,000 regular instances | 309,066 bytes (302 KB) | 1.4x slots list |
| Empty Python process | 16,000,000 bytes (16 MB) | — |
Python numbers you should know (detailed version)
Here is a deeper table comparing many more details.
| Category | Operation | Time | Memory |
|---|---|---|---|
| 💾 Memory | Empty Python process | — | 15.77 MB |
| | Empty string | — | 41 bytes |
| | 100-char string | — | 141 bytes |
| | Small int (-5 to 256) | — | 28 bytes |
| | Large int | — | 28 bytes |
| | Float | — | 24 bytes |
| | Empty list | — | 56 bytes |
| | List with 1,000 ints | — | 36.0 KB |
| | List with 1,000 floats | — | 32.1 KB |
| | Empty dict | — | 64 bytes |
| | Dict with 1,000 items | — | 90.7 KB |
| | Empty set | — | 216 bytes |
| | Set with 1,000 items | — | 59.6 KB |
| | Regular class instance (5 attrs) | — | 694 bytes |
| | `__slots__` class instance (5 attrs) | — | 212 bytes |
| | List of 1,000 regular class instances | — | 301.8 KB |
| | List of 1,000 `__slots__` class instances | — | 215.7 KB |
| | dataclass instance | — | 694 bytes |
| | namedtuple instance | — | 228 bytes |
| ⚙️ Basic Ops | Add two integers | 19.0 ns (52.7M ops/sec) | — |
| | Add two floats | 18.4 ns (54.4M ops/sec) | — |
| | String concatenation (small) | 39.1 ns (25.6M ops/sec) | — |
| | f-string formatting | 64.9 ns (15.4M ops/sec) | — |
| | `.format()` | 103 ns (9.7M ops/sec) | — |
| | `%` formatting | 89.8 ns (11.1M ops/sec) | — |
| | List append | 28.7 ns (34.8M ops/sec) | — |
| | List comprehension (1,000 items) | 9.45 μs (105.8k ops/sec) | — |
| | Equivalent for-loop (1,000 items) | 11.9 μs (83.9k ops/sec) | — |
| 📦 Collections | Dict lookup by key | 21.9 ns (45.7M ops/sec) | — |
| | Set membership check | 19.0 ns (52.7M ops/sec) | — |
| | List index access | 17.6 ns (56.8M ops/sec) | — |
| | List membership check (1,000 items) | 3.85 μs (259.6k ops/sec) | — |
| | `len()` on list | 18.8 ns (53.3M ops/sec) | — |
| | Iterate 1,000-item list | 7.87 μs (127.0k ops/sec) | — |
| | Iterate 1,000-item dict | 8.74 μs (114.5k ops/sec) | — |
| | `sum()` of 1,000 ints | 1.87 μs (534.8k ops/sec) | — |
| 🏷️ Attributes | Read from regular class | 14.1 ns (70.9M ops/sec) | — |
| | Write to regular class | 15.7 ns (63.6M ops/sec) | — |
| | Read from `__slots__` class | 14.1 ns (70.7M ops/sec) | — |
| | Write to `__slots__` class | 16.4 ns (60.8M ops/sec) | — |
| | Read from `@property` | 19.0 ns (52.8M ops/sec) | — |
| | `getattr()` | 13.8 ns (72.7M ops/sec) | — |
| | `hasattr()` | 23.8 ns (41.9M ops/sec) | — |
| 📄 JSON | `json.dumps()` (simple) | 708 ns (1.4M ops/sec) | — |
| | `json.loads()` (simple) | 714 ns (1.4M ops/sec) | — |
| | `json.dumps()` (complex) | 2.65 μs (376.8k ops/sec) | — |
| | `json.loads()` (complex) | 2.22 μs (449.9k ops/sec) | — |
| | `orjson.dumps()` (complex) | 310 ns (3.2M ops/sec) | — |
| | `orjson.loads()` (complex) | 839 ns (1.2M ops/sec) | — |
| | `ujson.dumps()` (complex) | 1.64 μs (611.2k ops/sec) | — |
| | msgspec encode (complex) | 445 ns (2.2M ops/sec) | — |
| | Pydantic `model_dump_json()` | 1.54 μs (647.8k ops/sec) | — |
| | Pydantic `model_validate_json()` | 2.99 μs (334.7k ops/sec) | — |
| 🌐 Web Frameworks | Flask (return JSON) | 16.5 μs (60.7k req/sec) | — |
| | Django (return JSON) | 18.1 μs (55.4k req/sec) | — |
| | FastAPI (return JSON) | 8.63 μs (115.9k req/sec) | — |
| | Starlette (return JSON) | 8.01 μs (124.8k req/sec) | — |
| | Litestar (return JSON) | 8.19 μs (122.1k req/sec) | — |
| 📁 File I/O | Open and close file | 9.05 μs (110.5k ops/sec) | — |
| | Read 1KB file | 10.0 μs (99.5k ops/sec) | — |
| | Write 1KB file | 35.1 μs (28.5k ops/sec) | — |
| | Write 1MB file | 207 μs (4.8k ops/sec) | — |
| | `pickle.dumps()` | 1.30 μs (769.6k ops/sec) | — |
| | `pickle.loads()` | 1.44 μs (695.2k ops/sec) | — |
| 🗄️ Database | SQLite insert (JSON blob) | 192 μs (5.2k ops/sec) | — |
| | SQLite select by PK | 3.57 μs (280.3k ops/sec) | — |
| | SQLite update one field | 5.22 μs (191.7k ops/sec) | — |
| | diskcache set | 23.9 μs (41.8k ops/sec) | — |
| | diskcache get | 4.25 μs (235.5k ops/sec) | — |
| | MongoDB insert_one | 119 μs (8.4k ops/sec) | — |
| | MongoDB find_one by _id | 121 μs (8.2k ops/sec) | — |
| | MongoDB find_one by nested field | 124 μs (8.1k ops/sec) | — |
| 📞 Functions | Empty function call | 22.4 ns (44.6M ops/sec) | — |
| | Function with 5 args | 24.0 ns (41.7M ops/sec) | — |
| | Method call | 23.3 ns (42.9M ops/sec) | — |
| | Lambda call | 19.7 ns (50.9M ops/sec) | — |
| | try/except (no exception) | 21.5 ns (46.5M ops/sec) | — |
| | try/except (exception raised) | 139 ns (7.2M ops/sec) | — |
| | `isinstance()` check | 18.3 ns (54.7M ops/sec) | — |
| ⏱️ Async | Create coroutine object | 47.0 ns (21.3M ops/sec) | — |
| | `run_until_complete(empty)` | 27.6 μs (36.2k ops/sec) | — |
| | `asyncio.sleep(0)` | 39.4 μs (25.4k ops/sec) | — |
| | `gather()` 10 coroutines | 55.0 μs (18.2k ops/sec) | — |
| | `create_task()` + await | 52.8 μs (18.9k ops/sec) | — |
| | `async with` (context manager) | 29.5 μs (33.9k ops/sec) | — |
Memory Costs
Understanding how much memory different Python objects consume.
An empty Python process uses 15.77 MB
Strings
The rule of thumb for ASCII strings is the core string object takes 41 bytes, with each additional character adding 1 byte. Note: Python uses different internal representations based on content—strings with Latin-1 characters use 1 byte/char, those with most Unicode use 2 bytes/char, and strings with emoji or rare characters use 4 bytes/char.
| String | Size |
|---|---|
| Empty string `""` | 41 bytes |
| 1-char string `"a"` | 42 bytes |
| 100-char string | 141 bytes |
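The rule of thumb above is easy to check with `sys.getsizeof` (a quick sketch of mine; exact header sizes vary slightly across CPython versions, so compare growth rather than absolute values):

```python
import sys

# For ASCII text, each extra character adds exactly one byte on top of
# the fixed object header.
empty = sys.getsizeof("")
ascii_100 = sys.getsizeof("a" * 100)
print(ascii_100 - empty)  # 100: one byte per ASCII character

# Wider content switches the internal representation:
one_byte = sys.getsizeof("a" * 100)    # ASCII / Latin-1: 1 byte per char
two_byte = sys.getsizeof("π" * 100)    # other BMP chars: 2 bytes per char
four_byte = sys.getsizeof("🐍" * 100)  # emoji and friends: 4 bytes per char
print(one_byte, two_byte, four_byte)
```

One astral-plane character anywhere in a string forces the whole string into the 4-bytes-per-character representation.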

Numbers
Numbers are surprisingly large in Python. Because they derive from CPython’s PyObject and carry reference-counting overhead for garbage collection, they far exceed the typical mental model of:
- 2 bytes = short int
- 4 bytes = long int
- etc.
| Type | Size |
|---|---|
| Small int (-5 to 256, cached) | 28 bytes |
| Large int (1000) | 28 bytes |
| Very large int (10**100) | 72 bytes |
| Float | 24 bytes |
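These sizes can be confirmed interactively (a sketch; the exact byte counts are assumptions that vary by build, so the assertions compare rather than hard-code them):

```python
import sys

# Ints are full objects whose payload grows in 30-bit "digits"; floats
# are a fixed-size C double plus the object header.
print(sys.getsizeof(1))        # small int, e.g. 28 bytes on a 64-bit build
print(sys.getsizeof(1000))     # same size: still one digit
print(sys.getsizeof(10**100))  # many digits, e.g. 72 bytes
print(sys.getsizeof(1.5))      # float, e.g. 24 bytes
```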

Collections
Collections are amazing in Python. Dynamically growing lists. Ultra high-perf dictionaries and sets. Here is the empty and “full” overhead of each.
| Collection | Empty | 1,000 items |
|---|---|---|
| List (ints) | 56 bytes | 36.0 KB |
| List (floats) | 56 bytes | 32.1 KB |
| Dict | 64 bytes | 90.7 KB |
| Set | 216 bytes | 59.6 KB |

Classes and Instances
Slots are an interesting addition to Python classes. They remove the entire concept of a __dict__ for tracking fields and other values. Even for a single instance, slots classes are significantly smaller (212 bytes vs 694 bytes for 5 attributes). If you are holding a large number of them in memory for a list or cache, the memory savings of a slots class becomes meaningful - about 30% less memory usage. Luckily for most use-cases, just adding a slots entry saves memory with minimal effort.
| Type | Empty | 5 attributes |
|---|---|---|
| Regular class | 344 bytes | 694 bytes |
| `__slots__` class | 32 bytes | 212 bytes |
| dataclass | — | 694 bytes |
| `@dataclass(slots=True)` | — | 212 bytes |
| namedtuple | — | 228 bytes |
Aggregate Memory Usage (1,000 instances):
| Type | Total Memory |
|---|---|
| List of 1,000 regular class instances | 301.8 KB |
| List of 1,000 `__slots__` class instances | 215.7 KB |
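A minimal way to see where the savings come from (my sketch, not the article's benchmark): slotted instances simply have no per-instance `__dict__`.

```python
import sys

# The same five attributes, with and without __slots__.
class Regular:
    def __init__(self):
        self.a = self.b = self.c = self.d = self.e = 0

class Slotted:
    __slots__ = ("a", "b", "c", "d", "e")
    def __init__(self):
        self.a = self.b = self.c = self.d = self.e = 0

r, s = Regular(), Slotted()

# The per-instance __dict__ is where the extra memory goes.
print(hasattr(r, "__dict__"))  # True
print(hasattr(s, "__dict__"))  # False
print(sys.getsizeof(r) + sys.getsizeof(r.__dict__))  # regular: object + dict
print(sys.getsizeof(s))                              # slotted: object only
```

The trade-off: slotted classes can't gain new attributes at runtime, which is often a feature rather than a limitation.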

Basic Operations
The cost of fundamental Python operations: way slower than C/C++/C#, but still quite fast. I added a brief comparison to C# to the source repo.
Arithmetic
| Operation | Time |
|---|---|
| Add two integers | 19.0 ns (52.7M ops/sec) |
| Add two floats | 18.4 ns (54.4M ops/sec) |
| Multiply two integers | 19.4 ns (51.6M ops/sec) |

String Operations
String operations in Python are fast as well. Among template-based formatting styles, f-strings are the fastest. Simple concatenation (+) is faster still for combining a couple strings, but f-strings scale better and are more readable. Even the slowest formatting style is still measured in just nanoseconds.
| Operation | Time |
|---|---|
| Concatenation (`+`) | 39.1 ns (25.6M ops/sec) |
| f-string | 64.9 ns (15.4M ops/sec) |
| `.format()` | 103 ns (9.7M ops/sec) |
| `%` formatting | 89.8 ns (11.1M ops/sec) |
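A rough way to reproduce these formatting numbers with the stdlib `timeit` module (a sketch of mine, not the article's harness; absolute values will differ on your machine, and the ordering is the interesting part):

```python
from timeit import timeit

# Time each formatting style on the same input.
name, n = "world", 200_000
t_fstring = timeit('f"Hello, {name}!"', globals=globals(), number=n)
t_format = timeit('"Hello, {}!".format(name)', globals=globals(), number=n)
t_percent = timeit('"Hello, %s!" % name', globals=globals(), number=n)
print(f"f-string: {t_fstring:.4f}s  .format(): {t_format:.4f}s  %: {t_percent:.4f}s")
```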

List Operations
List operations are very fast in Python. Adding a single item usually takes about 29 ns; said another way, you can do roughly 35M appends per second. The exception is when the list has to expand its underlying storage, which CPython does with an over-allocation strategy. You can see this cost in the ops/sec for 1,000 items.
Surprisingly, list comprehensions are 26% faster than the equivalent for loops with append statements.
| Operation | Time |
|---|---|
| `list.append()` | 28.7 ns (34.8M ops/sec) |
| List comprehension (1,000 items) | 9.45 μs (105.8k ops/sec) |
| Equivalent for-loop (1,000 items) | 11.9 μs (83.9k ops/sec) |

Collection Access and Iteration
How fast can you get data out of Python’s built-in collections? Here is a dramatic example of how much faster the correct data structure is. item in set or item in dict is 200x faster than item in list for just 1,000 items! This difference comes from algorithmic complexity: sets and dicts use O(1) hash lookups, while lists require O(n) linear scans—and this gap grows with collection size.
The graph below is non-linear in the x-axis.
Access by Key/Index
| Operation | Time |
|---|---|
| Dict lookup by key | 21.9 ns (45.7M ops/sec) |
| Set membership (`in`) | 19.0 ns (52.7M ops/sec) |
| List index access | 17.6 ns (56.8M ops/sec) |
| List membership (`in`, 1,000 items) | 3.85 μs (259.6k ops/sec) |
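The O(1)-vs-O(n) gap is easy to demonstrate yourself (a sketch, not the article's harness), using the list's worst case, its last element:

```python
from timeit import timeit

# Same data, two containers: membership tests diverge dramatically.
n = 10_000
as_list = list(range(n))
as_set = set(as_list)
target = n - 1  # worst case for the list: the last element

t_list = timeit("target in as_list", globals=globals(), number=2_000)
t_set = timeit("target in as_set", globals=globals(), number=2_000)
print(f"list: {t_list:.4f}s  set: {t_set:.6f}s  ratio: {t_list / t_set:.0f}x")
```

Grow `n` and the ratio grows with it, which is exactly what O(n) versus O(1) predicts.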

Length
len() is very fast. Maybe we don’t need to hoist it out of the test condition of a loop after all.
| Collection | len() time |
|---|---|
| List (1,000 items) | 18.8 ns (53.3M ops/sec) |
| Dict (1,000 items) | 17.6 ns (56.9M ops/sec) |
| Set (1,000 items) | 18.0 ns (55.5M ops/sec) |
Iteration
| Operation | Time |
|---|---|
| Iterate 1,000-item list | 7.87 μs (127.0k ops/sec) |
| Iterate 1,000-item dict (keys) | 8.74 μs (114.5k ops/sec) |
| `sum()` of 1,000 integers | 1.87 μs (534.8k ops/sec) |
Class and Object Attributes
The cost of reading and writing attributes, and how __slots__ changes things. Slots saves ~30% memory on large collections, with virtually identical attribute access speed.
Attribute Access
| Operation | Regular Class | __slots__ Class |
|---|---|---|
| Read attribute | 14.1 ns (70.9M ops/sec) | 14.1 ns (70.7M ops/sec) |
| Write attribute | 15.7 ns (63.6M ops/sec) | 16.4 ns (60.8M ops/sec) |

Other Attribute Operations
| Operation | Time |
|---|---|
| Read `@property` | 19.0 ns (52.8M ops/sec) |
| `getattr(obj, 'attr')` | 13.8 ns (72.7M ops/sec) |
| `hasattr(obj, 'attr')` | 23.8 ns (41.9M ops/sec) |
JSON and Serialization
Comparing standard library JSON with optimized alternatives. orjson handles more data types and is over 8x faster than standard lib json for complex objects. Impressive!
Serialization (dumps)
| Library | Simple Object | Complex Object |
|---|---|---|
| `json` (stdlib) | 708 ns (1.4M ops/sec) | 2.65 μs (376.8k ops/sec) |
| `orjson` | 60.9 ns (16.4M ops/sec) | 310 ns (3.2M ops/sec) |
| `ujson` | 264 ns (3.8M ops/sec) | 1.64 μs (611.2k ops/sec) |
| `msgspec` | 92.3 ns (10.8M ops/sec) | 445 ns (2.2M ops/sec) |
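A stdlib-only sketch of how `dumps` timings like these are gathered; `orjson`, `ujson`, or `msgspec` could be swapped into the same harness if installed (the object shapes here are my own, not the repo's):

```python
import json
from timeit import timeit

# Two payloads of very different complexity, serialized with stdlib json.
simple = {"id": 1, "name": "alice", "active": True}
complex_obj = {
    "users": [{"id": i, "tags": ["a", "b"], "score": i / 3} for i in range(50)]
}

n = 5_000
t_simple = timeit(lambda: json.dumps(simple), number=n)
t_complex = timeit(lambda: json.dumps(complex_obj), number=n)
print(f"simple: {t_simple / n * 1e9:.0f} ns/op  complex: {t_complex / n * 1e9:.0f} ns/op")
```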

Deserialization (loads)
| Library | Simple Object | Complex Object |
|---|---|---|
| `json` (stdlib) | 714 ns (1.4M ops/sec) | 2.22 μs (449.9k ops/sec) |
| `orjson` | 106 ns (9.4M ops/sec) | 839 ns (1.2M ops/sec) |
| `ujson` | 268 ns (3.7M ops/sec) | 1.46 μs (682.8k ops/sec) |
| `msgspec` | 101 ns (9.9M ops/sec) | 850 ns (1.2M ops/sec) |
Pydantic
| Operation | Time |
|---|---|
| `model_dump_json()` | 1.54 μs (647.8k ops/sec) |
| `model_validate_json()` | 2.99 μs (334.7k ops/sec) |
| `model_dump()` (to dict) | 1.71 μs (585.2k ops/sec) |
| `model_validate()` (from dict) | 2.30 μs (435.5k ops/sec) |
Web Frameworks
Returning a simple JSON response. Benchmarked with wrk against localhost, with each framework running 4 workers under Granian. Each framework returns the same JSON payload from a minimal endpoint, with no database access or similar work. This measures the overhead of each framework itself; the code we write inside those view methods is largely the same.
Results
| Framework | Time per request (throughput) | Latency (p99) |
|---|---|---|
| Flask | 16.5 μs (60.7k req/sec) | 20.85 ms (48.0 ops/sec) |
| Django | 18.1 μs (55.4k req/sec) | 170.3 ms (5.9 ops/sec) |
| FastAPI | 8.63 μs (115.9k req/sec) | 1.530 ms (653.6 ops/sec) |
| Starlette | 8.01 μs (124.8k req/sec) | 930 μs (1.1k ops/sec) |
| Litestar | 8.19 μs (122.1k req/sec) | 1.010 ms (990.1 ops/sec) |

File I/O
Reading and writing files of various sizes. Note that the graph is non-linear in y-axis.
Basic Operations
| Operation | Time |
|---|---|
| Open and close (no read) | 9.05 μs (110.5k ops/sec) |
| Read 1KB file | 10.0 μs (99.5k ops/sec) |
| Read 1MB file | 33.6 μs (29.8k ops/sec) |
| Write 1KB file | 35.1 μs (28.5k ops/sec) |
| Write 1MB file | 207 μs (4.8k ops/sec) |
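A quick sketch of per-operation file timing with `perf_counter_ns` and a temp directory (the repo's harness adds warmup and medians; this is my simplification):

```python
import tempfile
import time
from pathlib import Path

# Time repeated 1KB writes and report the average cost per operation.
payload_1kb = b"x" * 1024
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "bench.bin"
    n = 200
    start = time.perf_counter_ns()
    for _ in range(n):
        path.write_bytes(payload_1kb)
    per_op_us = (time.perf_counter_ns() - start) / n / 1000
print(f"write 1KB: ~{per_op_us:.1f} µs/op")
```

Note that each `write_bytes` opens, writes, and closes the file, so this bundles the open/close overhead into the write cost.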

Pickle vs JSON (Serialization)
For more serialization options including orjson, msgspec, and pydantic, see JSON and Serialization above.
| Operation | Time |
|---|---|
pickle.dumps() (complex obj) |
1.30 μs (769.6k ops/sec) |
pickle.loads() (complex obj) |
1.44 μs (695.2k ops/sec) |
json.dumps() (complex obj) |
2.72 μs (367.1k ops/sec) |
json.loads() (complex obj) |
2.35 μs (425.9k ops/sec) |
Database and Persistence
Comparing SQLite, diskcache, and MongoDB using the same complex object.
Test Object
```python
user_data = {
    "id": 12345,
    "username": "alice_dev",
    "email": "alice@example.com",
    "profile": {
        "bio": "Software engineer who loves Python",
        "location": "Portland, OR",
        "website": "https://alice.dev",
        "joined": "2020-03-15T08:30:00Z"
    },
    "posts": [
        {"id": 1, "title": "First Post", "tags": ["python", "tutorial"], "views": 1520},
        {"id": 2, "title": "Second Post", "tags": ["rust", "wasm"], "views": 843},
        {"id": 3, "title": "Third Post", "tags": ["python", "async"], "views": 2341},
    ],
    "settings": {
        "theme": "dark",
        "notifications": True,
        "email_frequency": "weekly"
    }
}
```
SQLite (JSON blob approach)
| Operation | Time |
|---|---|
| Insert one object | 192 μs (5.2k ops/sec) |
| Select by primary key | 3.57 μs (280.3k ops/sec) |
| Update one field | 5.22 μs (191.7k ops/sec) |
| Delete | 191 μs (5.2k ops/sec) |
| Select with `json_extract()` | 4.27 μs (234.2k ops/sec) |
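The JSON-blob approach can be sketched in a few lines with the stdlib `sqlite3` module (a minimal assumed schema; the full benchmark lives in the repo):

```python
import json
import sqlite3

# Store the whole object as a JSON blob keyed by primary key.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, data TEXT)")

user = {"id": 12345, "username": "alice_dev", "settings": {"theme": "dark"}}
conn.execute(
    "INSERT INTO users (id, data) VALUES (?, ?)",
    (user["id"], json.dumps(user)),
)
conn.commit()  # the commit is what dominates insert cost

# Select by primary key, then decode the blob.
row = conn.execute("SELECT data FROM users WHERE id = ?", (12345,)).fetchone()
loaded = json.loads(row[0])
print(loaded["username"])
```

This explains the asymmetry in the table above: the indexed read is a few microseconds, while the durable insert pays for a transaction commit.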
diskcache
| Operation | Time |
|---|---|
| `cache.set(key, obj)` | 23.9 μs (41.8k ops/sec) |
| `cache.get(key)` | 4.25 μs (235.5k ops/sec) |
| `cache.delete(key)` | 51.9 μs (19.3k ops/sec) |
| Check key exists | 1.91 μs (523.2k ops/sec) |
MongoDB
| Operation | Time |
|---|---|
| `insert_one()` | 119 μs (8.4k ops/sec) |
| `find_one()` by `_id` | 121 μs (8.2k ops/sec) |
| `find_one()` by nested field | 124 μs (8.1k ops/sec) |
| `update_one()` | 115 μs (8.7k ops/sec) |
| `delete_one()` | 30.4 ns (32.9M ops/sec) |
Comparison Table
| Operation | SQLite | diskcache | MongoDB |
|---|---|---|---|
| Write one object | 192 μs (5.2k ops/sec) | 23.9 μs (41.8k ops/sec) | 119 μs (8.4k ops/sec) |
| Read by key/id | 3.57 μs (280.3k ops/sec) | 4.25 μs (235.5k ops/sec) | 121 μs (8.2k ops/sec) |
| Read by nested field | 4.27 μs (234.2k ops/sec) | N/A | 124 μs (8.1k ops/sec) |
| Update one field | 5.22 μs (191.7k ops/sec) | 23.9 μs (41.8k ops/sec) | 115 μs (8.7k ops/sec) |
| Delete | 191 μs (5.2k ops/sec) | 51.9 μs (19.3k ops/sec) | 30.4 ns (32.9M ops/sec) |
Note: MongoDB pays for network access, versus the in-process access of SQLite and diskcache.

Function and Call Overhead
The hidden cost of function calls, exceptions, and async.
Function Calls
| Operation | Time |
|---|---|
| Empty function call | 22.4 ns (44.6M ops/sec) |
| Function with 5 arguments | 24.0 ns (41.7M ops/sec) |
| Method call on object | 23.3 ns (42.9M ops/sec) |
| Lambda call | 19.7 ns (50.9M ops/sec) |
| Built-in function (`len()`) | 17.1 ns (58.4M ops/sec) |
Exceptions
| Operation | Time |
|---|---|
| `try/except` (no exception raised) | 21.5 ns (46.5M ops/sec) |
| `try/except` (exception raised) | 139 ns (7.2M ops/sec) |
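The gap comes from the raise itself: entering `try/except` costs almost nothing, but raising builds an exception object and unwinds the stack. A quick sketch (mine, not the article's harness):

```python
from timeit import timeit

def no_raise():
    # try/except entered, but nothing raised: nearly free.
    try:
        return 1
    except ValueError:
        return 0

def does_raise():
    # Actually raising pays for object creation plus stack unwinding.
    try:
        raise ValueError("boom")
    except ValueError:
        return 0

n = 100_000
t_quiet = timeit(no_raise, number=n)
t_raise = timeit(does_raise, number=n)
print(f"no exception: {t_quiet / n * 1e9:.0f} ns  raised: {t_raise / n * 1e9:.0f} ns")
```

The practical takeaway: exceptions are fine for exceptional paths, but avoid them as flow control in hot loops.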
Type Checking
| Operation | Time |
|---|---|
| `isinstance()` | 18.3 ns (54.7M ops/sec) |
| `type() == type` | 21.8 ns (46.0M ops/sec) |
Async Overhead
The cost of async machinery.
Coroutine Creation
| Operation | Time |
|---|---|
| Create coroutine object (no await) | 47.0 ns (21.3M ops/sec) |
| Create coroutine (with return value) | 45.3 ns (22.1M ops/sec) |
Running Coroutines
| Operation | Time |
|---|---|
| `run_until_complete(empty)` | 27.6 μs (36.2k ops/sec) |
| `run_until_complete(return value)` | 26.6 μs (37.5k ops/sec) |
| Run nested await | 28.9 μs (34.6k ops/sec) |
| Run 3 sequential awaits | 27.9 μs (35.8k ops/sec) |
asyncio.sleep()
Note: asyncio.sleep(0) is a special case in Python’s event loop—it yields control but schedules an immediate callback, making it faster than typical sleeps but not representative of general event loop overhead.
| Operation | Time |
|---|---|
| `asyncio.sleep(0)` | 39.4 μs (25.4k ops/sec) |
| Coroutine with `sleep(0)` | 41.8 μs (23.9k ops/sec) |
asyncio.gather()
| Operation | Time |
|---|---|
| `gather()` 5 coroutines | 49.7 μs (20.1k ops/sec) |
| `gather()` 10 coroutines | 55.0 μs (18.2k ops/sec) |
| `gather()` 100 coroutines | 155 μs (6.5k ops/sec) |
Task Creation
| Operation | Time |
|---|---|
| `create_task()` + await | 52.8 μs (18.9k ops/sec) |
| Create 10 tasks + gather | 85.5 μs (11.7k ops/sec) |
Async Context Managers & Iteration
| Operation | Time |
|---|---|
| `async with` (context manager) | 29.5 μs (33.9k ops/sec) |
| `async for` (5 items) | 30.0 μs (33.3k ops/sec) |
| `async for` (100 items) | 36.4 μs (27.5k ops/sec) |
Sync vs Async Comparison
| Operation | Time |
|---|---|
| Sync function call | 20.3 ns (49.2M ops/sec) |
| Async equivalent (`run_until_complete`) | 28.2 μs (35.5k ops/sec) |
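A rough way to see this gap yourself (a sketch using `asyncio.run`, which adds loop setup and teardown on top of `run_until_complete`, so expect even larger numbers):

```python
import asyncio
import time

async def nothing():
    return 42

def sync_nothing():
    return 42

# Average cost per call: trivial coroutine via the event loop vs a
# plain function call.
n = 200
start = time.perf_counter_ns()
for _ in range(n):
    asyncio.run(nothing())
async_ns = (time.perf_counter_ns() - start) / n

start = time.perf_counter_ns()
for _ in range(n):
    sync_nothing()
sync_ns = (time.perf_counter_ns() - start) / n

print(f"sync: {sync_ns:.0f} ns/call  async via asyncio.run: {async_ns:.0f} ns/call")
```

The lesson matches the takeaways below: async buys concurrency, not speed, so don't wrap sequential work in coroutines.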
Methodology
Benchmarking Approach
- All benchmarks were run multiple times, with an untimed warmup
- Timing uses `timeit` or `perf_counter_ns` as appropriate
- Memory is measured with `sys.getsizeof()` and `tracemalloc`
- Results are the median of N runs
Environment
- OS: macOS 26.2
- Python: 3.14.2 (CPython)
- CPU: ARM - 14 cores (14 logical)
- RAM: 24.0 GB
Code Repository
All benchmark code is available at: https://github.com/mikeckennedy/python-numbers-everyone-should-know
Key Takeaways
- Memory overhead: Python objects have significant memory overhead; even an empty list is 56 bytes
- Dict/set speed: Dictionary and set lookups are extremely fast (O(1) average case) compared to list membership checks (O(n))
- JSON performance: Alternative JSON libraries like `orjson` and `msgspec` are 3-8x faster than stdlib `json`
- Async overhead: Creating and awaiting coroutines has measurable overhead; only use async when you need concurrency
- `__slots__` tradeoff: `__slots__` saves memory (~30% for collections of instances) with virtually no performance impact
Last updated: 2026-01-01
December 31, 2025 07:49 PM UTC
Zero to Mastery
[December 2025] Python Monthly Newsletter 🐍
73rd issue of Andrei's Python Monthly: A big change is coming. Read the full newsletter to get up-to-date with everything you need to know from last month.
December 31, 2025 10:00 AM UTC
December 30, 2025
Paolo Melchiorre
Looking Back at Python Pescara 2025
A personal retrospective on Python Pescara in 2025: events, people, and moments that shaped a growing local community, reflecting on continuity, experimentation, and how a small group connected to the wider Python ecosystem.
December 30, 2025 11:00 PM UTC
PyCoder’s Weekly
Issue #715: Top 5 of 2025, LlamaIndex, Python 3.15 Speed, and More (Dec. 30, 2025)
December 30, 2025 07:30 PM UTC
Programiz
Python List
In this tutorial, we will learn about Python lists (creating lists, changing list items, removing items, and other list operations) with the help of examples.
December 30, 2025 04:31 AM UTC
December 29, 2025
Paolo Melchiorre
Django On The Med: A Contributor Sprint Retrospective
A personal retrospective on Django On The Med, three months later. From the first idea to the actual contributor sprint, and how a simple format based on focused mornings and open afternoons created unexpected value for people and the Django open source community.
December 29, 2025 11:00 PM UTC
Hugo van Kemenade
Replacing python-dateutil to remove six
The dateutil library is a popular and powerful Python library for dealing with dates and times.
However, it still supports Python 2.7 by depending on the six compatibility shim, and I’d prefer not to install it for Python 3.10 and higher.
Here’s how I replaced three uses of its `relativedelta` in a couple of CLIs that didn’t really need it.
One #
norwegianblue was using it to calculate six months from now:
import datetime as dt
from dateutil.relativedelta import relativedelta
now = dt.datetime.now(dt.timezone.utc)
# datetime.datetime(2025, 12, 29, 15, 59, 44, 518240, tzinfo=datetime.timezone.utc)
six_months_from_now = now + relativedelta(months=+6)
# datetime.datetime(2026, 6, 29, 15, 59, 44, 518240, tzinfo=datetime.timezone.utc)
But we don’t need to be so precise here, and 180 days is good enough, using the standard
library’s
datetime.timedelta:
import datetime as dt
now = dt.datetime.now(dt.timezone.utc)
# datetime.datetime(2025, 12, 29, 15, 59, 44, 518240, tzinfo=datetime.timezone.utc)
six_months_from_now = now + dt.timedelta(days=180)
# datetime.datetime(2026, 6, 27, 15, 59, 44, 518240, tzinfo=datetime.timezone.utc)
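If you do need calendar-exact month arithmetic without dateutil, a small helper built on `calendar.monthrange` is one option. This is a hypothetical sketch (the `add_months` helper is mine, not something norwegianblue uses), clamping the day like `relativedelta` does:

```python
import calendar
import datetime as dt

def add_months(d: dt.date, months: int) -> dt.date:
    """Add months to a date, clamping the day to the target month's length."""
    total = d.year * 12 + (d.month - 1) + months
    year, month = divmod(total, 12)
    month += 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return d.replace(year=year, month=month, day=day)

add_months(dt.date(2025, 12, 29), 6)  # datetime.date(2026, 6, 29)
add_months(dt.date(2025, 12, 31), 2)  # clamped: datetime.date(2026, 2, 28)
```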
Two #
pypistats was using it to get the last day of a month:
import datetime as dt
from dateutil.relativedelta import relativedelta

first = dt.date(year, month, 1)
# datetime.date(2025, 12, 1)
last = first + relativedelta(months=1) - relativedelta(days=1)
# datetime.date(2025, 12, 31)
Instead, we can use the stdlib’s
calendar.monthrange:
import calendar
import datetime as dt
last_day = calendar.monthrange(year, month)[1]
# 31
last = dt.date(year, month, last_day)
# datetime.date(2025, 12, 31)
Three #
Finally, to get last month as a yyyy-mm string:
import datetime as dt
from dateutil.relativedelta import relativedelta
today = dt.date.today()
# datetime.date(2025, 12, 29)
d = today - relativedelta(months=1)
# datetime.date(2025, 11, 29)
d.isoformat()[:7]
# '2025-11'
Instead:
import datetime as dt
today = dt.date.today()
# datetime.date(2025, 12, 29)
if today.month == 1:
    year, month = today.year - 1, 12
else:
    year, month = today.year, today.month - 1
# 2025, 11
f"{year}-{month:02d}"
# '2025-11'
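Another stdlib-only option sidesteps the January special case entirely: the day before the first of the current month is always in the previous month. A sketch (not the code pypistats uses), with a fixed date so the output is reproducible:

```python
import datetime as dt

today = dt.date(2025, 12, 29)  # use dt.date.today() in real code
# Stepping back one day from the first of this month always lands in last month.
last_month = today.replace(day=1) - dt.timedelta(days=1)
last_month.strftime("%Y-%m")  # '2025-11'
```

This avoids the explicit year-rollover branch at the cost of constructing an intermediate date.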
Goodbye six, and we also get slightly quicker install, import and run times.
Bonus #
I recommend
Adam Johnson’s tip
to import datetime as dt to avoid the ambiguity of which datetime is the module and
which is the class.
Header photo: Ver Sacrum calendar by Alfred Roller
December 29, 2025 04:53 PM UTC
Talk Python to Me
#532: 2025 Python Year in Review
Python in 2025 is in a delightfully refreshing place: the GIL's days are numbered, packaging is getting sharper tools, and the type checkers are multiplying like gremlins snacking after midnight. On this episode, we have an amazing panel to give us a range of perspectives on what mattered in Python in 2025. We have Barry Warsaw, Brett Cannon, Gregory Kapfhammer, Jodie Burchell, Reuven Lerner, and Thomas Wouters on to give us their thoughts.
December 29, 2025 08:00 AM UTC
Seth Michael Larson
Nintendo GameCube and Switch “Wrapped” 2025 🎮🎁
December 29, 2025 12:00 AM UTC
December 28, 2025
Mark Dufour
A (biased) Pure Python Performance Comparison
December 28, 2025 04:31 AM UTC
December 26, 2025
"Michael Kennedy's Thoughts on Technology"
DevOps Python Supply Chain Security

In my last article, “Python Supply Chain Security Made Easy”, I talked about how to automate pip-audit so you don’t accidentally ship malicious Python packages to production. While there was defense in depth with uv’s delayed installs, there wasn’t much safety beyond that for developers themselves on their machines.
This follow up fixes that so even dev machines stay safe.
Defending your dev machine
My recommendation is instead of installing directly into a local virtual environment and then running pip-audit, create a dedicated Docker image meant for testing dependencies with pip-audit in isolation.
Our workflow can go like this.
First, we update your local dependencies file:
uv pip compile requirements.piptools --output-file requirements.txt --exclude-newer 1 week
This will update the requirements.txt file (or tweak the command to update your uv.lock file), but it doesn’t install anything.
Second, run a command that uses this new requirements file inside a temporary Docker container to install the requirements and run pip-audit on them.
Third, only if that pip-audit test succeeds, install the updated requirements into your local venv.
uv pip install -r requirements.txt
The pip-audit docker image
What do we use for our Docker testing image? There are of course a million ways to do this. Here’s one optimized for building Python packages that deeply leverages uv’s and pip-audit’s caching to make subsequent runs much, much faster.
Create a Dockerfile with this content:
# Image for installing python packages with uv and testing with pip-audit
# Saved as Dockerfile
FROM ubuntu:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get autoremove -y
RUN apt-get -y install curl
# Dependencies for building Python packages
RUN apt-get install -y gcc
RUN apt-get install -y build-essential
RUN apt-get install -y clang
RUN apt-get install -y openssl
RUN apt-get install -y checkinstall
RUN apt-get install -y libgdbm-dev
RUN apt-get install -y libc6-dev
RUN apt-get install -y libtool
RUN apt-get install -y zlib1g-dev
RUN apt-get install -y libffi-dev
RUN apt-get install -y libxslt1-dev
ENV PATH=/venv/bin:$PATH
ENV PATH=/root/.cargo/bin:$PATH
ENV PATH=/root/.local/bin/:$PATH
ENV UV_LINK_MODE=copy
# Install uv
RUN curl -LsSf https://astral.sh/uv/install.sh | sh
# set up a virtual env to use for temp dependencies in isolation.
RUN --mount=type=cache,target=/root/.cache uv venv --python 3.14 /venv
# test that uv is working
RUN uv --version
WORKDIR "/"
# Install pip-audit
RUN --mount=type=cache,target=/root/.cache uv pip install --python /venv/bin/python3 pip-audit
This installs a bunch of Linux libraries used for edge-case builds of Python packages. It takes a moment, but you only need to build the image once. Then you’ll run it again and again. If you want to use a newer version of Python later, change the version in uv venv --python 3.14 /venv. Even then on rebuilds, the apt-get steps are reused from cache.
Next you build with a fixed tag so you can create aliases to run using this image:
# In the same folder as the Dockerfile above.
docker build -t pipauditdocker .
Finally, we need to run the container with a few bells and whistles. Add caching via a volume so subsequent runs are very fast: -v pip-audit-cache:/root/.cache. And map a volume so whatever working directory you are in will find the local requirements.txt: -v \"\$(pwd)/requirements.txt:/workspace/requirements.txt:ro\"
Here is the alias to add to your .bashrc or .zshrc accomplishing this:
alias pip-audit-proj="echo '🐳 Launching isolated test env in Docker...' && \
docker run --rm \
-v pip-audit-cache:/root/.cache \
-v \"\$(pwd)/requirements.txt:/workspace/requirements.txt:ro\" \
pipauditdocker \
/bin/bash -c \"echo '📦 Installing requirements from /workspace/requirements.txt...' && \
uv pip install --quiet -r /workspace/requirements.txt && \
echo '🔍 Running pip-audit security scan...' && \
/venv/bin/pip-audit \
--ignore-vuln CVE-2025-53000 \
--ignore-vuln PYSEC-2023-242 \
--skip-editable\""
That’s it! Once you reload your shell, all you have to do is type pip-audit-proj when you’re in the root of a project that contains your requirements.txt file. You should see something like the output below. Slow the first time, fast afterwards.

Protecting Docker in production too
Let’s handle one more situation while we are at it. You’re running your Python app IN Docker. Part of the Docker build configures the image and installs your dependencies. We can add a pip-audit check there too:
# Dockerfile for your app (different than validation image above)
# All the steps to copy your app over and configure the image ...
# After creating a venv in /venv and copying your requirements.txt to /app
# Check for any sketchy packages.
# We are using mount rather than a volume because
# we want to cache build time activity, not runtime activity.
RUN --mount=type=cache,target=/root/.cache uv pip install --python /venv/bin/python3 --upgrade pip-audit
RUN --mount=type=cache,target=/root/.cache /venv/bin/pip-audit --ignore-vuln CVE-2025-53000 --ignore-vuln PYSEC-2023-242 --skip-editable
# ENTRYPOINT ... for your app
Conclusion
There you have it. Two birds, one Docker stone. Our first Dockerfile built a reusable Docker image named pipauditdocker to run isolated tests against a requirements file. The second demonstrates how to make our docker/docker compose build fail outright if there is a bad dependency, saving us from letting it slip into production.
Cheers
Michael
December 26, 2025 11:53 PM UTC
Seth Michael Larson
Getting started with Playdate on Ubuntu 🟨

