
Planet Python

Last update: March 24, 2026 04:44 PM UTC

March 24, 2026


The Python Coding Stack

3 • 7600 • 33 • 121 • When Python Stacks Up

When I was a child, I used to pace up and down the corridor at home pretending to teach an imaginary group of people. It was my way of learning.

It still is.

I started writing about Python as a learning tool—to help me sort things out in my head, weave a thread through all the disparate bits of information, clarify my thoughts, make sure any knowledge gaps are filled.

I started The Python Coding Stack three years ago. That’s the first of the mystery numbers in the post’s title revealed! I had written elsewhere before, but at the time of starting The Stack, I felt I had found my own “Python voice”. I had been teaching Python for nearly a decade. I had written plenty of articles, but setting up The Python Coding Stack was a deliberate choice to step up. I was still writing articles primarily for my own benefit, but now I was also writing for others, hoping they would want to learn the way I do.

And 7,600 subscribers apparently do. Thank you for joining this journey, whether you were there three years ago or you joined a few days ago. If you just joined, there’s an archive of 121 articles, most of them long-form tutorials or step-by-step guides.

A special thank you to the 33 subscribers who chose to upgrade to premium and join The Club. It may only amount to 3 coffees per month for you, but it makes a difference to me. Thank you! I hope you’ve been enjoying the exclusive content for The Club members.

And perhaps, if a few more decide to join you in The Club (you can surely cut three coffees out of your monthly intake!), then this publication may even become self-sustainable. Your support can make a real difference—if you value these articles and want to see them continue, please consider joining now. At the moment, I give up a lot of my time for free to think about my articles, plan them, draft them, review them technically, review them linguistically, get them ready for publication, and then publish.

Subscribe now


I mentioned my live teaching earlier. My written articles and my live teaching have a lot in common. One of the hardest things about teaching (or communication in general) is to place yourself in the learner’s mindset. I know, it’s obvious. But it’s hard.

A string of words can make perfect sense to someone who already understands the concept, but it’s hard to understand for someone learning it for the first time.

Going from A to B can be a smooth reasoning step for an expert, but requires a few more intermediate steps for a novice.

A trait that helps me in my teaching is my ability to recall the pain points I had when learning a topic. Everything is easy once you know it, but hard when you don’t. Remembering that what comes easily today was once hard is essential for teaching, whatever the format.

I often use my writing to help me with my live teaching. And, just as often, I discover a new angle or insight during live teaching that I then put down in writing. It’s a two-way street. Both forms of communication—live teaching and writing—complement each other.

All this to say that I enjoy writing these articles. They’re useful for me personally, and for my work teaching Python. And I hope they’re useful for you.


121 articles. The cliché would have me say that choosing favourites is like choosing a favourite child. But that’s not the case. There are articles I like less than others. So, I tried to put together a highlights reel of the past three years. Here we go…

The Centre of the Python Universe • Objects

A Stroll Across Python • Fancy and Not So Fancy Tools

Where Do I Store This? • Data Types and Structures

And here are the posts in The Club section of this publication, exclusive for premium subscribers: The Club | The Python Coding Stack


Happy 3rd Birthday to The Python Coding Stack. From just under a hundred people in the first week to 7,600+ today, this community has grown thanks to your enthusiasm.

Let’s keep up the momentum—consider joining The Club today! Your membership can help ensure The Python Coding Stack continues on its path, stronger than ever.


Photo by Daria Obymaha


For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!

Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.

And you can find out more about me at stephengruppetta.com

March 24, 2026 02:16 PM UTC


Real Python

Understanding CRUD Operations in SQL

CRUD operations are at the heart of nearly every application you interact with. As a developer, you usually want to create data, read or retrieve data, update data, and delete data. Whether you access a database or interact with a REST API, only when all four operations are present are you able to make a complete data roundtrip in your app.

Creating, reading, updating, and deleting are so vital in software development that these methods are widely referred to as CRUD. Understanding CRUD will give you an actionable blueprint when you build applications and help you understand how the applications you use work behind the scenes. So, what exactly does CRUD mean?
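In SQL terms, the four operations map directly onto INSERT, SELECT, UPDATE, and DELETE. As a minimal sketch using Python's built-in sqlite3 module (the table name and data here are made up for illustration):

```python
import sqlite3

# In-memory database so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE birds (id INTEGER PRIMARY KEY, name TEXT)")

# Create
conn.execute("INSERT INTO birds (name) VALUES (?)", ("robin",))

# Read
name = conn.execute("SELECT name FROM birds WHERE id = 1").fetchone()[0]
print(name)  # robin

# Update
conn.execute("UPDATE birds SET name = ? WHERE id = 1", ("sparrow",))

# Delete
conn.execute("DELETE FROM birds WHERE id = 1")
print(conn.execute("SELECT COUNT(*) FROM birds").fetchone()[0])  # 0
```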


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

March 24, 2026 02:00 PM UTC


Rodrigo Girão Serrão

Ask the LLM to write code for it

This article covers a useful LLM pattern where you ask the LLM to write code to solve a problem instead of asking it to solve the problem directly.

The problem of merging two transcripts

I had two files that contained two halves of the transcript of an audio recording and I wanted to use an LLM to merge the two halves. There were three reasons that stopped me from simply copying part 2 and pasting it after part 1:

  1. the two transcripts overlapped (the end of part 1 was after the start of part 2);
  2. the timestamps for part 2 started from 0, so they were missing an offset; and
  3. speaker identification was not consistent.

I uploaded the two halves into ChatGPT and asked it to merge the two transcripts, fix the timestamps and the speaker identification, but to not change the text.

The result I got back was a ridiculous attempt at providing the full transcript: two sections that supposedly represented parts of each transcript that I could just copy and paste confidently, plus a couple of other blunders.

Instead of fighting ChatGPT, I decided to use a very useful pattern I learned about last year.

Ask the LLM to write code for it

Instead of asking ChatGPT to merge the transcripts, I could ask it to analyse them, find the solutions to the three problems listed above, and then write code that would merge the transcripts.

Since I was confident that ChatGPT could

  1. identify the overlap between the two files;
  2. use the overlap information to compute the timestamp offset required for part 2; and
  3. figure out that the two speakers in part 2 had to be swapped,

I knew ChatGPT would be able to write a Python script that could read from both files and apply a couple of string operations to the second part.
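A script along those lines might look something like the following sketch. The transcript format (lines like `[MM:SS] Speaker 1: text`), the sample data, and the helper names are all invented for illustration; the actual script ChatGPT produced is not shown in the post.

```python
import re

def parse(lines):
    """Parse lines like '[MM:SS] Speaker 1: text' into (seconds, speaker, text)."""
    entries = []
    for line in lines:
        match = re.match(r"\[(\d+):(\d+)\] (Speaker \d): (.*)", line)
        if match:
            mins, secs, speaker, text = match.groups()
            entries.append((int(mins) * 60 + int(secs), speaker, text))
    return entries

def merge(part1, part2):
    p1, p2 = parse(part1), parse(part2)
    # 1. Find the overlap: where part 2's first line already appears in part 1.
    first_text = p2[0][2]
    overlap = next(i for i, (_, _, text) in enumerate(p1) if text == first_text)
    # 2. Part 2's timestamps restart at zero, so shift them by the overlap time.
    offset = p1[overlap][0] - p2[0][0]
    # 3. Speaker identification flipped between the two halves, so swap it back.
    swap = {"Speaker 1": "Speaker 2", "Speaker 2": "Speaker 1"}
    merged = p1[:overlap] + [(t + offset, swap[s], txt) for t, s, txt in p2]
    return [f"[{t // 60:02d}:{t % 60:02d}] {s}: {txt}" for t, s, txt in merged]

# Tiny made-up transcripts: part 2 overlaps part 1 and has swapped speakers.
part1 = ["[00:00] Speaker 1: hello", "[00:05] Speaker 2: hi there",
         "[00:10] Speaker 1: let's begin"]
part2 = ["[00:00] Speaker 2: let's begin", "[00:05] Speaker 1: good idea"]
result = merge(part1, part2)
print(result[-1])  # [00:15] Speaker 2: good idea
```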

This yielded much better results in two ways. ChatGPT was able to find the solutions for the three problems above and write a script that fixed them automatically. That was the goal.

On top of that, since ChatGPT had a very clear implicit goal — get the final merged transcript — and since running Python code is something that ChatGPT can do, ChatGPT even ran the script for me and produced two artifacts at the end:

  1. the full Python script I could run against the two halves if I wanted; and
  2. the final, fixed transcript.

This is an example application of a really useful LLM pattern:

Don't ask the LLM to solve a problem. Instead, ask it to write code that solves the problem.

As another visual example, it's much easier to ask an LLM to write a Python script that draws a path that solves a maze (that's just a couple of hundred lines of code) than it is to upload an image and ask the LLM to draw a valid path on the picture of a maze. Try it yourself!

March 24, 2026 01:16 PM UTC


Real Python

Quiz: Python Modules and Packages: An Introduction

In this quiz, you’ll test your understanding of Python Modules and Packages.

By working through this quiz, you’ll revisit how to write and import modules and packages, how to structure code for modular development, and how to combine modules to create larger applications.

This quiz will help you practice organizing projects so they stay easier to maintain and grow.
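The core mechanics the quiz covers can be shown in miniature: write a module to disk, make its directory importable, and import it. (The module name `greet` is made up for this sketch.)

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Write a tiny module to disk, then import and use it.
module_dir = Path(tempfile.mkdtemp())
(module_dir / "greet.py").write_text("def hello(name):\n    return f'Hello, {name}!'\n")

sys.path.insert(0, str(module_dir))   # make the new directory importable
greet = importlib.import_module("greet")
print(greet.hello("Python"))          # Hello, Python!
```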



March 24, 2026 12:00 PM UTC


Nicola Iarocci

Eve 2.3.1

I just released Eve v2.3.1. In the unlikely event that you’ve been using JSONP callbacks with Eve, you’ll want to update as this patch improves on their security (changelog).

March 24, 2026 08:15 AM UTC


Seth Michael Larson

LAN Party Calculator (Mario Kart, Kirby Air Riders, F-Zero)

Nintendo has multiple popular racing franchises, including Mario Kart, Kirby Air Ride, and F-Zero. Each of these franchises spans multiple titles and consoles and has ways to play with more than one console in a single shared “game lobby”. This feature makes these games interesting for LAN parties, where you have many players, consoles, and games in one area.

What does it mean to be the most “LAN-party-able” Nintendo racing game? There are three metrics I found interesting for this question: most-players, price-per-player, and “real-estate”-per-player (aka: TVs/consoles). There is a different “best” game according to each of these metrics. I've compiled the data and created a small calculator to compare:





(Interactive calculator: columns are Game, Mode, #P, Price, Consoles, Games, Cables, Adapters, TVs.)

Which games are the winners?

Why did you build this?

This post was inspired by hosting a LAN party with friends for my birthday. Researching and verifying the limits for each game and console took a lot of work, so I hope this can save someone some time wrangling all these numbers in the future. After all this research, the games I chose for the LAN party are “Mario Kart: Double Dash” on the GameCube, “Mario Kart 8 Deluxe” and “F-Zero 99” on the Nintendo Switch, and “Mario Kart World” on the Nintendo Switch 2.

The data and the script I used to generate this calculator is all open source. If there are mistakes or improvements, please submit a patch. Please note that I don't own a DS, 3DS, or Wii U so the numbers there are more likely to be incorrect. The rest of this blog post will be about the specifics for each console and game.
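The underlying arithmetic can be sketched as a simplified cost model (this is not the actual open-source script; all parameter names and the example prices are illustrative):

```python
import math

def lan_cost(players, game_price, console_price,
             players_per_console=1, controllers_included=1, controller_price=0):
    """Rough LAN cost model: every console needs its own copy of the game,
    and any controllers beyond what's bundled are bought separately."""
    consoles = math.ceil(players / players_per_console)
    extra_controllers = max(0, players - consoles * controllers_included)
    return (consoles * (console_price + game_price)
            + extra_controllers * controller_price)

# Hypothetical example: 8 players on a 2-players-per-console LAN game,
# with 2 controllers bundled per console: 4 consoles, 4 game copies.
print(lan_cost(8, 35, 135, players_per_console=2, controllers_included=2))  # 680
```

Real tallies also need cables, adapters, and TVs, which is exactly the bookkeeping the calculator handles.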

What features does each game support?

Each game supports one of four multiplayer modes: Local, LAN, Online, and Share. Availability depends on both the console and the game.

| Game | Console | Year | Local | LAN | Online | Share |
|---|---|---|---|---|---|---|
| Super Mario Kart | SNES | 1992 | YES | NO | NO | NO |
| F-Zero | SNES | 1990 | YES | NO | NO | NO |
| Mario Kart 64 | N64 | 1996 | YES | NO | NO | NO |
| Mario Kart: Super Circuit | GBA | 2001 | NO | YES | NO | YES (1) |
| F-Zero: Maximum Velocity | GBA | 2003 | NO | YES | NO | YES (1) |
| F-Zero: GP Legend | GBA | 2003 | NO | YES | NO | YES (1) |
| Mario Kart: Double Dash!! | GameCube | 2003 | YES | YES | NO | NO |
| Kirby Air Ride | GameCube | 2003 | YES | YES | NO | NO |
| F-Zero GX | GameCube | 2003 | YES | NO | NO | NO |
| Mario Kart DS | DS | 2005 | NO | YES | YES | YES (2) |
| Mario Kart Wii | Wii | 2008 | YES | NO | YES | NO |
| Mario Kart 7 | 3DS/2DS | 2011 | NO | YES | YES | YES (2) |
| Mario Kart 8 | Wii U | 2014 | YES | NO | YES | NO |
| Mario Kart 8 Deluxe | Switch | 2017 | YES | YES | YES | NO |
| F-Zero 99 | Switch | 2023 | NO | NO | YES | NO |
| Mario Kart World | Switch 2 | 2025 | YES | YES | YES | NO |
| Kirby Air Riders | Switch 2 | 2025 | YES | YES | YES | YES (3) |

Pricing

Here is a table with costs from March 2026 for each game, console, and accessory:

| Game (price) | Console (price) | Controller | Cable | Adapter |
|---|---|---|---|---|
| F-Zero ($18) | SNES ($129) | $17 | | |
| Super Mario Kart ($34) | SNES | | | |
| Mario Kart 64 ($46) | N64 ($87) | $16 | | |
| Mario Kart: Super Circuit ($17) | GBA ($73) | | $24 | |
| F-Zero: GP Legend ($30) | GBA | | | |
| F-Zero: Maximum Velocity ($20) | GBA | | | |
| F-Zero GX ($61) | GameCube ($119) | $28 | $5 | $25 |
| Kirby Air Ride ($71) | GameCube | | | |
| Mario Kart: Double Dash!! ($60) | GameCube | | | |
| Mario Kart DS ($17) | DS ($60) | | | |
| Mario Kart Wii ($35) | Wii ($60) | $13 | | |
| Mario Kart 7 ($14) | 3DS/2DS ($94) | | | |
| Mario Kart 8 ($10) | Wii U ($129) | $30 | | |
| Mario Kart 8 Deluxe ($35) | Switch ($135) | $36 | $5 | |
| F-Zero 99 ($0) | Switch | | | |
| Kirby Air Riders ($50) | Switch 2 ($395) | $36 | $5 | $20 |
| Mario Kart World ($58) | Switch 2 | | | |

Game Boy Advance

The Game Boy Advance (GBA) had multiple features that made the console perfect for multi-console multiplayer: the new GBA Link Cables, which allowed more than two consoles to connect, and Single-Pak Link Play, which all the GBA titles on this list support.

GBA Link Cables have three terminals per cable: a larger grey plug, a smaller blue plug, and a smaller blue socket in the middle of the cable. The grey and blue plugs both fit into a GBA console, but the larger grey plug does not fit into the small blue socket on the cable. To connect four players, two players connect as normal, and then player three connects their blue plug into the blue socket between the two already-connected consoles. In the end, this means you only need N-1 cables for N consoles, and a single player (player 1) ends up with a blue plug in their console.
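The daisy-chain topology above reduces to a one-line rule, sketched here for clarity (the function name is mine, not from the post's script):

```python
def gba_link_cables(consoles):
    """Daisy-chained GBA Link Cables: each new console plugs its blue
    connector into the socket of an already-connected cable, so N
    consoles need N - 1 cables (and a lone console needs none)."""
    return max(0, consoles - 1)

print(gba_link_cables(4))  # 3 cables for a four-player link-up
```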

The second feature, “Single-Pak Link Play”, allowed a single player to own a cartridge and share the game with the other connected consoles if the game supports the mode. This mode is also sometimes called “Multiboot” or “Joyboot”. Because the game ROM data itself is transferred to the other consoles, this often meant long load times during startup, and not all content was playable by all players. For example, in Mario Kart: Super Circuit only a subset of maps and characters were available in Single-Pak Link Play mode.

GameCube

The GameCube was Nintendo’s first internet-enabled console, even if only 8 titles supported the feature. Only three titles supported LAN play: Kirby Air Ride, Mario Kart: Double Dash!!, and 1080° Avalanche.

The GameCube Broadband Adapter is legendarily expensive now due to how few games supported the feature at all. Nowadays, it's advised to modify your GameCube with a method to boot into Swiss and use Swiss’s “Emulate Broadband Adapter” feature with an ETH2GC adapter. These adapters are cheap, even if you don't assemble them yourself. There are a few variants: the ETH2GC Sidecar, ETH2GC Lite, and ETH2GC Card Slot. I am currently running an ETH2GC Sidecar and an ETH2GC Card Slot, and both work together with Mario Kart: Double Dash!! and Kirby Air Ride.

DS, Wii, Wii U, 3DS

The DS supported a feature called DS Download Play, which, similar to Single-Pak Link Play on the GBA, allowed playing from a single game cartridge with up to 8 consoles. The 3DS also supported this feature.

Beyond this I didn't have a lot to say about these consoles, as they aren't my interest. If you have more to say, maybe write your own blog post and send it to me after!

Nintendo Switch

The first-generation Switch is a quirky console being both portable and dockable. Because the console itself has a screen on it, this means you can play games without a TV. However, to access “LAN” modes in Mario Kart 8 Deluxe you need to be physically linked via Ethernet.

Problem is... the original Switch doesn't have an Ethernet port. And depending on whether you're playing docked or undocked: you'll need a different adapter! If you're playing undocked, using the Switch screen as your “TV” you'll need to buy a USB-C to Ethernet adapter. If you're playing docked you'll need to buy a USB-A to Ethernet adapter, as the dock itself doesn't have a USB-C port except for power delivery. Switch OLED docks do have an Ethernet port, so if you have one of those models then you won't need an adapter in docked mode.

Test your adapters before your LAN party, as not every adapter will be accepted by the Switch!

Both the first-generation Switch and Switch 2 also come with two controllers (“Joycons”) per console, meaning you'll have to buy fewer controllers to reach high player counts.

Nintendo Switch 2

The Switch 2 is similar to the Switch 1, being both portable and dockable. Nintendo included an Ethernet port on the Switch 2 dock, along with USB-A and USB-C ports. If you're playing without a TV, though, you'll still need a USB-C to Ethernet adapter for your Switch 2.

The Switch 2 adds support for a new mode: Game Share. This mode is similar to DS Download Play and Single-Pak Link in terms of functionality but in terms of implementation: it's local game streaming! Even cooler, this feature means that first-generation Switch consoles can “play” some Switch 2 games like Kirby Air Riders without sacrificing any features.

Mario Kart: Double Dash!!

The game supports up to 16 players; however, you can only have 8 total karts per race. Double Dash allows two players to share a single kart, with one player driving and the other throwing items. LAN mode also doesn't let you select your character or kart: you are assigned the kart you will be driving.

Kirby Air Ride

Despite supporting LAN mode and having 8 Kirby colors, you are only allowed a maximum of 4 players in the City Trial or Air Ride modes. So LAN mode only lets fewer people share each TV.

F-Zero GX

I saw a reference online to a Nintendo-hosted online leaderboard via passwords or ghosts for Time Attack, but wasn't able to find an actual source that this happened. If you have a reference or video, please send it my way! Otherwise, I may be mis-remembering something else that I read in the past.

F-Zero 99

This is the first F-Zero game in over 20 years (sorry, F-Zero fans; we Kirby Air Ride fans know how you feel). It allows up to 99 players in a private lobby. The game is free, but you need a Switch or Switch 2 console per player. There isn't any local or LAN multiplayer, so once Nintendo Switch Online is sunset, this game won't be playable with multiplayer.

Mario Kart 8 Deluxe, Mario Kart World

In both Mario Kart 8 Deluxe and Mario Kart World the LAN play mode is hidden behind holding L+R and pressing the left joystick down on the "Wireless Play" option.

Kirby Air Riders

There is very little information online about Kirby Air Riders’ LAN multiplayer mode, and the official Nintendo documentation doesn't describe the allowed number of players per console. If anyone has more definitive data, please reach out. Nintendo Switch GameShare allows playing online with four players with only one cartridge, and for Kirby Air Riders it is also compatible with first-generation Nintendo Switch consoles.

Note that on launch Kirby Air Riders was very disappointing with online play only allowing one player per console. An update added support for more than one player per console for wireless play. LAN mode still requires 1 console per player.

Local Multiplayer

Multiple players play the game on a single console with different controllers.

| Game | 1P | 2P | 3P | 4P |
|---|---|---|---|---|
| Super Mario Kart | $163 | $180 | | |
| F-Zero | $147 | $164 | | |
| Mario Kart 64 | $133 | $149 | $165 | $181 |
| Mario Kart: Double Dash!! | $179 | $207 | $235 | $263 |
| Kirby Air Ride | $190 | $218 | $246 | $274 |
| F-Zero GX | $180 | $208 | $236 | $264 |
| Mario Kart Wii | $95 | $108 | $121 | $134 |
| Mario Kart 8 | $139 | $169 | $199 | $229 |
| Mario Kart 8 Deluxe | $170 | $170 | $206 | $242 |
| Mario Kart World | $453 | $453 | $489 | $525 |
| Kirby Air Riders | $445 | $445 | $481 | $517 |

LAN Multiplayer

Consoles communicate directly with each other through wired, short-range wireless, or “local” internet connections: for example, Ethernet running to a network switch/router, or consoles wired directly together through Ethernet or a console-specific link cable. What distinguishes this mode from “Wireless” is that it will continue to work even after Nintendo servers have been discontinued.

Prices by player count:

- Mario Kart: Super Circuit: 2P $204, 3P $318, 4P $432
- F-Zero: Maximum Velocity: 2P $210, 3P $327, 4P $444
- F-Zero: GP Legend: 2P $230, 3P $357, 4P $484
- Mario Kart: Double Dash!!: 2P $418, 3P $446, 4P $474, 5P $502, 6P $530, 7P $558, 8P $586, 9P $795, 10P $823, 11P $851, 12P $879, 13P $1088, 14P $1116, 15P $1144, 16P $1172
- Kirby Air Ride: 2P $440, 3P $468, 4P $496
- Mario Kart DS: 2P $154, 3P $231, 4P $308, 5P $385, 6P $462, 7P $539, 8P $616
- Mario Kart 7: 2P $216, 3P $324, 4P $432, 5P $540, 6P $648, 7P $756, 8P $864
- Mario Kart 8 Deluxe: 2–4P $390, 5–6P $585, 7–8P $780, 9–10P $975, 11–12P $1170
- Mario Kart World: 2–4P $956, 5–6P $1434, 7–8P $1912, 9–10P $2390, 11–12P $2868, 13–14P $3346, 15–16P $3824, 17–18P $4302, 19–20P $4780, 21–22P $5258, 23–24P $5736
- Kirby Air Riders: 2P $940, 3P $1410, 4P $1880, 5P $2350, 6P $2820, 7P $3290, 8P $3760, 9P $4230, 10P $4700, 11P $5170, 12P $5640, 13P $6110, 14P $6580, 15P $7050, 16P $7520

Online Multiplayer

Multiplayer where you can play against your friends or other players without needing to be on the same local network. This uses either Wi-Fi or Ethernet, but connected to the global internet. This mode relies on a central service, so once that service is discontinued, play will either not be possible or will require modifications to your console, such as wiimmfi for the Nintendo Wii.

Prices by player count:

- Mario Kart DS: 2P $154, 3P $231, 4P $308
- Mario Kart Wii: 2P $190, 3P $203, 4P $216, 5P $311, 6P $324, 7P $419, 8P $432, 9P $527, 10P $540, 11P $635, 12P $648
- Mario Kart 7: 2P $216, 3P $324, 4P $432
- Mario Kart 8: 2P $278, 3P $308, 4P $338, 5P $477, 6P $507, 7P $646, 8P $676
- Mario Kart 8 Deluxe: 2–4P $340, 5–6P $510, 7–8P $680
- F-Zero 99: $135 per player, from 2P $270 up to 24P $3240
- Mario Kart World: 2–4P $906, 5–6P $1359, 7–8P $1812, 9–10P $2265, 11–12P $2718, 13–14P $3171, 15–16P $3624
- Kirby Air Riders: 2–4P $890, 5–6P $1335, 7–8P $1780

Mario Kart Wii and Mario Kart DS have mods you can apply to play online on private servers. Mario Kart: Double Dash!! and Kirby Air Ride also have mods that allow wireless play, which wasn't possible when the games were first released.

Share Multiplayer (Single-Pak Link, DS Download Play, Game Share)

This multiplayer mode allows playing with local players that own a console, but not the game. This usually results in a degraded experience for players that don't own the game, such as a reduced number of playable characters, karts, or racetracks. Nintendo Switch “GameShare” uses game streaming between consoles.

| Game | 2P | 3P | 4P | 5P | 6P | 7P | 8P |
|---|---|---|---|---|---|---|---|
| Mario Kart: Super Circuit | $187 | $284 | $381 | | | | |
| F-Zero: Maximum Velocity | $190 | $287 | $384 | | | | |
| F-Zero: GP Legend | $200 | $297 | $394 | | | | |
| Mario Kart DS | $137 | $197 | $257 | $317 | $377 | $437 | $497 |
| Mario Kart 7 | $202 | $296 | $390 | $484 | $578 | $672 | $766 |
| Kirby Air Riders | $840 | $1235 | $1630 | | | | |

All Multiplayers

Here is a comparison of all multiplayer modes and their costs, by player count:

- Super Mario Kart
  - Local: 1P $163, 2P $180
- F-Zero
  - Local: 1P $147, 2P $164
- Mario Kart 64
  - Local: 1P $133, 2P $149, 3P $165, 4P $181
- Mario Kart: Super Circuit
  - Local: 1P $90
  - LAN: 1P $90, 2P $204, 3P $318, 4P $432
  - Share: 1P $90, 2P $187, 3P $284, 4P $381
- F-Zero: Maximum Velocity
  - Local: 1P $93
  - LAN: 1P $93, 2P $210, 3P $327, 4P $444
  - Share: 1P $93, 2P $190, 3P $287, 4P $384
- F-Zero: GP Legend
  - Local: 1P $103
  - LAN: 1P $103, 2P $230, 3P $357, 4P $484
  - Share: 1P $103, 2P $200, 3P $297, 4P $394
- Mario Kart: Double Dash!!
  - Local: 1P $179, 2P $207, 3P $235, 4P $263
  - LAN: 1P $179, 2P $418, 3P $446, 4P $474, 5P $502, 6P $530, 7P $558, 8P $586, 9P $795, 10P $823, 11P $851, 12P $879, 13P $1088, 14P $1116, 15P $1144, 16P $1172
- Kirby Air Ride
  - Local: 1P $190, 2P $218, 3P $246, 4P $274
  - LAN: 1P $190, 2P $440, 3P $468, 4P $496
- F-Zero GX
  - Local: 1P $180, 2P $208, 3P $236, 4P $264
- Mario Kart DS
  - Local: 1P $77
  - LAN: 1P $77, 2P $154, 3P $231, 4P $308, 5P $385, 6P $462, 7P $539, 8P $616
  - Online: 1P $77, 2P $154, 3P $231, 4P $308
  - Share: 1P $77, 2P $137, 3P $197, 4P $257, 5P $317, 6P $377, 7P $437, 8P $497
- Mario Kart Wii
  - Local: 1P $95, 2P $108, 3P $121, 4P $134
  - Online: 1P $95, 2P $190, 3P $203, 4P $216, 5P $311, 6P $324, 7P $419, 8P $432, 9P $527, 10P $540, 11P $635, 12P $648
- Mario Kart 7
  - Local: 1P $108
  - LAN: 1P $108, 2P $216, 3P $324, 4P $432, 5P $540, 6P $648, 7P $756, 8P $864
  - Online: 1P $108, 2P $216, 3P $324, 4P $432
  - Share: 1P $108, 2P $202, 3P $296, 4P $390, 5P $484, 6P $578, 7P $672, 8P $766
- Mario Kart 8
  - Local: 1P $139, 2P $169, 3P $199, 4P $229
  - Online: 1P $139, 2P $278, 3P $308, 4P $338, 5P $477, 6P $507, 7P $646, 8P $676
- Mario Kart 8 Deluxe
  - Local: 1P $170, 2P $170, 3P $206, 4P $242
  - LAN: 1P $170, 2–4P $390, 5–6P $585, 7–8P $780, 9–10P $975, 11–12P $1170
  - Online: 1P $170, 2–4P $340, 5–6P $510, 7–8P $680
- F-Zero 99
  - Local: 1P $135
  - Online: $135 per player, from 2P $270 up to 24P $3240
- Mario Kart World
  - Local: 1P $453, 2P $453, 3P $489, 4P $525
  - LAN: 1P $453, 2–4P $956, 5–6P $1434, 7–8P $1912, 9–10P $2390, 11–12P $2868, 13–14P $3346, 15–16P $3824, 17–18P $4302, 19–20P $4780, 21–22P $5258, 23–24P $5736
  - Online: 1P $453, 2–4P $906, 5–6P $1359, 7–8P $1812, 9–10P $2265, 11–12P $2718, 13–14P $3171, 15–16P $3624
- Kirby Air Riders
  - Local: 1P $445, 2P $445, 3P $481, 4P $517
  - LAN: 1P $445, 2P $940, 3P $1410, 4P $1880, 5P $2350, 6P $2820, 7P $3290, 8P $3760, 9P $4230, 10P $4700, 11P $5170, 12P $5640, 13P $6110, 14P $6580, 15P $7050, 16P $7520
  - Online: 1P $445, 2–4P $890, 5–6P $1335, 7–8P $1780
  - Share: 1P $445, 2P $840, 3P $1235, 4P $1630


Thanks for keeping RSS alive! ♥

March 24, 2026 12:00 AM UTC

March 23, 2026


Talk Python Blog

Updates from Talk Python - March 2026

There have been a bunch of changes to make the podcast and courses at Talk Python just a little bit better. And I wrote a few interesting articles that might pique your interest. So I thought it was time to send you all a quick little update and let you know what’s new and improved.

Talk Python Courses

Account Dashboard for courses

I spoke to a lot of users who said that it's a bit difficult to jump back into your account and see which courses you were last taking. I can certainly appreciate that, especially if you have the bundle where every course is available. So I added a cool new dashboard that sorts and displays your progress through your most recent course activity, as well as courses that you have finished.

March 23, 2026 09:04 PM UTC


"Michael Kennedy's Thoughts on Technology"

Replacing Flask with Robyn wasn't worth it

TL;DR: I converted Python Bytes from Quart/Flask to the Rust-backed Robyn framework and benchmarked it with Locust. There was no meaningful speed or memory improvement - and Robyn actually used more memory. Framework maturity, ecosystem depth, and app server flexibility still matter more than raw benchmark numbers.

Last week I played with the idea of replacing Quart (async Flask) with Robyn for our bigger web apps. Robyn is built almost entirely in Rust, and in the benchmarks it looks dramatically better. Not just a little bit faster, but 25 times faster. However, if you’ve been around the block for a while, you know that benchmarks and how things work for your app and your situation are not always the same thing.

So I picked the simplest complex app that I run, Python Bytes, and converted it entirely to run on the Robyn framework. This took a few hours of careful work and experimenting, and I even had to create a Python package to allow Robyn to run the Chameleon template language.

When I was done, it was time to fire up Locust and see if there were any dramatic performance improvements. I certainly wasn’t expecting 25x, but 2x? 1.5x? That would have been really impressive.

Did Robyn improve speed or memory over Flask?

The results were in, and the answer was: just about no difference in RPS or latency. It turns out that almost all the computational time is spent in the logic of our app, which of course doesn’t change, and which I never intended to change.

Another area I was hoping to optimize is memory. Our web apps use a lot of memory for what they are. They’re certainly not trivial. But running a couple of copies of the app in a web garden was using way more than I expected that they should. And I thought moving closer to Rust might have positive influences for memory too.

It turns out the Robyn fork actually used more memory, not less, than the current setup. After all, our web apps already run on Granian, which is mostly Rust right up to the Flask framework itself.

Why Flask’s maturity still beats Robyn’s speed

So our fun little spike to explore the Robyn framework is going to remain just that. I’m sticking with Flask. I’ve talked about this before, but maturity in a library or framework is a big plus. The ecosystem for Flask/Quart is much bigger and more polished than for the smaller Robyn framework.

More than that, the app server runtime for Robyn is much less polished than some of the pluggable app servers out there. Think Granian, Gunicorn, uvicorn, etc. For example, Robyn does not support web garden process recycling. Many servers let you say that after five hours or 10,000 requests or so, requests should slowly drain out of a process, a new one should spin up, and the old one should shut down, just to keep things fresh. This helps if you’re using some library that holds on to too many caches or has some other weird memory behavior.

Was the Robyn experiment a waste of time?

Even though I spent maybe close to six hours working on this exploration and decided not to use it, I still found it super valuable. I created the fun Chameleon Robyn package to help people using Robyn have a greater choice of template languages. I got to see my apps from multiple perspectives. I built out some tooling for Claude that I’m going to write about later that is generally really awesome. And I ended up saving significant memory for some of my biggest web apps by just spending more time thinking about how I’m running them currently in Granian and Flask.

March 23, 2026 04:31 PM UTC


Antonio Cuni

My first OSS commit turns 20 today


Some time ago I realized that it was 20 years since I started to contribute to Open Source. It's easy to remember, because I started to work on PyPy as part of my master's thesis and I graduated in 2006.

So, I did a bit of archeology to find the first commit:

$ cd ~/pypy/pypy && git show 1a086d45d9 --no-patch
commit 1a086d45d9
Author: Antonio Cuni <anto.cuni@gmail.com>
Date:   Wed Mar 22 14:01:42 2006 +0000

    Initial commit of the CLI backend

Note (svn, hg, git):

Funny thing, the original commit was not in `git`, which was just a few months old at the time. In 2006 PyPy was using `subversion`, then a few years later [migrated to mercurial](../../2010/12/pypy-migrates-to-mercurial-3308736161543832134.md), and many years later [migrated to git](https://pypy.org/posts/2023/12/pypy-moved-to-git-github.html). I managed to find traces of the original `svn` commit in the archives of the [pypy-svn](https://marc.info/?l=pypy-svn&m=118495688023240) mailing list.

March 23, 2026 04:09 PM UTC


PyCharm

OpenAI Acquires Astral: What It Means for PyCharm Users

On March 19, OpenAI announced that it would acquire Astral, the company behind uv, Ruff, and ty. The Astral team, led by founder Charlie Marsh, will join OpenAI’s Codex team. The deal is subject to regulatory approval.

First and foremost: congratulations to Charlie Marsh and the entire Astral team. They shipped some of the most beloved tools in the Python ecosystem and raised the bar for what developer tooling can be. This acquisition is a reflection of the impact they’ve had.

This is big news for the Python ecosystem, and it matters to us at JetBrains. Here’s our perspective.

What Astral built

In just two years, Astral transformed Python tooling. Their tools now see hundreds of millions of downloads every month, and for good reason.

This is foundational infrastructure that millions of developers rely on every day. We’ve integrated both Ruff and uv into PyCharm because they make Python development substantially better.

The risks are real, but manageable

Change always carries risk, and acquisitions are no exception. The main concern here is straightforward: if Astral’s engineers get reassigned to OpenAI’s more commercial priorities, these tools could stagnate over time.

The good news is that Astral’s tools are open-source under permissive licenses. The community can fork them if it ever comes to that. As Armin Ronacher has noted, uv is “very forkable and maintainable.” There’s no possible future where these tools go backwards.

Both OpenAI and Astral have committed to continued open-source development. We take them at their word, and we hope for the best.

Our commitment hasn’t changed

JetBrains already has great working relationships with both the Astral and the Codex teams. We’ve been integrating Ruff and uv into PyCharm, and we will continue to do so. We’ve submitted some upstream improvements to ty. Regardless of who owns these tools, our commitment to supporting the best Python tooling for our users stays the same. We’ll keep working with whoever maintains them.

The Python ecosystem is stronger because of the work Astral has done. We hope this acquisition amplifies that work, not diminishes it. We’ll be watching closely, and we’ll keep building the best possible experience for Python developers in PyCharm.

March 23, 2026 04:04 PM UTC


James Bennett

Rewriting a 20-year-old Python library

Way back in 2005, lots of people (ordinary people, not just people who work in tech) used to have personal blogs where they wrote about things, rather than using third-party short-form social media sites. I was one of those people (though I wasn’t yet blogging on this specific site, which launched the following year). And back in 2005, and even earlier, people liked to have comment sections on their blogs where readers could leave their thoughts on posts. And that was an absolute magnet for spam.

There were a few attempts to do something about this. One of them was Akismet, which launched that year and provided a web service you could send a comment (or other user-generated-content) submission to, and get back a classification of spam or not-spam. It turned out to be moderately popular, and is still around today.

The folks behind Akismet also documented their API and set up an API key system so people could write their own clients/plugins for various programming languages and blog engines and content-management systems. And so pretty quickly after the debut of the Akismet service, Michael Foord, who the Python community, and the world, tragically lost at the beginning of 2025, wrote and published a Python library, which he appropriately called akismet, that acted as an API client for it.

He published a total of five releases of his Python Akismet library over the next few years, and people started using it. Including me, because I had several use cases for spam filtering as a service. And for a while, things were good. But then Python 3 was released, and people started getting serious about migrating to it, and Michael, who had been promoted into the Python core team, didn’t have a ton of time to work on it. So I met up with him at a conference in 2015, and offered to maintain the Akismet library, and he graciously accepted the offer, imported a copy of his working tree into a GitHub repository for me, and gave me access to publish new packages.

In the process of porting the code to support both Python 2 and 3 (as was the fashion at the time), I did some rewriting and refactoring, mostly focused on simplifying the configuration process and the internals. Some configuration mechanisms were deprecated in favor of either explicitly passing in the appropriate values, or else using the 12-factor approach of storing configuration in environment variables, and the internal HTTP request stack, based entirely on the somewhat-cumbersome (at that time) Python standard library, was replaced with a dependency on requests. The result was akismet 1.0, published in 2017.

Over the next six years, I periodically pushed out small releases of akismet, mostly focused on keeping up with upstream Python version support (and finally going Python-3-only, in 2020 when Python 2.7 reached its end of upstream support). But beginning in 2024, I embarked on a more ambitious project which spanned multiple releases and turned into a complete rewrite of akismet which finished a few months ago. So today I’d like to talk about why I chose to do that, how the process went, and what it produced.

Why?

Although I’m not generally a believer in the concept of software projects being “done” and thus no longer needing active work (in the same sense as “a person isn’t really dead as long as their name is still spoken”, I believe a piece of software isn’t really “done” as long as it has at least one user), a major rewrite is still something that needs a justification. In the case of akismet, there were two specific things I wanted to accomplish that led me to this point.

One was support for a specific feature of the Akismet API. The akismet Python client’s implementation of the most important API method—the one that tells you whether Akismet thinks content is spam, called comment-check—had, since the very first version, always returned a bool. Which at first sight makes sense, because the Akismet web service’s response body for that endpoint is plain text and is either the string true (Akismet thinks the content is spam) or the string false (Akismet thinks it isn’t spam). Except actually Akismet supports a third option: “blatant” spam, meaning Akismet is so confident in its determination that it thinks you can throw away the content without further review (while a normal “spam” determination might still need a human to look at it and double-check). It signals this by returning the true text response and also setting a custom HTTP response header (X-Akismet-Pro-Tip: discard). But the akismet Python client couldn’t usefully expose this, since the original API design of the client chose to have this method return a two-value bool instead of some other type that could handle a three-value situation. And any attempt to fix it would necessarily change the return type, which would be a breaking change.

The other big motivating factor for a rewrite was the rise of asynchronous Python via async and await, originally introduced in Python 3.5. The async Python ecosystem has grown tremendously, and I wanted to have a version of akismet that could support async/non-blocking HTTP requests to the Akismet web service.

Keep it classy?

The first thing I did was spend a bit of time exploring whether I could replace the entire class-based design of the library. Since the very first version back in 2005, the akismet library had always provided its client as a class (named Akismet) with one method for each supported Akismet HTTP API method. But it’s always worth asking if a class is actually the right abstraction. Very often it’s not! And while Python is an object-oriented language and allows you to write classes, it doesn’t require you to write them. So I spent a little while sketching out a purely function-based API.

One immediate issue with this was how to handle the API credentials. Akismet requires you to obtain an API key and to register one or more sites which will use that API key, and most Akismet web API operations require that both the API key and the current site be sent with the request. There’s also a verify-key API operation which lets you submit a key and site and tells you if they’re valid; if you don’t use this, and accidentally start trying to use the rest of the Akismet API with an invalid key and/or site, the other Akismet API operations send back responses with a body of invalid.

As noted above, the 1.0 release already nudged users of akismet in the direction of putting config in the environment, so reading the key and site from env variables was already well-supported. But some people probably can’t, or won’t want to, use environment variables for configuration. For example: they might have multiple sets of Akismet credentials in a multi-tenant application, and need to explicitly pass different sets of credentials depending on which site they’re performing checks for. So in any function-based interface, all the functions would not only need to be able to read configuration from the environment (which at least could be factored out into a helper function), they’d also need to explicitly accept credentials as optional arguments. That complicates the argument signatures (which are already somewhat gnarly because of all the optional information you can provide to Akismet to help with spam determinations), and makes the API start to look cumbersome.

This was a clue that the function-based approach was probably not the right one: if a bunch of functions all have to accept extra arguments for a common piece of data they all need, it’s a sign that they may really want to be a class which just has the necessary data available internally.

The other big sticking point was how to handle credential verification. It requires an HTTP request/response to Akismet, so ideally you’d do this once (per set of credentials per process). Say, if you’re using Akismet in a web application, you’d want to check your credentials at process startup, and then just treat them as known-good for the lifetime of the process after that. Which is what the existing class-based code did: it performed a verify-key on instantiation and then could re-use the verified credentials after that point (or raise an immediate exception if the credentials were missing or invalid). I really like the ergonomics of that, since it makes it much more difficult to create an Akismet client in an invalid/misconfigured state, but it basically requires some sort of shared state. Even if the API key and site URL are read from the environment or passed as arguments every time, there needs to be some sort of additional information kept by the client code to indicate they’ve been validated.

It still would be possible to do this in a function-based interface. It could implicitly verify each new key/site pair on first use, and either keep a full list of ones that had been verified or maybe some sort of LRU cache of them. Or there could be an explicit function for introducing a new key/site pair and verifying them. But the end result of that is a secretly-stateful module full of functions that rely on (and in some cases act on) the state; at that point the case for it being a class is pretty overwhelming.

As an aside, I find that spending a bit of time thinking about, or perhaps even writing sample documentation for, how to use a hypothetical API often uncovers issues like this one. Also, for a lot of people it’s seemingly a lot easier, psychologically, to throw away documentation than to throw away even barely-working code.

One class or two?

Another idea that I rejected pretty quickly was trying to stick to a single Akismet client class. There is a trend of libraries and frameworks providing both sync and async code paths in the same class, often using a naming scheme which prefixes the async versions of the methods with an a (like method_name() for the sync version and amethod_name() for async), but it wasn’t really compatible with what I wanted to do. As mentioned above, I liked the ergonomics of having the client automatically validate your API key and site URL, but doing that in a single class supporting both sync and async has a problem: which code path to use to perform the automatic credential validation? Users who want async wouldn’t be happy about a synchronous/blocking request being automatically issued. And trying to choose the async path by default would introduce issues of how to safely obtain a running event loop (and not just any event loop, but an instance of the particular event loop implementation the end user of the library actually wants).

So I made the decision to have two client classes, one sync and one async. As a nice bonus, this meant I could do all the work of rewriting in new classes with new names. That would let me mark the old Akismet class as deprecated but not have to immediately remove it or break its API, giving users of akismet plenty of notice of what was going on and a chance to migrate to the new clients. So I started working on the new client classes, calling them akismet.SyncClient and akismet.AsyncClient to be as boringly clear as possible about what they’re for.

How to handle async, part one

Unfortunately, the two-class solution didn’t fully solve the issue of how to handle the automatic credential validation. On the old Akismet client class it had been easy, and on the new SyncClient class it would still be easy, because the __init__() method could perform a verify-key operation before returning, and raise an exception if the credentials weren’t found or were invalid.

But in Python, __init__() cannot be (usefully) async, which posed the tricky question of how to perform automatic credential validation at instantiation time for AsyncClient.
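A quick illustration of why: Python requires __init__() to return None, but an async def method returns a coroutine object, so instantiation blows up immediately:

```python
import asyncio


class Broken:
    async def __init__(self):  # defining an async __init__ is allowed...
        await asyncio.sleep(0)


try:
    Broken()  # ...but calling it raises: __init__ returned a coroutine, not None
    raised = False
except TypeError as exc:
    raised = True
    print(exc)
```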

As I dug into this I considered a few different options, and at one point even thought about going back to the one-class approach just to be able to issue a single HTTP request at instantiation without needing an event loop. But I wanted AsyncClient to be truly and thoroughly async, so I ended up settling for a compromise solution, implemented in two phases:

  1. Both SyncClient and AsyncClient were given an alternate constructor method named validated_client(). Alternate constructors can be usefully async, so the AsyncClient version could be implemented as an async method. I documented that if you’re directly constructing a client instance you intend to keep around for a while, this is the preferred constructor since it will perform automatic credential validation for you (direct instantiation via __init__() will not, on either class). And then…
  2. I implemented the context-manager protocol for SyncClient and the async context-manager protocol for AsyncClient. This allows constructing the sync client in a with statement, or an async with statement for AsyncClient. And since async with is an async execution context, it can issue an async HTTP request for credential validation.

So you can get automatic credential validation from either approach, depending on your needs:

import akismet


# Long-lived client object you'll keep around:
sync_client = akismet.SyncClient.validated_client()
async_client = await akismet.AsyncClient.validated_client()

# Or for the duration of a "with" block, cleaned up at exit:
with akismet.SyncClient() as sync_client:
    ...  # Do things...

async with akismet.AsyncClient() as async_client:
    ...  # Do things...
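Under the hood, the async side of this pattern can be sketched roughly like so (a simplified illustration, not akismet’s actual implementation; the internal names are invented):

```python
import asyncio


class AsyncClientSketch:
    """Simplified stand-in for an async API client with credential validation."""

    def __init__(self) -> None:
        self.verified = False  # __init__ stays synchronous; no I/O here

    async def _verify_key(self) -> None:
        # Stand-in for the real async verify-key HTTP request.
        self.verified = True

    @classmethod
    async def validated_client(cls) -> "AsyncClientSketch":
        # Alternate constructors *can* be async, unlike __init__().
        client = cls()
        await client._verify_key()
        return client

    async def __aenter__(self) -> "AsyncClientSketch":
        # "async with" is an async execution context, so validation fits here too.
        await self._verify_key()
        return self

    async def __aexit__(self, *exc_info) -> None:
        pass  # clean up (e.g., close the underlying HTTP client)


async def demo() -> bool:
    client = await AsyncClientSketch.validated_client()
    return client.verified


print(asyncio.run(demo()))
```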

Most Python libraries can benefit from these sorts of conveniences, so I’d recommend investing time into learning how to implement them. If you’re looking for ideas, Lynn Root’s “The Design of Everyday APIs” covers a lot of ways to make your own code easier to use.

How to handle async, part deux

The other thing about writing code that supports both sync and async operations is how to handle the things they have in common. There are a few different ways to do this: you can write one implementation and have the other one call it. Or you can write two full implementations and live with the duplication. Or you can try to separate the I/O and the pure logic as much as possible, and reuse the logic while duplicating only the I/O code (or, since the two implementations aren’t perfect duplicates, writing two I/O implementations which heavily rhyme).

For akismet, I went with a hybrid of the last two of these approaches. I started out with my two classes each fully implementing everything they needed, including a lot of duplicate code between them (in fact, the first draft was just one class which was then copy/pasted and async-ified to produce the other). Then I gradually extracted the non-I/O bits into a common module they could both import from and use, building up a library of helpers for things like validating arguments, preparing requests, processing the responses, and so on.

One final object-oriented design decision here (or, I guess, not object-oriented decision): that common code is a set of functions in a module. It’s not a class. It’s not stateful the way the clients themselves are: turning an Akismet web API response into the desired Python return value, or validating a set of arguments and turning them into the correct request parameters (to pick a couple examples) are literally pure functions, whose outputs are dependent solely on their inputs.

And the common code also isn’t some sort of abstract base class that the two concrete clients would inherit from. An akismet.SyncClient and an akismet.AsyncClient are not two different subtypes of a parent “Akismet client” class or interface! Because of the different calling conventions of sync and async Python, there is no public parent interface that they share or could be substitutable for.

The current code of akismet still has some duplication, primarily around error handling since the try/except blocks need to wrap the correct version of their respective I/O operations, and I might be able to achieve some further refactoring to reduce that to the absolute minimum (for example, by splitting out a bunch of duplicated except clauses into a single common pattern-matching implementation now that Python 3.10 is the minimum supported version). But I’m not in a big hurry to do that; the current code is, I think, in a pretty reasonable state.

Enumerating the options

As I mentioned back at the start of this post, the akismet library historically used a Python bool to indicate the result of a spam-checking operation: either the content was spam (True) or it wasn’t (False). Which makes a lot of sense at first glance, and also matches the way the Akismet web service behaves: for content it thinks is spam, the HTTP response has a body consisting of the string true, and for content that it doesn’t think is spam the response body is the string false.

But for many years now, the Akismet web service has actually supported three possible values, with the third option being “blatant” spam, spam so obvious that it can simply be thrown away with no further human review. Akismet signals this by returning the true response body, and then adding a custom HTTP header to the response: X-Akismet-Pro-Tip, with a value of discard.

Python has had support for enums (via the enum module in the standard library) since Python 3.4, so that seemed the most natural way to represent the possible results. The enum module lets you use lots of different data types for enum values, but I went with an integer-valued enum (enum.IntEnum) for this, because it lets developers still work with the result as a pseudo-boolean type if they don’t care about the extra information from the third option (since in Python 0 is false and all other integers are true).

Python historical trivia

Originally, Python did not have a built-in boolean type, and the typical convention was similar to C, using the integers 0 and 1 to indicate false/true.

Python phased in a real boolean type early in the Python 2 days. First, the Python 2.2 release series (technically, Python 2.2.1) assigned the built-in names False and True to the integer values 0 and 1, and introduced a built-in bool() function which returned the integer truth value of its argument. Then in Python 2.3, the bool type was formally introduced, and was implemented as a subclass of int, constrained to have only two instances. Those instances are bound to the names False and True and have the integer values 0 and 1.

That’s how Python’s bool still works today: it’s still a subclass of int, and so you can use a bool anywhere an int is called for, and do arithmetic with booleans if you really want to, though this isn’t really useful except for writing deliberately-obfuscated code.
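You can see that subclass relationship directly:

```python
# bool really is an int subclass, so booleans participate in arithmetic.
assert issubclass(bool, int)
assert isinstance(True, int)
assert True + True == 2
assert sum([True, False, True]) == 2  # occasionally handy for counting matches
```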

For more details on the history and decision process behind Python’s bool type, check out PEP 285 and this blog post from Guido van Rossum.

The only tricky thing here was how to name the third enum member. The first two were HAM and SPAM to match the way Akismet describes them. The third value is described as “blatant spam” in some documentation, but is represented by the string “discard” in responses, so BLATANT_SPAM and DISCARD both seemed like reasonable options. I ended up choosing DISCARD; it probably doesn’t matter much, but I like having the name match the actual value of the response header.

The enum itself is named CheckResponse since it represents the response values of the spam-checking operation (Akismet actually calls it comment-check because that’s what its original name was, despite the fact Akismet now supports sending other types of content besides comments).
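Sketched out, the enum and its pseudo-boolean behavior look something like this (only the names CheckResponse, HAM, SPAM, and DISCARD come from the library; the specific integer values here are my assumption based on the description above):

```python
import enum


class CheckResponse(enum.IntEnum):
    """Sketch of a three-valued spam-check result; values are illustrative."""
    HAM = 0
    SPAM = 1
    DISCARD = 2


result = CheckResponse.DISCARD

# Code that only cares about spam/not-spam can keep treating the result
# as a boolean, since 0 is falsy and any nonzero integer is truthy:
if result:
    print("spam")

# Code that cares about the third state checks for it explicitly:
if result is CheckResponse.DISCARD:
    print("blatant spam: safe to delete without review")
```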

Bring your own HTTP client

Back when I put together the 1.0 release, akismet adopted the requests library as a dependency, which greatly simplified the process of issuing HTTP requests to the Akismet web API. As part of the more recent rewrite, I switched instead to the Python HTTPX library, which has an API broadly compatible with requests but also, importantly, provides both sync and async implementations.

Async httpx requires the use of a client object (the equivalent of a requests.Session), so the Akismet client classes each internally construct the appropriate type of httpx object: httpx.Client for akismet.SyncClient, and httpx.AsyncClient for akismet.AsyncClient.

And since the internal usage was switching from directly calling the function-based API of requests to using HTTP client objects, it seemed like a good idea to also allow passing in your own HTTP client object in the constructors of the Akismet client classes. These are annotated as httpx.Client/httpx.AsyncClient, but as a practical matter anything with a compatible API will work.

One immediate benefit of this is it’s easier to accommodate situations like HTTP proxies, and server environments where all outbound HTTP requests must go through a particular proxy. You can just create the appropriate type of HTTP client object with the correct proxy settings, and pass it to the constructor of the Akismet client class:

import akismet
import httpx

from your_app.config import settings

akismet_client = akismet.SyncClient.validated_client(
    http_client=httpx.Client(
        proxy=settings.PROXY_URL,
        headers={"User-Agent": akismet.USER_AGENT}
    )
)

But an even bigger benefit came a little bit later on, when I started working on improvements to akismet’s testing story.

Testing should be easy

Right here, right now, I’m not going to get into a deep debate about how to define “unit” versus “integration” tests or which types you should be writing. I’ll just say that historically, libraries which make HTTP requests have been some of my least favorite code to test, whether as the author of the library or as a user of it verifying my usage. Far too often this ends up with fragile piles of patched-in mock objects to try to avoid the slowdowns (and other potential side effects and even dangers) of making real requests to a live, remote service during a test run.

I do think some fully end-to-end tests making real requests are necessary and valuable, but they probably should not be used as part of the main test suite that you run every time you’re making changes in local development.

Fortunately, httpx offers a feature that I wrote about a few years ago, which greatly simplifies both akismet’s own test suite, and your ability to test your usage of it: swappable HTTP transports which you can drop in to affect HTTP client behavior, including a MockTransport that doesn’t make real requests but lets you programmatically supply responses.

So akismet ships with two testing variants of its API clients: akismet.TestSyncClient and akismet.TestAsyncClient. They’re subclasses of the real ones, but they use the ability to swap out HTTP clients (covered above) to plug in custom HTTP clients with MockTransport and hard-coded stock responses. This lets you write code like:

import akismet


class AlwaysSpam(akismet.TestSyncClient):
    comment_check_response = akismet.CheckResponse.SPAM

and then use it in tests. That test client above will never issue a real HTTP request, and will always label any content you check with it as spam. You can also set the attribute verify_key_response to False on a test client to have it always fail API key verification, if you want to test your handling of that situation.

This means you can test your use of akismet without having to build piles of custom mocks and patch them in to the right places. You can just drop in instances of appropriately-configured test clients, and rely on their behavior.

If I ever became King of Programming, with the ability to issue enforceable decrees, requiring every network-interacting library to provide this kind of testing-friendly version of its core constructs would be among them. But since I don’t have that power, I do what I can by providing it in my own libraries.

(py)Testing should be easy

In the Python ecosystem there are two major testing frameworks: the standard library’s unittest module, and the third-party pytest.

For a long time I stuck to unittest, or unittest-derived testing tools like the ones that ship with Django. Although I understand and appreciate the particular separation of concerns pytest is going for, I found its fixture system a bit too magical for my taste; I personally prefer dependency injection to use explicit registration so I can know what’s available, versus the implicit way pytest discovers fixtures based on their presence or absence in particularly-named locations.

But pytest pretty consistently shows up as more popular and more broadly used in surveys of the Python community, and every place I’ve worked for the last decade or so has used it. So I decided to port akismet’s tests to pytest, and in the process decided to write a pytest plugin to help users of akismet with their own tests.

That meant writing a pytest plugin to automatically provide a set of dependency-injection fixtures. There are four fixtures: two sync and two async, with each flavor getting a fixture to provide a client class object (which lets you test instantiation-time behavior like API key verification failures), and a fixture to provide an already-constructed client object. Configuration is through a custom pytest mark called akismet_client, which accepts arguments specifying the desired behavior. For example:

import akismet
import pytest

@pytest.mark.akismet_client(comment_check_response=akismet.CheckResponse.DISCARD)
def test_akismet_discard_response(akismet_sync_client: akismet.SyncClient):
    # Inside this test, akismet_sync_client's comment_check() will always
    # return DISCARD.
    ...

@pytest.mark.akismet_client(verify_key_response=False)
def test_akismet_fails_key_verification(akismet_sync_class: type[akismet.SyncClient]):
    # API key verification will always fail on this class.
    with pytest.raises(akismet.APIKeyError):
        akismet_sync_class.validated_client()

Odds and ends

Python has had the ability to add annotations to function and method signatures since 3.0, and more recently gained the ability to annotate attributes as well; originally, no specific use case was mandated for this feature, but everybody used it for type hints, so now that’s the official use case for annotations. I’ve had a lot of concerns about the way type hinting and type checking have been implemented for Python, largely around the fact that idiomatic Python really wants to be a structurally-typed language, or as some people have called it “interfacely-typed”, rather than nominally-typed. Which is to say: in Python you almost never care about the actual exact type name of something, you care about the interfaces (nowadays, called “protocols” in Python typing-speak) it implements. So you don’t care whether something is precisely an instance of list, you care about it being iterable or indexable or whatever.

On top of which, some design choices made in the development of type-hinted Python have made it (as I understand it) impossible to distribute a single-file module with type hints and have type checkers actually pick them up. Which was a problem for akismet, because traditionally it was a single-file module, installing a file named akismet.py containing all its code.

But as part of the rewrite I was reorganizing akismet into multiple files, so that objection no longer held, and eventually I went ahead and began running mypy as a type checker as part of the CI suite for akismet. The type annotations had been added earlier, because I find them useful as inline documentation even if I’m not running a type checker (and the Sphinx documentation tool, which all my projects use, will automatically extract them to document argument signatures for you). I did have to make some changes to work around mypy, though. It didn’t find any bugs, but it did uncover a few things that were written in ways it couldn’t handle, and maybe I’ll write about those in more detail another time.

As part of splitting akismet up into multiple files, I also went with an approach I’ve used on a few other projects, of prefixing most file names with an underscore (i.e., the async client is defined in a file named _async_client.py, not async_client.py). By convention, this marks the files in question as “private”, and though Python doesn’t enforce that, many common Python linters will flag it. The things that are meant to be supported public API are exported via the __all__ declaration of the akismet package.
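The resulting layout looks something like this (the file names are illustrative of the convention, not an exact listing of the package’s contents):

```
akismet/
    __init__.py        # re-exports the public names and declares __all__
    _async_client.py   # defines AsyncClient ("private" module by convention)
    _sync_client.py    # defines SyncClient

# akismet/__init__.py
from ._async_client import AsyncClient
from ._sync_client import SyncClient

__all__ = ["AsyncClient", "SyncClient"]
```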

I also switched the version numbering scheme to Calendar Versioning. I don’t generally trust version schemes that try to encode information about API stability or breaking changes into the version number, but a date-based version number at least tells you how old something is and gives you a general idea of whether it’s still being actively maintained.

There are also a few dev-only changes:
 * Local dev environment management and packaging are handled by PDM and its package-build backend. Of the current crop of clean-sheet modern Python packaging tools, PDM is my personal favorite, so it’s what my personal projects are using.
 * I added a Makefile which can execute a lot of common developer tasks, including setting up the local dev environment with proper dependencies, and running the full CI suite or subsets of its checks.
 * As mentioned above, the test suite moved from unittest to pytest, using AnyIO’s plugin for supporting async tests in pytest. There’s a lot of use of pytest parametrization to generate test cases, so the number of test cases grew a lot, but it’s still pretty fast—around half a second for each Python version being tested, on my laptop. The full CI suite, testing every supported Python version and running a bunch of linters and packaging checks, takes around 30 seconds on my laptop, and about a minute and a half on GitHub CI.

That’s it (for now)

In October of last year I released akismet 25.10.0 (and then 25.10.1 to fix a documentation error, because there’s always something wrong with a big release), which completed the rewrite process by finally removing the old Akismet client class. At this point I think akismet is feature-complete unless the Akismet web service itself changes, so although there were more frequent releases over a period of about a year and a half as I did the rewrite, it’s likely the cadence will settle down now to one a year (to handle supporting new Python versions as they come out) unless someone finds a bug.

Overall, I think the rewrite was an interesting process, because it was pretty drastic (I believe it touched literally every pre-existing line of code, and added a lot of new code), but also… not that drastic? If you were previously using akismet with your configuration in environment variables (as recommended), I think the only change you’d need to make is rewriting imports from akismet.Akismet to akismet.SyncClient. The mechanism for manually passing in configuration changed, but I believe that and the new client class names were the only actual breaking changes in the entire rewrite; everything else was adding features/functionality or reworking the internals in ways that didn’t affect public API.

I had hoped to write this up sooner, but I’ve struggled with this post for a while now, because I still have trouble with the fact that Michael’s gone, and every time I sat down to write I was reminded of that. It’s heartbreaking to know I’ll never run into him at a conference again. I’ll miss chatting with him. I’ll miss his energy. I’m thankful for all he gave to the Python community over many years, and I wish I could tell him that one more time. And though it’s a small thing, I hope I’ve managed to honor his work and to repay some of his kindness and his trust in me by being a good steward of his package. I have no idea whether Akismet the service will still be around in another 20 years, or whether I’ll still be around or writing code or maintaining this Python package in that case, but I’d like to think I’ve done my part to make sure it’s on sound footing to last that long, or longer.

March 23, 2026 02:09 PM UTC


Real Python

How to Use Note-Taking to Learn Python

Learning Python can be genuinely hard, and it’s normal to struggle with fundamental concepts. Research has shown that note-taking is invaluable when learning new things. This guide will help you get the most out of your learning efforts by showing you how to take better notes as you walk through an existing tutorial and keep handwritten notes on the side:

Photo of handwritten Python Learning Notes

In this guide, you’ll begin by briefly learning about the benefits of note-taking. Then, you’ll follow along with an existing Real Python tutorial as you perform note-taking steps to help make the information in the tutorial really stick. To help you stay organized as you practice, download the Python Note-Taking Worksheet below. It outlines the process you’ll learn here and provides a repeatable framework you can use with future tutorials:

Get Your PDF: Click here to download your free Python Note-Taking Worksheet that outlines the note-taking process.

Take the Quiz: Test your knowledge with our interactive “How to Use Note-Taking to Learn Python” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

How to Use Note-Taking to Learn Python

Test your understanding of note-taking techniques that help you learn Python more effectively and retain what you study.

What Is Python Note-Taking?

In the context of learning, note-taking is the process of recording information from a source while you’re consuming it. A traditional example is a student jotting down key concepts during a lecture. Another example is typing out lines of code or unfamiliar words while watching a video course, listening to a presentation, or reading a learning resource.

In this guide, Python note-taking refers to taking notes specific to learning Python.

People take notes for a variety of reasons. Usually, the intent is to return to the notes at a later time to remind the note-taker of the information covered during the learning session.

In addition to the value of having a physical set of notes to refer back to, studies have found that the act of taking notes alone improves a student’s ability to recall information on a topic.

This guide focuses on handwritten note-taking—that is, using a writing utensil and paper. Several studies suggest that this form of note-taking is especially effective for understanding a topic and remembering it later. If taking notes by hand isn’t viable for you, don’t worry! The concepts presented here should be applicable to other forms of note-taking as well.

Prerequisites

Since this guide focuses on taking notes while learning Python programming, you’ll start by referencing the Real Python tutorial Python for Loops: The Pythonic Way. This resource is a strong choice because it clearly explains a fundamental programming concept that you’ll use throughout your Python journey.

Once you have the resource open in your browser, set aside a few pieces of paper and have a pen or pencil ready. Alternatively, you can take notes on a tablet with a stylus or another writing tool.

Generally, taking notes by hand has a stronger impact on learning than other methods, such as typing into a text document. For more information on the effectiveness of taking notes by hand versus typing, see this article from the Harvard Graduate School of Education.

Step 1: Write Down Major Concepts

With your note-taking tools ready, start by skimming the learning resource. Usually, you want to look at the major headings to see what topics the material covers. For Real Python content, you can instead just look at the table of contents at the top of the page, since this lists the main sections.

The major headings for your example resource are as follows:

The list above doesn’t include subheadings like “Sequences: Lists, Tuples, Strings, and Ranges” under “Traversing Built-In Collections in Python”. For now, stick to top-level headings.

Read the full article at https://realpython.com/python-note-taking-guide/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

March 23, 2026 02:00 PM UTC

Quiz: Strings and Character Data in Python

In this quiz, you’ll test your understanding of Python Strings and Character Data.

This quiz helps you deepen your understanding of Python’s string and byte data types. You’ll explore core concepts like string immutability, interpolation with f-strings, Unicode handling, key string methods, and working with bytes objects.
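A few of the concepts the quiz touches on, sketched in a quick warm-up (the variable names are illustrative, not from the quiz itself):

```python
name = "café"

# Interpolation with f-strings
greeting = f"Hello, {name}!"
assert greeting == "Hello, café!"

# Strings are immutable: methods return new strings; the original is unchanged
shouted = greeting.upper()
assert greeting == "Hello, café!"

# Round-tripping between str and bytes via an explicit encoding
data = name.encode("utf-8")
assert isinstance(data, bytes)
assert data.decode("utf-8") == name
```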


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

March 23, 2026 12:00 PM UTC


PyPy

Using Claude to fix PyPy3.11 test failures securely

I got access to Claude Max for 6 months, as a promotional move Anthropic made to open-source software contributors. My main OSS impact is as a maintainer for NumPy, but I decided to see what claude-code could do for PyPy's failing 3.11 tests. Most of these failures are edge cases: error messages that differ from CPython, or debugging tools that fail in certain cases. I was worried about letting an AI agent loose on my development machine. I noticed a post by Patrick McCanna (thanks Patrick!) that pointed to using bubblewrap to sandbox the agent. So I set it all up and (hopefully securely) pointed claude-code at some tests.

Setting up

There were a few steps to make sure I didn't open myself up to obvious gotchas. There are stories about agents wiping out databases, or deleting mailboxes.

Bubblewrap

First I needed to see what bubblewrap does. I followed the instructions in the blog post to set things up with some minor variations:

sudo apt install bubblewrap

I couldn't run bwrap. After digging around a bit, I found I needed to add an exception for AppArmor on Ubuntu 24.04:

sudo bash -c 'cat > /etc/apparmor.d/bwrap << EOF
abi <abi/4.0>,
include <tunables/global>

profile bwrap /usr/bin/bwrap flags=(unconfined) {
  userns,
}
EOF'
sudo apparmor_parser -r /etc/apparmor.d/bwrap

Then bwrap would run. It is all locked down by default, so I opened up some exceptions. The arguments are pretty self-explanatory. Ubuntu spreads the executables around the operating system, so I needed access to various directories. I wanted a /tmp for running pytest. I also wanted the prompt to reflect the use of bubblewrap, so I changed the hostname:

cat << 'EOL' >> ./run_bwrap.sh
  function call_bwrap() {
    bwrap \
      --ro-bind /usr /usr \
      --ro-bind /etc /etc \
      --ro-bind /run /run \
      --symlink usr/lib /lib \
      --symlink usr/lib64 /lib64 \
      --symlink usr/bin /bin \
      --proc /proc \
      --dev /dev \
      --bind $(pwd) $(pwd) \
      --chdir $(pwd) \
      --unshare-user --unshare-pid --unshare-ipc --unshare-uts --unshare-cgroup \
      --die-with-parent \
      --hostname bwrap \
      --tmpfs /tmp \
      /bin/bash "$@"
  }
EOL

source ./run_bwrap.sh
call_bwrap
# now I am in a sandboxed bash shell
# play around, try seeing other directories, getting sudo, or writing outside
# the sandbox
exit

I did not use --unshare-network since, after all, I want to use claude and that needs network access. I did add rw access to $(pwd) since I want it to edit code in the current directory; that is the whole point.

Basic claude

After trying out bubblewrap and convincing myself it does actually work, I installed claude code:

curl -fsSL https://claude.ai/install.sh | bash

Really, Anthropic, this is the best way to install claude? No dpkg?

I ran claude once (unsafely) to get logged in. It opened a webpage, and saved the login to the oauthAccount field in ~/.claude.json. Then I changed my bash script to this to get claude to run inside the bubblewrap sandbox:

cat << 'EOL' >> ./run_claude.sh
  claude-safe() {
    bwrap \
      --ro-bind /usr /usr \
      --ro-bind /etc /etc \
      --ro-bind /run /run \
      --ro-bind "$HOME/.local/share/claude" "$HOME/.local/share/claude" \
      --symlink usr/lib /lib \
      --symlink usr/lib64 /lib64 \
      --symlink usr/bin /bin \
      --symlink "$HOME/.local/share/claude/versions/2.1.81" "$HOME/.local/bin/claude" \
      --proc /proc \
      --dev /dev \
      --bind $(pwd) $(pwd) \
      --bind "$HOME/.claude" "$HOME/.claude" \
      --bind "$HOME/.claude.json" "$HOME/.claude.json" \
      --chdir $(pwd) \
      --unshare-user --unshare-pid --unshare-ipc --unshare-uts --unshare-cgroup \
      --die-with-parent \
      --hostname bwrap \
      --tmpfs /tmp \
      --setenv PATH "$HOME/.local/bin:$PATH" \
      claude "$@"
  }
EOL

source ./run_claude.sh
claude-safe

Now I can use claude. Note it needs some more directories in order to run. This script hard-codes the version; in the future, YMMV. I want it to be able to look at GitHub, and also my local checkout of cpython so it can examine differences. I created a read-only token by clicking on my avatar in the upper right corner of a GitHub web page, then going to Settings → Developer settings → Personal access tokens → Fine-grained tokens → Generate new token. Since pypy is in the pypy org, I used "Repository owner: pypy", "Repository access: pypy (only)" and "Permissions: Contents". Then I made doubly sure the token permissions were read-only. And checked again. Then I copied the token to the bash script. I also added a ro-bind to the cpython checkout, so I could tell claude code where to look for CPython implementations of missing PyPy functionality.

--ro-bind "$HOME/oss/cpython" "$HOME/oss/cpython" \
--setenv GH_TOKEN "hah, sharing my token would not have been smart" \

Claude /sandbox

Claude comes with its own sandbox, configured by using the /sandbox command. I chose the defaults, which prevent malicious code in the repo from accessing the file system and the network. I was missing some packages to get this to work. Claude would hang until I installed them, and I needed to kill it with kill.

sudo apt install socat
sudo npm install -g @anthropic-ai/sandbox-runtime

Final touches

One last thing that I discovered later: I needed to give claude access to some grepping and git tools. While git should be locked down externally so it cannot push to the repo, I do want claude to look at other issues and pull requests in read-only mode. So I added a local .claude/settings.json file inside the repo (see below for which directory to do this):

{
  "permissions": {
    "allow": [
      "Bash(sed*)",
      "Bash(grep*)",
      "Bash(cat*)",
      "Bash(find*)",
      "Bash(rg*)",
      "Bash(python*)",
      "Bash(pytest*)"
    ]
  }
}

Then I made git ignore it, even when doing a git clean, via a local (not part of the repo) configuration:

echo .claude >> ~/.config/git/ignore

What about git push?

I don't want claude messing around with the upstream repo, only read access. But I did not actively prevent git push. So instead of using my actual pypy repo, I cloned it to a separate directory and did not add a remote pointing to github.com.

Fixing tests - easy

Now that everything is set up (I hope I remembered everything), I could start asking questions. The technique I chose was to feed claude the whole test failure from the buildbot. So starting from the buildbot py3.11 summary, click on one of the F links and copy-paste all that into the claude prompt. It didn't take long for claude to come up with solutions for the long-standing ctypes missing-exception error, which turned out to be due to a missing error trap when already handling an error.

Also a CTYPES_MAX_ARGCOUNT check was missing. At first, claude wanted to change the ctypes code from CPython's stdlib, and so I had to make it clear that claude was not to touch the files in lib-python. They are copied verbatim from CPython and should not be modified without really good reasons.

The fix to raise TypeError rather than AttributeError for deleting a ctypes object's value was maybe a little trickier: claude needed to create its own property class and use it in assignments.
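For reference, this is the CPython behavior the fix brings PyPy in line with; a minimal check you can run on CPython:

```python
import ctypes

c = ctypes.c_int(5)
try:
    del c.value
except TypeError:
    # CPython refuses the deletion with a TypeError,
    # not an AttributeError.
    print("TypeError raised, as on CPython")
```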

The fix for a failing test for a correct repr of a ctypes array was a little more involved. Claude needed to figure out that newmemoryview was raising an exception, dive into the RPython implementation and fix the problem, and then also fix a pure-python __buffer__ shape edge case error.

There were more, but you get the idea. With a little bit of coaching, and by showing claude where the CPython implementation was, more tests are now passing.

Fixing tests - harder

PyPy has a HPy backend. There were some test failures that were easy to fix (a handle not being closed, an annotation warning). But the big one was a problem with the context tracking before and after ffi function calls. In debug mode there is a check that the ffi call is done using the correct HPy context. It turns out to be tricky to hang on to a reference to a context in RPython since the context RPython object is pre-built. The solution, which took quite a few tokens and translation cycles to work out, was to assign the context on the C level, and have a getter to fish it out in RPython.

Conclusion

I started this journey not more than 24 hours ago, after some successful sessions using claude to refactor some web sites off hosting platforms and make them static pages. I was impressed enough to try coding with it from the terminal. It helps that I was given a generous budget to use Anthropic's tool.

Claude seems capable of understanding the layers of PyPy: from the pure-Python stdlib to RPython and into the small amount of C code. I even asked it to examine a segfault in the recently released PyPy 7.3.21, and it seems to have found the general area where there was a latent bug in the JIT.

Like any tool, agentic programming must be used carefully to make sure it cannot do damage. I hope I closed the most obvious foot-guns. If you have other ideas for things I should do to protect myself while using an agent like this, I would love to hear about them.

March 23, 2026 10:27 AM UTC


Tryton News

Release 0.8.0 of mt940

We are proud to announce the release of version 0.8.0 of mt940.

mt940 is a library to parse MT940 files. MT940 is a specific SWIFT message type used by the SWIFT network to send and receive end-of-day bank account statements.

In addition to bug-fixes, this release contains the following improvements:

mt940 is available on PyPI: mt940 0.8.0.

1 post - 1 participant

Read full topic

March 23, 2026 09:33 AM UTC

Release 0.4.0 of febelfin-coda

We are proud to announce the release of version 0.4.0 of febelfin-coda.

febelfin-coda is a library to parse CODA files. This bank standard (also called CODA) specifies the layout of the electronic files that banks send to customers, listing account transactions and information about the enclosures connected with each movement.

In addition to bug-fixes, this release contains the following improvements:

febelfin-coda is available on PyPI: febelfin-coda 0.4.0.

1 post - 1 participant

Read full topic

March 23, 2026 09:29 AM UTC

Release 0.2.0 of aeb43

We are proud to announce the release of version 0.2.0 of aeb43.

aeb43 is a library to parse AEB43 files. AEB43 is a standard, fixed-length 80-character file format used by Spanish banks for transmitting bank statements, transaction details, and account balances.

In addition to bug-fixes, this release contains the following improvements:

aeb43 is available on PyPI: aeb43 0.2.0.

1 post - 1 participant

Read full topic

March 23, 2026 09:24 AM UTC

Release 0.12.0 of Relatorio

We are proud to announce the release of Relatorio version 0.12.0.

Relatorio is a templating library mainly for OpenDocument, which also uses OpenDocument as its source format.

In addition to bug-fixes, this release contains the following improvements:

The package is available at relatorio · PyPI
The documentation is available at Relatorio — A templating library able to output odt and pdf files

1 post - 1 participant

Read full topic

March 23, 2026 09:16 AM UTC


Python Bytes

#474 Astral to join OpenAI

<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://starlette.dev/release-notes/#100rc1-february-23-2026">Starlette 1.0.0</a></strong></li> <li><strong><a href="https://astral.sh/blog/openai?featured_on=pythonbytes">Astral to join OpenAI</a></strong></li> <li><strong>uv audit</strong></li> <li><strong><a href="https://mkennedy.codes/posts/fire-and-forget-or-never-with-python-s-asyncio/?featured_on=pythonbytes">Fire and forget (or never) with Python’s asyncio</a></strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=k8BJzKSMwvQ' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="474">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a> <strong>Connect with the hosts</strong></li> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> (bsky) Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. 
Usually <strong>Monday</strong> at 11am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</li> </ul> <p><strong>Brian #1: <a href="https://starlette.dev/release-notes/#100rc1-february-23-2026">Starlette 1.0.0</a></strong></p> <ul> <li>As a reminder, Starlette is the foundation for FastAPI</li> <li><a href="https://marcelotryle.com/blog/2026/03/22/starlette-10-is-here/?featured_on=pythonbytes">Starlette 1.0 is here!</a> - fun blog post from Marcello Trylesinski</li> <li>“The changes in 1.0 were limited to removing old deprecated code that had been on the way out for years, along with a few bug fixes. From now on we'll follow SemVer strictly.”</li> <li>Fun comment in the “What’s next?” section: <ul> <li>“Oh, and Sebastián, Starlette is now out of your way to release FastAPI 1.0. 
😉”</li> </ul></li> <li>Related: <a href="https://simonwillison.net/2026/Mar/22/starlette/?featured_on=pythonbytes">Experimenting with Starlette 1.0 with Claude skills</a> <ul> <li>Simon Willison</li> <li>example of the new lifespan mechanism, very pytest fixture-like <div class="codehilite"> <pre><span></span><code><span class="nd">@contextlib</span><span class="o">.</span><span class="n">asynccontextmanager</span> <span class="k">async</span> <span class="k">def</span><span class="w"> </span><span class="nf">lifespan</span><span class="p">(</span><span class="n">app</span><span class="p">):</span> <span class="k">async</span> <span class="k">with</span> <span class="n">some_async_resource</span><span class="p">():</span> <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Run at startup!&quot;</span><span class="p">)</span> <span class="k">yield</span> <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Run on shutdown!&quot;</span><span class="p">)</span> <span class="n">app</span> <span class="o">=</span> <span class="n">Starlette</span><span class="p">(</span> <span class="n">routes</span><span class="o">=</span><span class="n">routes</span><span class="p">,</span> <span class="n">lifespan</span><span class="o">=</span><span class="n">lifespan</span> <span class="p">)</span> </code></pre> </div></li> </ul></li> </ul> <p><strong>Michael #2: <a href="https://astral.sh/blog/openai?featured_on=pythonbytes">Astral to join OpenAI</a></strong></p> <ul> <li>via John Hagen, thanks</li> <li>Astral has agreed to join <a href="https://openai.com/?featured_on=pythonbytes"><strong>OpenAI</strong></a> as part of the <a href="https://chatgpt.com/codex?featured_on=pythonbytes"><strong>Codex</strong></a> team</li> <li>Congrats Charlie and team</li> <li>Seems like <a href="https://github.com/astral-sh/ruff?featured_on=pythonbytes">**Ruff</a>** and <a 
href="https://github.com/astral-sh/uv?featured_on=pythonbytes"><strong>uv</strong></a> play an important role.</li> <li>Perhaps <a href="https://github.com/astral-sh/ty?featured_on=pythonbytes"><strong>ty</strong></a> holds the most value to directly boost Codex (understanding codebases for the AI)</li> <li>All that said, these were open source so there is way more to the motivations than just using the tools.</li> <li>After joining the Codex team, we'll continue building our open source tools.</li> <li><a href="https://simonwillison.net/2026/Mar/19/openai-acquiring-astral/?featured_on=pythonbytes">Simon Willison has thoughts</a></li> <li><a href="https://discuss.python.org/t/openai-to-acquire-astral/106605?featured_on=pythonbytes">discuss.python.org also has thoughts</a></li> <li>The <a href="https://arstechnica.com/ai/2026/03/openai-is-acquiring-open-source-python-tool-maker-astral/?featured_on=pythonbytes">Ars Technica article</a> has interesting comments too</li> <li>It’s probably the death of <a href="https://astral.sh/pyx?featured_on=pythonbytes">pyx</a> <ul> <li>Simon points out “pyx is notably absent from both the Astral and OpenAI announcement posts.”</li> </ul></li> </ul> <p><strong>Brian #3: uv audit</strong></p> <ul> <li>Submitted by Owen Lemont</li> <li>Pieces of <code>uv audit</code> have been trickling in. 
<a href="https://github.com/astral-sh/uv/releases?featured_on=pythonbytes">uv 0.10.12 exposes it to the cli help</a></li> <li>Here’s the <a href="https://github.com/astral-sh/uv/issues/18506?featured_on=pythonbytes">roadmap for uv audit</a></li> <li>I tried it out on a package and found a security issue with a dependency <ul> <li>not of the project, but of the testing dependencies</li> <li>but only if using Python &lt; 3.10, even though I’m using 3.14</li> </ul></li> <li>Kinda cool</li> <li>Looks like it generates a uv.lock file, which includes dependencies for all project supported versions of Python and systems, which is a very thorough way to check for vulnerabilities.</li> <li>But also, maybe some pointers on how to fix the problem would be good. No <code>--fix</code> yet.</li> </ul> <p><strong>Michael #4: <a href="https://mkennedy.codes/posts/fire-and-forget-or-never-with-python-s-asyncio/?featured_on=pythonbytes">Fire and forget (or never) with Python’s asyncio</a></strong></p> <ul> <li>Python’s <code>asyncio.create_task()</code> can silently garbage collect your fire-and-forget tasks starting in Python 3.12</li> <li>Formerly fine async code can now stop working, so heads up</li> <li>The fix? Use a set to upgrade to a strong ref and a callback to remove it</li> <li>Is there a chance of task-based memory leaks? Yeah, maybe.</li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li><a href="https://terriblesoftware.org/2026/03/03/nobody-gets-promoted-for-simplicity/?featured_on=pythonbytes">Nobody Gets Promoted for Simplicity</a> - interesting read and unfortunate truth in too many places.</li> <li><a href="https://github.com/okken/pytest-check?featured_on=pythonbytes">pytest-check</a> - All built-in check helper functions in this list also accept an optional <code>xfail</code> reason. 
<ul> <li>example: <code>check.equal(actual, expected, xfail="known issue #123")</code></li> <li>Allows some checks to still cause a failure to happen because you no longer have to mark the whole test as xfail Michael:</li> </ul></li> <li><a href="https://x.com/rachpradhan/status/2034191434182738096?featured_on=pythonbytes">TurboAPI</a> - FastAPI + Pydantic compatible framework in Zig (see <a href="https://x.com/rachpradhan/status/2035928730242371716?featured_on=pythonbytes">follow up</a>)</li> <li><a href="https://docs.pylonsproject.org/projects/pyramid/en/2.1-branch/whatsnew-2.1.html?featured_on=pythonbytes">Pyramid 2.1</a> is out (yes really! :) first release in 3 years)</li> <li><a href="https://vivaldi.com/blog/vivaldi-on-desktop-7-9/?featured_on=pythonbytes">Vivaldi 7.9 adds</a> minimalist hide mode.</li> <li>Migrated <a href="http://pythonbytes.fm">pythonbytes.fm</a> and <a href="http://talkpython.fm?featured_on=pythonbytes">talkpython.fm</a> to <a href="https://mkennedy.codes/posts/raw-dc-the-orm-pattern-of-2026/?featured_on=pythonbytes">Raw+DC design pattern</a></li> <li><a href="https://mkennedy.codes/posts/use-chameleon-templates-in-the-robyn-web-framework/?featured_on=pythonbytes">Robyn + Chameleon package</a></li> </ul> <p><strong>Joke: We now have <a href="https://translate.kagi.com?featured_on=pythonbytes">translation services</a></strong></p>
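On the asyncio item above: the event loop keeps only a weak reference to tasks created with asyncio.create_task(), so a fire-and-forget task with no other reference can be garbage collected before it finishes. The strong-reference-set pattern recommended in the asyncio documentation looks like this:

```python
import asyncio

# Hold strong references so pending tasks can't be garbage collected.
background_tasks = set()

async def worker(n):
    await asyncio.sleep(0)
    return n * 2

async def main():
    task = asyncio.create_task(worker(21))
    background_tasks.add(task)
    # Drop the reference automatically once the task finishes.
    task.add_done_callback(background_tasks.discard)
    return await task

result = asyncio.run(main())
assert result == 42
```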

March 23, 2026 08:00 AM UTC


Antonio Cuni

Inside SPy, part 1: Motivations and Goals


This is the first of a series of posts in which I will try to give a deep explanation of SPy, including motivations, goals, rules of the language, differences with Python, and implementation details.

This post focuses primarily on the problem space: why Python is fundamentally hard to optimize, what trade-offs existing solutions require, and where current approaches fall short. Subsequent posts in this series will explore the solutions in depth. For now, let's start with the essential question: what is SPy?

Before diving in, I want to express my gratitude to my employer, Anaconda, for giving me the opportunity to dedicate 100% of my time to this open-source project.

March 23, 2026 07:58 AM UTC

March 22, 2026


Reuven Lerner

Do you teach Python? Then check out course-setup

TL;DR: If you teach Python, then you should check out course-setup (https://pypi.org/project/course-setup/) at PyPI!

I’ve been teaching Python and Pandas for many years. And while I started my teaching career like many other instructors, with slides, I quickly discovered that it was better for my students — and for me! — to replace them with live coding.

Every day I start teaching, I open a new Jupyter or Marimo notebook, and I type. I type the day’s agenda. I type the code that I want to demonstrate, and then do it right there, in front of people. I type the instructions for each exercise we’re going to use. I type explanatory notes. If people have questions, I type those, along with my answers.

In other words, every day’s notebook contains a combination of documentation, demonstration, explanation, and exercise solutions. That combination is unique to the group I’m teaching. If we get sidetracked with someone’s question, that’s OK — I include whatever I can in each day’s notebook.

Teaching in this way raises some issues. Among the biggest: If I’m working on my own computer, then how can someone see the notebook that I’m writing? Obviously, I could scroll my screen up and down, but that’s frustrating for everyone, especially when we’re doing an exercise.

I was thus delighted to learn, years ago, about “gitautopush” (https://pypi.org/project/gitautopush/), a simple PyPI project that takes a local Git repository and monitors it for any changes. When something changes, it commits those changes to Git and then pushes them to a remote repository. The fact that GitHub renders Jupyter notebooks into HTML made this a perfect solution for me.
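The core loop gitautopush implements is simple enough to sketch. This is an illustrative approximation only (the function names are mine, not gitautopush's actual API), assuming git is on the PATH:

```python
import subprocess
import time

def has_changes(repo_path):
    """Return True if the working tree has uncommitted or untracked changes."""
    result = subprocess.run(
        ["git", "-C", repo_path, "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    )
    return bool(result.stdout.strip())

def autopush_once(repo_path, message="autopush"):
    """Commit and push any pending changes; return True if something was pushed."""
    if not has_changes(repo_path):
        return False
    subprocess.run(["git", "-C", repo_path, "add", "--all"], check=True)
    subprocess.run(["git", "-C", repo_path, "commit", "-m", message], check=True)
    subprocess.run(["git", "-C", repo_path, "push"], check=True)
    return True

def watch(repo_path, interval=5.0):
    """Poll the repository and push whenever it changes, gitautopush-style."""
    while True:
        autopush_once(repo_path)
        time.sleep(interval)
```

Pointing something like this at the directory holding the day's notebook, with GitHub as the remote, gives students a rendered, always-current view of the notebook.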

For years, then, my setup has been:

This worked fine for many years, but it took about 10 minutes of prep before each class. I finally realized that this was silly: I’m a programmer, and shouldn’t I be automating repetitive tasks that take a long time?

That’s where course-setup started. I wrote two Python programs that would let me create a new course (doing all of the setup tasks I mentioned above) or retire an existing one. Did it do everything I wanted? No, but it was good enough.

Once I started to use uv, I turned these programs into uv tools, always available in my shell. I made some additional progress with course-setup, but most of my thoughts about improvements stayed on the back burner.

And then? I started to use Claude Code. I decided to see just how far I could improve course-setup with Claude Code — and to be honest, the improvements were beyond my wildest dreams:

It’s hard to exaggerate how much of this work was done by Claude Code. I supervised, checked things, added new functionality, pushed back on a number of things it suggested, and am ultimately responsible. But really, the code itself was largely written by Claude, often using a number of agents working in parallel, and I couldn’t be happier with the result. I’ve included the CLAUDE.md file in the GitHub repo, if you’re interested in learning from it and/or using it.

This suite of utilities is now available on PyPI as “course-setup” (https://pypi.org/project/course-setup/). It includes a ton of functionality, and I’m always looking to improve it — so tell me how, or send a PR my way at https://github.com/reuven/course-setup!

The post Do you teach Python? Then check out course-setup appeared first on Reuven Lerner.

March 22, 2026 03:10 PM UTC


EuroPython

Humans of EuroPython: Niklas Mertsch

EuroPython runs on people power—real people giving their time to make it happen. No flashy titles, just real work: setting up rooms, guiding speakers, helping attendees find their way, or making sure everyone feels welcome. Some help run sessions, others support accessibility needs or troubleshoot the Wi-Fi. 

It’s all about showing up, pitching in, and sharing a passion for Python. This is what a community looks like.

Today we’d like to introduce you to Niklas Mertsch, member of the Operations team at EuroPython 2025. Check out what he has to say about the volunteering experience.

Niklas Mertsch, member of the Operations team at EuroPython 2025

EP: What's one thing about the programming community that made you want to give back by volunteering?

For me, it is not about “giving back” but about “participating”. I started volunteering out of curiosity, and continued because of the people and interactions. It started with a conversation, and it led to many more.

EP: Did you learn any new skills while volunteering at EuroPython? If so, which ones?

I can't name a “new” skill, but working with an intrinsically motivated, international and intercultural team definitely improved my social and communication skills.

EP: Did you have any unexpected or funny experiences during the EuroPython?

Tons of them; you never know what happens before or during the event. One time I just tried to print a WiFi QR code, then spent the next hours talking to someone I now call a good friend. And some months later, that friend nudged me to answer these questions. You never know what you get and where it will lead you, but you know it will be good.

EP: Thank you for your work, Niklas!

March 22, 2026 01:53 PM UTC


Tryton News

Release 1.7.0 of python-sql

We are proud to announce the release of version 1.7.0 of python-sql.

python-sql is a library to write SQL queries in a pythonic way. It is mainly developed for Tryton but it has no external dependencies and is agnostic to any framework or SQL database.

In addition to bug-fixes, this release contains the following improvements:

python-sql is available on PyPI: python-sql 1.7.0.

1 post - 1 participant

Read full topic

March 22, 2026 09:18 AM UTC

March 20, 2026


Real Python

The Real Python Podcast – Episode #288: Automate Exploratory Data Analysis & Invent Python Comprehensions

How do you quickly get an understanding of what's inside a new set of data? How can you share an exploratory data analysis with your team? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

March 20, 2026 12:00 PM UTC

Quiz: Python Decorators 101

In this quiz, you’ll test your understanding of Python Decorators 101.

Work through this quiz to review first-class functions, inner functions, and decorators, and learn how to create, reuse, and apply them to extend behavior cleanly in Python.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

March 20, 2026 12:00 PM UTC