Planet Python
Last update: May 13, 2026 09:44 PM UTC
May 13, 2026
"Michiel's Blog"
httpx2!
It’s been six weeks since we forked httpx and named our package httpxyz. Yesterday, the Pydantic people started their own fork, httpx2.
TL;DR: while we think httpxyz was definitely needed, we welcome httpx2 and think it should be the ‘blessed’ fork.

About httpx2
Our fork
We did a bunch of work on httpx: merging old open pull requests, forking httpcore, and making serious improvements to fix performance and other issues.
The Pydantic fork
Straight after we made our fork, I contacted Kludex, who is, among other things, a maintainer of Starlette, about it. He said he had also been thinking about a fork, that he might prefer to do one himself, and that he thought ours could not get popular because it’s hosted on Codeberg instead of GitHub.
I’m not really sure about that last one. While it’s true that there are still no big examples of popular Python packages on Codeberg, more and more projects are moving there. And even though we are on Codeberg, we were still gaining ‘stars’ every single day; if the Pydantic team had backed our fork, with their influence we definitely could have made it a success. The majority of users don’t care which forge hosts the code: they install from PyPI, via pip or uv. Where the code is hosted is not really a factor in a package’s popularity.
The way forward
The reason I started httpxyz was the impasse httpx was in, and the feeling that something had to be done. It’s not that I wanted to be the maintainer of an HTTP library per se ;-)
So now that Pydantic, with their skillful team and their powerful ecosystem of packages, is creating their own fork, there is no point really in trying to compete with them. We’ll keep httpxyz up; but we will support httpx2 and will urge anyone who is trying to switch away from httpx to consider httpx2.
The current situation
As it stands, httpx2 lacks the performance improvements we added to httpxyz, but it will not be long before they add those, too.
They have also already made some smart decisions about things I had been unsure of:
- they are switching from certifi to truststore
- they are switching to compression.zstd on Python 3.14+, enabling zstd compression by default
- they merged httpcore and vendored it in their repository
I have great trust in their stewardship of the module. We don’t need ‘competing’ forks; we’ll fully support httpx2 and will encourage the community to do the same!
Thanks, and have fun!
Python Software Foundation
PSF Welcomes Hudson River Trading (HRT) as a Visionary Sponsor
[May 13, 2026] – The Python Software Foundation (PSF) is excited to announce that Hudson River Trading (HRT), a global leader in quantitative trading, has made a commitment to support Python and the PSF as a Visionary Sponsor.
HRT’s "Visionary" sponsorship—our highest tier—will help to support the foundation’s core work of advancing and protecting the Python programming language and supporting a diverse and international community of Python programmers. HRT is the first quantitative trading firm to become a PSF Visionary Sponsor, alongside companies including NVIDIA, Google, Fastly, Bloomberg, Meta, and Anthropic. Contributions at this level directly fund the critical work that keeps Python thriving, including:
- CPython Development: Ensuring the core language remains fast, stable, and modern.
- PyPI Infrastructure: Maintaining the Python Package Index, which serves billions of downloads to developers worldwide.
- Community Programs: Supporting Python workshops, events, and user groups globally, as well as hosting PyCon US each year.
- Security Initiatives: Hardening the ecosystem against supply chain vulnerabilities.
A Shared Commitment to Python
Hudson River Trading is no stranger to the power of Python. As a leading multi-asset class quantitative trading firm, HRT relies on Python for research, data analysis, and engineering workflows. With this donation, HRT is giving back to the tools that empower their engineers and helping to ensure that Python remains flexible, effective, and welcoming in the ways that have made it one of the most popular programming languages in the world. Read more about Open Source at HRT on this page.
“Python is a cornerstone of HRT’s research and trading infrastructure. Our engineers use Python extensively to build cutting-edge tooling that enhances our developer workflows, and we believe strongly in contributing to the open source software that makes our work possible. We are proud to support the PSF as a Visionary Sponsor helping to safeguard Python as a robust, accessible, and community-driven language for years to come.” – Prashant Lal, Partner at Hudson River Trading
“Part of HRT's edge is our engineering, and one of our core values is 'Make It Better'. Our support of the Python Software Foundation – alongside our contributions to many other open source projects – reflects our desire to remain active, collaborative participants in the OSS engineering community over the long term, for the benefit of all.” – Hashem, Lead Software Engineer at Hudson River Trading
“At HRT, we’ve always believed that the best way to advance Python is by working hand-in-hand with the community. Our internal work on lazy imports gave us deep expertise in the problem space, and we channeled that experience directly into open collaboration by contributing to the development of PEP 810. We pride ourselves on being exemplary participants in both the trading markets and the open source community, and our sponsorship of the Python Software Foundation reflects that genuine spirit of collaboration.” – Pablo Galindo Salgado, Lead Software Engineer at Hudson River Trading
As part of its ongoing participation in the Python ecosystem, HRT will be open sourcing some of its own projects and announcing additional OSS contributions later this year. To learn more about HRT’s open engineering, research, and data science roles, visit https://www.hudsonrivertrading.com/careers/.
The PSF is grateful for Hudson River Trading’s support, alongside that of each of our Visionary Sponsors, and we hope you will join us in thanking them for their commitment to the PSF and the Python community!
About Hudson River Trading (HRT)
Hudson River Trading (HRT) is a leading quantitative trading firm at the forefront of technical innovation in global financial markets. Every day, we bring together the world’s sharpest minds to collaboratively solve challenging problems and build technology that will drive the future of trading. Leveraging one of the world’s most sophisticated computing environments for research and development, we trade across asset classes and time horizons on more than 200 markets worldwide. We are a leading voice advocating for fair and transparent markets everywhere and dedicated to creating a better trading landscape for all. For more information, visit www.hudsonrivertrading.com.
About the Python Software Foundation (PSF)
The Python Software Foundation is a US non-profit whose mission is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers. The PSF supports the Python community using corporate sponsorships, grants, and donations. Are you interested in sponsoring or donating to the PSF so we can continue supporting Python and its community? Check out our sponsorship program, donate directly, or contact our team at sponsors@python.org!
Real Python
How to Use OpenCode for AI-Assisted Python Coding
OpenCode is an open-source AI coding agent that runs in your terminal and lets you analyze and refactor a Python project through conversational commands. In this guide, you’ll install it on your system, set it up with a free Google Gemini API key, and learn the basics of how to use it in your daily programming work.
Here’s what OpenCode’s main interface looks like:
OpenCode's Initial Screen
OpenCode works as a conversational assistant you explicitly direct. Ask it to analyze functions, refactor code, or explain issues. Press Enter to send your query, and you’ll get a response with full awareness of your project context. It supports more than seventy-five AI providers, including Anthropic, OpenAI, and Google Gemini.
If you’re a Python developer who prefers working in the terminal, OpenCode offers deliberate, context-aware assistance and a customizable AGENTS.md configuration file.
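To give a taste of what that configuration looks like: AGENTS.md is a free-form Markdown file of project instructions that the agent reads for context. A minimal sketch might look like the following — the contents here are purely hypothetical examples, not a required schema:

# AGENTS.md

## Project overview
A small dice-rolling CLI targeting Python 3.11+.

## Conventions for the agent
- Prefer type hints and f-strings in any refactors.
- Run pytest before considering a change complete.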
Take the Quiz: Test your knowledge with our interactive “How to Use OpenCode for AI-Assisted Python Coding” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz: How to Use OpenCode for AI-Assisted Python Coding
Quiz yourself on OpenCode: install it, connect an AI provider, and use it to analyze and refactor Python from your terminal.
Prerequisites
Before you start working with OpenCode, you’ll need to fulfill the following prerequisites regarding your current system and working environment:
- Python 3.11 or higher for the sample project
- A modern terminal emulator
You also need an AI provider account. In this guide, you’ll use Google AI Studio to get a free Gemini API key. The free Gemini tier lets you follow along without any additional costs. However, you can also use Anthropic, OpenAI, or GitHub Copilot if you already have subscriptions to those services.
This guide uses a sample project consisting of a dice-rolling script. You’ll find the full source code in a collapsible block at the start of Step 2. The download below includes the starting script and the final refactored version so you can compare your work when you’re done:
Get Your Code: Click here to download the free sample code you’ll use to learn about AI-assisted Python coding with OpenCode.
You’ll also need some background knowledge of Python programming and basic experience with your operating system’s terminal or command line.
Step 1: Install and Set Up OpenCode
It’s time to install OpenCode and get it talking to a model. You’ll install the tool on your system, authenticate with Gemini using a free API key, configure a default model, and verify that OpenCode responds correctly to your Python questions before you start coding with it.
Install and Launch OpenCode
The quickest way to install OpenCode is to use the official installation script, which you can do with the following command:
$ curl -fsSL https://opencode.ai/install | bash
This script detects your platform, downloads the appropriate binary, installs the tool, and adds it to your PATH.
If you prefer a package manager, you can also install OpenCode with Homebrew on macOS or Linux:
$ brew install anomalyco/tap/opencode
Note that the Homebrew team maintains the official formula and updates it less frequently than the installation script above.
Alternatively, you can install it as a Node.js package using npm if you already have this tool on your system:
$ npm install -g opencode-ai
If you’re on Windows, the best experience comes from using WSL (Windows Subsystem for Linux). Set up WSL first by following Microsoft’s WSL installation guide, then open a WSL terminal and run the curl command above. For optimal performance, you should store your project within the WSL filesystem rather than on a Windows drive.
Read the full article at https://realpython.com/opencode-guide/ »
PyCharm
Support for uv, Poetry, and Hatch Workspaces (Beta)
Workspaces are increasingly the go-to choice for companies and open-source teams aiming to manage shared code, enforce consistency, and simplify dependency management across multiple services. Working within massive codebases often means juggling many interdependent Python projects simultaneously.
To streamline this experience, PyCharm 2026.1.1 introduced built-in support for uv workspaces, as well as those managed by Poetry and Hatch. This new functionality – currently in Beta – allows the IDE to automatically manage dependencies and environments across your entire workspace.
Intelligent workspace detection
When you open a workspace, PyCharm can now derive its entire structure and all its dependencies directly from your pyproject.toml files. This allows the IDE to understand relationships between projects deeply, significantly reducing the amount of configuration you have to do manually.
Because this is a fundamental change to how PyCharm handles your workspace, we’ve implemented it as an opt-in feature. Here is what you need to know about the transition:
- Opt-in dialog: When you open a project, PyCharm may suggest enabling automatic detection for uv workspaces and Poetry/Hatch setups.
- Manual configuration: You can toggle workspace detection in Settings | Project Structure.
- Configuration note: If you previously manually edited settings in .idea files, those settings may be reset when you agree to the new model.
Managing workspaces and their projects
PyCharm now provides an integrated experience that handles the complexities of multi-package setups in uv workspaces automatically. When you open a uv workspace, the IDE identifies the individual projects and their interdependencies, ensuring the project structure is ready for you to work with.
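For reference, a uv workspace is declared in the root pyproject.toml, which is what the IDE reads. A minimal sketch (project and package names are hypothetical) might look like this:

[project]
name = "my-workspace-root"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["shared-lib"]

[tool.uv.workspace]
members = ["packages/*"]

[tool.uv.sources]
shared-lib = { workspace = true }

Here each packages/*/pyproject.toml defines a member project, and the [tool.uv.sources] entry tells uv to resolve shared-lib from the workspace instead of PyPI.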
Visualizing workspace dependencies
Once the workspace is loaded, you can verify how your projects relate to one another. PyCharm presents these dependencies in Settings | Project Dependencies.
These relationships are derived directly from your configuration and are shown as read-only in the UI. To make changes to the dependency graph, you can edit the pyproject.toml file manually – PyCharm will then update its internal model.
Automatic environment configuration
PyCharm prioritizes a zero-config approach to your Python SDK. When you open a .py or pyproject.toml file within a project, the IDE performs an immediate check.
If a compatible environment already exists on your system, PyCharm automatically configures it as the SDK for that project. If no environment is detected, a file-level notification will appear suggesting that you create a new uv environment and install the necessary dependencies for that project.
Maintaining environment consistency
Beyond the initial setup, PyCharm continuously monitors the health of your environment to ensure it stays in sync with your defined requirements.
If a dependency is not defined in your pyproject.toml file but is imported in your code, PyCharm will trigger a warning with a Sync project quick-fix to resolve these discrepancies.
Import management
PyCharm also assists when you are actively writing code by identifying gaps in your project configuration.
If you import a package that isn’t present in the environment and is not yet listed in the project’s pyproject.toml, the IDE will detect the omission. A quick-fix will suggest adding the package to the environment and updating the corresponding .toml file simultaneously.
Transparency via the Python Process Output tool window
While PyCharm automates the backend execution of commands, such as uv sync --all-packages, it remains fully transparent.
You can track all executed commands and their live output in the Python Process Output tool window. If synchronization fails for an environment, you can analyze the specific error logs to quickly identify the root cause.
Poetry and Hatch workspaces
The logic for Poetry and Hatch workspaces follows this exact same workflow. PyCharm detects projects via their pyproject.toml files and manages the environments with the same automated precision.
The only minor difference is in tool selection – the suggested environment tool is determined by what you have specified in your pyproject.toml. If no tool is specified, PyCharm will prioritize uv (if installed) or a standard virtual environment to get you up and running quickly.
Looking ahead
This Beta version of the functionality is just the beginning of our focus on supporting complex workspace structures. We are already working on expanding the UI to allow creating new projects, linking dependencies, and activating the terminal for specific projects.
As we refine these features, your feedback is our best guide – please share your thoughts or report any issues on our YouTrack issue tracker.
Python GUIs
How to Add Custom Widgets to Qt Designer — Use widget promotion to integrate your own Python widgets into Qt Designer layouts
Can I use custom widgets in Qt Designer?
When you're building Python GUI applications with PyQt6 and Qt Designer, you'll reach a point where the built-in widgets aren't enough. Maybe you've created a custom plotting widget or a specialized input control in Python, and you want to place it into your Qt Designer layouts alongside all the standard widgets.
The good news is that Qt Designer supports exactly this through a feature called widget promotion. In this tutorial, you'll learn how to take any custom Python widget and integrate it into your Qt Designer .ui files, so you can position and size it visually just like any built-in widget.
The bad news is that since Qt Designer is a C++ application, it can't run your Python code. That means you won't see your custom widget rendered in the Designer preview. Instead, you'll see a placeholder (the base widget type you promoted from). Once you load the .ui file in your running Python application, your custom widget appears in all its glory.
With that caveat aside, let's look at how we can use custom widgets in Qt Designer.
What is Widget Promotion?
Widget promotion is Qt Designer's way of letting you swap a standard widget for a custom one. You start by placing a regular widget on your form, a plain QWidget for example, and then tell Qt Designer: "When this UI is actually used, replace this placeholder with my custom widget class instead."
Behind the scenes, this adds some extra information to the .ui file. When you load that file in Python using uic.loadUi() or compile it with pyuic6, the loader knows to import your custom class and use it in place of the base widget.
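For the curious, that extra information is a customwidget entry in the .ui file's XML. For the GradientWidget example developed below, the relevant part would look roughly like this:

<customwidgets>
 <customwidget>
  <class>GradientWidget</class>
  <extends>QWidget</extends>
  <header>custom_widgets</header>
 </customwidget>
</customwidgets>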
Creating a Custom Widget
Before we get into Qt Designer, let's create a simple custom widget in Python. We'll make a basic colored widget that draws a gradient background—something you'd never get from a standard widget.
Create a new file called custom_widgets.py:
from PyQt6.QtWidgets import QWidget
from PyQt6.QtGui import QPainter, QLinearGradient, QColor
from PyQt6.QtCore import Qt


class GradientWidget(QWidget):
    """A custom widget that displays a gradient background."""

    def __init__(self, parent=None):
        super().__init__(parent)

    def paintEvent(self, event):
        painter = QPainter(self)
        gradient = QLinearGradient(0, 0, self.width(), self.height())
        gradient.setColorAt(0.0, QColor("#2c3e50"))
        gradient.setColorAt(1.0, QColor("#3498db"))
        painter.fillRect(self.rect(), gradient)
        painter.end()
This widget overrides paintEvent to draw a diagonal gradient from dark blue to lighter blue. It's a straightforward example, but the same promotion process works for any custom widget—complex plotting canvases, custom controls, or anything else you build by subclassing a Qt widget.
Setting Up Your Project Structure
For widget promotion to work, the Python file containing your custom widget needs to be importable when your application runs. The simplest way to achieve this is to keep everything in the same directory:
my_project/
├── custom_widgets.py    # Your custom widget classes
├── mainwindow.ui        # Your Qt Designer file
└── main.py              # Your application entry point
The file name and class name matter here—you'll need to tell Qt Designer both of these during the promotion step.
Promoting a Widget in Qt Designer
Now we can open Qt Designer and set up the promotion.
Place a base widget on your form
Open Qt Designer and create a new Main Window (or open your existing .ui file). From the widget box on the left, drag a plain Widget (QWidget) onto your form. Position and resize it however you like—this is where your custom widget will appear when the application runs.
You can use any base widget class as your starting point. If your custom widget subclasses QPushButton, promote a QPushButton. If it subclasses QLabel, promote a QLabel. For our GradientWidget, which subclasses QWidget, a plain QWidget is the right choice.
Open the Promote Widgets dialog
Right-click on the widget you just placed. In the context menu, select Promote to.... This opens the Promoted Widgets dialog.

Fill in the promotion details
In the dialog, you'll see fields for three pieces of information:
- Base class name — This should already be filled in with the type of widget you right-clicked on (e.g., QWidget). Leave this as is.
- Promoted class name — Enter the name of your custom Python class. For our example, type GradientWidget.
- Header file — This is where Qt Designer's C++ heritage shows through. In C++, this would be a header file path. For Python, you enter the module import path for your widget, without the .py extension. Since our class lives in custom_widgets.py, type custom_widgets.

Leave the Global include checkbox unchecked.
Add and promote
Click Add to add your class to the list of known promoted widgets. Then, with your class selected in the list, click Promote. The dialog closes, and you'll notice the widget's class name in the Object Inspector (top-right panel) now shows GradientWidget instead of QWidget.
That's it for the Designer side. Save your .ui file.
Promoting additional widgets
Once you've added a promoted class through this dialog, it becomes available for reuse. The next time you want to promote a widget to GradientWidget, just right-click the widget and you'll see it listed directly in the Promote to submenu—no need to open the full dialog again.
Loading the UI in Python
Now let's write the Python code to load the .ui file and see our custom widget in action. Create main.py:
import sys
from PyQt6.QtWidgets import QApplication, QMainWindow
from PyQt6 import uic


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        uic.loadUi("mainwindow.ui", self)


app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
When you run this, uic.loadUi() reads the .ui file and sees that one of the widgets has been promoted to GradientWidget from the custom_widgets module. It automatically does the equivalent of:
from custom_widgets import GradientWidget
...and creates an instance of GradientWidget wherever you placed that promoted widget in your layout. Instead of a blank QWidget, you'll see your gradient background.
Using Compiled UI Files
If you prefer to compile your .ui files to Python using pyuic6 rather than loading them at runtime, promotion works the same way. Run:
pyuic6 mainwindow.ui -o ui_mainwindow.py
If you open the generated ui_mainwindow.py, you'll find an import line near the bottom:
from custom_widgets import GradientWidget
The compiled code creates your GradientWidget instance in the right place automatically. You can then use the generated file in your application:
import sys
from PyQt6.QtWidgets import QApplication, QMainWindow
from ui_mainwindow import Ui_MainWindow


class MainWindow(QMainWindow, Ui_MainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)


app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
Both approaches—runtime loading and compiled files—handle promoted widgets in the same way.
A More Practical Example: Embedding PyQtGraph
One of the most common reasons to promote widgets is to embed third-party plotting libraries like PyQtGraph into your Designer layouts. PyQtGraph's PlotWidget is a subclass of QGraphicsView, so you'd promote a QGraphicsView in Designer.
Here's how you'd fill in the promotion dialog for PyQtGraph:
- Base class name: QGraphicsView
- Promoted class name: PlotWidget
- Header file: pyqtgraph
That's all it takes. When your application runs, the placeholder QGraphicsView becomes a fully functional PlotWidget that you can plot data on.
import sys
from PyQt6.QtWidgets import QApplication, QMainWindow
from PyQt6 import uic


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        uic.loadUi("mainwindow.ui", self)
        # self.graphWidget is the promoted PlotWidget
        # (use the objectName you set in Designer)
        self.graphWidget.plot([1, 2, 3, 4, 5], [10, 20, 15, 30, 25])


app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
Promoting Widgets from Submodules
If your custom widget lives in a submodule or package, you can use dotted import paths in the Header file field. For example, if your project structure looks like this:
my_project/
├── widgets/
│   ├── __init__.py
│   └── gradient.py    # contains GradientWidget
├── mainwindow.ui
└── main.py
You would enter widgets.gradient as the header file in the promotion dialog. The loader will then do:
from widgets.gradient import GradientWidget
This keeps things organized as your project grows.
Troubleshooting Common Issues
"No module named 'custom_widgets'" — This means Python can't find the file containing your custom widget class. Make sure the module file is in the same directory as your script (or somewhere on your Python path), and that the name in the promotion dialog matches the file name exactly (without .py).
The widget appears blank or as a plain QWidget — Double-check that the promoted class name matches your Python class name exactly, including capitalization. GradientWidget and gradientwidget are different classes as far as Python is concerned.
The widget doesn't resize properly — Make sure you've added the promoted widget to a layout in Qt Designer. Widgets outside of layouts won't resize with the window, regardless of whether they're promoted or not.
Changes to your custom widget don't appear in Designer — Remember, Qt Designer can't render Python widgets. You'll always see the base widget type in the Designer preview. Run your application to see your custom widget.
Summary
Widget promotion is a straightforward way to bridge the gap between Qt Designer's visual layout tools and your custom Python widgets. The process is always the same:
- Place a base widget of the appropriate type in Qt Designer.
- Right-click and promote it, specifying your custom class name and module path.
- Save the .ui file and load it in your Python application.
Your custom widget won't be visible in the Designer preview—that's expected. But when your application runs, the promoted widget is swapped in seamlessly, giving you the best of both worlds: visual layout design with the full power of custom Python widgets.
For an in-depth guide to building Python GUIs with PyQt6 see my book, Create GUI Applications with Python & Qt6.
May 12, 2026
PyCoder’s Weekly
Issue #734: Dunder-Gets, Django Tasks in Prod, Codex CLI, and More (2026-05-12)
#734 – MAY 12, 2026
View in Browser »
Do You Get It Now?
Learn about Python’s .__getitem__(), .__getattr__(), .__getattribute__(), and .__get__(): how they’re different and where to use them.
STEPHEN GRUPPETTA
Using Django Tasks in Production
Django added a generic API for dealing with concurrent tasks in version 6. This post talks about how it has been used in production.
TIM SCHILLING
Use Codex CLI to Enhance Your Python Projects
Learn how to use Codex CLI to add features to Python projects directly from your terminal, without needing a browser or IDE plugins.
REAL PYTHON course
Depot CI: Built for the Agent era
Depot CI: A new CI engine. Fast by design. Your GitHub Actions workflows, running on a fundamentally faster engine — instant job startup, parallel steps, full debuggability, per-second billing. One command to migrate →
DEPOT sponsor
Articles & Tutorials
Handling Schema Issues in Polars
You’ve got this great data pipeline going until one day it stops working. A schema error caused by an upstream column has stopped you in your tracks. This post talks about the four different causes of schema errors and what to do about them.
THIJS NIEUWDORP
Textual: An Intro to DOM Queries (Part II)
Textual is a TUI framework library for building terminal applications. It uses a DOM to represent the widgets in the application, and that DOM is queryable. This is part 2 in a series on how to find things in your Textual DOM.
MIKE DRISCOLL
Everything You Always Wanted to Know About PyCon Sprints!
PyCon US includes coding sprints to work on CPython itself, or projects in the ecosystem like Django, Flask, and BeeWare. This post tells you all about sprints and how you can join in on the fun.
DEB NICHOLSON
Why TUIs Are Back
Terminal User Interfaces are seeing a resurgence in the tools space. This opinion piece briefly talks about the history of interfaces and why we are where we are now.
ALCIDES FONSECA
Parallel Python at Anyscale With Ray
Talk Python interviews Richard Liaw and Edward Oakes. They talk about Ray, an open source Python framework and distributed execution engine for AI workloads.
TALK PYTHON podcast
Python 3.14.5 Release Candidate
Normally nobody fusses over a release candidate of a point release, but 3.14.5 includes a major change: rolling back the incremental garbage collector.
HUGO VAN KEMENADE
Wagtail 7.4: Custom Page Explorer, Preview Checks & More
Between autosave improvements, new ways to sort your pages, and a content checker upgrade, you’ll have a lot of reasons to move to Wagtail 7.4.
MEAGEN VOSS
The Simplest MCP Example Possible in Python
This guide introduces you to connecting your code to a local LLM model. It covers Ollama and FastMCP and what you can do with these tools.
AL SWEIGART
ChatterBot: Build a Chatbot With Python
Build a Python chatbot with the ChatterBot library. Clean real conversation data, train on custom datasets, and add local AI with Ollama.
REAL PYTHON
Hardening Firefox With Claude Mythos Preview
New details about what Mozilla found and how agentic harnesses helped them reproduce real bugs and dismiss false positives.
MOZILLA
Projects & Code
Pymetrica: A Codebase Analysis Tool
GITHUB.COM/JUANJFARINA • Shared by Juan José Farina
secure: HTTP Security Headers for FastAPI, Flask, Django
GITHUB.COM/TYPEERROR • Shared by Caleb Kinney
Kirokyu: Modular Task Management System
GITHUB.COM/AMRYOUNIS • Shared by Amr Younis
Events
PyCon US 2026
May 13 to May 20, 2026
PYCON.ORG
Python Atlanta
May 14 to May 15, 2026
MEETUP.COM
Chattanooga Python User Group
May 15 to May 16, 2026
MEETUP.COM
PyDelhi User Group Meetup
May 16, 2026
MEETUP.COM
PyData London
June 5 to June 7, 2026
PYDATA.ORG • Shared by Tomara Youngblood
Happy Pythoning!
This was PyCoder’s Weekly Issue #734.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Marcos Dione
Monitoring Apache with SQL and Grafana
Ever since my last job I have been wanting to build this. I don't think it's the first time I've done it, but for one reason or another, this time I did it (again?) in only two evenings.
In that job we had an internet-facing API with Apache as the router in front of several services. All our metrics and even our billing were based on the Apache logs. We had a system that ingested the logs into a PostgreSQL database, and we tried to create Grafana panels and alerts based on that info. At the same time, I wanted to reproduce awstats in Grafana, and found it was almost impossible.
Another problem is that the usual tools for this, Loki or Prometheus, have big problems handling data that is too arbitrary (think of the referer or user_agent columns) or whose value space is too big (client is an IPv4 address, with 4Bi different values). They effectively suffer (in principle) from what they call a "cardinality bomb": since they build one time series database (TSDB) per combination of fields (they call them "labels"), storage use is big and aggregation operations across TSDBs are expensive.
Last night I sat down to reimplement the ingestion side. Instead of PostgreSQL I used SQLite, mostly because almost all of my services (with low traffic and mostly only me as user) already use it. To be fair, and one really can't expect anything else, the script is quite straightforward. It uses regexps to parse the logs, which for the moment is good enough. I'm "releasing" it as is, because I'm tired, but you'll find some surprises around parsing the request line (see request_re and its handling); some janky ways to convert from str to int or datetime; and an iteration trick to use dataclasses as execute() argument. I omitted some comments and all the testing:
#! /usr/bin/env python3

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone, tzinfo
import pathlib
import re
import sqlite3
import sys

# I miss dinant

# no 0-255 range check since this is written by apache
# if the number is not in that range, we have bigger problems
octet_re = r'\d{1,3}'
ip_re = r'\.'.join([ octet_re ] * 4)

word_re = r'[^ ]+'
identd_user_re = word_re  # it can be '-'
user_id_re = word_re      # it can be '-'

month_names = [ 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec' ]

day_re = r'\d{1,2}'
month_re = f"(?:{'|'.join(month_names)})"
year_re = r'\d{4}'
date_re = f"({day_re})/({month_re})/({year_re})"
time_re = r'(\d{2}):(\d{2}):(\d{2})'
utc_offset_re = r'(?:\+|\-)\d{4}'  # no capture
# fscking double escaping :(
date_time_re = f"\\[{date_re}:{time_re} ({utc_offset_re})\\]"

method_re = word_re
url_re = word_re  # technically not a word, but word_re is too generic
proto_re = r'HTTP'      # who are we kidding
version_re = r'\d\.\d'  # who are we kidding
proto_and_version_re = f"({proto_re})/({version_re})"
# idiot skrip kidz send no method or proto/version!
# and re is silly? enough to produce empty matches for the ()s here
# oh, but re.compile().match().groups() returns things like
# (None, None, None, None, '', '\\x16\\x03\\x02\\x01o\\x01', '', '')
# so we gained nothing
request_re = f'"(?:({method_re}) ({url_re}) {proto_and_version_re}|()({url_re})()())"'

number_re = r'\d+'
http_status_re = number_re
bytes_rx_re = number_re
bytes_tx_re = number_re
ttfb_re = f"(?:{number_re}|-)"
response_time_re = number_re

double_quoted_text_re = r'"([^"]+)"'
referer_re = double_quoted_text_re
user_agent_re = double_quoted_text_re

log_line_re = re.compile(f"^({ip_re}) ({identd_user_re}) ({user_id_re}) {date_time_re} {request_re} ({http_status_re}) ({bytes_rx_re}) ({bytes_tx_re}) ({ttfb_re}) ({response_time_re}) {referer_re} {user_agent_re}$")


@dataclass
class LogRecord:
    client_ip: str         # 0
    indent_user: str
    user_id: str
    date_time: datetime
    method: str
    url: str               # 5
    protocol: str
    protocol_version: str  # could be float, but we don't really care; besides, x.y.z?
    status: int
    bytes_rx: int
    bytes_tx: int          # 10
    ttfb: int              # maybe -!
    response_time: int
    referer: str
    user_agent: str

    @classmethod
    def from_log_line(cls, line):
        match = log_line_re.match(line)
        if match is None:
            raise ValueError(f"Malformed line: {line.strip()}")

        data = list(match.groups())
        new_data = []
        group_index = 0

        for field_index, (name, field) in enumerate(cls.__dataclass_fields__.items()):
            if field.type == datetime:
                # [11/May/2026:20:15:28 +0200]
                # convert month str to number
                data[group_index+1] = month_names.index(data[group_index+1]) + 1
                # convert to ints
                data[group_index:group_index+6] = [ int(x) for x in data[group_index:group_index+6] ]
                new_data.append( datetime(data[group_index+2], data[group_index+1], data[group_index],
                                          data[group_index+3], data[group_index+4], data[group_index+5],
                                          0, utc_offset2tzinfo(data[group_index+6])) )
                group_index += 7
                continue

            # handle ttfb as -
            if field_index == 11 and data[group_index] == '-':
                # data[group_index] = data[group_index+1]
                data[group_index] = 0  # treat a missing ttfb as 0

            if data[group_index:group_index+4] == [ None, None, None, None ]:
                if group_index in (10, 14):
                    # handle (None, None, None, None, '', '\\x16\\x03\\x02\\x01o\\x01', '', '')
                    # handle ('GET', '/', 'HTTP', '1.0', None, None, None, None)
                    # no need to add anything, it's handled by the fallback
                    # but we still need to skip this cruft
                    group_index += 4
                else:
                    raise ValueError(f"Got confused: {(field_index, field.type, group_index, data[group_index], new_data)}")

            # convert ints
            if field.type == int:
                data[group_index] = int(data[group_index])

            # fallback
            new_data.append(data[group_index])
            group_index += 1

        return cls(*new_data)

    # implement the iterator protocol so we can mostly be passed as argument to execute()
    def __iter__(self):
        for value in self.__dict__.values():
            # the whole protocol could be replaced with .__dataclass_fields__.values() :shrug:
            # but this way I can do further conversions
            if type(value) == datetime:
                value = int(value.timestamp())

            yield value


def utc_offset2tzinfo(offset: str) -> tzinfo:
    # +0200
    hours = int(offset[:3])    # +02
    minutes = int(offset[3:])  # 00

    return timezone(timedelta(hours=hours, minutes=minutes), offset)


def connect():
    # if we test after sqlite3.connect(), the file is already created
    create = not pathlib.Path('./apache_logs.db').exists()
    conn = sqlite3.connect('./apache_logs.db')

    if create:
        conn.cursor().execute('''
            CREATE TABLE "logs" (
                "client" TEXT,
                "indent_user" TEXT,
                "user_id" TEXT,
                "timestamp" INTEGER,
                "method" TEXT,
                "url" TEXT,
                "protocol" TEXT,
                "protocol_version" TEXT,
                "status" INTEGER,
                "bytes_rx" INTEGER,
                "bytes_tx" INTEGER,
                "ttfb" INTEGER,
                "response_time" INTEGER,
                "referer" TEXT,
                "user_agent" TEXT
            );''')

    return conn


def main():
    conn = connect()
    cursor = conn.cursor()

    for line in sys.stdin:
        try:
            log_record = LogRecord.from_log_line(line)
        except ValueError as e:
            print(e.args[0])
            continue

        cursor.execute('INSERT INTO logs VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)',
                       tuple(log_record))

    conn.commit()


if __name__ == '__main__':
    main()
One of the things I didn't do was to play further with the URLs. One could build a list of different apps based on whether there is any routing to different services, like in the cases of my previous job and my own server; or even different subdivisions within a single app, like for NextCloud:
ocs/v2.php/apps/serverinfo
remote.php/dav/files/USER
remote.php/dav/calendars/USER/CALENDAR/
etc. I haven't really thought about it; it could be implemented either as more columns or extra tables.
Today I managed to finish the rest.
The next step is to install this so it runs constantly with the output of tail --follow=name --retry piped into its stdin¹. Left as an exercise for the reader; use SystemD units :)
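A minimal sketch of such a unit (all paths and names here are hypothetical) could look like this; note that systemd can't pipe between commands directly, hence the sh -c wrapper:

[Unit]
Description=Ingest Apache access logs into SQLite

[Service]
WorkingDirectory=/var/lib/apache-logs
ExecStart=/bin/sh -c 'tail --follow=name --retry /var/log/apache2/access.log | /usr/local/bin/apache-log-ingest.py'
Restart=always

[Install]
WantedBy=multi-user.target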
Next is installing Grafana's plugin to read SQLite and declare the new Grafana datasource.
The hard part was to query the data in a way that was useful for Grafana. I managed to get a query like:
-- round to the minute
SELECT timestamp/60*60 AS time, status, COUNT(status) AS "count"
FROM logs
-- $__from and $__to are defined by Grafana based on the dashboard's time range
WHERE timestamp >= $__from / 1000 AND timestamp < $__to / 1000
GROUP BY timestamp/60, status
to get the count of different status codes per minute². But this returns a table that looks like:
time       | status | count
1778533620 | 200    | 30
1778533620 | 207    | 3
1778533620 | 403    | 1
while Grafana is expecting one line per sample (but remember we're aggregating data) and one column per data series:
time       | 200 | 207 | 403
1778533620 | 30  | 3   | 1
I read how to pivot this in SQL, but it mostly works only if you know the different values for the
status column beforehand. This might be feasible with
HTTP status codes (I count
63 standard ones, including the joke 418 I'm a teapot), but that would be impossible for the referer
or user_agent columns.
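For illustration, a hardcoded pivot over a handful of known status codes would look something like the sketch below, and it's exactly this enumeration that doesn't scale to high-cardinality columns:

-- round to the minute, one hardcoded column per status code
SELECT timestamp/60*60 AS time,
       SUM(CASE WHEN status = 200 THEN 1 ELSE 0 END) AS "200",
       SUM(CASE WHEN status = 207 THEN 1 ELSE 0 END) AS "207",
       SUM(CASE WHEN status = 403 THEN 1 ELSE 0 END) AS "403"
FROM logs
WHERE timestamp >= $__from / 1000 AND timestamp < $__to / 1000
GROUP BY timestamp/60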
Thanks to iRobbery#postgresql@libera.chat I found out about Grafana's Partition by values data transformation. Applying it to the column that defines the time series (status, etc.) gives us exactly what we want!

And one can even include a plain table with all the logs to inspect when one finds weird spikes or values. I built queries that were almost impossible before, like transferred bytes per URL, methods per client, and more! One missing piece, if possible, would be to implement histograms, like the last time we looked into this.
1. One could cite the UNIX philosophy, but seriously, who wants to reimplement all the corner cases of that tail invocation? See for instance the 113 bugs found in the coreutils Rust reimplementation. ↩
2. One could use a dashboard variable to control this arbitrarily. One could get granularity per second! ↩
Real Python
Building Type-Safe LLM Agents With Pydantic AI
Pydantic AI is a Python framework for building LLM agents that return validated, structured outputs using Pydantic models. Instead of parsing raw strings from LLMs, you get type-safe objects with automatic validation.
If you’ve used FastAPI or Pydantic before, then you’ll recognize the familiar pattern of defining schemas with type hints and letting the framework handle the type validation for you.
By the end of this video course, you’ll understand that:
- Pydantic AI uses BaseModel classes to define structured outputs that guarantee type safety and automatic validation (see the sketch after this list).
- The @agent.tool decorator registers Python functions that LLMs can invoke based on user queries and docstrings.
- Dependency injection with deps_type provides type-safe runtime context like database connections without using global state.
- Validation retries automatically rerun queries when the LLM returns invalid data, which increases reliability but also API costs.
- Google Gemini, OpenAI, and Anthropic models support structured outputs best, while other providers have varying capabilities.
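As a rough sketch of that pattern (the model string is only an example, and exact names vary by Pydantic AI version — older releases used result_type and .data instead of output_type and .output):

from pydantic import BaseModel
from pydantic_ai import Agent

class CityInfo(BaseModel):
    name: str
    country: str
    population: int

# The agent returns a validated CityInfo instead of a raw string.
agent = Agent("google-gla:gemini-2.0-flash", output_type=CityInfo)

result = agent.run_sync("Tell me about Paris")
print(result.output.population)  # already an int, no manual parsing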
Django Weblog
2026 Django Developers Survey
The Django Software Foundation is once again partnering with JetBrains to run the 2026 Django Developers Survey 🌈
It’s an important metric of Django usage and is immensely helpful to guide future technical and community decisions.
After the survey closes, we will publish the aggregated results. JetBrains will also randomly select 10 winners (from those who complete the survey in full with meaningful answers) who will each receive a $100 Amazon voucher or the equivalent in local currency.
How you can help
Once you’ve done the survey, take a moment to re-share on socials and with your communities. The more diverse the answers, the better the results for all of us.
Please use the following links:
- Bluesky: https://surveys.jetbrains.com/s3/bs-django-developers-survey-2026
- Django Forum: https://surveys.jetbrains.com/s3/df-django-developers-survey-2026
- LinkedIn: https://surveys.jetbrains.com/s3/li-django-developers-survey-2026
- Mastodon: https://surveys.jetbrains.com/s3/md-django-developers-survey-2026
- Reddit: https://surveys.jetbrains.com/s3/r-django-developers-survey-2026
- X / Twitter: https://surveys.jetbrains.com/s3/x-django-developers-survey-2026
Real Python
Quiz: Building Type-Safe LLM Agents With Pydantic AI
In this quiz, you’ll test your understanding of Building Type-Safe LLM Agents With Pydantic AI.
By working through this quiz, you’ll revisit how Pydantic AI returns structured outputs from LLMs, how validation retries improve reliability, how tools and function calling work, how dependency injection flows through RunContext, and what trade-offs to expect when running agents in production.
Quiz: The LEGB Rule & Understanding Python Scope
In this quiz, you’ll test your understanding of The LEGB Rule & Understanding Python Scope.
By working through this quiz, you’ll revisit how Python resolves names using the LEGB rule, what the local, enclosing, global, and built-in scopes look like in practice, and how the global and nonlocal statements let you reach across scope boundaries.
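As a quick refresher on the rule being tested, here is a minimal example of the enclosing and global scopes in action:

x = "global"

def outer():
    x = "enclosing"

    def inner():
        nonlocal x  # rebinds outer()'s x instead of creating a new local
        x = "rebound by inner()"

    inner()
    print(x)  # rebound by inner()

outer()
print(x)  # still "global"; only a 'global x' declaration would rebind it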
Python Software Foundation
Announcing PSF Community Service Award Recipients!
The PSF Community Service Awards (CSAs) are a formal way for the PSF Board of Directors to offer recognition of work which, in its opinion, significantly improves the Foundation's fulfillment of its mission to build a vibrant, welcoming, global Python community. These awards shine a light on the incredible people who are the heart and soul of our community: those whose dedication, creativity, and generosity help the PSF fulfill its mission. The PSF CSAs celebrate individuals who have been truly invaluable, inspiring others through their example and demonstrating that service to the Python community leads to recognition and reward. If you know of someone in the Python community deserving of a PSF CSA award, please submit them to the PSF Board via psf@python.org at any time. You can read more about PSF CSAs on our website.
The PSF Board is excited to announce 5 new CSAs, awarded to Inessa Pawson, Kafui Alordo, Kalyan Prasad, Maria Jose Molina Contreras, and Paul Everitt, for their contributions to the Python community. Read more about their work and impact below.
Inessa Pawson
Inessa Pawson has been a tireless and dedicated contributor to the Python ecosystem for over eight years. She has led the PyCon US Maintainers Summit since 2020, not only shaping the event but actively opening doors for others to participate–onboarding new contributors and supporting attendees with characteristic warmth and care.
Beyond PyCon US, Inessa has spearheaded the Maintainers and Community Track, the mentorship program, and the Teen Track at the SciPy Conference, and co-founded the Contributor Experience project, reflecting her deep commitment to making the Python community more inclusive and accessible. She brings that same dedication to her roles on the NumPy Steering Committee, the scikit-learn survey team, and the SPEC (Scientific Python Ecosystem Coordination) Steering Committee. As a leader on the pyOpenSci Advisory Council, Inessa has been instrumental in advancing the organization's mission to support open and reproducible science.
Kafui Alordo
Kafui Alordo has spent years building and nurturing the Python community in Ho, in the Volta Region of Ghana. What began for Kafui as volunteer coaching at the first Django Girls Ho workshop grew into co-organizing the second and third editions, and eventually leading the workshop as its primary organizer, while also lending his expertise as a coach and co-organizer at Django Girls events across Ghana. Recognizing that sustainable community growth starts with welcoming total beginners, Kafui introduced a coding bootcamp initiative for his user group that has broadened participation and helped new learners find their footing in Python.
Kafui’s landmark achievement came with the organization of PyHo, the first-ever regional Python conference in Ho, which drew attendees from diverse backgrounds across the country. His impact has also extended well beyond Ghana, most recently stepping into the role of remote chair on the PyCascades organizing team.
Kalyan Prasad
Kalyan Prasad's journey in the Python community began in 2019 as a volunteer with the Hyderabad Python User Group (HydPy), one of India's largest Python communities, and he has grown steadily into one of its most consequential leaders. His dedication to PyConf Hyderabad has been especially remarkable–contributing across the CFP, program, and sponsorship teams, serving as co-chair in 2022, and stepping up as chair in both 2025 and 2026, representing four consecutive years of conference leadership at the regional and national level.
At the national scale, Kalyan also served as co-chair for PyCon India 2023. Kalyan's commitment extends well beyond India, as he actively contributes to the broader Python ecosystem as a reviewer, mentor, and program committee member for conferences around the world. His care for community safety is further reflected in two years of service on the NumFOCUS Code of Conduct squad, ensuring that Python spaces remain welcoming and respectful for everyone. Kalyan has also joined the PSF Diversity & Inclusion Working Group this year, contributing to inclusion efforts.
Maria Jose Molina Contreras
Maria Jose Molina Contreras has been a dedicated and wide-ranging contributor to the Python community, with deep roots in both Spanish-language and PyLadies initiatives. She has been a core organizer of PyLadiesCon since its inaugural edition in 2023, serving as co-chair in 2024 and 2025, and her tireless leadership helped make the most recent edition the most successful in the conference's history, raising over $55,000 in funds to support PyLadies members and chapters around the world.
Maria’s commitment to Spanish-speaking Pythonistas is equally impressive: she contributes to the Python Docs ES initiative, coordinates events for Python en Español on Discord, and co-founded the PyLadies en Español initiative, including leading the PyLadies presence at PyCon US. At EuroPython, Maria has volunteered since 2023 and taken on growing responsibility, leading community booths, PyLadies events, and community organizer efforts in 2024 and 2025. She has also served as a reviewer for PyCon US Charlas since 2020 and has been a speaker at numerous conferences including PyCon US, EuroPython, and PyConES, sharing her expertise with audiences across the global community.
Paul Everitt
Paul Everitt's relationship with Python stretches back to the very beginning! Paul was present at the early PyCons and played a foundational role as an incorporating member and director on the PSF's first Board of Directors, helping to establish the organization that supports Python to this day. Decades later, his commitment to the community remains as strong as ever, demonstrated through his long tenure as a Developer Advocate at JetBrains/PyCharm, where he has championed the company's sustained investment in Python open source.
Paul’s advocacy extends beyond any one project, as he has provided support to smaller but important ecosystem projects like HTMX and remained a regular, encouraging presence at Python conferences and on podcasts. Most recently, Paul proved that his contributions are not merely historical–he co-authored PEP 750, introducing template strings (t-strings) as a significant new feature in Python 3.14, demonstrating a continued willingness to roll up his sleeves and shape the language itself. Whether writing PEPs, giving conference talks, or simply championing the people who make Python great, Paul’s generous and enthusiastic spirit is an invaluable gift to the Python community.
May 11, 2026
PyCon
Introducing the 8 Companies on Startup Row at PyCon US 2026
Each year at PyCon US, Startup Row highlights a select group of early-stage companies building ambitious products with Python at their core. The 2026 cohort reflects a rapidly evolving landscape, where advances in AI, data infrastructure, and developer tooling are reshaping how software is built, deployed, and secured.
This year’s companies aim to solve an evolving set of problems facing independent developers and large-scale organizations alike: securing AI-driven applications, managing multimodal data, orchestrating autonomous agents, automating complex workflows, and extracting insight from increasingly unstructured information. Across these domains, Python continues to serve as a unifying layer: encouraging experimentation, enabling systems built to scale, and connecting open-source innovation with real-world impact.
Startup Row brings these emerging teams into direct conversation with the Python community at PyCon US. Throughout the conference, attendees can meet founders, explore new tools, and see firsthand how these companies are applying Python to solve meaningful problems. For the startups in attendance, it’s an opportunity to share their work, connect with users and collaborators, and contribute back to the ecosystem that helped shape them. Register now to experience Startup Row and much more at PyCon US 2026.
Supporting Startups at PyCon US
There are many ways to support Startup Row companies, during PyCon US and long after the conference wraps:
- Stop by Startup Row: Spend a few minutes with each team, ask what they’re building, and see their products in action.
- Try their tools: Whether it’s an open-source library or a hosted service, hands-on usage (alongside constructive feedback) is one of the most valuable forms of support. If a startup seems compelling, consider a pilot project and become a design partner.
- Share feedback: Early-stage teams benefit enormously from thoughtful questions, real-world use cases, and honest perspectives from the community.
- Contribute to their open source projects: Many Startup Row companies are deeply rooted in open source and welcome bug reports, documentation improvements, and pull requests. Contributions and constructive feedback are always appreciated.
- Help spread the word: If you find something interesting, tell a friend, post about it, or share it with your team. (And if you're posting to social media, consider using tags like #PyConUS and #StartupRow to share the love.)
- Explore opportunities to work together: Many of these companies are hiring, looking for design partners, or open to collaborations; don’t hesitate to ask.
- But, most importantly, be supportive. Building a startup is hard, and every team is learning in real time. Curiosity, patience, and encouragement make a meaningful difference.
Meet Startup Row at PyCon US 2026
We’re excited to introduce the companies selected for Startup Row at PyCon US 2026.
Arcjet
Embedding security directly into application code is fast becoming as indispensable as logging, especially as AI services open new attack surfaces. Arcjet offers a developer‑first platform that lets teams add bot detection, rate limiting, and data‑privacy checks right where the request is processed.
The service ships open‑source JavaScript and Python SDKs that run a WebAssembly module locally before calling Arcjet’s low‑latency decision API, ensuring full application context informs every security verdict. Both SDKs are released under a permissive open‑source license, letting developers integrate the primitives without vendor lock‑in while scaling usage through Arcjet’s tiered SaaS pricing.
The JavaScript SDK alone has earned ≈1.7 k GitHub stars and the combined libraries have attracted over 1,000 developers protecting more than 500 production applications. Arcjet offers a free tier and usage‑based paid plans, mirroring Cloudflare’s model to serve startups and enterprises alike.
Arcjet is rolling out additional security tools and deepening integrations with popular frameworks such as FastAPI and Flask, aiming to broaden adoption across AI‑enabled services. In short, Arcjet aims to be the security‑as‑code layer every modern app ships with.
CapiscIO
As multi‑agent AI systems become the backbone of emerging digital workflows, developers lack a reliable way to verify agent identities and enforce governance. CapiscIO steps into that gap, offering an open‑core trust layer built for the nascent agent economy.
CapiscIO offers cryptographic Trust Badges, policy enforcement, and tamper‑evident chain‑of‑custody wrapped in a Python SDK. Released under Apache 2.0, it ships a CLI, LangChain integration, and an MCP SDK that let agents prove identity without overhauling existing infrastructure.
The capiscio‑core repository on GitHub hosts the open‑source core and SDKs under Apache 2.0, drawing early contributors building agentic pipelines.
Beon de Nood, Founder & CEO, brings two decades of enterprise development experience and a prior successful startup to the table. “AI governance should be practical, not bureaucratic. Organizations need visibility into what they have, confidence in what they deploy, and control over how agents behave in production,” he says.
CapiscIO is continuously adding new extensions, expanding its LangChain and MCP SDKs, and preparing a managed agent‑identity registry for enterprises. In short, CapiscIO aims to be the passport office of the agent economy, handing each autonomous component an unspoofable ID and clear permissions.
Chonkie
The explosion of retrieval‑augmented generation (RAG) is unlocking AI’s ability to reason over ever‑larger knowledge bases. Yet the first step of splitting massive texts into meaningful pieces still lags behind.
Chonkie offers an open‑core suite centered on Memchunk, a Python library with Cython acceleration that delivers up to 160 GB/s throughput and ten chunking strategies under a permissive license. It also ships Catsu, a unified embeddings client for nine providers, and a lightweight ingestion layer; the commercial Chonkie Labs service combines them into a SaaS that monitors the web and synthesises insights.
Co‑founder and CEO Shreyash Nigam, who grew up in India and met his business partner in eighth grade, reflects the team’s open‑source ethos, saying “It’s fun to put a project on GitHub and see a community of developers crowd around it.” That enthusiasm underpins Chonkie’s decision to release its core tooling openly while building a commercial deep‑research service.
Backed by Y Combinator’s Summer 2025 batch, Chonkie plans to grow from four to six engineers and launch the next version of Chonkie Labs later this year, adding real‑time web crawling and multi‑modal summarization. In short, Chonkie aims to be the Google of corporate intelligence.
Phemeral
Running production‑grade Python services used to mean wrestling with containers, VMs, or complex CI pipelines.
Phemeral, launched in April 2026, offers Python developers a managed hosting platform that turns a GitHub repo into an instantly deployable, scale‑to‑zero backend.
Phemeral provides builds for popular frameworks (like Django, Flask, and FastAPI), integrations with popular package managers (e.g. uv, Pip, and Poetry), as well as continuous deployment on every push while charging only for actual request execution under a usage‑based model.
Founder & CEO Chinmaya Joshi says, "Building with Python is easier than ever, but hosting and deployment remain a pain. Phemeral is building the easiest way to deploy Python web apps."
Joshi is focused on expanding framework support and refining the platform so that Python developers (from vibe-coders and solo devs, to agencies and enterprises) can enjoy the same zero‑config experience modern front‑end platforms provide.
Pixeltable
Multimodal generative AI is turning simple datasets into sprawling collections of video, images, audio, and text, forcing engineers to stitch together ad-hoc pipelines just to keep data flowing. That complexity has created a new bottleneck for teams trying to move from prototype to production.
The open-source Python library from Pixeltable offers a declarative table API that lets developers store, query, and version multimodal assets side by side while embedding custom Python functions. Built with incremental update capabilities, combined lineage and schema tracking, and a development-to-production mirror, the platform also provides orchestration capabilities that keep pipelines reproducible without rewriting code.
The project has earned ≈1.6 k GitHub stars and a growing contributor base, closed a $5.5 million seed round in December 2024, and is already used by early adopters such as Obvio and Variata to streamline computer‑vision workflows.
Co‑founder and CTO Marcel Kornacker, who previously founded Apache Impala and co-founded Apache Parquet, says “Just as relational databases revolutionized web development, Pixeltable is transforming AI application development.”
The company's roadmap centers on launching Pixeltable Cloud, a serverless managed service that will extend the open core with collaborative editing, auto‑scaling storage and built‑in monitoring. In short, Pixeltable aims to be the relational database of multimodal AI data.
SubImage
The sheer complexity of modern multi-cloud environments turns security visibility into a labyrinth, and SubImage offers a graph-first view that cuts through the noise.
It builds an infrastructure graph using the open-source Cartography library (Apache-2.0, Python), then highlights exploit chains as attack paths and applies AI models to prioritize findings based on ownership and contextual risk.
Cartography, originally developed at Lyft and now a Cloud Native Computing Foundation (CNCF) Sandbox project, has ≈3.7k GitHub stars and is used by over 70 organizations. SubImage’s managed service already protects security teams at Veriff and Neo4j, and the company closed a $4.2 million seed round in November 2025.
Co‑founder Alex Chantavy, an offensive‑security engineer, says “The most important tool was our internal cloud knowledge graph because it showed us a map of the easiest attack paths … One of the most effective ways to defend an environment is to see it the same way an attacker would.”
The startup is focusing on scaling its managed service and deepening AI integration as it targets larger enterprise customers. In short, SubImage aims to be the map of the cloud for defenders.
Tetrix
Private-market data pipelines still rely on manual downloads and spreadsheet gymnastics, leaving analysts chasing yesterday’s numbers. Tetrix’s AI investment intelligence platform is part of a wave that brings automation to this lagging workflow.
Built primarily in Python, Tetrix automates document collection from fund portals and other sources, extracts structured data from PDFs and other unstructured sources using tool-using language models, then presents exposures, cash flows, and benchmarks through an interactive dashboard that also accepts natural-language queries.
The company is growing quickly, doubling revenue quarter over quarter and, at least so far, maintains an impressive record of zero customer churn. In the coming year or so, Tetrix plans to triple its headcount from fifteen to forty‑five employees.
TimeCopilot
Time-series forecasting has long been a tangled mix of scripts, dashboards, and domain expertise, and the recent surge in autonomous agents is finally giving it a unified voice. Enter TimeCopilot, an open-source framework that brings agentic reasoning to the heart of forecasting.
The platform, built in Python under a permissive open-source license, lets users request forecasts in plain English. It automatically orchestrates more than thirty models from seven families, including Chronos and TimesFM, while weaving large language model reasoning into each prediction. Its declarative API was born from co-founder Azul Garza-Ramírez’s economics background and her earlier work on TimeGPT for Nixtla (featured SR'23), evolving from a weekend experiment started nearly seven years ago.
The TimeCopilot/timecopilot repository has amassed roughly 420 stars on GitHub, with the release of OpenClaw marking a notable spike in community interest.
Upcoming plans include a managed SaaS offering with enterprise‑grade scaling and support, the rollout of a benchmarking suite to measure agentic forecast quality, and targeted use cases such as predicting cloud‑compute expenses for AI workloads.
Thank You's and Acknowledgements
Startup Row is a volunteer-driven program, co-led by Jason D. Rowley and Shea Tate-Di Donna (SR'15; Zana, acquired Startups.com), in collaboration with the PyCon US organizing team. Thanks to everyone who makes PyCon US possible.
We also extend a heartfelt thank-you to all the startup founders who submitted applications to Startup Row at PyCon US this year. Thanks again for taking the time to share what you're building. We hope to help out in whatever way we can.
Good luck to everyone, and see you in Long Beach, CA!
Talk Python to Me
#548: Event Sourcing Design Pattern
What if your database worked more like Git? Every change captured as an immutable event you can replay, instead of a single mutating row that quietly forgets its own history. That's event sourcing, and Chris May is back on Talk Python, fresh off our Datastar panel, to walk us through what it actually looks like in Python. We'll cover the core patterns, the libraries to reach for, when not to use it, and why event sourcing turns out to be a surprisingly good fit for AI-assisted coding.

Episode sponsors
- Sentry Error Monitoring, Code talkpython26: https://talkpython.fm/sentry
- Temporal: https://talkpython.fm/temporal
- Talk Python Courses: https://talkpython.fm/training

Links from the show
- Guest, Chris May: https://everydaysuperpowers.dev
- Intro to event sourcing e-book: https://everydaysuperpowers.gumroad.com/l/es_intro
- Domain-Driven Design: The Power of CQRS and Event Sourcing: https://ricofritzsche.me/cqrs-event-sourcing-projections/
- DDD (book): https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215
- Understanding Eventsourcing (Martin Dilger): https://www.amazon.com/Understanding-Eventsourcing-Planning-Implementing-Eventmodeling/dp/B0DNXQJM9Z
- Event Sourcing Explained using Football (video): https://www.youtube.com/watch?v=xPmQxYIi5fA
- Why I finally embraced event sourcing and why you should too (article): https://everydaysuperpowers.dev/articles/why-i-finally-embraced-event-sourcingand-why-you-should-too/
- valkey: https://valkey.io/
- diskcache: https://talkpython.fm/episodes/show/534/diskcache-your-secret-python-perf-weapon
- eventsourcing package: https://github.com/pyeventsourcing/eventsourcing
- eventsourcing docs: https://eventsourcing.readthedocs.io/en/stable/topics/tutorial/part1.html
- John Bywater: https://github.com/johnbywater
- Datastar: https://data-star.dev/
- Microconf: https://microconf.com/
- Event Modeling & Event Sourcing Podcast: https://podcast.eventmodeling.org
- Python Package Guides for AI Agents: https://github.com/mikeckennedy/python-package-guides-for-agents
- Iodine tablets AI joke: https://x.com/pr0grammerhum0r/status/2046650199930458334
- KurrentDb: https://www.kurrent.io
- Watch this episode on YouTube: https://www.youtube.com/watch?v=s37d6yN2P70
- Episode #548 deep-dive: https://talkpython.fm/episodes/show/548/event-sourcing-design-pattern#takeaways-anchor
- Episode transcripts: https://talkpython.fm/episodes/transcript/548/event-sourcing-design-pattern
- Theme Song, Served in a Flask: https://talkpython.fm/flasksong

Don't be a stranger
- YouTube: https://talkpython.fm/youtube
- Bluesky: @talkpython.fm
- Mastodon: @talkpython@fosstodon.org
- X.com: @talkpython
- Michael on Bluesky: @mkennedy.codes
- Michael on Mastodon: @mkennedy@fosstodon.org
- Michael on X.com: @mkennedy
Real Python
How to Flatten a List of Lists in Python
Flattening a list in Python involves converting a nested list structure into a single, one-dimensional list. A common approach to flatten a list of lists is to use a for loop to iterate through each sublist. Then you can add each item to a new list with the .extend() method or the augmented concatenation operator (+=). This will “unlist” the list, resulting in a flattened list.
Python’s standard library offers other tools to achieve similar results. You can also use a list comprehension for a concise one-liner solution. Each method has its own performance characteristics, but for loops and list comprehensions are generally more efficient.
By the end of this tutorial, you’ll understand that:
- Flattening a list involves converting nested lists into a single list.
- You can use a for loop and .extend(), or a list comprehension, to flatten lists in Python.
- Standard-library functions like itertools.chain() and functools.reduce() can also flatten lists (see the short sketch after this list).
- A custom flatten() function, either recursive or iterative, handles arbitrarily nested lists.
- The .flatten() method in NumPy efficiently flattens arrays for data science tasks.
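For a quick taste of those standard-library options, here is a minimal REPL sketch (using the same matrix that the tutorial introduces below); the full article walks through both in detail:

>>> import itertools, functools, operator
>>> matrix = [[9, 3, 8, 3], [4, 5, 2, 8], [6, 4, 3, 1], [1, 0, 4, 5]]
>>> list(itertools.chain.from_iterable(matrix))
[9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]
>>> functools.reduce(operator.iconcat, matrix, [])
[9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]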
To better illustrate what it means to flatten a list, say that you have the following matrix of numeric values:
>>> matrix = [
... [9, 3, 8, 3],
... [4, 5, 2, 8],
... [6, 4, 3, 1],
... [1, 0, 4, 5],
... ]
The matrix variable holds a Python list that contains four nested lists. Each nested list represents a row in the matrix. The rows store four items or numbers each. Now say that you want to turn this matrix into the following list:
[9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]
How do you manage to flatten your matrix and get a one-dimensional list like the one above? In this tutorial, you’ll learn how to do that in Python.
Free Bonus: Click here to download the free sample code that showcases and compares several ways to flatten a list of lists in Python.
Take the Quiz: Test your knowledge with our interactive “How to Flatten a List of Lists in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Flatten a List of Lists in Python
Test your understanding of how to flatten a list of lists in Python using for loops, list comprehensions, itertools, recursion, and NumPy.
How to Flatten a List of Lists With a for Loop
How can you flatten a list of lists in Python? In general, to flatten a list of lists, you can run the following steps either explicitly or implicitly:
- Create a new empty list to store the flattened data.
- Iterate over each nested list or sublist in the original list.
- Add every item from the current sublist to the list of flattened data.
- Return the resulting list with the flattened data.
You can follow several paths and use multiple tools to run these steps in Python. The most natural and readable way to do this is to use a for loop, which allows you to explicitly iterate over the sublists.
Then you need a way to add items to the new flattened list. For that, you have a couple of valid options. First, you’ll turn to the .extend() method from the list class itself, and then you’ll give the augmented concatenation operator (+=) a go.
To continue with the matrix example, here’s how you would translate these steps into Python code using a for loop and the .extend() method:
>>> def flatten_extend(matrix):
...     flat_list = []
...     for row in matrix:
...         flat_list.extend(row)
...     return flat_list
...
Inside flatten_extend(), you first create a new empty list called flat_list. You’ll use this list to store the flattened data when you extract it from matrix. Then you start a loop to iterate over the inner, or nested, lists from matrix. In this example, you use the name row to represent the current nested list.
In every iteration, you use .extend() to add the content of the current sublist to flat_list. This method takes an iterable as an argument and appends its items to the end of the target list.
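In isolation, .extend() behaves like this, unpacking the iterable’s items rather than adding the iterable as a single element:

>>> nums = [1, 2]
>>> nums.extend([3, 4])
>>> nums
[1, 2, 3, 4]
>>> nums.append([5, 6])  # contrast with .append()
>>> nums
[1, 2, 3, 4, [5, 6]]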
Now go ahead and run the following code to check that your function does the job:
>>> flatten_extend(matrix)
[9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]
That’s neat! You’ve flattened your first list of lists. As a result, you have a one-dimensional list containing all the numeric values from matrix.
With .extend(), you’ve come up with a Pythonic and readable way to flatten your lists. You can get the same result using the augmented concatenation operator (+=) on your flat_list object. However, this alternative approach may not be as readable:
>>> def flatten_concatenation(matrix):
...     flat_list = []
...     for row in matrix:
...         flat_list += row
...     return flat_list
...
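As before, you can verify that this version produces the same flattened result:

>>> flatten_concatenation(matrix)
[9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]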
Read the full article at https://realpython.com/python-flatten-list/ »
Quiz: How to Flatten a List of Lists in Python
In this quiz, you’ll test your understanding of how to flatten a list in Python.
You’ll write code and answer questions to revisit the concept of converting a multidimensional list, such as a matrix, into a one-dimensional list.
Django Weblog
DSF member of the month - Bhuvnesh Sharma
For May 2026, we welcome Bhuvnesh Sharma as our DSF member of the month! ⭐

Bhuvnesh has been a Django contributor since 2022 and was a Google Summer of Code (GSoC) participant for Django in 2023. He is now a mentor and a GSoC admin organizer for the Django Software Foundation organization. He is the founder of Django Events Foundation India (DEFI) and the DjangoDay India conference. He has been a DSF member since July 2023. He is looking for new opportunities!
You can learn more about Bhuvnesh by visiting Bhuvnesh's website and his GitHub Profile.
Let’s spend some time getting to know Bhuvnesh better!
Can you tell us a little about yourself (hobbies, education, etc)
I’m Bhuvnesh (aka DevilsAutumn), a software developer from India. I graduated in 2024 from GL Bajaj Institute of Technology and Management, and most of my work has been around Python, Django, and building backend systems. My journey with Django started when I began contributing to Django core in 2022. I usually like working on things where there is an actual product involved, not just writing a few APIs and closing the task. I like thinking about how the whole thing will work: models, permissions, background jobs, deployment, users, edge cases, and all of that.
Apart from work, I like reading books around startups and entrepreneurship, watching movies, and honestly I overthink a lot about building products. Sometimes too much, but yeah that’s also how many ideas start for me. I’ve also been involved with the Django community through Django India, GSoC, Djangonaut Space and DjangoDay India, which has been a big part of my journey.
I'm curious, where does your nickname "DevilsAutumn" come from?
Haha, nice question. One of my friends used to write sci-fi novels. In 2022, I decided that I’d have one unique coding name, and figuring that a friend who writes novels must have a great imagination, I went to him for name ideas. One of the names he suggested was DevilsAutumn, and I’ve used it as my nickname ever since.
How did you start using Django?
When I was in my exploring phase, I was really curious and trying out different languages, frameworks, and so on, and I read a blog post from the Instagram engineering team about Django being used at Instagram. A framework that is the backbone of a product used by billions of users will get anyone curious. From there I started exploring Django and I fell in love with it. The framework, the community, the documentation - all of it was amazing.
What other frameworks do you know, and is there anything you would like to have in Django if you had magical powers?
I have also worked with FastAPI and I find that really cool as well. But the calmness django has is unbeatable.
If I had magical powers, I’d be living on the moon. Just kidding. 😆
There are a couple of things that I would love in Django:
First is "modernising" the website which is already underway. The website feels very boring and outdated. I’d love to see a modern version.
Second, I would love to see Django have built-in support for creating REST APIs. DRF is amazing and it has done a lot for the Django ecosystem, but because it is still an external library, there are some rough edges. Sometimes serialization can feel a bit slow or heavy, the learning curve is different from regular Django, and you also depend on a separate package for something which has become a core need in modern web apps.
What projects are you working on now?
I am currently working on a project called Trevo, which helps people find activities happening around them which anyone can join and socialize with others in real life.
Apart from that, I am also working on an open-source Python library that is a migration safety toolkit for Django. It's called django-migrations-inspector. It helps you find problems in your migration files before they go into production.
Which Django libraries are your favorite (core or 3rd party)?
Although there is a long list, I’d probably say Django REST Framework (DRF), django-import-export, and django-debug-toolbar.
DRF is the obvious one because I’ve used it a lot for building APIs with Django. Even with some rough edges, it has been very important for the ecosystem 😛
I also really like django-import-export, mostly because in real projects you always end up needing some Excel/CSV import export kind of thing, and this just saves time.
And django-debug-toolbar because it has made debugging queries and performance issues much easier for me personally.
What are the top three things in Django that you like?
I think the first thing has to be the community. People in the Django community are genuinely nice and helpful, and the docs are also really good. A lot of times, when you are stuck, either the documentation has already explained it properly or someone has discussed the same thing before.
Second, I really like the ecosystem around Django. For most of the common things you need while building a product, there is usually already a good package available. And Django itself also gives you so much out of the box, so you don’t have to build every basic thing from scratch.
And third is Django admin. Honestly, I really like it. Some people may not think of it as a very exciting feature, but when you are building real products, having a working admin panel so quickly is super useful. It saves a lot of time.
You are one of the admin organizers of GSoC program for Django organization, thank you for helping. How is it going for you? Do you need help?
It has been going well so far, thank you for asking. I’m really happy to help with organizing GSoC for Django. It’s always nice to see contributors getting involved and working on meaningful projects, I even posted about it on LinkedIn.
Everything is good for now, but I’ll reach out in case I need any help. In fact, we are also working on creating a GSoC working group to make things smoother in the future. I’m sure that is also going to help us.
You have been part of Djangonaut Space program as a Navigator (Mentor) in the first session. How did you find the experience? What is your reflection on the program after all this time?
It was a great experience! I love to help people who are new to open-source and guide them just like I was guided by a mentor in my college days. I believe anyone can do great things in life if they are given proper mentorship. That's my motivation behind getting involved in Djangonaut Space.
The Djangonaut Space program has created a strong community of developers from all backgrounds who love Django. A lot of people want to contribute to open source, but they don’t always know where to start, or they feel the project is too big for them. Djangonaut Space helped reduce that fear by giving people guidance, structure, and a friendly space to ask questions.
Even after all this time, I still feel it is one of the best community-led efforts around Django. It doesn’t just help people contribute code, it helps them feel that they belong in the community.
Do you have any advice for folks who would like to consider mentoring through GSoC or Djangonaut Space?
I just want to say that people who are experienced, who have been contributing to Django, or who maintain any third-party package should consider mentoring through GSoC or the Djangonaut Space program. It is one of the most impactful ways to contribute to open source, in my opinion, because you are not just guiding a few people; you might be guiding the next generation of mentors, Django maintainers, org admins, community leaders, or Djangonaut Space organizers.
And mentorship plays the most important role in maintaining the ecosystem that Django has built over the years.
You have been previously a participant of GSoC for Django organization, you are now an admin of the organization. That's great! How did you get to this point? Did you ever imagine you would end up here?
Haha honestly, no. I don’t think I ever imagined it would turn out this way. When I first got into GSoC with Django, I was just really happy to be there and contribute. At that time, I was mostly focused on learning, understanding the project better, and trying not to mess things up 😅
But after that I kind of stayed around. I kept contributing, stayed connected with the community, mentored in Djangonaut Space, then mentored in GSoC 2024, and slowly started getting more involved in the community and organizing side of things too.
So it was never like I had this clear plan that one day I’ll become an org admin. It just happened very naturally over time, mostly because I kept showing up and people trusted me with more responsibility.
Now being on this side feels a little unreal, but also very special. I know how it feels to be a contributor, how confusing and exciting it can be, so I really care about making the experience good for others too.
In a way, it feels like a full-circle moment, but also like there’s still a lot more to learn and do.
You are the founder of DjangoDay India and Django Events Foundation India, could you tell us a bit more on the event and what made you create this structure?
DjangoDay India started from a very simple thought: we should have a proper Django-focused event in India. There are a lot of people here using Django — developers, students, companies — but we didn’t really have one place where everyone could come together. It was really difficult to organize DjangoDay India in 2025 because it was the first Django event happening at that scale in India, but we still made it happen thanks to the amazing team.
Django Events Foundation India (DEFI) was created to give this some structure. I didn’t want DjangoDay India to become just a one-time thing, or something that depends only on me. Beyond that, I also want to support more local Django events around India through DEFI. The idea is to make it sustainable, community-first, and slowly involve more people. For me, it is mainly about growing the Django ecosystem in India and giving people a space to speak, volunteer, sponsor, contribute, and maybe later lead as well.
Do you remember your first contribution to Django and in open source?
Yes, so I was going through someone else’s PR that had been merged, and in it I found a small typo in a comment. Then I created a new PR to fix that. It was my first contribution to Django.
Talking about the first open source contribution, it was adding some phone number validation checks in validatorjs library.
Is there anything else you’d like to say?
Nothing much, just thank you for having me here. If someone is thinking of contributing to Django but feels scared, please don’t worry. Most of us also started by staring at the codebase and pretending we understood what was happening. Just start small, ask questions, and slowly it starts making sense.
Thank you for doing the interview, Bhuvnesh!
Python Bytes
#479 Talking About Types
Topics covered in this episode:
- httpxyz one month in: https://tildeweb.nl/~michiel/httpxyz-one-month-in.html
- Learn concurrency - a deep dive into multithreading with Python: https://blog.geekuni.com/2026/04/python-concurrency.html
- pip 26.1 - lockfiles and dependency cooldowns: https://ichard26.github.io/blog/2026/04/whats-new-in-pip-26.1/
- Python 3.15 sentinel values from PEP 661: https://peps.python.org/pep-0661/
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=3E3KPBAYkWo

About the show

Sponsored by us! Support our work through:
- Our courses at Talk Python Training: https://training.talkpython.fm/
- The Complete pytest Course: https://courses.pythontest.com/p/the-complete-pytest-course
- Patreon Supporters: https://www.patreon.com/pythonbytes

Connect with the hosts:
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social (bsky)
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Michael #1: httpxyz one month in
- The first version of httpxyz contained just the fixes to get zstd working, the fixes to get the test suite running on Python 3.14, and some ‘housekeeping’ changes related to the renaming
- End of March: a compatibility shim that allows you to use httpxyz even with third-party packages that import httpx themselves, as long as you import httpxyz first
  - Importing httpxyz automatically registers it under the httpx name in sys.modules, see https://httpxyz.org/httpx-compatibility/
- Fixed a WHOLE bunch of performance-related issues by forking httpcore

Brian #2: Learn concurrency - a deep dive into multithreading with Python
- Nikos Vaggalis
- “Whenever you are trying to speed up code using multiple cores, always ask yourself: ‘Do these threads need to talk to each other right now?’ If the answer is yes, it will be slow. The best parallel code splits a big job into completely isolated chunks, processes them separately, and merges the results at the finish line.”
- Good overview of thread concurrency with Python and how that’s been improved dramatically with free-threaded Python
- Defines lots of terms you come across, including “embarrassingly parallel multithreading”
- There’s a nice counter example (a rough sketch follows these show notes):
  - Start with a shared resource, a counter, and multiple threads updating it
  - Attempt to fix it with threading.Lock(), which fixes it, but slows things down
  - Good explanation of why
  - Proper fix with concurrent.futures, separating the work of the threads so that they stay independent and their results can be combined when they’re all finished

Michael #3: pip 26.1 - lockfiles and dependency cooldowns
- Python 3.9 is no longer supported
- Experimental: installing from pylock files
- Dependency cooldowns (see my post about this: https://mkennedy.codes/posts/python-supply-chain-security-made-easy/)
- Lifting several 2020 resolver limitations

Brian #4: Python 3.15 sentinel values from PEP 661

MISSING = sentinel("MISSING")

def next_value(default: int | MISSING = MISSING):
    ...
    if default is MISSING:
        ...

- Takes a name str as a constructor parameter
- Intended to be compared with the is operator, similar to None
- Sentinel objects can be used as a type, also similar to None
  - and can be combined with other types with |
- Unlike None, sentinel values are truthy (ellipses, ..., are also truthy)
  - This seems like a strange choice, but I guess it must have made sense to someone
  - It does force you to use is instead of depending on False-ness, so I guess it’ll make code using sentinels more readable
- Interesting that the PEP was started in 2021, and we’re finally getting it this year

Extras

Brian:
- Before GitHub - Armin Ronacher: https://lucumr.pocoo.org/2026/4/28/before-github/
- tenacity - cross-platform multi-track audio editor/recorder: https://tenacityaudio.org
  - learned about it from Armin’s article

Joke:
- Make it myself: https://xkcd.com/3233/
  - Seems similar to what people think about software now
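As a rough, illustrative sketch of that counter example (an assumed shape, not the article's exact code): the shared counter needs a lock to stay correct, and the lock is exactly what serializes the hot loop.

import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:  # without the lock, concurrent `counter += 1` updates can be lost
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; often less without it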
Python Software Foundation
Strategic Planning at the PSF
The Python Software Foundation (PSF) is excited to share that the PSF Board has been developing a strategic plan to guide the foundation's direction over the next five years. We are sharing the high-level goals today to collect feedback and commentary from the Python community. A full draft with detailed objectives will be published in early June for public feedback, and the board hopes to adopt the plan in July 2026, to be reviewed annually going forward.
Why now
The Python ecosystem is growing and changing fast. PyPI hosts over 800,000 projects and serves tens of billions of downloads per month. The Developers-in-Residence program has grown from a single role to a team spanning CPython development, security, and PyPI safety, proving that targeted investment in core infrastructure works. Last year's fundraiser showed that the community and sponsors are willing to support the PSF's mission when provided the opportunity.
The foundation also faces challenges. As we shared in November, the PSF's assets and yearly revenue have declined and costs have increased, while the demand for the foundation's work grows faster than its capacity. Last year we had to pause the Grants Program after reaching the budget cap earlier than expected. These pressures are part of why the board committed to a strategic plan: the foundation needs a clear framework for making hard choices about where to focus.
The PSF Board has discussed strategic planning over the years, including at the 2024 board retreat. This year, we committed to turning that discussion into a concrete plan. The process included numerous interviews with PSF Staff, community members, and participants across the Python ecosystem. After interviews, the PSF Board went through a prioritization exercise, followed by a series of dedicated and structured board discussions.
The direction
The plan has two parts:
I. Organizational Goals: How the PSF operates across all its activities, and
II. Program Goals: Where the PSF directs its work and resources.
We invite your feedback on all of the goals in both parts of the plan (See the “How to participate” section below).
I. Organizational Goals: How we operate
- Financial Sustainability: Diversify the PSF's revenue so the foundation is not dependent on any single source.
- Building a Resilient Foundation: Strengthen governance, financial oversight, and knowledge management so the organization can survive transitions and operate transparently.
- Diversity and Inclusion: D&I is not treated as a standalone effort. D&I is a lens for all PSF decisions and activities.
- Transparency and Community Trust: Increase visibility into how the PSF makes decisions and uses its resources, as the community's trust in its governance is the foundation of the PSF's credibility.
- Community Empowerment and Self-Sufficiency: Support Python communities in building their own capacity through collaboration and shared resources.
- Strong Partnerships and Collaboration: Partner with organizations that distribute, extend, and depend on Python, as well as with community groups across the open source ecosystem.
II. Program Goals: Where we focus our work
- Secure Python's Software Supply Chain and Distribution Infrastructure. PyPI is critical global infrastructure, and supply chain security goes beyond the index. Python reaches users through many channels beyond python.org and PyPI, which makes collaboration with distributors essential.
- Responsibly Grow and Advance Critical Python Infrastructure. The PSF stewards PyPI, CPython, python.org, pip, and more. Growth needs to match staffing capacity and sustainable funding.
- Foster a Thriving, Connected Global Python Community. Support the global Python community through events, grants, and working groups, while empowering regional communities to be self-sufficient.
- Develop the Next Generation of Python Developers. Make Python accessible to newcomers and remove barriers for underrepresented groups.
How the plan works
We developed this strategic plan to cover a five-year period. The board will review progress annually with community input, reassess whether priorities need to shift, and publish the results so the community can see how we are tracking. The intention is for the strategic plan to be flexible and adaptive, so that it can effectively guide the PSF’s priorities as the ecosystem continues to grow and evolve, rather than becoming a static document that collects dust on a shelf.
We developed the plan to set direction–not implementation details. How to carry it out is the job of PSF Staff, and the specifics will evolve as we learn what works. Once adopted, the plan will directly inform how the PSF allocates its budget and staff time and how it seeks funding.
How to participate
If any of these goals matter to you, or if you think we are missing something important, we want to hear from you.
We welcome you to email strategy@python.org to share your thoughts. This is the best way to reach us asynchronously.
You can also join the conversation with us at:
- PSF Board Office Hours on May 12 and June 9, on the PSF Discord. We hope to spend both of these sessions focused on discussing the strategic plan with people from the community.
- PyCon US 2026 at the Members Lunch and a dedicated Open Space session. We know only a small fraction of our community will be present at PyCon US this year, so we warmly welcome you to engage with us on Discuss and via the email address provided above.
- A Python Discuss thread is available for open community discussion. We welcome you to join in with feedback and comments.
A full draft with detailed objectives under each Program Goal will be published in early June for community feedback via this blog, Python Discuss under the PSF category, and social media. The feedback window for this year will close before the July 8th PSF Board meeting.
This plan will shape what the PSF does and how it spends its resources for the next five years. If you use Python, contribute to it, or participate in communities around it, you have a stake in shaping its future.
Jannis Leidel, PSF Board Chair, on behalf of the PSF Board of Directors
Python Koans
Koan 20: The Unreliable Messenger
How to Clean Up
When you work with external resources such as a database or temporary files, you often need to run some cleanup actions after you've done the work.
Python provides two options - the context manager and the try/finally block. Both are valid options, but the context manager is often lauded as being more Pythonic.
Despite this, the try/finally block is still widely used. As we will discover, try/finally is simple to use and well-suited to some cases, but it comes with pitfalls, as the messenger tragically discovered.
Let us try and contact the messenger.
Part 1: The Assured Action
A variant of the try/finally pattern exists in most languages[1] and functions pretty much in the way you might imagine. Consider this trivial example:
def walk_path():
    try:
        print("Taking a step")
    finally:
        print("Leaving a footprint")

Calling walk_path() prints:

Taking a step
Leaving a footprint

The interpreter enters the try block and executes its statement, then proceeds to the finally block. The finally block always executes; it does not matter whether the try block succeeds or fails.
If a failure is raised during execution as shown below, the error disrupts the normal flow and the interpreter stops executing the try block immediately. The interpreter then jumps directly to the finally block.
def walk_path():
    try:
        raise Exception("A fallen tree")
    finally:
        print("Leaving a footprint")

If you choose to handle the failure using an except block as shown below, the error is caught and handled by the except block before proceeding to the finally block. An error could also occur inside the except or else block; the interpreter would still execute the finally block before raising the new error.
def walk_path():
    try:
        raise Exception("A fallen tree")
    except Exception:
        print("Climbing over the trunk")
    else:
        print("Kicking the trunk away using superhuman strength")
    finally:
        print("Leaving a footprint")

Part 2: The Trapped Messenger
This brings us to the behavior of returning values. A function can return a value from within the try block. When the interpreter encounters this return statement, it prepares to send this value back to the caller. But first, it must honor the finally block. So it pauses the return process and executes the finally block before returning the prepared value from the try block.
def walk_path():
    try:
        return "Reaching the destination"
    finally:
        print("Leaving a footprint")

The finally block can also contain its own return statement, as shown below. When this happens, the return in the finally block wins and the return value from the try is effectively ignored.
def walk_path():
    try:
        return "Reaching the destination"
    finally:
        return "Returning home"

This behavior applies to exceptions as well. We can place a loop around our structure to observe break and continue statements.
def scout_path():
    for step in range(3):
        try:
            raise Exception("A hidden trap")
        finally:
            break
    return "The scout survives"

Calling scout_path() returns:

'The scout survives'

A break statement inside a finally block will swallow any unhandled exception from the try block. A continue statement will do the exact same thing. The exception disappears completely.
Part 3: The Trapped Voice
Let's examine a more complex example with nested try/finally statements. Do return statements break out of parent try/finally blocks?
def send_message():
    try:
        print("Everything is fine")
        return 0
    finally:
        try:
            try:
                print("Everything is still fine")
            finally:
                for x in range(2):
                    print(f"Scouting area {x}")
                return 1
        finally:
            for x in range(2):
                print(f"Covering tracks in area {x}")
            return 2

print(f"Return value: {send_message()}")

Running this prints:

Everything is fine
Everything is still fine
Scouting area 0
Scouting area 1
Covering tracks in area 0
Covering tracks in area 1
Return value: 2

No, try/finally statements can be nested, and return statements from child blocks do not prevent parent finally blocks from running. However, the value from the last return statement trumps the rest and is still the one that is returned by the function.
Part 4: The Plot Thickens
As you can imagine, a language feature that lets you write dead code with unintended outcomes is problematic. The Python language developers recognized this danger and proposed in PEP 601[2] that return/break/continue statements be disallowed in finally blocks.
However, it was voted down for the following reason:
Reading the references in the PEP it seems to me that most languages implement this kind of construct but have style guides and/or linters that reject it. I would support a proposal to add this to PEP 8 (if it isn’t already there).
I note that the toy examples are somewhat misleading – the functionality that may be useful is a conditional return (or break etc.) inside a finally block.
- Guido van Rossum, 2019, on PEP 601
Guido’s reasoning was that there may be valid scenarios where the user requires full control of exception handling in the finally block, and may wish to override the raising of exceptions. Preventing this behavior would effectively hamstring advanced users.
However, in 2024 the community tried again with PEP 765[3]. This time they were armed with evidence: they analyzed the top 8,000 PyPI packages and found that:
Most of the usages (of return in finally) are incorrect, and introduce unintended exception-swallowing bugs. - PEP 765
This was enough to get the proposal over the line, and from Python 3.14 onwards, using return, break, or continue in a finally clause emits a SyntaxWarning.
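For instance, on Python 3.14+ compiling a function like the following now produces a warning (a toy example; the exact warning wording may vary):

def fetch():
    try:
        return "data"
    finally:
        return "cleanup"  # Python 3.14+: SyntaxWarning for 'return' in 'finally'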
Part 5: Why use try/finally at all?
A context manager is the Pythonic choice most of the time when you’re working with resources that already expose acquire/release semantics that slot easily into __enter__ and __exit__. It’s the natural choice for files, locks, database connections, temporary state changes, and so on. It’s declarative and minimizes the surface area for errors.
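As a minimal sketch of that shape (an illustrative class, not from any particular library), the acquire/release pair maps directly onto __enter__ and __exit__:

class Resource:
    def __enter__(self):
        print("Acquiring the resource")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs on normal exit and on exceptions alike.
        print("Releasing the resource")
        return False  # returning False lets exceptions propagate

with Resource():
    print("Doing the work")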
However, they don’t always make sense:
- Context manager code lives in a different location, and so introduces an extra layer of abstraction. Sometimes your code is so simple and localized that introducing an extra layer of abstraction would make the code less readable with little benefit.
- Context managers can only use variables passed in during initialization, whereas finally can reference variables mutated during execution in the try block.
- With try/finally, you can combine except and finally clauses to explicitly manage different failure modes for more granular control (see the sketch after this list).
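A minimal sketch of those last two points, using a hypothetical process_records() helper:

def process_records(records):
    processed = 0
    try:
        for record in records:
            if record is None:
                raise ValueError("bad record")
            processed += 1
    except ValueError:
        # One failure mode handled explicitly...
        print("Skipping the rest of the batch")
    finally:
        # ...and cleanup that reads state mutated inside the try block,
        # which a context manager initialized up front could not see.
        print(f"Cleaned up after {processed} processed record(s)")

process_records([1, 2, None, 4])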
Closing the Circle
The novice believed the original message was secure. The master understood the final seal controls the truth.
The finally block always speaks last. You must ensure its final words do not obscure the truth of what came before.
1. https://en.wikipedia.org/wiki/Exception_handling_syntax
2. https://peps.python.org/pep-0601/
3. https://peps.python.org/pep-0765/
May 10, 2026
Python Insider
Python 3.14.5 is out!
A special release with a new (old) garbage collector.
May 08, 2026
Real Python
The Real Python Podcast – Episode #294: Declarative Charts in Python & Discerning Iterators vs Iterables
What if you could build charts in Python by describing what your data means, instead of scripting every visual detail? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.
Quiz: Memory Management in Python
In this quiz, you’ll test your understanding of Memory Management in Python.
By working through this quiz, you’ll revisit how Python handles memory allocation and freeing, the role of the Global Interpreter Lock, and how CPython organizes memory using arenas, pools, and blocks. Give it a shot!
Seth Michael Larson
Using Epilogue Retrace app with iPhone 13 Pro and Ubuntu
When Epilogue announced the Retrace app for iOS and Android, I was over the moon excited. In theory this meant I could archive ROMs from the GB Operator directly to my iPhone, where I play the games with the Delta emulator, so I wouldn't need to ferry ROMs from the GB Operator to my laptop to my phone. Unfortunately, I ran into two hurdles with my plan. If you were able to get Retrace to work with a pre-USB-C iPhone, let me know.
Upgrading the GB Operator firmware
First I saw that the Retrace app required a new firmware version for the GB Operator (v10.0.10), so I set out to update the GB Operator firmware.
The documentation says to use Playback, so I went to update Playback. Previously Playback was distributed as an AppImage, but newer versions use Flatpak. So... I had to figure out how to install a Flatpak on Ubuntu.
I did that, had the new Playback app on Ubuntu, and... the firmware update notification never appeared in the app. I contacted support and learned that the Linux versions of Playback apparently don't support updating the firmware... So I needed a Windows computer. My wife's laptop runs Windows, so I was able to update the firmware using her computer instead of my Ubuntu laptop.
Trying Retrace with an iPhone 13 Pro
GB Operator uses USB-C for power delivery and data transfer and comes with a high-quality USB-C cord. This is perfect for my laptop which only has USB-C ports.
Unfortunately, I would be using Retrace on an iPhone 13 Pro. The iPhone 13 Pro came before Apple was legally required to use USB-C on their phones in Europe, so the phone has a lightning port. I purchased a Lightning to USB-C adapter cord from the Apple Store.
But... that doesn't work with the GB Operator: it doesn't deliver power to the device. I was able to try with my wife's iPhone 15 Pro (which has USB-C), and power delivery worked like normal; the GB Operator turned on as usual. That's unfortunate.
In summary: if you want to use Epilogue Retrace you need a phone that supports USB-C and upgrading the GB Operator firmware requires either macOS or Windows... I guess I'll be using Playback on Ubuntu for the next five years now that I've just replaced my iPhone 13 Pro battery 😢
Thanks for keeping RSS alive! ♥
Armin Ronacher
Pushing Local Models With Focus And Polish
I really, really want local models to work.
I want them to work in the very practical sense that I can open my coding agent, pick a local model, and get something that feels competitive enough that I do not immediately switch back to a hosted API after five minutes. There are a lot of reasons why I want this, but the biggest quite frankly is that we’re so early with this stuff, and the thought of locking all the experimentation away from the average developer really upsets me.
Frustratingly, right now that is still much harder than it should be, and for reasons that have little to do with the complexity of the task or the quality of the models.
We have an enormous amount of activity around local inference, which is great. We have good projects, fast kernels, and people are doing great quantization work. A lot of very smart people are making all of this better, and yet the experience for someone trying to make this work with a coding agent is worse than it has any right to be.
Putting an API key into Pi and using a hosted model is a very boring operation. You select the provider, paste the key and then you are done thinking about how to get tokens. Doing the same thing locally, even when you have a high-end Mac with a lot of memory, is a completely different experience. You choose an inference engine, then a model, then a quantization, then a template, then a context size, then you’ve got to throw a bunch of JSON configs into different parts of the stack and then you discover that one of those choices quietly made the model worse or that something just does not work at all.
That is the gap I am interested in.
Runnable Is Not Finished
A lot of local model work optimizes for making models runnable. That is necessary, but it is not the same thing as making them feel finished. I give you a very basic example here to illustrate this gap: tool parameter streaming.
For whatever reason, most of the stuff you run locally does not support tool parameter streaming. I cannot quite explain it, but the consequences of that are actually surprisingly significant. If you are not familiar with how these APIs work, the simplest way to think about them is that they are emitting tokens as they become available. For text that is trivial, but for tool calls that is often not done, despite the completions API supporting this. As a result you only see what edits are being done on a file once the model has finished streaming the entire tool call.
This is bad for a lot of reasons:
- A dead connection is a weird connection: local models are slow, so when you don’t get any tokens for five minutes you cannot tell whether the connection died or nothing has arrived yet. This means you need to raise inactivity timeouts to the point where they are pointless.
- You won’t see what is about to happen: if you are somewhat hands-on, not seeing which bash invocation the system is slowly concocting in the background means potentially wasted tokens, and it also means you cannot interrupt until way too late.
- It’s just not SOTA. We can do better, and we should aim for the best possible experience. Tool parameter streaming is as important as token streaming everywhere else.
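To make this concrete, here is a hedged sketch of what consuming tool-call deltas looks like against an OpenAI-compatible streaming endpoint; the local URL, model name, and tool definition are all made up for illustration:

from openai import OpenAI  # pip install openai

# Hypothetical local endpoint; many local stacks expose an
# OpenAI-compatible /v1 API, but few stream tool-call deltas.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="local")

stream = client.chat.completions.create(
    model="local-model",  # placeholder name
    messages=[{"role": "user", "content": "delete the build artifacts"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "bash",
            "description": "Run a shell command",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    }],
    stream=True,
)

for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.content:
        # Plain text arrives incrementally pretty much everywhere.
        print(delta.content, end="", flush=True)
    for call in delta.tool_calls or []:
        # With real tool parameter streaming these argument fragments
        # trickle in as they are generated; without it, you get one big
        # blob only after the whole call has been produced.
        if call.function and call.function.arguments:
            print(call.function.arguments, end="", flush=True)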
Having a model spit out tokens doesn’t take long, but making the experience great end to end does take a lot more energy.
Fragmentation
The local stack is fragmented across many engines and layers. There is llama.cpp, Ollama, LM Studio, MLX, Transformers, vLLM, and many other pieces depending on hardware and taste. All of these are amazing projects! The problem is not that they exist or that there are that many of them (even though, quite frankly, I’m getting big old Python packaging vibes), the problem is that for a given model, the actual behavior you get depends on a long chain of small decisions that most users just don’t have the energy for.
Did the chat template render exactly right? Are the reasoning tokens handled in the intended way? Is the tool-call format translated correctly? Is the context window real? Are the KV caches actually working for a coding agent? Did I pick the right quantized model from Hugging Face? Are you accidentally leaving a lot of performance on the table because the model is just mismatched for your hardware? Does streaming usage work across all channels? Does the model need its previous reasoning content preserved in assistant messages? Is the coding agent set up correctly for it?
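To pick just the first of those questions: it is at least easy to eyeball what a chat template renders. A minimal sketch using Hugging Face transformers (the model name here is only an example):

from transformers import AutoTokenizer

# Example model; any chat-tuned model with a template works here.
tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

messages = [
    {"role": "system", "content": "You are a coding agent."},
    {"role": "user", "content": "Rename foo() to bar() in main.py."},
]

# If this string differs from what your inference engine actually
# sends, you are benchmarking a template mismatch, not the model.
print(tok.apply_chat_template(messages, tokenize=False,
                              add_generation_prompt=True))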
You also need to install many different things in addition to just your coding agent.
All of these things matter. They matter a lot.
The result is that people try a local model and get an experience that is neither a fair evaluation of the model nor a polished product. That both makes people dismiss local models and spreads energy across way too many separate efforts, instead of getting one effort going great end to end.
This is a terrible way to build confidence.
Too Little Critical Mass
In line with our general “slow the fuck down” mantra, I want to reiterate once more how fast this industry is moving.
Every week there is a new model and a new vibeslopped thing. The attention immediately moves to making the next thing run instead of making one thing run really, really well in one harness. I get the excitement and the dopamine hit, but it also means that too little critical mass accumulates behind any one combination of model, hardware, inference engine, and harness to find out how good it can really become when the entire stack is built around it.
Hosted model providers do not ship a bag of weights and ask you to figure out the rest. I want someone to pick one model, pair it with one serving path, directly within a coding agent. Initially just for one hardware configuration, then for more. Pick a winner, hard. If a tool call breaks, that is a product bug, and it gets fixed no matter where in the stack it failed. If the model’s reasoning stream is malformed, that is a product bug. If latency is much worse than it should be, that is a product bug. We need to start applying that mentality to local models too.
And not for every model! That is the point. Let’s pick one winner and polish the hell out of it. Learn what it takes to make that one configuration good, then take those learnings to the next config.
The DS4 Bet
This is why I am excited about ds4.c. It’s Salvatore Sanfilippo’s deliberately narrow inference engine for DeepSeek V4 Flash on Macs with 128GB+ of RAM only. It is not a generic GGUF runner and it is not trying to be a framework. It is a model-specific native engine with a Metal path, model-specific loading, prompt rendering, KV handling, server API glue, and tests.
DeepSeek V4 Flash is a good candidate for this kind of experiment because it has a combination of properties that are unusual for local use. It is large enough to feel meaningfully different from many smaller dense models, but sparse enough that the active parameter count makes it plausible to run. It has a very large context window. And since ds4.c targets Macs and Metal only, it can spill KV caches to SSD, which greatly helps the kinds of workloads we expect from coding agents.
To run ds4.c you don’t need MLX, Ollama, or anything else. It’s the whole package.
Embedding It In Pi
That is what made me build pi-ds4, a Pi extension that embeds the whole thing directly into Pi itself: take what ds4 is and dogfood the hell out of it with a coding agent and zero configuration. The question it tries to answer: how good can the local model experience become if Pi treats this as a first-class provider rather than as a pile of manual configuration?
The extension registers ds4/deepseek-v4-flash, compiles and starts ds4-server on demand, downloads and builds the runtime if needed, chooses the quantization based on the machine, keeps a lease while Pi is using it, exposes logs, and shuts the server down again through a watchdog when no clients are left. It doesn’t even give you knobs right now, because I want to figure out how to set the knobs automatically.
This is not about hiding the fact that local inference is complicated. It is about putting the complexity in one place where it can be improved, because there is a lot along the stack that still needs work. I think we can do better with caching, and there is probably performance to be gained if we all put our heads together.
Focusing and Learning
The experiment I want to run is not “can a local model run?”, because we already know that it can. I want to know whether, for people with beefed-out Macs to start, we can get as close as possible to the ergonomics of a hosted provider with decent tool-calling performance: getting caches to work well, improving how we expose tools in harnesses for these models, and then scaling gradually to more hardware configs and later models.
I also want everybody to have access to this. Engineers need hammers, and a hammer that’s locked behind a subscription in a data center in another country does not qualify. I know the price tag on a Mac that can run this is itself astronomical, but prices are more likely to come down than not. Even worse, due to the RAM shortage Apple does not currently sell the Mac Studio with that much RAM at all. So yes, ds4.c will start out with a select group of people.
But despite all of that, what matters is that a critical mass of people start to focus their efforts on one thing: tinker with it, improve it, not locked away but out in the open, and most importantly not limited by what the hyperscalers make available.
If you have the right hardware and you care about local agents, I would love for you to try it within Pi:
pi install https://github.com/mitsuhiko/pi-ds4
My hope is that this becomes a useful forcing function to really polish one coding agent experience. But really, the focal point should be ds4.c itself.