
Planet Python

Last update: May 12, 2026 07:44 PM UTC

May 12, 2026


PyCoder’s Weekly

Issue #734: Dunder-Gets, Django Tasks in Prod, Codex CLI, and More (2026-05-12)

#734 – MAY 12, 2026
View in Browser »



Do You Get It Now?

Learn about Python’s .__getitem__(), .__getattr__(), .__getattribute__(), and .__get__(): how they’re different and where to use them.
STEPHEN GRUPPETTA
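The four dunders behave quite differently, and a small sketch makes the distinctions concrete. Note that .__getattribute__() intercepts every attribute access and is easy to break, so the example leaves it alone and only shows the other three:

```python
# A minimal sketch contrasting the "get" dunders discussed in the article.
class Demo:
    def __init__(self):
        self.existing = "set in __init__"

    def __getitem__(self, key):
        # Called by the square-bracket syntax: obj[key]
        return f"item {key!r}"

    def __getattr__(self, name):
        # Called only when normal attribute lookup FAILS
        return f"fallback for {name!r}"


class Descriptor:
    def __get__(self, instance, owner):
        # Called when the descriptor is accessed on a class or instance
        return "computed by the descriptor"


class Holder:
    attr = Descriptor()


d = Demo()
print(d[0])           # __getitem__ -> "item 0"
print(d.existing)     # normal lookup succeeds; __getattr__ is NOT called
print(d.missing)      # lookup fails, so __getattr__ runs
print(Holder().attr)  # __get__ on the descriptor
```

The class and attribute names are made up for illustration; the article covers where each hook fits in real designs.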

Using Django Tasks in Production

Django added a generic API for background tasks in version 6. This post talks about how it has been used in production.
TIM SCHILLING

Use Codex CLI to Enhance Your Python Projects

Learn how to use Codex CLI to add features to Python projects directly from your terminal, without needing a browser or IDE plugins.
REAL PYTHON course

Depot CI: Built for the Agent era


Depot CI: A new CI engine. Fast by design. Your GitHub Actions workflows, running on a fundamentally faster engine — instant job startup, parallel steps, full debuggability, per-second billing. One command to migrate →
DEPOT sponsor

PEP 828: Supporting ‘Yield From’ in Asynchronous Generators (Deferred to 3.16)

PYTHON.ORG

PEP 797: Shared Object Proxies (Deferred to 3.16)

PYTHON.ORG

Django Security Releases: 6.0.5 and 5.2.14

DJANGO SOFTWARE FOUNDATION

Articles & Tutorials

Handling Schema Issues in Polars

You’ve got this great data pipeline going until one day it stops working. A schema error caused by an upstream column change has stopped you in your tracks. This post talks about the four different causes of schema errors and what to do about them.
THIJS NIEUWDORP

Textual: An Intro to DOM Queries (Part II)

Textual is a TUI framework for building terminal applications. It uses a DOM to represent the widgets in the application, and that DOM is queryable. This is part 2 in a series on how to find things in your Textual DOM.
MIKE DRISCOLL

Everything You Always Wanted to Know About PyCon Sprints!

PyCon US includes coding sprints to work on CPython itself, or projects in the ecosystem like Django, Flask, and BeeWare. This post tells you all about sprints and how you can join in on the fun.
DEB NICHOLSON

Why TUIs Are Back

Terminal User Interfaces are seeing a resurgence in the tools space. This opinion piece briefly talks about the history of interfaces and why we are where we are now.
ALCIDES FONSECA

Parallel Python at Anyscale With Ray

Talk Python interviews Richard Liaw and Edward Oakes. They talk about Ray, an open source Python framework that provides a distributed execution engine for AI workloads.
TALK PYTHON podcast

Python 3.14.5 Release Candidate

Normally nobody fusses over a release candidate of a point release, but 3.14.5 includes a major change: the rollback of the incremental garbage collector.
HUGO VAN KEMENADE

Wagtail 7.4: Custom Page Explorer, Preview Checks & More

Between autosave improvements, new ways to sort your pages, and a content checker upgrade, you’ll have a lot of reasons to move to Wagtail 7.4.
MEAGEN VOSS

The Simplest MCP Example Possible in Python

This guide introduces you to connecting your code to a local LLM. It covers Ollama and FastMCP and what you can do with these tools.
AL SWEIGART

ChatterBot: Build a Chatbot With Python

Build a Python chatbot with the ChatterBot library. Clean real conversation data, train on custom datasets, and add local AI with Ollama.
REAL PYTHON

Hardening Firefox With Claude Mythos Preview

New details about what Mozilla found and how agentic harnesses helped them reproduce real bugs and dismiss false positives.
MOZILLA

Projects & Code

pytest-fly: pytest Observer

GITHUB.COM/JAMESABEL

Pymetrica: A Codebase Analysis Tool

GITHUB.COM/JUANJFARINA • Shared by Juan José Farina

PyWry: Cross-Platform Rendering Engine and UI Toolkit

GITHUB.COM/DEELEERAMONE

secure: HTTP Security Headers for FastAPI, Flask, Django

GITHUB.COM/TYPEERROR • Shared by Caleb Kinney

Kirokyu: Modular Task Management System

GITHUB.COM/AMRYOUNIS • Shared by Amr Younis

Events

Weekly Real Python Office Hours Q&A (Virtual)

May 13, 2026
REALPYTHON.COM

PyCon US 2026

May 13 to May 20, 2026
PYCON.ORG

Python Atlanta

May 14 to May 15, 2026
MEETUP.COM

Chattanooga Python User Group

May 15 to May 16, 2026
MEETUP.COM

PyDelhi User Group Meetup

May 16, 2026
MEETUP.COM

PyData London

June 5 to June 7, 2026
PYDATA.ORG • Shared by Tomara Youngblood


Happy Pythoning!
This was PyCoder’s Weekly Issue #734.


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

May 12, 2026 07:30 PM UTC


Real Python

Building Type-Safe LLM Agents With Pydantic AI

Pydantic AI is a Python framework for building LLM agents that return validated, structured outputs using Pydantic models. Instead of parsing raw strings from LLMs, you get type-safe objects with automatic validation.

If you’ve used FastAPI or Pydantic before, then you’ll recognize the familiar pattern of defining schemas with type hints and letting the framework handle the type validation for you.
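The core idea, getting a validated, typed object back instead of scraping a raw string, can be shown with the standard library alone. This is not Pydantic AI's API; `CityInfo` and `parse_response` are hypothetical names, and the hand-rolled checks stand in for what Pydantic does declaratively:

```python
import json
from dataclasses import dataclass

# Stdlib-only sketch of "validated, structured output": parse a simulated
# LLM reply into a typed object and reject anything off-schema, instead of
# pulling values out of the raw string by hand.
@dataclass
class CityInfo:
    name: str
    population: int

def parse_response(raw: str) -> CityInfo:
    data = json.loads(raw)
    if not isinstance(data.get("name"), str):
        raise ValueError("'name' must be a string")
    if not isinstance(data.get("population"), int):
        raise ValueError("'population' must be an integer")
    return CityInfo(name=data["name"], population=data["population"])

# Simulated LLM output as a JSON string
reply = '{"name": "Amsterdam", "population": 921000}'
city = parse_response(reply)
print(city.name, city.population)
```

With Pydantic AI, the framework performs this validation (and retries on failure) automatically against a Pydantic model you declare once.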

By the end of this video course, you’ll understand that:


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

May 12, 2026 02:00 PM UTC


Django Weblog

2026 Django Developers Survey

The Django Software Foundation is once again partnering with JetBrains to run the 2026 Django Developers Survey 🌈

It’s an important metric of Django usage and is immensely helpful to guide future technical and community decisions.

After the survey closes, we will publish the aggregated results. JetBrains will also randomly select 10 winners (from those who complete the survey in full with meaningful answers) who will each receive a $100 Amazon voucher or the equivalent in local currency.

How you can help

Once you’ve done the survey, take a moment to re-share on socials and with your communities. The more diverse the answers, the better the results for all of us.

Please use the following links:

May 12, 2026 12:00 PM UTC


Real Python

Quiz: Building Type-Safe LLM Agents With Pydantic AI

In this quiz, you’ll test your understanding of Building Type-Safe LLM Agents With Pydantic AI.

By working through this quiz, you’ll revisit how Pydantic AI returns structured outputs from LLMs, how validation retries improve reliability, how tools and function calling work, how dependency injection flows through RunContext, and what trade-offs to expect when running agents in production.



May 12, 2026 12:00 PM UTC

Quiz: The LEGB Rule & Understanding Python Scope

In this quiz, you’ll test your understanding of The LEGB Rule & Understanding Python Scope.

By working through this quiz, you’ll revisit how Python resolves names using the LEGB rule, what the local, enclosing, global, and built-in scopes look like in practice, and how the global and nonlocal statements let you reach across scope boundaries.
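Before taking the quiz, it can help to see every rung of the LEGB ladder in one place, along with `global` and `nonlocal` reaching across boundaries:

```python
# Each scope in the LEGB chain, plus global/nonlocal, in one small sketch.
x = "global"          # Global (module) scope

def outer():
    x = "enclosing"   # Enclosing scope for inner()

    def inner():
        nonlocal x    # rebinds the *enclosing* x, not the global one
        x = "changed by inner"

    inner()
    return x

def builtins_demo():
    # 'len' isn't local, enclosing, or global here, so Python
    # falls through to the Built-in scope.
    return len("abc")

def set_global():
    global x          # rebinds the module-level x
    x = "changed at module level"

result = outer()
set_global()
print(result, x, builtins_demo())
```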



May 12, 2026 12:00 PM UTC


Python Software Foundation

Announcing PSF Community Service Award Recipients!

The PSF Community Service Awards (CSAs) are a formal way for the PSF Board of Directors to recognize work which, in its opinion, significantly improves the Foundation's fulfillment of its mission to build a vibrant, welcoming, global Python community. These awards shine a light on the incredible people who are the heart and soul of our community: those whose dedication, creativity, and generosity help the PSF fulfill its mission. The PSF CSAs celebrate individuals who have been truly invaluable, inspire others through their example, and demonstrate that service to the Python community leads to recognition and reward. If you know of someone in the Python community deserving of a PSF CSA, please submit them to the PSF Board via psf@python.org at any time. You can read more about PSF CSAs on our website.

The PSF Board is excited to announce 5 new CSAs, awarded to Inessa Pawson, Kafui Alordo, Kalyan Prasad, Maria Jose Molina Contreras, and Paul Everitt, for their contributions to the Python community. Read more about their work and impact below. 

Inessa Pawson 

Inessa Pawson has been a tireless and dedicated contributor to the Python ecosystem for over eight years. She has led the PyCon US Maintainers Summit since 2020, not only shaping the event but actively opening doors for others to participate–onboarding new contributors and supporting attendees with characteristic warmth and care. 
 
Beyond PyCon US, Inessa has spearheaded the Maintainers and Community Track, the mentorship program, and the Teen Track at the SciPy Conference, and co-founded the Contributor Experience project, reflecting her deep commitment to making the Python community more inclusive and accessible. She brings that same dedication to her roles on the NumPy Steering Committee, the scikit-learn survey team, and the SPEC (Scientific Python Ecosystem Coordination) Steering Committee. As a leader on the pyOpenSci Advisory Council, Inessa has been instrumental in advancing the organization's mission to support open and reproducible science.

Kafui Alordo

Kafui Alordo has spent years building and nurturing the Python community in Ho, in the Volta Region of Ghana. What began for Kafui as volunteer coaching at the first Django Girls Ho workshop grew into co-organizing the second and third editions, and eventually leading the workshop as its primary organizer, while also lending his expertise as a coach and co-organizer at Django Girls events across Ghana. Recognizing that sustainable community growth starts with welcoming total beginners, Kafui introduced a coding bootcamp initiative for his user group that has broadened participation and helped new learners find their footing in Python. 

Kafui’s landmark achievement came with the organization of PyHo, the first-ever regional Python conference in Ho, which drew attendees from diverse backgrounds across the country. His impact has also extended well beyond Ghana, most recently stepping into the role of remote chair on the PyCascades organizing team.

Kalyan Prasad

Kalyan Prasad's journey in the Python community began in 2019 as a volunteer with the Hyderabad Python User Group (HydPy), one of India's largest Python communities, and he has grown steadily into one of its most consequential leaders. His dedication to PyConf Hyderabad has been especially remarkable–contributing across the CFP, program, and sponsorship teams, serving as co-chair in 2022, and stepping up as chair in both 2025 and 2026, representing four consecutive years of conference leadership at the regional and national level. 

At the national scale, Kalyan also served as co-chair for PyCon India 2023. Kalyan's commitment extends well beyond India, as he actively contributes to the broader Python ecosystem as a reviewer, mentor, and program committee member for conferences around the world. His care for community safety is further reflected in two years of service on the NumFOCUS Code of Conduct squad, ensuring that Python spaces remain welcoming and respectful for everyone. Kalyan has also joined the PSF Diversity & Inclusion Working Group this year, contributing to inclusion efforts. 

Maria Jose Molina Contreras

Maria Jose Molina Contreras has been a dedicated and wide-ranging contributor to the Python community, with deep roots in both Spanish-language and PyLadies initiatives. She has been a core organizer of PyLadiesCon since its inaugural edition in 2023, serving as co-chair in 2024 and 2025, and her tireless leadership helped make the most recent edition the most successful in the conference's history, raising over $55,000 in funds to support PyLadies members and chapters around the world. 

Maria’s commitment to Spanish-speaking Pythonistas is equally impressive: she contributes to the Python Docs ES initiative, coordinates events for Python en Español on Discord, and co-founded the PyLadies en Español initiative, including leading the PyLadies presence at PyCon US. At EuroPython, Maria has volunteered since 2023 and taken on growing responsibility, leading community booths, PyLadies events, and community organizer efforts in 2024 and 2025. She has also served as a reviewer for PyCon US Charlas since 2020 and has been a speaker at numerous conferences including PyCon US, EuroPython, and PyConES, sharing her expertise with audiences across the global community. 

Paul Everitt

Paul Everitt's relationship with Python stretches back to the very beginning! Paul was present at the early PyCons and played a foundational role as an incorporating member and director on the PSF's first Board of Directors, helping to establish the organization that supports Python to this day. Decades later, his commitment to the community remains as strong as ever, demonstrated through his long tenure as a Developer Advocate at JetBrains/PyCharm, where he has championed the company's sustained investment in Python open source. 

Paul’s advocacy extends beyond any one project, as he has provided support to smaller but important ecosystem projects like HTMX and remained a regular, encouraging presence at Python conferences and on podcasts. Most recently, Paul proved that his contributions are not merely historical–he co-authored PEP 750, introducing template strings (t-strings) as a significant new feature in Python 3.14, demonstrating a continued willingness to roll up his sleeves and shape the language itself. Whether writing PEPs, giving conference talks, or simply championing the people who make Python great, Paul’s generous and enthusiastic spirit is an invaluable gift to the Python community. 

May 12, 2026 11:55 AM UTC

May 11, 2026


PyCon

Introducing the 8 Companies on Startup Row at PyCon US 2026

Each year at PyCon US, Startup Row highlights a select group of early-stage companies building ambitious products with Python at their core. The 2026 cohort reflects a rapidly evolving landscape, where advances in AI, data infrastructure, and developer tooling are reshaping how software is built, deployed, and secured.

This year’s companies aim to solve an evolving set of problems facing independent developers and large-scale organizations alike: securing AI-driven applications, managing multimodal data, orchestrating autonomous agents, automating complex workflows, and extracting insight from increasingly unstructured information. Across these domains, Python continues to serve as a unifying layer: encouraging experimentation, enabling systems built to scale, and connecting open-source innovation with real-world impact.

Startup Row brings these emerging teams into direct conversation with the Python community at PyCon US. Throughout the conference, attendees can meet founders, explore new tools, and see firsthand how these companies are applying Python to solve meaningful problems. For the startups in attendance, it’s an opportunity to share their work, connect with users and collaborators, and contribute back to the ecosystem that helped shape them. Register now to experience Startup Row and much more at PyCon US 2026.

Supporting Startups at PyCon US

There are many ways to support Startup Row companies, during PyCon US and long after the conference wraps:
  • Stop by Startup Row: Spend a few minutes with each team, ask what they’re building, and see their products in action. 
  • Try their tools: Whether it’s an open-source library or a hosted service, hands-on usage (alongside constructive feedback) is one of the most valuable forms of support. If a startup seems compelling, consider a pilot project and become a design partner.
  • Share feedback: Early-stage teams benefit enormously from thoughtful questions, real-world use cases, and honest perspectives from the community.
  • Contribute to their open source projects: Many Startup Row companies are deeply rooted in open source and welcome bug reports, documentation improvements, and pull requests. Contributions and constructive feedback are always appreciated.
  • Help spread the word: If you find something interesting, tell a friend, post about it, or share it with your team. (And if you're posting to social media, consider using tags like #PyConUS and #StartupRow to share the love.)
  • Explore opportunities to work together: Many of these companies are hiring, looking for design partners, or open to collaborations; don’t hesitate to ask.
  • But, most importantly, be supportive. Building a startup is hard, and every team is learning in real time. Curiosity, patience, and encouragement make a meaningful difference. 
Without further ado, let's...

Meet Startup Row at PyCon US 2026

We’re excited to introduce the companies selected for Startup Row at PyCon US 2026.

Arcjet

Embedding security directly into application code is fast becoming as indispensable as logging, especially as AI services open new attack surfaces. Arcjet offers a developer‑first platform that lets teams add bot detection, rate limiting and data‑privacy checks right where the request is processed.

The service ships open‑source JavaScript and Python SDKs that run a WebAssembly module locally before calling Arcjet’s low‑latency decision API, ensuring full application context informs every security verdict. Both SDKs are released under a permissive open‑source license, letting developers integrate the primitives without vendor lock‑in while scaling usage through Arcjet’s SaaS tiered pricing.

The JavaScript SDK alone has earned ≈1.7k GitHub stars and the combined libraries have attracted over 1,000 developers protecting more than 500 production applications. Arcjet offers a free tier and usage‑based paid plans, mirroring Cloudflare’s model to serve startups and enterprises alike.

Arcjet is rolling out additional security tools and deepening integrations with popular frameworks such as FastAPI and Flask, aiming to broaden adoption across AI‑enabled services. In short, Arcjet aims to be the security‑as‑code layer every modern app ships with.
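To make the rate-limiting primitive concrete, here is a toy token-bucket limiter of the kind such an SDK evaluates in-request. This is not Arcjet's API, just the underlying concept:

```python
import time

# Toy token-bucket rate limiter: each request spends one token, and tokens
# refill at a steady rate, so short bursts pass while sustained floods are
# denied. (Illustrative only; a real SDK adds distributed state and context.)
class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Add tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=1.0)
decisions = [bucket.allow() for _ in range(5)]
print(decisions)  # first 3 allowed, then denied until tokens refill
```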

CapiscIO

As multi‑agent AI systems become the backbone of emerging digital workflows, developers lack a reliable way to verify agent identities and enforce governance. CapiscIO steps into that gap, offering an open‑core trust layer built for the nascent agent economy.

CapiscIO offers cryptographic Trust Badges, policy enforcement, and tamper‑evident chain‑of‑custody wrapped in a Python SDK. Released under Apache 2.0, it ships a CLI, LangChain integration, and an MCP SDK that let agents prove identity without overhauling existing infrastructure.

The capiscio‑core repository on GitHub hosts the open‑source core and SDKs under Apache 2.0, drawing early contributors building agentic pipelines.
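The verification idea behind a "Trust Badge" can be sketched with nothing but the standard library. This is not CapiscIO's SDK, and the shared-secret scheme below is a deliberate simplification of real key management; it only shows how a signed identity claim becomes tamper-evident:

```python
import hashlib
import hmac
import json

# Toy "trust badge": sign an agent's identity claim with a shared secret and
# verify it later. A real system would use asymmetric keys and a registry;
# this stdlib sketch only demonstrates the tamper-evidence property.
SECRET = b"registry-secret"  # hypothetical shared key

def issue_badge(agent_id: str) -> dict:
    claim = json.dumps({"agent_id": agent_id}, sort_keys=True)
    sig = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify_badge(badge: dict) -> bool:
    expected = hmac.new(SECRET, badge["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, badge["signature"])

badge = issue_badge("summarizer-agent")
print(verify_badge(badge))  # a valid badge verifies
badge["claim"] = badge["claim"].replace("summarizer", "rogue")
print(verify_badge(badge))  # tampering with the claim breaks verification
```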

Beon de Nood, Founder & CEO, brings two decades of enterprise development experience and a prior successful startup to the table. “AI governance should be practical, not bureaucratic. Organizations need visibility into what they have, confidence in what they deploy, and control over how agents behave in production,” he says.

CapiscIO is continuously adding new extensions, expanding its LangChain and MCP SDKs, and preparing a managed agent‑identity registry for enterprises. In short, CapiscIO aims to be the passport office of the agent economy, handing each autonomous component an unspoofable ID and clear permissions.

Chonkie

The explosion of retrieval‑augmented generation (RAG) is unlocking AI’s ability to reason over ever‑larger knowledge bases. Yet the first step of splitting massive texts into meaningful pieces still lags behind.

Chonkie offers an open‑core suite centered on Memchunk, a Python library with Cython acceleration that delivers up to 160 GB/s throughput and ten chunking strategies under a permissive license. It also ships Catsu, a unified embeddings client for nine providers, and a lightweight ingestion layer; the commercial Chonkie Labs service combines them into a SaaS that monitors the web and synthesises insights.
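The simplest of those chunking strategies, a sliding window with overlap, fits in a few lines of plain Python. This is a toy illustration, not Memchunk's API, and nowhere near its throughput:

```python
# Toy sliding-window chunker with overlap: consecutive chunks share `overlap`
# characters so that no sentence is split without context on both sides.
def chunk_text(text: str, size: int = 20, overlap: int = 5) -> list[str]:
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Retrieval-augmented generation splits large texts into chunks."
chunks = chunk_text(doc, size=24, overlap=6)
print(chunks)
```

Production chunkers layer smarter strategies (sentence, semantic, token-aware) on top of this basic shape.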

Co‑founder and CEO Shreyash Nigam, who grew up in India and met his business partner in eighth grade, reflects the team’s open‑source ethos, saying “It’s fun to put a project on GitHub and see a community of developers crowd around it.” That enthusiasm underpins Chonkie’s decision to release its core tooling openly while building a commercial deep‑research service.

Backed by Y Combinator’s Summer 2025 batch, Chonkie plans to grow from four to six engineers and launch the next version of Chonkie Labs later this year, adding real‑time web crawling and multi‑modal summarization. In short, Chonkie aims to be the Google of corporate intelligence.

Phemeral

Running production‑grade Python services used to mean wrestling with containers, VMs, or complex CI pipelines. 

Phemeral, launched in April 2026, offers Python developers a managed hosting platform that turns a GitHub repo into an instantly deployable, scale‑to‑zero backend.

Phemeral provides builds for popular frameworks (like Django, Flask, and FastAPI), integrations with popular package managers (e.g. uv, Pip, and Poetry), as well as continuous deployment on every push while charging only for actual request execution under a usage‑based model.

Founder & CEO Chinmaya Joshi says, "Building with Python is easier than ever, but hosting and deployment remain a pain. Phemeral is building the easiest way to deploy Python web apps."

Joshi is focused on expanding framework support and refining the platform so that Python developers (from vibe-coders and solo devs, to agencies and enterprises) can enjoy the same zero‑config experience modern front‑end platforms provide. 

Pixeltable

Multimodal generative AI is turning simple datasets into sprawling collections of video, images, audio and text, forcing engineers to stitch together ad‑hoc pipelines just to keep data flowing. That complexity has created a new bottleneck for teams trying to move from prototype to production.

The open‑source Python library from Pixeltable offers a declarative table API that lets developers store, query and version multimodal assets side by side while embedding custom Python functions. Built with incremental update capabilities, combined lineage and schema tracking, and a development‑to‑production mirror, the platform also provides orchestration capabilities that keep pipelines reproducible without rewriting code.

The project has earned ≈1.6k GitHub stars and a growing contributor base, closed a $5.5 million seed round in December 2024, and is already used by early adopters such as Obvio and Variata to streamline computer‑vision workflows.

Co‑founder and CTO Marcel Kornacker, who previously founded Apache Impala and co-founded Apache Parquet, says “Just as relational databases revolutionized web development, Pixeltable is transforming AI application development.”

The company's roadmap centers on launching Pixeltable Cloud, a serverless managed service that will extend the open core with collaborative editing, auto‑scaling storage and built‑in monitoring. In short, Pixeltable aims to be the relational database of multimodal AI data.

SubImage

The sheer complexity of modern multi‑cloud environments turns security visibility into a labyrinth, and SubImage offers a graph‑first view that cuts through the noise.

It builds an infrastructure graph using the open‑source Cartography library (Apache‑2.0, Python), then highlights exploit chains as attack paths and applies AI models to prioritize findings based on ownership and contextual risk.
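Once infrastructure is a graph, "show me the exploit chain" becomes a shortest-path query. The sketch below is a toy with a hypothetical asset graph, not SubImage's or Cartography's API (real deployments query a Neo4j database rather than a Python dict):

```python
from collections import deque

# Hypothetical infrastructure graph: edges point from an asset to the
# assets reachable (or assumable) from it.
edges = {
    "internet": ["load-balancer"],
    "load-balancer": ["web-server"],
    "web-server": ["app-role"],
    "app-role": ["s3-bucket", "database"],
    "database": [],
    "s3-bucket": [],
}

def attack_path(graph, start, target):
    # Breadth-first search returns the shortest chain of hops, if any.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(attack_path(edges, "internet", "s3-bucket"))
```

Seeing the environment as this kind of graph is exactly the "map of the easiest attack paths" the founders describe.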

Cartography, originally developed at Lyft and now a Cloud Native Computing Foundation sandbox project, has ≈3.7k GitHub stars and is used by over 70 organizations. SubImage’s managed service already protects security teams at Veriff and Neo4j, and the company closed a $4.2 million seed round in November 2025.

Co‑founder Alex Chantavy, an offensive‑security engineer, says “The most important tool was our internal cloud knowledge graph because it showed us a map of the easiest attack paths … One of the most effective ways to defend an environment is to see it the same way an attacker would.”

The startup is focusing on scaling its managed service and deepening AI integration as it targets larger enterprise customers. In short, SubImage aims to be the map of the cloud for defenders.

Tetrix

Private‑market data pipelines still rely on manual downloads and spreadsheet gymnastics, leaving analysts chasing yesterday’s numbers. Tetrix’s AI investment intelligence platform is part of a wave that brings automation to this lagging workflow.

Built primarily in Python, Tetrix automates document collection from fund portals and other sources, extracts structured data from PDFs and other unstructured sources using tool-using language models, then presents exposures, cash flows, and benchmarks through an interactive dashboard that also accepts natural‑language queries.

The company is growing quickly, doubling revenue quarter over quarter and, at least so far, maintains an impressive record of zero customer churn. In the coming year or so, Tetrix plans to triple its headcount from fifteen to forty‑five employees.

TimeCopilot

Time‑series forecasting has long been a tangled mix of scripts, dashboards, and domain expertise, and the recent surge in autonomous agents is finally giving it a unified voice. Enter TimeCopilot, an open‑source framework that brings agentic reasoning to the heart of forecasting.

The platform, built in Python under a permissive open‑source license, lets users request forecasts in plain English. It automatically orchestrates more than thirty models from seven families, including Chronos and TimesFM, while weaving large language model reasoning into each prediction. Its declarative API was born from co‑founder Azul Garza‑Ramírez’s economics background and her earlier work on TimeGPT for Nixtla (featured SR'23), evolving from a weekend experiment started nearly seven years ago.

The TimeCopilot/timecopilot repository has amassed roughly 420 stars on GitHub, with the release of OpenClaw marking a notable spike in community interest.

Upcoming plans include a managed SaaS offering with enterprise‑grade scaling and support, the rollout of a benchmarking suite to measure agentic forecast quality, and targeted use cases such as predicting cloud‑compute expenses for AI workloads.

Thank You's and Acknowledgements

Startup Row is a volunteer-driven program, co-led by Jason D. Rowley and Shea Tate-Di Donna (SR'15; Zana, acquired Startups.com), in collaboration with the PyCon US organizing team. Thanks to everyone who makes PyCon US possible.

We also extend a gracious thank-you to all startup founders who submitted applications to Startup Row at PyCon US this year. Thanks again for taking the time to share what you're building. We hope to help out in whatever way we can.

Good luck to everyone, and see you in Long Beach, CA!

May 11, 2026 05:33 PM UTC


Talk Python to Me

#548: Event Sourcing Design Pattern

What if your database worked more like Git? Every change captured as an immutable event you can replay, instead of a single mutating row that quietly forgets its own history. That's event sourcing, and Chris May is back on Talk Python, fresh off our Datastar panel, to walk us through what it actually looks like in Python. We'll cover the core patterns, the libraries to reach for, when not to use it, and why event sourcing turns out to be a surprisingly good fit for AI-assisted coding.

Episode sponsors

  • Sentry Error Monitoring, Code talkpython26
  • Temporal
  • Talk Python Courses

Links from the show

  • Guest, Chris May: everydaysuperpowers.dev
  • Intro to event sourcing e-book: everydaysuperpowers.gumroad.com
  • Domain-Driven Design: The Power of CQRS and Event Sourcing: ricofritzsche.me
  • DDD (book): www.amazon.com
  • Understanding Eventsourcing (Martin Dilger): www.amazon.com
  • Event Sourcing Explained Using Football (video): www.youtube.com
  • Why I Finally Embraced Event Sourcing and Why You Should Too: everydaysuperpowers.dev
  • valkey: valkey.io
  • diskcache: talkpython.fm
  • eventsourcing package: github.com
  • eventsourcing docs: eventsourcing.readthedocs.io
  • John Bywater: github.com
  • Datastar: data-star.dev
  • Microconf: microconf.com
  • Event Modeling & Event Sourcing Podcast: podcast.eventmodeling.org
  • Python Package Guides for AI Agents: github.com
  • KurrentDb: www.kurrent.io
  • Watch this episode on YouTube: youtube.com
  • Episode #548 deep-dive: talkpython.fm/548
  • Episode transcripts: talkpython.fm

May 11, 2026 04:36 PM UTC


Real Python

How to Flatten a List of Lists in Python

Flattening a list in Python involves converting a nested list structure into a single, one-dimensional list. A common approach to flatten a list of lists is to use a for loop to iterate through each sublist. Then you can add each item to a new list with the .extend() method or the augmented concatenation operator (+=). This will “unlist” the list, resulting in a flattened list.

Python’s standard library offers other tools to achieve similar results. You can also use a list comprehension for a concise one-liner solution. Each method has its own performance characteristics, but for loops and list comprehensions are generally the most efficient.

By the end of this tutorial, you’ll understand that:

  • Flattening a list involves converting nested lists into a single list.
  • You can use a for loop and .extend() or a list comprehension to flatten lists in Python.
  • Standard-library functions like itertools.chain() and functools.reduce() can also flatten lists.
  • A custom flatten() function, either recursive or iterative, handles arbitrarily nested lists.
  • The .flatten() method in NumPy efficiently flattens arrays for data science tasks.
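As a minimal sketch of the two standard-library approaches listed above (the small sample matrix is my own):

```python
from functools import reduce
from itertools import chain
from operator import add

matrix = [[1, 2], [3, 4], [5, 6]]

# chain.from_iterable() lazily yields every item from each sublist.
print(list(chain.from_iterable(matrix)))  # [1, 2, 3, 4, 5, 6]

# reduce() concatenates the sublists pairwise into one list.
print(reduce(add, matrix))  # [1, 2, 3, 4, 5, 6]
```

Note that `chain.from_iterable()` returns an iterator, so you only pay for the `list()` call if you actually need a list.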

To better illustrate what it means to flatten a list, say that you have the following matrix of numeric values:

Language: Python
>>> matrix = [
...     [9, 3, 8, 3],
...     [4, 5, 2, 8],
...     [6, 4, 3, 1],
...     [1, 0, 4, 5],
... ]

The matrix variable holds a Python list that contains four nested lists. Each nested list represents a row in the matrix, and each row stores four numbers. Now say that you want to turn this matrix into the following list:

Language: Python
[9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]

How do you manage to flatten your matrix and get a one-dimensional list like the one above? In this tutorial, you’ll learn how to do that in Python.

Free Bonus: Click here to download the free sample code that showcases and compares several ways to flatten a list of lists in Python.

Take the Quiz: Test your knowledge with our interactive “How to Flatten a List of Lists in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:


How to Flatten a List of Lists With a for Loop

How can you flatten a list of lists in Python? In general, you can run the following steps either explicitly or implicitly:

  1. Create a new empty list to store the flattened data.
  2. Iterate over each nested list or sublist in the original list.
  3. Add every item from the current sublist to the list of flattened data.
  4. Return the resulting list with the flattened data.

You can follow several paths and use multiple tools to run these steps in Python. The most natural and readable way to do this is to use a for loop, which allows you to explicitly iterate over the sublists.

Then you need a way to add items to the new flattened list. For that, you have a couple of valid options. First, you’ll turn to the .extend() method from the list class itself, and then you’ll give the augmented concatenation operator (+=) a go.

To continue with the matrix example, here’s how you would translate these steps into Python code using a for loop and the .extend() method:

Language: Python
>>> def flatten_extend(matrix):
...     flat_list = []
...     for row in matrix:
...         flat_list.extend(row)
...     return flat_list
...

Inside flatten_extend(), you first create a new empty list called flat_list. You’ll use this list to store the flattened data when you extract it from matrix. Then you start a loop to iterate over the inner, or nested, lists from matrix. In this example, you use the name row to represent the current nested list.

In every iteration, you use .extend() to add the content of the current sublist to flat_list. This method takes an iterable as an argument and appends its items to the end of the target list.
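A quick contrast with .append(), using toy values of my own, shows why .extend() is the right call here:

```python
flat = [1, 2]

flat.extend([3, 4])  # adds each item from the iterable individually
print(flat)  # [1, 2, 3, 4]

flat.append([5, 6])  # adds the whole list as one nested item
print(flat)  # [1, 2, 3, 4, [5, 6]]
```

If you used .append() in the loop, you'd just rebuild the original nesting instead of flattening it.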

Now go ahead and run the following code to check that your function does the job:

Language: Python
>>> flatten_extend(matrix)
[9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]

That’s neat! You’ve flattened your first list of lists. As a result, you have a one-dimensional list containing all the numeric values from matrix.

With .extend(), you’ve come up with a Pythonic and readable way to flatten your lists. You can get the same result using the augmented concatenation operator (+=) on your flat_list object. However, this alternative approach may not be as readable:

Language: Python
>>> def flatten_concatenation(matrix):
...     flat_list = []
...     for row in matrix:
...         flat_list += row
...     return flat_list
...
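The list comprehension mentioned in the introduction gives you the same result in a single expression. As a sketch, with the matrix repeated so the snippet is self-contained:

```python
matrix = [
    [9, 3, 8, 3],
    [4, 5, 2, 8],
    [6, 4, 3, 1],
    [1, 0, 4, 5],
]

# The loops read left to right: outer loop over rows, inner loop over items.
flat_list = [item for row in matrix for item in row]
print(flat_list)
# [9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]
```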

Read the full article at https://realpython.com/python-flatten-list/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

May 11, 2026 02:00 PM UTC

Quiz: How to Flatten a List of Lists in Python

In this quiz, you’ll test your understanding of how to flatten a list in Python.

You’ll write code and answer questions to revisit the concept of converting a multidimensional list, such as a matrix, into a one-dimensional list.



May 11, 2026 12:00 PM UTC


Django Weblog

DSF member of the month - Bhuvnesh Sharma

For May 2026, we welcome Bhuvnesh Sharma as our DSF member of the month! ⭐

Bhuvnesh, wearing a dark red pullover, in front of the camera, looking to the left. We can see a landscape with multiple buildings behind him and a cloudy sky.

Bhuvnesh has been a Django contributor since 2022 and was a Google Summer of Code (GSoC) participant for Django in 2023. He is now a mentor and a GSoC admin organizer for the Django Software Foundation. He is the founder of Django Events Foundation India (DEFI) and the DjangoDay India conference. He has been a DSF member since July 2023. He is looking for new opportunities!

You can learn more about Bhuvnesh by visiting Bhuvnesh's website and his GitHub Profile.

Let’s spend some time getting to know Bhuvnesh better!

Can you tell us a little about yourself (hobbies, education, etc)

I’m Bhuvnesh (aka DevilsAutumn), a software developer from India. I graduated in 2024 from GL Bajaj Institute of Technology and Management, and most of my work has been around Python, Django, and building backend systems. My journey with Django started when I began contributing to Django core in 2022. I usually like working on things where there is an actual product involved, not just writing a few APIs and closing the task. I like thinking about how the whole thing will work: models, permissions, background jobs, deployment, users, edge cases, and all of that.

Apart from work, I like reading books around startups and entrepreneurship, watching movies, and honestly I overthink a lot about building products. Sometimes too much, but yeah that’s also how many ideas start for me. I’ve also been involved with the Django community through Django India, GSoC, Djangonaut Space and DjangoDay India, which has been a big part of my journey.

I'm curious, where does your nickname "DevilsAutumn" come from?

Haha, nice question. So, one of my friends used to write sci-fi novels. In 2022, I decided that I’d have one unique coding name for myself, and figuring that a friend who writes novels must have a great imagination, I went to him to ask for name ideas. One of the names he suggested was DevilsAutumn, and I’ve used it as my nickname ever since.

How did you start using Django?

When I was in my exploring phase, I was really curious and trying out different languages, frameworks, etc., and I read a blog post from the Instagram engineering team about Django being used at Instagram. A framework that is the backbone of a product used by billions of users will get anyone curious. From there I started exploring Django and I fell in love with it. The framework, the community, the documentation - all of it was amazing.

What other framework do you know and if there is anything you would like to have in Django if you had magical powers?

I have also worked with FastAPI and I find that really cool as well. But the calmness Django has is unbeatable.

If I had magical powers, I’d be living on the moon. Just kidding. 😆

There are a couple of things that I would love in Django:

First is "modernising" the website which is already underway. The website feels very boring and outdated. I’d love to see a modern version.

Second, I would love to see Django have built-in support for creating REST APIs. DRF is amazing and it has done a lot for the Django ecosystem, but because it is still an external library, there are some rough edges. Sometimes serialization can feel a bit slow or heavy, the learning curve is different from regular Django, and you also depend on a separate package for something which has become a core need in modern web apps.

What projects are you working on now?

I am currently working on a project called Trevo, which helps people find activities happening around them that anyone can join to socialize with others in real life.

Apart from that, I am also working on an open source Python library, a migration safety toolkit for Django called django-migrations-inspector. It helps you find problems in your migration files before they go into production.

Which Django libraries are your favorite (core or 3rd party)?

Although there is a long list, I’d probably say Django REST Framework (DRF), django-import-export, and django-debug-toolbar.

DRF is the obvious one because I’ve used it a lot for building APIs with Django. Even with some rough edges, it has been very important for the ecosystem 😛

I also really like django-import-export, mostly because in real projects you always end up needing some Excel/CSV import export kind of thing, and this just saves time.

And django-debug-toolbar because it has made debugging queries and performance issues much easier for me personally.

What are the top three things in Django that you like?

I think the first thing has to be the community. People in the Django community are genuinely nice and helpful, and the docs are also really good. A lot of times, when you are stuck, either the documentation has already explained it properly or someone has discussed the same thing before.

Second, I really like the ecosystem around Django. For most of the common things you need while building a product, there is usually already a good package available. And Django itself also gives you so much out of the box, so you don’t have to build every basic thing from scratch.

And third is Django admin. Honestly, I really like it. Some people may not think of it as a very exciting feature, but when you are building real products, having a working admin panel so quickly is super useful. It saves a lot of time.

You are one of the admin organizers of the GSoC program for the Django organization, thank you for helping. How is it going for you? Do you need help?

It has been going well so far, thank you for asking. I’m really happy to help with organizing GSoC for Django. It’s always nice to see contributors getting involved and working on meaningful projects, I even posted about it on LinkedIn.

Everything is good for now, but I’ll reach out in case I need any help. In fact, we are also working on creating a GSoC working group to make things smoother in the future. I’m sure that is also going to help us.

You have been part of Djangonaut Space program as a Navigator (Mentor) in the first session. How did you find the experience? What is your reflection on the program after all this time?

It was a great experience! I love to help people who are new to open-source and guide them just like I was guided by a mentor in my college days. I believe anyone can do great things in life if they are given proper mentorship. That's my motivation behind getting involved in Djangonaut Space.

The Djangonaut Space program has created a strong community of developers from all backgrounds who love Django. A lot of people want to contribute to open source, but they don’t always know where to start, or they feel the project is too big for them. Djangonaut Space helped reduce that fear by giving people guidance, structure, and a friendly space to ask questions.

Even after all this time, I still feel it is one of the best community-led efforts around Django. It doesn’t just help people contribute code, it helps them feel that they belong in the community.

Do you have any advice for folks who would like to consider mentoring through GSoC or Djangonaut Space?

I just want to say that people who are experienced, who have been contributing to Django, or who are maintaining any 3rd party package should consider mentoring through the GSoC or Djangonaut Space programs. It is one of the most impactful ways to contribute to open source, in my opinion, because you are not just guiding a few people, you might be guiding the next generation of mentors, Django maintainers, org admins, community leaders, or Djangonaut Space organizers.

And mentorship plays the most important role in maintaining the ecosystem that Django has built over the years.

You were previously a GSoC participant for the Django organization, and you are now an admin of the organization. That's great! How did you get to this point? Did you ever imagine you would end up here?

Haha honestly, no. I don’t think I ever imagined it would turn out this way. When I first got into GSoC with Django, I was just really happy to be there and contribute. At that time, I was mostly focused on learning, understanding the project better, and trying not to mess things up 😅

But after that I kind of stayed around. I kept contributing, stayed connected with the community, mentored in Djangonaut Space, then mentored in GSoC 2024, and slowly started getting more involved in the community and organizing side of things too.

So it was never like I had this clear plan that one day I’ll become an org admin. It just happened very naturally over time, mostly because I kept showing up and people trusted me with more responsibility.

Now being on this side feels a little unreal, but also very special. I know how it feels to be a contributor, how confusing and exciting it can be, so I really care about making the experience good for others too.

In a way, it feels like a full-circle moment, but also like there’s still a lot more to learn and do.

You are the founder of DjangoDay India and Django Events Foundation India, could you tell us a bit more about the event and what made you create this structure?

DjangoDay India started from a very simple thought, like we should have a proper Django-focused event in India. There are a lot of people here using Django — developers, students, companies — but we didn’t really have one place where everyone can come together. It was really difficult to organize DjangoDay India in 2025 because it was the first Django event happening at that scale in India but we still made it thanks to the amazing team.

Django Events Foundation India (DEFI) was created to give this some structure. I didn’t want DjangoDay India to become just a one-time thing or something which only depends on me. Apart from that, I also want to support more local Django events happening around India through DEFI. The idea is to make it sustainable, community-first, and slowly involve more people. For me, it is mainly about growing the Django ecosystem in India and giving people a space to speak, volunteer, sponsor, contribute, and maybe later lead also.

Do you remember your first contribution to Django and in open source?

Yes, so I was going through someone else’s PR which had been merged, and I found a small typo in a comment. I created a new PR to fix it. That was my first contribution to Django.

As for my first open source contribution ever, it was adding some phone number validation checks to the validatorjs library.

Is there anything else you’d like to say?

Nothing much, just thank you for having me here. If someone is thinking of contributing to Django but feels scared, please don’t worry. Most of us also started by staring at the codebase and pretending we understood what was happening. Just start small, ask questions, and slowly it starts making sense.


Thank you for doing the interview, Bhuvnesh !

May 11, 2026 11:00 AM UTC


Python Bytes

#479 Talking About Types

<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://tildeweb.nl/~michiel/httpxyz-one-month-in.html?featured_on=pythonbytes">httpxyz one month in</a></strong></li> <li><strong><a href="https://blog.geekuni.com/2026/04/python-concurrency.html?featured_on=pythonbytes">Learn concurrency - a deep dive into multithreading with Python</a></strong></li> <li><strong><a href="https://ichard26.github.io/blog/2026/04/whats-new-in-pip-26.1/?featured_on=pythonbytes">pip 26.1 - lockfiles and dependency cooldowns</a></strong></li> <li><strong><a href="https://peps.python.org/pep-0661/?featured_on=pythonbytes">Python 3.15 <code>sentinel</code> values from PEP 661</a></strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=3E3KPBAYkWo' style='font-weight: bold;' data-umami-event="Livestream-Past" data-umami-event-episode="479">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! 
Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a></li> </ul> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> (bsky)</li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually <strong>Monday</strong> at 11am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? 
Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</p> <p><strong>Michael #1: <a href="https://tildeweb.nl/~michiel/httpxyz-one-month-in.html?featured_on=pythonbytes">httpxyz one month in</a></strong></p> <ul> <li>First version of httpxyz contained just the fixes to get zstd working, and the fixes to get the test suite running on python 3.14, some ‘housekeeping’ changes related to the renaming</li> <li>End of March: a compatibility shim that allows you to use httpxyz even with third-party packages that import httpx themselves, as long as you import httpxyz first. <ul> <li>Importing <code>httpxyz</code> automatically registers it under the <code>httpx</code> name in <code>sys.modules</code> , see https://httpxyz.org/httpx-compatibility/</li> </ul></li> <li>Fixed a WHOLE bunch of performance related issues by forking httpcore</li> </ul> <p><strong>Brian #2: <a href="https://blog.geekuni.com/2026/04/python-concurrency.html?featured_on=pythonbytes">Learn concurrency - a deep dive into multithreading with Python</a></strong></p> <ul> <li>Nikos Vaggalis</li> <li>“Whenever you are trying to speed up code using multiple cores, always ask yourself: “Do these threads need to talk to each other right now?” If the answer is yes, it will be slow. 
The best parallel code splits a big job into completely isolated chunks, processes them separately, and merges the results at the finish line.”</li> <li>Good overview of thread concurrency with Python and how that’s been improved dramatically with free-threaded Python</li> <li>Defines lots of terms you come across, including “embarrassingly parallel multithreading”</li> <li>There’s a counter example that’s nice <ul> <li>Start with a shared resource, a counter, and multiple threads updating it</li> <li>Attempt to fix with <code>threading.Lock()</code>, which fixes it, but slows things down</li> <li>Good explanation of why</li> <li>Proper fix with <code>concurrent.futures</code> and separating the work of different threads so that they can be independent and their results can be combined when they’re all finished.</li> </ul></li> </ul> <p><strong>Michael #3: <a href="https://ichard26.github.io/blog/2026/04/whats-new-in-pip-26.1/?featured_on=pythonbytes">pip 26.1 - lockfiles and dependency cooldowns</a></strong></p> <ul> <li>Python 3.9 is no longer supported</li> <li>Experimental: installing from pylock files</li> <li>Dependency cooldowns (see <a href="https://mkennedy.codes/posts/python-supply-chain-security-made-easy/?featured_on=pythonbytes">my post about this</a>)</li> <li>Lifting several 2020 resolver limitations</li> </ul> <p><strong>Brian #4: <a href="https://peps.python.org/pep-0661/?featured_on=pythonbytes">Python 3.15 <code>sentinel</code> values from PEP 661</a></strong></p> <div class="codehilite"> <pre><span></span><code><span class="n">MISSING</span> <span class="o">=</span> <span class="n">sentinel</span><span class="p">(</span><span class="s2">&quot;MISSING&quot;</span><span class="p">)</span> <span class="k">def</span><span class="w"> </span><span class="nf">next_value</span><span class="p">(</span><span class="n">default</span><span class="p">:</span> <span class="nb">int</span> <span class="o">|</span> <span class="n">MISSING</span> <span 
class="o">=</span> <span class="n">MISSING</span><span class="p">):</span> <span class="o">...</span> <span class="k">if</span> <span class="n">default</span> <span class="ow">is</span> <span class="n">MISSING</span><span class="p">:</span> <span class="o">...</span> </code></pre> </div> <ul> <li>Take a name str as a constructor parameter</li> <li>Intended to be compared with <code>is</code> operator, similar to <code>None</code></li> <li>Sentinel objects can be used as a type, also similar to <code>None</code> <ul> <li>and can be combined with other types with <code>|</code>.</li> </ul></li> <li>Unlike <code>None</code>, sentinel values are truthy. (Ellipses <code>...</code> are also truthy) <ul> <li>This seems like a strange choice, but I guess it must have made sense to someone.</li> <li>It does force you to use <code>is</code> instead of depending on False-ness, so I guess it’ll make code using sentinels more readable.</li> </ul></li> <li>Interesting that the PEP was started in 2021, and we’re finally getting it this year.</li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li><a href="https://lucumr.pocoo.org/2026/4/28/before-github/?featured_on=pythonbytes">Before GitHub</a> - Armin Ronacher</li> <li><a href="https://tenacityaudio.org?featured_on=pythonbytes">tenacity</a> - cross-platform multi-track audio editor/recorder <ul> <li>learned about it from Armin’s article</li> </ul></li> </ul> <p><strong>Joke:</strong></p> <ul> <li>Joke option <a href="https://xkcd.com/3233/?featured_on=pythonbytes">Make it myself</a> <ul> <li>Seems similar to what people think about software now</li> </ul></li> </ul> <p>Links</p> <ul> <li><a href="https://tildeweb.nl/~michiel/httpxyz-one-month-in.html?featured_on=pythonbytes">httpxyz one month in</a></li> <li><a href="https://httpxyz.org/httpx-compatibility/?featured_on=pythonbytes">httpxyz.org/httpx-compatibility</a></li> <li><a 
href="https://blog.geekuni.com/2026/04/python-concurrency.html?featured_on=pythonbytes">Learn concurrency - a deep dive into multithreading with Python</a></li> <li><a href="https://ichard26.github.io/blog/2026/04/whats-new-in-pip-26.1/?featured_on=pythonbytes">pip 26.1 - lockfiles and dependency cooldowns</a></li> <li><a href="https://mkennedy.codes/posts/python-supply-chain-security-made-easy/?featured_on=pythonbytes">my post about this</a></li> <li><a href="https://peps.python.org/pep-0661/?featured_on=pythonbytes">Python 3.15 <code>sentinel</code> values from PEP 661</a></li> <li><a href="https://lucumr.pocoo.org/2026/4/28/before-github/?featured_on=pythonbytes">Before GitHub</a></li> <li><a href="https://tenacityaudio.org?featured_on=pythonbytes">tenacity</a></li> <li><a href="https://xkcd.com/3233/?featured_on=pythonbytes">Make it myself</a></li> </ul>

May 11, 2026 08:00 AM UTC


Python Software Foundation

Strategic Planning at the PSF

The Python Software Foundation (PSF) is excited to share that the PSF Board has been developing a strategic plan to guide the foundation's direction over the next five years. We are sharing the high-level goals today to collect feedback and commentary from the Python community. A full draft with detailed objectives will be published in early June for public feedback, and the board hopes to adopt the plan in July 2026, to be reviewed annually going forward.

Why now

The Python ecosystem is growing and changing fast. PyPI hosts over 800,000 projects and serves tens of billions of downloads per month. The Developers-in-Residence program has grown from a single role to a team spanning CPython development, security, and PyPI safety, proving that targeted investment in core infrastructure works. Last year's fundraiser showed that the community and sponsors are willing to support the PSF's mission when provided the opportunity.

The foundation also faces challenges. As we shared in November, the PSF's assets and yearly revenue have declined and costs have increased, while the demand for the foundation's work grows faster than its capacity. Last year we had to pause the Grants Program after reaching the budget cap earlier than expected. These pressures are part of why the board committed to a strategic plan: the foundation needs a clear framework for making hard choices about where to focus.

The PSF Board has discussed strategic planning over the years, including at the 2024 board retreat. This year, we committed to turning that discussion into a concrete plan. The process included numerous interviews with PSF Staff, community members, and participants across the Python ecosystem. After interviews, the PSF Board went through a prioritization exercise, followed by a series of dedicated and structured board discussions.

The direction

The plan has two parts: 

I. Organizational Goals: How the PSF operates across all its activities, and
II. Program Goals: Where the PSF directs its work and resources. 

We invite your feedback on all of the goals in both parts of the plan (See the “How to participate” section below). 

I. Organizational Goals: How we operate

  1. Financial Sustainability: Diversify the PSF's revenue so the foundation is not dependent on any single source.
  2. Building a Resilient Foundation: Strengthen governance, financial oversight, and knowledge management so the organization can survive transitions and operate transparently.
  3. Diversity and Inclusion: D&I is not treated as a standalone effort. D&I is a lens for all PSF decisions and activities.
  4. Transparency and Community Trust: Increase visibility into how the PSF makes decisions and uses its resources, as the community's trust in its governance is the foundation of the PSF's credibility.
  5. Community Empowerment and Self-Sufficiency: Support Python communities in building their own capacity through collaboration and shared resources.
  6. Strong Partnerships and Collaboration: Partner with organizations that distribute, extend, and depend on Python, as well as with community groups across the open source ecosystem.

II. Program Goals: Where we focus our work

How the plan works

We developed this strategic plan to cover a five-year period. The board will review progress annually with community input, reassess whether priorities need to shift, and publish the results so the community can see how we are tracking. The intention is for the strategic plan to be flexible and adaptive, so that it can effectively guide the PSF’s priorities as the ecosystem continues to grow and evolve, rather than be a static document that collects dust on a shelf.

We developed the plan to set direction–not implementation details. How to carry it out is the job of PSF Staff, and the specifics will evolve as we learn what works. Once adopted, the plan will directly inform how the PSF allocates its budget and staff time and how it seeks funding.

How to participate

If any of these goals matter to you, or if you think we are missing something important, we want to hear from you.

We welcome you to email strategy@python.org to share your thoughts. This is the best way to reach us asynchronously.

You can also join the conversation with us at:

A full draft with detailed objectives under each Program Goal will be published in early June for community feedback via this blog, Python Discuss under the PSF category, and social media. The feedback window for this year will close before the July 8th PSF Board meeting.

This plan will shape what the PSF does and how it spends its resources for the next five years. If you use Python, contribute to it, or participate in communities around it, you have a stake in shaping its future.

Jannis Leidel, PSF Board Chair, on behalf of the PSF Board of Directors

May 11, 2026 06:47 AM UTC


Python Koans

Koan 20: The Unreliable Messenger

How to Clean Up

When you work with external resources such as a database or temporary files, you often need to run some cleanup actions after you've done the work.

Python provides two options - the context manager and the try/finally block. Both are valid, but the context manager is often lauded as being more Pythonic.


Despite this, the try/finally block is still widely used. As we will discover, try/finally is simple to use and well-suited to some cases, but it does come with pitfalls, as the messenger tragically discovered.

Let us try and contact the messenger.

Part 1: The Assured Action

A variant of the try/finally pattern exists in most languages1 and functions pretty much in the way you might imagine. Consider this trivial example:

def walk_path():
    try:
        print("Taking a step")
    finally:
        print("Leaving a footprint")

walk_path()

Taking a step
Leaving a footprint

The interpreter enters the try block and executes the statement, then proceeds to the finally block. The finally block always executes, whether the try block succeeds or fails.

If an exception is raised during execution, as shown below, it disrupts the normal flow: the interpreter stops executing the try block immediately and jumps directly to the finally block.

def walk_path():
    try:
        raise Exception("A fallen tree")
    finally:
        print("Leaving a footprint")

# Calling walk_path() prints "Leaving a footprint" first,
# then the Exception propagates to the caller.

If you choose to handle the failure using an except block, as shown below, the error is caught and handled by the except block before the interpreter proceeds to the finally block. An error could also occur inside the except or else block; the interpreter would still execute the finally block before raising the new error.

def walk_path():
    try:
        raise Exception("A fallen tree")
    except Exception:
        print("Climbing over the trunk")
    else:
        print("Kicking the trunk away using superhuman strength")
    finally:
        print("Leaving a footprint")
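To illustrate that last case, here is a small sketch (continuing the article's walking metaphor; the second exception message is hypothetical) in which an error raised inside the except block still lets the finally block run before the new error propagates:

```python
def walk_path():
    try:
        raise Exception("A fallen tree")
    except Exception:
        # A new error raised while handling the first one...
        raise Exception("A second fallen tree")
    finally:
        # ...still allows the finally block to run first.
        print("Leaving a footprint")

try:
    walk_path()
except Exception as exc:
    print(exc)
# Leaving a footprint
# A second fallen tree
```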

Part 2: The Trapped Messenger

This brings us to the behavior of returning values. A function can return a value from within the try block. When the interpreter encounters this return statement, it prepares to send the value back to the caller. But first, it must honor the finally block: it pauses the return, executes the finally block, and only then returns the prepared value from the try block.

def walk_path():
    try:
        return "Reaching the destination"
    finally:
        print("Leaving a footprint")

print(walk_path())
# Leaving a footprint
# Reaching the destination

The finally block can also contain its own return statement, as shown below. When this happens, the return in the finally block wins and the return value from the try is effectively ignored.

def walk_path():
    try:
        return "Reaching the destination"
    finally:
        return "Returning home"

print(walk_path())
# Returning home

This overriding behavior applies to exceptions as well. We can place a loop around our structure to observe break and continue statements in action.

def scout_path():
    for step in range(3):
        try:
            raise Exception("A hidden trap")
        finally:
            break
    return "The scout survives"

print(scout_path())
# The scout survives

A break statement inside a finally block will swallow any unhandled exception from the try block. A continue statement will do the exact same thing. The exception disappears completely.
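A return statement in a finally block has the same exception-swallowing effect. A sketch (function name and messages hypothetical):

```python
def deliver_message():
    try:
        raise Exception("An ambush on the road")
    finally:
        # The return silently discards the in-flight exception.
        return "The message arrives safely"

print(deliver_message())
# The message arrives safely
```

No traceback is ever shown; the caller has no idea the ambush happened.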

Part 3: The Trapped Voice

Let's examine a more complex example with nested try/finally statements. Do return statements break out of parent try/finally blocks?

def send_message():
    try:
        print("Everything is fine")
        return 0
    finally:
        try:
            try:
                print("Everything is still fine")
            finally:
                for x in range(2):
                    print(f"Scouting area {x}")
                return 1
        finally:
            for x in range(2):
                print(f"Covering tracks in area {x}")
            return 2

print(f"Return value: {send_message()}")

Everything is fine
Everything is still fine
Scouting area 0
Scouting area 1
Covering tracks in area 0
Covering tracks in area 1
Return value: 2

No, try/finally statements can be nested, and return statements from child blocks do not prevent parent finally blocks from running. However, the value from the last return statement trumps the rest and is still the one that is returned by the function.

Part 4: The Plot Thickens

As you can imagine, any language that allows you to write dead code producing unintended outcomes is problematic. The Python language developers recognized this danger and proposed, in PEP 601[2], that return/break/continue statements should be disallowed in finally blocks.

However, it was voted down for the following reason:

Reading the references in the PEP it seems to me that most languages implement this kind of construct but have style guides and/or linters that reject it. I would support a proposal to add this to PEP 8 (if it isn’t already there).

I note that the toy examples are somewhat misleading – the functionality that may be useful is a conditional return (or break etc.) inside a finally block.

- Guido van Rossum, 2019, commenting on PEP 601

Guido’s reasoning was that there may be valid scenarios where the user requires full control of exception handling in the finally block, and may wish to override the raising of exceptions. Preventing this behavior would effectively hamstring advanced users.

However, in 2024 the community tried again with PEP 765[3]. This time they were armed with evidence: they analyzed the top 8,000 PyPI packages and found that:

Most of the usages (of return in finally) are incorrect, and introduce unintended exception-swallowing bugs. - PEP 765

This was enough to get the proposal over the line, and from Python 3.14 onwards, using return, break, or continue in a finally clause emits a SyntaxWarning.
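As a sketch, you can observe the new warning by compiling such code through the warnings machinery; on versions before 3.14 nothing is recorded (the source string here is a minimal assumed example):

```python
import warnings

source = """
def f():
    try:
        pass
    finally:
        return 1
"""

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compile(source, "<koan>", "exec")

# On Python 3.14+ this records a SyntaxWarning; earlier versions record nothing.
print([w.category.__name__ for w in caught])
```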

Part 5: Why use try/finally at all?

A context manager is the Pythonic choice most of the time when you're working with resources that already expose acquire/release semantics that slot easily into __enter__ and __exit__. It's the natural choice for files, locks, database connections, temporary state changes, etc. It's declarative and minimizes the surface area for errors.

However, they don’t always make sense:

Closing the Circle

The novice believed the original message was secure. The master understood the final seal controls the truth.

The finally block always speaks last. You must ensure its final words do not obscure the truth of what came before.

Thanks for reading Python Koans! Subscribe for free to receive new posts and support my work.

If you enjoyed this post, feel free to share it with your friends!


[1] https://en.wikipedia.org/wiki/Exception_handling_syntax
[2] https://peps.python.org/pep-0601/
[3] https://peps.python.org/pep-0765/

May 11, 2026 12:00 AM UTC

May 10, 2026


Python Insider

Python 3.14.5 is out!

A special release with a new (old) garbage collector.

May 10, 2026 12:00 AM UTC

May 08, 2026


Real Python

The Real Python Podcast – Episode #294: Declarative Charts in Python & Discerning Iterators vs Iterables

What if you could build charts in Python by describing what your data means, instead of scripting every visual detail? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

May 08, 2026 12:00 PM UTC

Quiz: Memory Management in Python

In this quiz, you’ll test your understanding of Memory Management in Python.

By working through this quiz, you’ll revisit how Python handles memory allocation and freeing, the role of the Global Interpreter Lock, and how CPython organizes memory using arenas, pools, and blocks. Give it a shot!



May 08, 2026 12:00 PM UTC


Seth Michael Larson

Using Epilogue Retrace app with iPhone 13 Pro and Ubuntu

When Epilogue announced the Retrace app for iOS and Android I was over the moon excited. In theory this meant I could archive ROMs from the GB Operator directly to my iPhone, where I play the games with the Delta emulator, so I wouldn't need to ferry ROMs from the GB Operator to my laptop to my phone. Unfortunately, I ran into two hurdles with my plan. If you were able to get Retrace to work with a pre-USB-C iPhone, let me know.

Upgrading the GB Operator firmware

First I saw that the Retrace app required a new firmware version for the GB Operator (v10.0.10), so I set out to update the GB Operator firmware. The documentation says to use Playback, so I went to update Playback. Previously Playback was distributed as an AppImage but newer versions use Flatpak. So... I had to figure out how to install a Flatpak on Ubuntu.

I did that, had the new Playback app on Ubuntu, and... the firmware update notification never appeared in the app. I contacted support and learned that the Linux versions of Playback apparently don't support updating the firmware... So I needed a Windows computer. My wife's laptop runs Windows, so I was able to update the firmware using her computer instead of my Ubuntu laptop.

Trying Retrace with an iPhone 13 Pro

GB Operator uses USB-C for power delivery and data transfer and comes with a high-quality USB-C cord. This is perfect for my laptop which only has USB-C ports.

Unfortunately, I would be using Retrace on an iPhone 13 Pro. The iPhone 13 Pro came before Apple was legally required to use USB-C on their phones in Europe, so the phone has a lightning port. I purchased a Lightning to USB-C adapter cord from the Apple Store.

But... that doesn't work with the GB Operator: it doesn't deliver power to the device. I was able to try with my wife's iPhone 15 Pro (which has USB-C), and power delivery worked like normal; the GB Operator turned on as usual. That's unfortunate.

In summary: if you want to use Epilogue Retrace you need a phone that supports USB-C and upgrading the GB Operator firmware requires either macOS or Windows... I guess I'll be using Playback on Ubuntu for the next five years now that I've just replaced my iPhone 13 Pro battery 😢



Thanks for keeping RSS alive! ♥

May 08, 2026 12:00 AM UTC


Armin Ronacher

Pushing Local Models With Focus And Polish

I really, really want local models to work.

I want them to work in the very practical sense that I can open my coding agent, pick a local model, and get something that feels competitive enough that I do not immediately switch back to a hosted API after five minutes. There are a lot of reasons why I want this, but the biggest quite frankly is that we’re so early with this stuff, and the thought of locking all the experimentation away from the average developer really upsets me.

Frustratingly, right now that is still much harder than it should be, but for reasons that have little to do with the complexity of the task or the quality of the models.

We have an enormous amount of activity around local inference, which is great. We have good projects, fast kernels, and people are doing great quantization work. A lot of very smart people are making all of this better, and yet the experience for someone trying to make this work with a coding agent is worse than it has any right to be.

Putting an API key into Pi and using a hosted model is a very boring operation. You select the provider, paste the key and then you are done thinking about how to get tokens. Doing the same thing locally, even when you have a high-end Mac with a lot of memory, is a completely different experience. You choose an inference engine, then a model, then a quantization, then a template, then a context size, then you’ve got to throw a bunch of JSON configs into different parts of the stack and then you discover that one of those choices quietly made the model worse or that something just does not work at all.

That is the gap I am interested in.

Runnable Is Not Finished

A lot of local model work optimizes for making models runnable. That is necessary, but it is not the same thing as making them feel finished. I give you a very basic example here to illustrate this gap: tool parameter streaming.

For whatever reason, most of the stuff you run locally does not support tool parameter streaming. I cannot quite explain it, but the consequences of that are actually surprisingly significant. If you are not familiar with how these APIs work, the simplest way to think about them is that they are emitting tokens as they become available. For text that is trivial, but for tool calls that is often not done, despite the completions API supporting this. As a result you only see what edits are being done on a file once the model has finished streaming the entire tool call.

This is bad for a lot of reasons.

Having a model spit out tokens doesn’t take long, but making the experience great end to end does take a lot more energy.

Fragmentation

The local stack is fragmented across many engines and layers. There is llama.cpp, Ollama, LM Studio, MLX, Transformers, vLLM, and many other pieces depending on hardware and taste. All of these are amazing projects! The problem is not that they exist or that there are that many of them (even though, quite frankly, I’m getting big old Python packaging vibes), the problem is that for a given model, the actual behavior you get depends on a long chain of small decisions that most users just don’t have the energy for.

Did the chat template render exactly right? Are the reasoning tokens handled in the intended way? Is the tool-call format translated correctly? Is the context window real? Are the KV caches actually working for a coding agent? Did I pick the right quantized model from Hugging Face? Are you accidentally leaving a lot of performance on the table because the model is just mismatched for your hardware? Does streaming usage work across all channels? Does the model need its previous reasoning content preserved in assistant messages? Is the coding agent set up correctly for it?

You also need to install many different things in addition to just your coding agent.

All of these things matter. They matter a lot.

The result is that people try a local model and get a result that is neither a fair evaluation of the model nor a polished product experience. This leads both to people dismissing local models and to energy being distributed across way too many separate efforts instead of one effort going great end to end.

This is a terrible way to build confidence.

Too Little Critical Mass

In line with our general “slow the fuck down” mantra, I want to reiterate once more how fast this industry is moving.

Every week there is a new model and a new vibeslopped thing. The attention immediately moves to making the next thing run instead of making one thing run really, really well in one harness. I get the excitement and dopamine hit, but it also means that too little critical mass accumulates behind any one model, hardware, inference engine, harness combo to find out how good it can really become when the entire stack is built around it.

Hosted model providers do not ship a bag of weights and ask you to figure out the rest, and we need to approach that line of thinking for local models too. I want someone to pick one model and pair it with one serving path, directly within a coding agent. Initially just for one hardware configuration, then for more. Pick a winner, hard. If a tool call breaks, that is a product bug, and it gets fixed no matter where in the stack it failed. If the model's reasoning stream is malformed, that is a product bug. If latency is much worse than it should be, that is a product bug. We need to start applying that mentality to local models.

And not for every model! That is the point. Let’s pick one winner and polish the hell out of it. Learn what it takes to make that one configuration good, then take those learnings to the next config.

The DS4 Bet

This is why I am excited about ds4.c. It’s Salvatore Sanfilippo’s deliberately narrow inference engine for DeepSeek V4 Flash on Macs with 128GB+ of RAM only. It is not a generic GGUF runner and it is not trying to be a framework. It is a model-specific native engine with a Metal path, model-specific loading, prompt rendering, KV handling, server API glue, and tests.

DeepSeek V4 Flash is a good candidate for this kind of experiment because it has a combination of properties that are unusual for local use. It is large enough to feel meaningfully different from many smaller dense models, but sparse enough that the active parameter count makes it plausible to run. It has a very large context window. Since ds4.c targets Macs and Metal only, it can move KV caches into SSDs which greatly helps the kind of workloads we expect from coding agents.

To run ds4.c you don’t need MLX, Ollama or anything else. It’s the whole package.

Embedding It In Pi

That is what made me build pi-ds4, a Pi extension that embeds the whole thing directly into Pi itself: taking what ds4 is and dogfooding the hell out of it with a coding agent and zero configuration. It tries to answer the question: how good can the local model experience become if Pi treats this as a first-class provider rather than as a pile of manual configuration?

The extension registers ds4/deepseek-v4-flash, compiles and starts ds4-server on demand, downloads and builds the runtime if needed, chooses the quantization based on the machine, keeps a lease while Pi is using it, exposes logs, and shuts the server down again through a watchdog when no clients are left. It doesn’t even give you knobs right now, because I want to figure out how to set the knobs automatically.

This is not about hiding the fact that local inference is complicated. It is about putting the complexity in one place where it can be improved, because there is a lot that we need to improve along the stack to make it work better.

I think we can do better with caching and there is probably some performance that can be gained if we all put our heads together.

Focusing and Learning

The experiment I want to run is not “can a local model run?” because we already know that it can. I want to know if, for people with beefed-out Macs for a start, we can get as close as possible to the ergonomics of a hosted provider with decent tool-calling performance: how to get caches to work well, how to improve the way we expose tools in harnesses for these models, and then scale it gradually to more hardware configs and later models.

I also want everybody to have access to this. Engineers need hammers and a hammer that’s locked behind a subscription in a data center in another country does not qualify. I know that the price tag on a Mac that can run this is itself astronomical, but I think it’s more likely that this will go down. Even worse, Apple right now due to the RAM shortage does not even sell the Mac Studio with that much RAM. So yes, it’s a selected group of people where ds4.c will start out.

But despite all of that, what matters is that a critical mass of people starts to focus their efforts on one thing: tinker with it, improve it, not locked away but out in the open, and most importantly not limited by what the hyperscalers make available.

But if you have the right hardware and you care about local agents, I would love for you to try it within pi:

pi install https://github.com/mitsuhiko/pi-ds4

My hope is that this becomes a useful forcing function to really polish one coding agent experience. But really, the focal point should be ds4.c itself.

May 08, 2026 12:00 AM UTC

May 07, 2026


Real Python

Quiz: Qt Designer and Python: Build Your GUI Applications Faster

In this quiz, you’ll test your knowledge of Qt Designer and Python: Build Your GUI Applications Faster.

By working through this quiz, you’ll revisit how Qt Designer turns visual designs into .ui files, how layout managers control widget geometry, how signals and slots connect user actions to your code, and how to load .ui files into a PyQt application with pyuic5 or uic.loadUi().



May 07, 2026 12:00 PM UTC


PyCharm

Python Unplugged on PyTV: Key Takeaways From Our Community Conference


AI is changing how Python developers learn, build, and contribute to open source. At the same time, long-standing questions around community, sustainability, data workflows, and web development are becoming even more important.

Python Unplugged on PyTV, a free online conference hosted by JetBrains PyCharm, brought these conversations together in over seven hours of talks and discussions with developers, maintainers, educators, and tool builders from across the Python ecosystem.

Don’t have time to watch the full event? This blog post gives you a quick overview of what’s happening in Python today, based on talks from 13 Python experts – from AI-assisted development and open-source sustainability to modern data processing, Django, and community building.

Watch the recap video

Want to see the highlights from Python Unplugged on PyTV? Watch the full recap video below.

JetBrains’ Dr. Jodie Burchell, Data Scientist and Python Advocacy Team Lead; Cheuk Ting Ho, Data Scientist and Developer Advocate; and Will Vincent, Python Developer Advocate, discuss the key talking points from the day.

Need a quick overview? Here are the highlights

If you’d rather get the key takeaways in a written format, we’ve broken down the biggest insights from the day below. From the evolving role of AI to the importance of the Python community, these are the moments that stood out most from Python Unplugged on PyTV.

Highlight 1: Python goes beyond scripts and prototypes

Many developers first come to Python through a specific use case: automating tasks, building prototypes, learning data science, or experimenting with AI and machine learning. That accessibility is one of Python’s strengths, but it’s only the entry point.

In her session, AI Practitioners Are Only Getting Half the Goodness of Python, Deb Nicholson, Executive Director at the PSF, discussed how many AI and ML practitioners use Python mainly as a scripting or prototyping language. But Python is also used to build and maintain real-world software, supported by frameworks, data tools, testing workflows, packaging standards, and an active open-source community.

This broader context matters for learning, too. In his How to Learn Python session, Mark Smith, Head of Python Ecosystem at JetBrains, focused on what comes after the fundamentals: building real projects, reading other people’s code, and developing the habits needed to move past “tutorial hell.”

AI can help, but it shouldn’t replace hands-on practice. As Cheuk noted in the recap video, one useful tip from Mark’s talk was to turn off AI features while learning, so beginners still build the judgment needed to understand and improve the code they work with.

Highlight 2: The continuing role of community in Python

Python’s success has always been rooted in its community, and that remains as true as ever. Georgi Ker, Director and Fellow at the PSF; Una Galyeva, Head of AI at Geobear Global; and Jessica Greene, Senior ML Engineer at Ecosia, showcased this in their How PyLadies Is Shaping the Future of Python discussion.

PyLadies is an international mentorship group focused on helping more women become active participants and leaders in the Python community. The success of initiatives like PyLadies highlights how inclusive spaces can broaden participation and shape the future of the language.

As Will noted in our recap video, “Being part of the community is not just the code. It’s the conferences, it’s the people, it’s the live events – that’s what makes Python special.”

Python depends on a culture of shared responsibility, and contributors play a vital role. As AI brings more people into the ecosystem, preserving these values becomes even more important. Travis Oliphant, creator of NumPy, touched on this in his insightful session, Community is More Than Code: People Are What Make Python Thrive, and Why That Will Continue in an AI-Enabled Era.

There’s also a strong link between community and innovation, as Carol Willing, Core Developer at JupyterLab, explained in her session, Conversation, Computation, and Community: Key Principles for Solving Scientific Problems With Jupyter Notebooks and AI Tools. Tools like Jupyter have thrived in part because they enable conversation, collaboration, and knowledge sharing among people.

Highlight 3: AI poses both a threat and an opportunity for Python open source

AI is fundamentally changing how developers interact with open source.

On the positive side, AI coding tools lower the barrier to entry and allow more people to contribute. However, this increased accessibility comes with trade-offs. Maintainers are now dealing with a higher volume of contributions, many of which require significant review or refinement. Deb Nicholson, Executive Director at the PSF, discussed this trade-off in more detail in her session, AI Practitioners Are Only Getting Half the Goodness of Python.

This shift places additional pressure on those responsible for maintaining open-source projects. While AI can accelerate development, it also risks introducing poorly structured or low-quality code at scale.

Paul Everitt, Developer Advocate at JetBrains; Georgi Ker, Director and Fellow at the PSF; and Carol Willing, Core Developer at JupyterLab, pondered this in their Open Source in the Age of Coding Agents discussion. Ultimately, AI can’t replace the human systems that sustain open source. Trust, collaboration, and shared ownership remain essential, and arguably become even more important as contribution volumes increase. The real challenge lies in ensuring communities remain healthy and resilient as they scale.

Highlight 4: AI has also revolutionized how Python practitioners work

Beyond its impact on open source, AI is transforming day-to-day development workflows.

As Marlene Mhangami, Senior Developer Advocate at Microsoft Agentic, explained in her A Practical Guide to Agentic Coding session, agentic coding is emerging as a new paradigm in which developers delegate tasks to AI systems capable of planning, executing, and refining code. This means the developer’s role is moving toward orchestration and validation, requiring new skills in guiding and evaluating AI outputs.

At the same time, development is becoming more conversational and exploratory. In environments like Jupyter, AI tools help users iterate faster, test ideas more easily, and move more fluidly between thinking and coding.

AI is also having a tangible impact on frameworks like Django, as discussed by Sheena O’Connell, Board Member at the PSF, in her talk, Powering Up Django Development With Claude Code. AI tools can speed up development in Django by handling repetitive tasks such as boilerplate generation and debugging. However, this comes with a caveat – developers must remain critical and treat AI as a collaborator, not a source of truth.

For beginners, AI can be a powerful learning aid, but over-reliance can limit deeper understanding. Building projects, reading code, and actively solving problems remain essential for developing real expertise.

Highlight 5: The importance of open-source AI

The open-source AI ecosystem is expanding rapidly, bringing with it a growing landscape of models, datasets, and tools.

This openness drives collaboration, transparency, and innovation, making it easier for developers to experiment and build on existing work. At the same time, it introduces challenges around fragmentation and long-term sustainability.

As Merve Noyan, ML Engineer at Hugging Face, explained in her Open-Source AI Ecosystem session, platforms like Hugging Face play a key role in organizing this ecosystem and making it more accessible, while Python continues to connect tools, communities, and technologies.

Highlight 6: Context is key for effective AI agents

As AI systems become more advanced, the way they interact with their input data is becoming increasingly important. Tuana Çelik, Developer Relations Engineer at LlamaIndex, covered this in detail in her insightful Orchestrating Document-Centric Agents With LlamaIndex talk.

LlamaIndex enables developers to build document-centric AI agents that retrieve, index, and reason over large collections of information. By structuring how documents are ingested and queried, it provides the LLM with much more context for the text it is processing, helping produce more accurate, context-aware responses.

This is particularly valuable in knowledge bases and enterprise assistants, where understanding relationships between pieces of information is as important as accessing the data itself.

Highlight 7: How Polars is refining high-performance data processing

Polars is pushing Python data processing toward a more scalable, production-ready future, as Polars creator Ritchie Vink explained in his Towards Query Profiling in Polars session.

Its high-performance, lazy execution model allows queries to be optimized automatically behind the scenes. However, this level of abstraction can make it harder for developers to fully understand performance.

To address this, there’s a growing need for better tooling, particularly around query profiling. By exposing execution plans, memory usage, and bottlenecks, developers can make informed decisions and build more efficient data workflows.

With features like streaming execution, Polars is helping bridge the gap between local data processing and large-scale systems.

As Jodie highlighted in the recap discussion, this shift is bringing more advanced data concepts into everyday Python workflows. She commented, “It’s really interesting to see more big data ideas coming to local Python data processing.”

Highlight 8: The power of typing in modern Python

Typing in Python continues to evolve, with a growing focus on flexibility rather than rigid enforcement. Open-source Django projects creator Carlton Gibson shed more light on this during his talk, Static Islands, Dynamic Sea: Some Thoughts on Incremental Typing.

The talk highlighted how developers are increasingly adopting an incremental approach. By creating “static islands” within a dynamic codebase, they can improve reliability, maintainability, and tooling without sacrificing Python’s core strengths.

In our recap video, Will agreed with this sentiment, adding, “It doesn’t have to be all-or-nothing. We don’t have to turn Python into something that it’s not.”

This approach is particularly useful in large frameworks like Django, where typing can help define clearer boundaries while still preserving developer ergonomics.

Highlight 9: The Django renaissance: Debunking aging myths

Django remains a modern, actively developed framework, as Django Fellow Sarah Boyce revealed in her session, Django Has a Marketing Problem: Debunking the Myths That Won’t Die.

Many of the criticisms that it’s outdated or unscalable don’t reflect the current reality. In practice, Django continues to evolve and power a wide range of applications.

The challenge is less about Django’s capabilities and more about perception, and the Django community was called on to champion its strengths, ongoing evolution, and real-world impact.

Shifting this narrative will be key to ensuring its continued relevance and adoption in the years ahead.

What’s next for Python Unplugged on PyTV?

Python Unplugged on PyTV was our first step in reimagining what a fully online community conference can look like, and the response was incredible.

Looking at the numbers, more than 5,500 people joined us during the livestream. Since then, we’ve had a further 110,000 watch the event recording, showing just how global and engaged the Python community really is.

We’d love to bring Python Unplugged on PyTV back next year. What would you like to see more of? Who should we invite as speakers? Are there topics we didn’t cover that you’d love to explore?

Drop your suggestions in the comments and help shape the future of Python Unplugged on PyTV.

May 07, 2026 11:27 AM UTC


Seth Michael Larson

Library dependency version specifiers aren't for fixing vulnerabilities

Let's say you are the maintainer of a Python library that depends on another Python library like “urllib3”. Because you want to make sure users receive a compatible version of urllib3, you add a version specifier requiring at least the current “major” version, so users know that older versions aren't compatible. This is what your pyproject.toml might look like:

[project]
name = "example-library"
dependencies = [
  "urllib3>=2",  
]

Now let's say that urllib3 publishes a vulnerability that affects “version 2.6.2 and earlier” and is fixed in version 2.6.3. Later you receive this pull request from a concerned user that changes the minimum version from 2 to 2.6.3 to “disallow installing a vulnerable version of urllib3”:

  [project]
  name = "example-library"
  dependencies = [
-  "urllib3>=2",  
+  "urllib3>=2.6.3",
  ]

You probably should not accept this pull request. Version ranges for libraries are meant to express compatibility, not to track security vulnerabilities. This is a key difference between libraries and applications: libraries should allow the widest version ranges that remain compatible, while applications should “allow” only a single version of each dependency by using a lock file (requirements.txt with --hash, pylock.toml, uv.lock).
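The distinction can be sketched in plain Python. A naive tuple comparison stands in for real specifier handling, which lives in the third-party `packaging` library; the version numbers mirror the urllib3 example above:

```python
def parse(version: str) -> tuple[int, ...]:
    """Turn a dotted version string like '2.6.3' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def satisfies_minimum(version: str, minimum: str) -> bool:
    """Model a '>=minimum' specifier, the only kind used in this post."""
    return parse(version) >= parse(minimum)

# The library's compatibility range ">=2" admits the vulnerable release
# as well as the fixed one: it expresses compatibility, not security.
print(satisfies_minimum("2.6.2", "2"))  # True: vulnerable version allowed
print(satisfies_minimum("2.6.3", "2"))  # True: fixed version allowed

# The application's lock file, by contrast, pins one exact version.
# Upgrading past the vulnerable release happens there, not in the library.
locked = "2.6.3"
print(satisfies_minimum(locked, "2.6.3"))  # True: the fix lives in the lock file
```

Raising the library's floor to 2.6.3 changes nothing for a locked application, which already installs exactly one resolved version.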

It's not the responsibility of library maintainers to ensure their users are on secure versions of dependencies that the library doesn't directly manage (such as by bundling). That is the users' responsibility.

Why not?

If every library applied this strategy, the result would be mass toil for both users and maintainers.

urllib3 is directly depended on by over 10,000 other libraries on the Python Package Index, so under this strategy a single vulnerability would amplify into a new release for 10,000 projects. Projects like numpy (80,000 dependents), requests (72,000), and pandas (55,000) would amplify even more disastrously. A decent number of vulnerabilities are published every day for open source libraries, so this would mean mass releases, every day, forever: not good.

The much more efficient strategy is to allow users to manage their own application dependencies to ensure they are not affected by vulnerabilities.

Why... maybe?

You can imagine scenarios where a security vulnerability does affect compatibility, such as when a feature is removed or changed in a backwards-incompatible way. In that case, a version range update may be warranted.

Another scenario is when your library's version specifiers disallow upgrading to a version with a security fix, such as when a fix is only available for urllib3 2.x but your library is only compatible with urllib3 1.x. In this case, you as a library maintainer may want to consider such a request so that your users can upgrade to secure versions more easily. Even in this scenario, however, it is not a vulnerability in your library if your version specifiers don't allow an easy upgrade from a vulnerable version of a dependency to a fixed one.
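In that second scenario, the change that may be warranted widens the compatibility range rather than raising the floor purely for security. A hypothetical sketch, reusing the example-library layout from above (the exact bounds are illustrative, not a recommendation):

```toml
[project]
name = "example-library"
dependencies = [
  # Widened from a 1.x-only range so users can reach the fixed 2.x release.
  "urllib3>=1.26,<3",
]
```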



Thanks for keeping RSS alive! ♥

May 07, 2026 12:00 AM UTC

May 06, 2026


Talk Python to Me

#547: Parallel Python at Anyscale with Ray

When OpenAI trained GPT-3, they didn't roll their own orchestration layer. They used Ray, an open source Python framework born out of the same Berkeley research lab lineage that gave us Apache Spark. And here's the twist: Ray was originally built for reinforcement learning research, then quietly faded as RL hit a wall. Until ChatGPT showed up. Suddenly reinforcement learning was back, as the post-training step that turns a raw language model into something genuinely useful. <br/> <br/> Edward Oakes and Richard Liaw, two founding engineers behind Ray and Anyscale, join me on Talk Python to tell that story. We'll trace Ray from its RISE Lab origins at UC Berkeley to powering some of the largest training runs in the world. We'll talk about what Ray actually is, a distributed execution engine for AI workloads, and how a few lines of Python become work running across hundreds of GPUs. We'll cover Ray Data for multimodal pipelines, the dashboard, the VS Code remote debugger, KubeRay for Kubernetes, and where Ray fits alongside Dask, multiprocessing, and asyncio.
<br/> <br/> If you've ever stared at a single-machine Python script and thought, "there has to be a better way to scale this", this one's for you<br/> <br/> <strong>Episode sponsors</strong><br/> <br/> <a href='https://talkpython.fm/sentry'>Sentry Error Monitoring, Code talkpython26</a><br> <a href='https://talkpython.fm/agentfield-page'>AgentField AI</a><br> <a href='https://talkpython.fm/training'>Talk Python Courses</a><br/> <br/> <h2 class="links-heading mb-4">Links from the show</h2> <div><strong>Guests</strong><br/> <strong>Richard Liaw</strong>: <a href="https://github.com/richardliaw?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>Edward Oakes</strong>: <a href="https://github.com/edoakes?featured_on=talkpython" target="_blank" >github.com</a><br/> <br/> <strong>Ray</strong>: <a href="https://www.ray.io?featured_on=talkpython" target="_blank" >www.ray.io</a><br/> <strong>Example code (we used for walk-through)</strong>: <a href="https://docs.ray.io/en/latest/ray-overview/examples/e2e-audio/README.html?featured_on=talkpython" target="_blank" >docs.ray.io</a><br/> <strong>Getting Started with Ray</strong>: <a href="https://docs.ray.io/en/latest/ray-observability/getting-started.html?featured_on=talkpython" target="_blank" >docs.ray.io</a><br/> <strong>Ray Libraries</strong>: <a href="https://docs.ray.io/en/latest/ray-overview/ray-libraries.html?featured_on=talkpython" target="_blank" >docs.ray.io</a><br/> <strong>kuberay</strong>: <a href="https://github.com/ray-project/kuberay?featured_on=talkpython" target="_blank" >github.com</a><br/> <br/> <strong>Watch this episode on YouTube</strong>: <a href="https://www.youtube.com/watch?v=-pVs4-MHaTo" target="_blank" >youtube.com</a><br/> <strong>Episode #547 deep-dive</strong>: <a href="https://talkpython.fm/episodes/show/547/parallel-python-at-anyscale-with-ray#takeaways-anchor" target="_blank" >talkpython.fm/547</a><br/> <strong>Episode transcripts</strong>: <a 
href="https://talkpython.fm/episodes/transcript/547/parallel-python-at-anyscale-with-ray" target="_blank" >talkpython.fm</a><br/> <br/> <strong>Theme Song: Developer Rap</strong><br/> <strong>🥁 Served in a Flask 🎸</strong>: <a href="https://talkpython.fm/flasksong" target="_blank" >talkpython.fm/flasksong</a><br/> <br/> <strong>---== Don't be a stranger ==---</strong><br/> <strong>YouTube</strong>: <a href="https://talkpython.fm/youtube" target="_blank" ><i class="fa-brands fa-youtube"></i> youtube.com/@talkpython</a><br/> <br/> <strong>Bluesky</strong>: <a href="https://bsky.app/profile/talkpython.fm" target="_blank" >@talkpython.fm</a><br/> <strong>Mastodon</strong>: <a href="https://fosstodon.org/web/@talkpython" target="_blank" ><i class="fa-brands fa-mastodon"></i> @talkpython@fosstodon.org</a><br/> <strong>X.com</strong>: <a href="https://x.com/talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @talkpython</a><br/> <br/> <strong>Michael on Bluesky</strong>: <a href="https://bsky.app/profile/mkennedy.codes?featured_on=talkpython" target="_blank" >@mkennedy.codes</a><br/> <strong>Michael on Mastodon</strong>: <a href="https://fosstodon.org/web/@mkennedy" target="_blank" ><i class="fa-brands fa-mastodon"></i> @mkennedy@fosstodon.org</a><br/> <strong>Michael on X.com</strong>: <a href="https://x.com/mkennedy?featured_on=talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @mkennedy</a><br/></div>

May 06, 2026 08:40 PM UTC


Talk Python Blog

Announcing German Subtitles on Courses

If you’re a native German speaker who is taking our courses over at Talk Python Training, we have excellent news for you. We now offer German transcripts and subtitles for our course videos.

They’re incredibly easy to enable. Just click the CC (closed captions) button in the player and switch your language to Deutsch.

That’s it! Now you’re enjoying closed captions in your language.

What if I don’t speak German?

If you don’t speak German, well, maybe these subtitles are not for you. But there are still some very cool updates to the website as well.

May 06, 2026 06:05 PM UTC


Real Python

ChatterBot: Build a Chatbot With Python

The Python ChatterBot library lets you build a self-learning command-line chatbot with just a few lines of code. You’ll set up a basic bot, clean real WhatsApp conversation data with regular expressions, and train your chatbot on that custom corpus. You’ll also plug in a local LLM through Ollama to augment its replies with contextual knowledge.

By the end of this tutorial, you’ll understand that:

  • ChatterBot is a Python library that combines text processing, machine learning, and a local database to generate chatbot replies.
  • A minimal ChatterBot script instantiates ChatBot, collects user input in a loop, and returns matching responses through .get_response().
  • Training with ListTrainer and default settings stores conversation pairs in a SQLite database that ChatterBot queries with Levenshtein distance to pick each reply.
  • ChatterBot can call a local LLM through OllamaLogicAdapter, voting against other logic adapters with a confidence score.
  • ChatterBot was revived in 2025 with spaCy-based NLP, CSV and JSON trainers, and experimental LLM support.
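
The closest-match idea from the third point above can be sketched with the standard library. ChatterBot itself compares input against its SQLite store using Levenshtein distance; the `difflib` similarity ratio below is a stand-in for that idea, and the plant-care corpus is invented for illustration:

```python
import difflib

# Stored statement -> canned reply, standing in for ChatterBot's SQLite store.
corpus = {
    "hello": "Hi there!",
    "how do I water a succulent?": "Sparingly: let the soil dry out first.",
    "what light does a fern need?": "Indirect light works best.",
}

def reply(user_input: str) -> str:
    """Pick the reply whose stored statement is most similar to the input."""
    matches = difflib.get_close_matches(user_input, list(corpus), n=1, cutoff=0.0)
    return corpus[matches[0]]

print(reply("how do i water succulents"))  # → Sparingly: let the soil dry out first.
```

With `cutoff=0.0` there's always a best match, which mirrors how an untrained bot still answers, just not well.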

Along the way, you’ll move from a potted plant that can only echo hello to a chatbot that chats knowledgeably about houseplants. You can follow along with your own WhatsApp export or grab the provided sample data below.

Get Your Code: Click here to download the free sample code that you’ll use to build a chatbot with Python’s ChatterBot.

Take the Quiz: Test your knowledge with our interactive “ChatterBot: Build a Chatbot With Python” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

ChatterBot: Build a Chatbot With Python

Test your understanding of the ChatterBot Python library, from training a basic bot with ListTrainer to wiring in a local LLM through Ollama.

Preview the Chatbot

At the end of this tutorial, you’ll have a command-line chatbot that can respond to your inputs with semi-meaningful replies.

You’ll achieve that by preparing WhatsApp chat data and using it to train the chatbot. Beyond learning from your automated training, the chatbot will improve over time as it gets more exposure to questions and replies from user interactions.

Project Overview

The ChatterBot library combines text processing, machine learning algorithms, and data storage and retrieval to allow you to build flexible chatbots.

You can build an industry-specific chatbot by training it with relevant data. Additionally, the chatbot will remember user responses and continue building its internal graph structure to improve the responses that it can give.

Note: After a long hiatus, ChatterBot was revived in early 2025 with support for modern Python, new training formats for CSV and JSON data, and even experimental LLM integration. Under the hood, ChatterBot now uses spaCy for language processing, which gives it a more robust NLP pipeline than before.

If you want to develop an LLM-first chatbot, Real Python’s LLM Application Development With Python learning path takes you through the concepts and libraries step by step:


Learning Path

LLM Application Development With Python

13 Resources ⋅ Skills: OpenAI, Ollama, OpenRouter, Prompt Engineering, LangChain, LlamaIndex, ChromaDB, MarkItDown, RAG, Embeddings, Pydantic AI, LangGraph, MCP

In this tutorial, you’ll start with an untrained chatbot that’ll showcase how quickly you can create an interactive chatbot using Python’s ChatterBot. You’ll also notice how small the vocabulary of an untrained chatbot is.

Next, you’ll learn how you can train such a chatbot and check on the slightly improved results. The more plentiful and high-quality your training data is, the better your chatbot’s responses will be.

Therefore, you’ll either fetch the conversation history of one of your WhatsApp chats or use the provided chat.txt sample file, which is included in the downloadable materials linked above.

It’s rare that input data comes exactly in the form you need, so you’ll clean the chat export data to get it into a useful input format. This process will show you some tools you can use for data cleaning, which may help you prepare other input data to feed to your chatbot.
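That cleaning step can be sketched with the stdlib `re` module. The export format below is an assumption (WhatsApp timestamp formats vary by locale), and the lines are invented stand-ins for a real chat.txt:

```python
import re

# Hypothetical WhatsApp export lines, one message each; format varies by locale.
raw_lines = [
    "1/15/26, 09:30 - Alice: My fern looks droopy",
    "1/15/26, 09:31 - Bob: Have you checked the soil moisture?",
    "1/15/26, 09:32 - Alice: <Media omitted>",
]

# Strip the "date, time - sender: " prefix, keeping only the message text.
prefix = re.compile(r"^\d+/\d+/\d+, \d+:\d+ - [^:]+: ")

cleaned = []
for line in raw_lines:
    message = prefix.sub("", line)
    if message != "<Media omitted>":  # drop attachment placeholders
        cleaned.append(message)

print(cleaned)  # → ['My fern looks droopy', 'Have you checked the soil moisture?']
```

The same pattern of "strip metadata, drop non-text placeholders" applies to most chat exports, even when the exact regex differs.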

After data cleaning, you’ll retrain your chatbot and give it another spin to experience the improved performance. Finally, you’ll hook a local LLM into your chatbot to augment the variety and contextual relevance of its responses.

When you work through this process from start to finish, you’ll get a good idea of how you can build and train a Python chatbot with the ChatterBot library so that it can provide an interactive experience with relevant replies.

Prerequisites

Before you get started, make sure that you have Python 3.10 or later installed, which is the minimum Python version that ChatterBot supports. If you need help setting up Python, check out Python 3 Installation & Setup Guide.

Read the full article at https://realpython.com/build-a-chatbot-python-chatterbot/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

May 06, 2026 02:00 PM UTC