Planet Python
Last update: May 12, 2026 07:43 AM UTC
May 11, 2026
PyCon
Introducing the 8 Companies on Startup Row at PyCon US 2026
Each year at PyCon US, Startup Row highlights a select group of early-stage companies building ambitious products with Python at their core. The 2026 cohort reflects a rapidly evolving landscape, where advances in AI, data infrastructure, and developer tooling are reshaping how software is built, deployed, and secured.
This year’s companies aim to solve an evolving set of problems facing independent developers and large-scale organizations alike: securing AI-driven applications, managing multimodal data, orchestrating autonomous agents, automating complex workflows, and extracting insight from increasingly unstructured information. Across these domains, Python continues to serve as a unifying layer: encouraging experimentation, enabling systems built to scale, and connecting open-source innovation with real-world impact.
Startup Row brings these emerging teams into direct conversation with the Python community at PyCon US. Throughout the conference, attendees can meet founders, explore new tools, and see firsthand how these companies are applying Python to solve meaningful problems. For the startups in attendance, it’s an opportunity to share their work, connect with users and collaborators, and contribute back to the ecosystem that helped shape them. Register now to experience Startup Row and much more at PyCon US 2026.
Supporting Startups at PyCon US
There are many ways to support Startup Row companies, during PyCon US and long after the conference wraps:
- Stop by Startup Row: Spend a few minutes with each team, ask what they’re building, and see their products in action.
- Try their tools: Whether it’s an open-source library or a hosted service, hands-on usage (alongside constructive feedback) is one of the most valuable forms of support. If a startup seems compelling, consider a pilot project and become a design partner.
- Share feedback: Early-stage teams benefit enormously from thoughtful questions, real-world use cases, and honest perspectives from the community.
- Contribute to their open source projects: Many Startup Row companies are deeply rooted in open source and welcome bug reports, documentation improvements, and pull requests. Contributions and constructive feedback are always appreciated.
- Help spread the word: If you find something interesting, tell a friend, post about it, or share it with your team. (And if you're posting to social media, consider using tags like #PyConUS and #StartupRow to share the love.)
- Explore opportunities to work together: Many of these companies are hiring, looking for design partners, or open to collaborations; don’t hesitate to ask.
- But, most importantly, be supportive. Building a startup is hard, and every team is learning in real time. Curiosity, patience, and encouragement make a meaningful difference.
Meet Startup Row at PyCon US 2026
We’re excited to introduce the companies selected for Startup Row at PyCon US 2026.

Arcjet
Embedding security directly into application code is fast becoming as indispensable as logging, especially as AI services open new attack surfaces. Arcjet offers a developer‑first platform that lets teams add bot detection, rate limiting, and data‑privacy checks right where the request is processed. The service ships open‑source JavaScript and Python SDKs that run a WebAssembly module locally before calling Arcjet’s low‑latency decision API, ensuring full application context informs every security verdict. Both SDKs are released under a permissive open‑source license, letting developers integrate the primitives without vendor lock‑in while scaling usage through Arcjet’s tiered SaaS pricing.
The JavaScript SDK alone has earned ≈1.7 k GitHub stars and the combined libraries have attracted over 1,000 developers protecting more than 500 production applications. Arcjet offers a free tier and usage‑based paid plans, mirroring Cloudflare’s model to serve startups and enterprises alike.
Arcjet is rolling out additional security tools and deepening integrations with popular frameworks such as FastAPI and Flask, aiming to broaden adoption across AI‑enabled services. In short, Arcjet aims to be the security‑as‑code layer every modern app ships with.
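Rate limiting of the kind Arcjet embeds at the request layer is classically implemented as a token bucket. The sketch below is a generic illustration of that primitive only; it is not Arcjet's SDK or API:

```python
import time


class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` tokens/second."""

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


bucket = TokenBucket(capacity=3, rate=1.0)
results = [bucket.allow() for _ in range(5)]
# The first three requests in the burst pass; the rest are throttled.
```

Running checks like this in-process, before any network hop, is what lets "security as code" decisions use full application context.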
CapiscIO
As multi‑agent AI systems become the backbone of emerging digital workflows, developers lack a reliable way to verify agent identities and enforce governance. CapiscIO steps into that gap with an open‑core trust layer built for the nascent agent economy: cryptographic Trust Badges, policy enforcement, and tamper‑evident chain‑of‑custody, all wrapped in a Python SDK. Released under Apache 2.0, it ships a CLI, a LangChain integration, and an MCP SDK that let agents prove identity without overhauling existing infrastructure.
The capiscio‑core repository on GitHub hosts the open‑source core and SDKs, and is drawing early contributors building agentic pipelines.
Beon de Nood, Founder & CEO, brings two decades of enterprise development experience and a prior successful startup to the table. “AI governance should be practical, not bureaucratic. Organizations need visibility into what they have, confidence in what they deploy, and control over how agents behave in production,” he says.
CapiscIO is continuously adding new extensions, expanding its LangChain and MCP SDKs, and preparing a managed agent‑identity registry for enterprises. In short, CapiscIO aims to be the passport office of the agent economy, handing each autonomous component an unspoofable ID and clear permissions.
Chonkie
The explosion of retrieval‑augmented generation (RAG) is unlocking AI’s ability to reason over ever‑larger knowledge bases. Yet the first step of splitting massive texts into meaningful pieces still lags behind. Chonkie offers an open‑core suite centered on Memchunk, a Python library with Cython acceleration that delivers up to 160 GB/s throughput and ten chunking strategies under a permissive license. It also ships Catsu, a unified embeddings client for nine providers, and a lightweight ingestion layer; the commercial Chonkie Labs service combines them into a SaaS that monitors the web and synthesises insights.
Co‑founder and CEO Shreyash Nigam, who grew up in India and met his business partner in eighth grade, reflects the team’s open‑source ethos, saying “It’s fun to put a project on GitHub and see a community of developers crowd around it.” That enthusiasm underpins Chonkie’s decision to release its core tooling openly while building a commercial deep‑research service.
Backed by Y Combinator’s Summer 2025 batch, Chonkie plans to grow from four to six engineers and launch the next version of Chonkie Labs later this year, adding real‑time web crawling and multi‑modal summarization. In short, Chonkie aims to be the Google of corporate intelligence.
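For readers new to chunking: the simplest of the strategies libraries like Memchunk offer is a fixed-size sliding window with overlap, so text cut at one boundary still appears whole in the neighboring chunk. This is a naive illustration of the idea, not Memchunk's implementation:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks whose edges overlap by `overlap` chars."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    # Windows start every `step` characters; each window is up to `size` long.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


chunks = chunk_text("a" * 500, size=200, overlap=50)
# Windows start at 0, 150, 300 -> three chunks of 200 characters each
```

Real chunkers improve on this by splitting at sentence or semantic boundaries rather than raw character offsets, which is where the ten strategies mentioned above come in.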
Phemeral
Running production‑grade Python services used to mean wrestling with containers, VMs, or complex CI pipelines.
Phemeral, launched in April 2026, offers Python developers a managed hosting platform that turns a GitHub repo into an instantly deployable, scale‑to‑zero backend.
Phemeral provides builds for popular frameworks (like Django, Flask, and FastAPI), integrations with popular package managers (e.g. uv, Pip, and Poetry), as well as continuous deployment on every push while charging only for actual request execution under a usage‑based model.
Founder & CEO Chinmaya Joshi says, "Building with Python is easier than ever, but hosting and deployment remain a pain. Phemeral is building the easiest way to deploy Python web apps."
Joshi is focused on expanding framework support and refining the platform so that Python developers (from vibe-coders and solo devs, to agencies and enterprises) can enjoy the same zero‑config experience modern front‑end platforms provide.
Pixeltable
Multimodal generative AI is turning simple datasets into sprawling collections of video, images, audio and text, forcing engineers to stitch together ad‑hoc pipelines just to keep data flowing. That complexity has created a new bottleneck for teams trying to move from prototype to production. The open‑source Python library from Pixeltable offers a declarative table API that lets developers store, query and version multimodal assets side by side while embedding custom Python functions. Built with incremental update capabilities, combined lineage and schema tracking, and a development‑to‑production mirror, the platform also provides orchestration capabilities that keep pipelines reproducible without rewriting code.
The project has earned ≈1.6 k GitHub stars and a growing contributor base, closed a $5.5 million seed round in December 2024, and is already used by early adopters such as Obvio and Variata to streamline computer‑vision workflows.
Co‑founder and CTO Marcel Kornacker, who previously founded Apache Impala and co-founded Apache Parquet, says “Just as relational databases revolutionized web development, Pixeltable is transforming AI application development.”
The company's roadmap centers on launching Pixeltable Cloud, a serverless managed service that will extend the open core with collaborative editing, auto‑scaling storage and built‑in monitoring. In short, Pixeltable aims to be the relational database of multimodal AI data.
SubImage
The sheer complexity of modern multi‑cloud environments turns security visibility into a labyrinth, and SubImage offers a graph‑first view that cuts through the noise. It builds an infrastructure graph using the open‑source Cartography library (Apache‑2.0, Python), then highlights exploit chains as attack paths and applies AI models to prioritize findings based on ownership and contextual risk.
Cartography, originally developed at Lyft and now a Cloud Native Computing Foundation (CNCF) Sandbox project, has ≈3.7 k GitHub stars and is used by over 70 organizations. SubImage’s managed service already protects security teams at Veriff and Neo4j, and the company closed a $4.2 million seed round in November 2025.
Co‑founder Alex Chantavy, an offensive‑security engineer, says “The most important tool was our internal cloud knowledge graph because it showed us a map of the easiest attack paths … One of the most effective ways to defend an environment is to see it the same way an attacker would.”
The startup is focusing on scaling its managed service and deepening AI integration as it targets larger enterprise customers. In short, SubImage aims to be the map of the cloud for defenders.
Tetrix
Private‑market data pipelines still rely on manual downloads and spreadsheet gymnastics, leaving analysts chasing yesterday’s numbers. Tetrix’s AI investment intelligence platform is part of a wave that brings automation to this lagging workflow. Built primarily in Python, Tetrix automates document collection from fund portals and other sources, extracts structured data from PDFs and other unstructured sources using tool-using language models, then presents exposures, cash flows, and benchmarks through an interactive dashboard that also accepts natural‑language queries.
The company is growing quickly, doubling revenue quarter over quarter while, at least so far, maintaining an impressive record of zero customer churn. Over the coming year, Tetrix plans to triple its headcount from fifteen to forty‑five employees.
TimeCopilot
Time‑series forecasting has long been a tangled mix of scripts, dashboards, and domain expertise, and the recent surge in autonomous agents is finally giving it a unified voice. Enter TimeCopilot, an open‑source framework that brings agentic reasoning to the heart of forecasting. The platform, built in Python under a permissive open‑source license, lets users request forecasts in plain English. It automatically orchestrates more than thirty models from seven families, including Chronos and TimesFM, while weaving large language model reasoning into each prediction. Its declarative API was born from co‑founder Azul Garza‑Ramírez’s economics background and her earlier work on TimeGPT for Nixtla (featured SR'23), evolving from a weekend experiment started nearly seven years ago.
The TimeCopilot/timecopilot repository has amassed roughly 420 stars on GitHub, with the release of OpenClaw marking a notable spike in community interest.
Upcoming plans include a managed SaaS offering with enterprise‑grade scaling and support, the rollout of a benchmarking suite to measure agentic forecast quality, and targeted use cases such as predicting cloud‑compute expenses for AI workloads.
Thank You's and Acknowledgements
Startup Row is a volunteer-driven program, co-led by Jason D. Rowley and Shea Tate-Di Donna (SR'15; Zana, acquired Startups.com), in collaboration with the PyCon US organizing team. Thanks to everyone who makes PyCon US possible. We also extend a heartfelt thank-you to all the startup founders who submitted applications to Startup Row at PyCon US this year. Thanks again for taking the time to share what you're building. We hope to help out in whatever way we can.
Good luck to everyone, and see you in Long Beach, CA!
Talk Python to Me
#548: Event Sourcing Design Pattern
What if your database worked more like Git? Every change captured as an immutable event you can replay, instead of a single mutating row that quietly forgets its own history. That's event sourcing, and Chris May is back on Talk Python, fresh off our Datastar panel, to walk us through what it actually looks like in Python. We'll cover the core patterns, the libraries to reach for, when not to use it, and why event sourcing turns out to be a surprisingly good fit for AI-assisted coding.

Episode sponsors
- Sentry Error Monitoring, Code talkpython26: talkpython.fm/sentry
- Temporal: talkpython.fm/temporal
- Talk Python Courses: talkpython.fm/training

Links from the show
- Guest, Chris May: everydaysuperpowers.dev
- Intro to event sourcing e-book: everydaysuperpowers.gumroad.com
- Domain-Driven Design: The Power of CQRS and Event Sourcing: ricofritzsche.me
- DDD (Eric Evans book): www.amazon.com
- Understanding Eventsourcing (Martin Dilger): www.amazon.com
- Event Sourcing Explained using Football (video): www.youtube.com
- Why I finally embraced event sourcing and why you should too (article): everydaysuperpowers.dev
- valkey: valkey.io
- diskcache: talkpython.fm
- eventsourcing package: github.com/pyeventsourcing/eventsourcing
- eventsourcing docs: eventsourcing.readthedocs.io
- John Bywater: github.com/johnbywater
- Datastar: data-star.dev
- Microconf: microconf.com
- Event Modeling & Event Sourcing Podcast: podcast.eventmodeling.org
- Python Package Guides for AI Agents: github.com/mikeckennedy/python-package-guides-for-agents
- Iodine tablets AI joke: x.com
- KurrentDb: www.kurrent.io

Watch this episode on YouTube: youtube.com
Episode #548 deep-dive: talkpython.fm/548
Episode transcripts: talkpython.fm

Theme Song: Developer Rap, 🥁 Served in a Flask 🎸: talkpython.fm/flasksong

Don't be a stranger:
- YouTube: youtube.com/@talkpython
- Bluesky: @talkpython.fm
- Mastodon: @talkpython@fosstodon.org
- X.com: @talkpython
- Michael on Bluesky: @mkennedy.codes
- Michael on Mastodon: @mkennedy@fosstodon.org
- Michael on X.com: @mkennedy
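The core idea the episode describes fits in a few lines of plain Python. This toy sketch (an illustration of the pattern, not the eventsourcing package's API) stores state as a replayable log of immutable events:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Event:
    """An immutable fact: something that happened, never updated in place."""
    name: str
    amount: int


@dataclass
class Account:
    balance: int = 0
    history: list = field(default_factory=list)

    def apply(self, event: Event) -> None:
        # Current state is derived from events; the log is the source of truth.
        self.history.append(event)
        if event.name == "deposited":
            self.balance += event.amount
        elif event.name == "withdrew":
            self.balance -= event.amount

    @classmethod
    def replay(cls, events: list) -> "Account":
        """Rebuild current state from the full event history, like replaying commits."""
        account = cls()
        for event in events:
            account.apply(event)
        return account


log = [Event("deposited", 100), Event("withdrew", 30)]
account = Account.replay(log)
# account.balance == 70, and the complete history remains queryable
```

Because nothing is mutated in place, you can rebuild past states, audit every change, or project the same log into new read models later.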
Real Python
How to Flatten a List of Lists in Python
Flattening a list in Python involves converting a nested list structure into a single, one-dimensional list. A common approach to flatten a list of lists is to use a for loop to iterate through each sublist. Then you can add each item to a new list with the .extend() method or the augmented concatenation operator (+=). This will “unlist” the list, resulting in a flattened list.
Python’s standard library offers other tools to achieve similar results. You can also use a list comprehension for a concise one-liner solution. Each method has its own performance characteristics, but for loops and list comprehensions are generally more efficient.
By the end of this tutorial, you’ll understand that:
- Flattening a list involves converting nested lists into a single list.
- You can use a for loop with .extend() or a list comprehension to flatten lists in Python.
- Standard-library functions like itertools.chain() and functools.reduce() can also flatten lists.
- A custom flatten() function, either recursive or iterative, handles arbitrarily nested lists.
- The .flatten() method in NumPy efficiently flattens arrays for data science tasks.
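The standard-library and recursive approaches named in the takeaways can be sketched briefly (the full article walks through each in detail):

```python
from functools import reduce
from itertools import chain
import operator

matrix = [[9, 3, 8, 3], [4, 5, 2, 8]]

# itertools: chain.from_iterable lazily yields the items of each sublist in turn.
flat_chain = list(chain.from_iterable(matrix))

# functools: reduce repeatedly concatenates sublists (quadratic, fine for small data).
flat_reduce = reduce(operator.add, matrix, [])


def flatten(nested):
    """Recursively flatten arbitrarily nested lists, not just one level deep."""
    flat = []
    for item in nested:
        if isinstance(item, list):
            flat.extend(flatten(item))
        else:
            flat.append(item)
    return flat


flat_deep = flatten([1, [2, [3, 4]], 5])
# flat_chain == flat_reduce == [9, 3, 8, 3, 4, 5, 2, 8]
# flat_deep == [1, 2, 3, 4, 5]
```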
To better illustrate what it means to flatten a list, say that you have the following matrix of numeric values:
>>> matrix = [
...     [9, 3, 8, 3],
...     [4, 5, 2, 8],
...     [6, 4, 3, 1],
...     [1, 0, 4, 5],
... ]
The matrix variable holds a Python list that contains four nested lists. Each nested list represents a row in the matrix. The rows store four items or numbers each. Now say that you want to turn this matrix into the following list:
[9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]
How do you manage to flatten your matrix and get a one-dimensional list like the one above? In this tutorial, you’ll learn how to do that in Python.
Free Bonus: Click here to download the free sample code that showcases and compares several ways to flatten a list of lists in Python.
Take the Quiz: Test your knowledge with our interactive “How to Flatten a List of Lists in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz: How to Flatten a List of Lists in Python
Test your understanding of how to flatten a list of lists in Python using for loops, list comprehensions, itertools, recursion, and NumPy.
How to Flatten a List of Lists With a for Loop
How can you flatten a list of lists in Python? In general, to flatten a list of lists, you can run the following steps either explicitly or implicitly:
- Create a new empty list to store the flattened data.
- Iterate over each nested list or sublist in the original list.
- Add every item from the current sublist to the list of flattened data.
- Return the resulting list with the flattened data.
You can follow several paths and use multiple tools to run these steps in Python. The most natural and readable way to do this is to use a for loop, which allows you to explicitly iterate over the sublists.
Then you need a way to add items to the new flattened list. For that, you have a couple of valid options. First, you’ll turn to the .extend() method from the list class itself, and then you’ll give the augmented concatenation operator (+=) a go.
To continue with the matrix example, here’s how you would translate these steps into Python code using a for loop and the .extend() method:
>>> def flatten_extend(matrix):
...     flat_list = []
...     for row in matrix:
...         flat_list.extend(row)
...     return flat_list
...
Inside flatten_extend(), you first create a new empty list called flat_list. You’ll use this list to store the flattened data when you extract it from matrix. Then you start a loop to iterate over the inner, or nested, lists from matrix. In this example, you use the name row to represent the current nested list.
In every iteration, you use .extend() to add the content of the current sublist to flat_list. This method takes an iterable as an argument and appends its items to the end of the target list.
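Because .extend() accepts any iterable, the sublists don't have to be lists; tuples, ranges, or generators work just as well:

```python
nums = [1, 2]
nums.extend((3, 4))        # a tuple is fine
nums.extend(range(5, 7))   # so is any other iterable
# nums == [1, 2, 3, 4, 5, 6]
```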
Now go ahead and run the following code to check that your function does the job:
>>> flatten_extend(matrix)
[9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]
That’s neat! You’ve flattened your first list of lists. As a result, you have a one-dimensional list containing all the numeric values from matrix.
With .extend(), you’ve come up with a Pythonic and readable way to flatten your lists. You can get the same result using the augmented concatenation operator (+=) on your flat_list object. However, this alternative approach may not be as readable:
>>> def flatten_concatenation(matrix):
...     flat_list = []
...     for row in matrix:
...         flat_list += row
...     return flat_list
...
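As the introduction mentioned, a list comprehension packs the same loop into a single expression; the two for clauses read left to right, outer loop first:

```python
def flatten_comprehension(matrix):
    # Outer loop over rows, inner loop over the items within each row.
    return [item for row in matrix for item in row]


matrix = [[9, 3, 8, 3], [4, 5, 2, 8], [6, 4, 3, 1], [1, 0, 4, 5]]
result = flatten_comprehension(matrix)
# result == [9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]
```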
Read the full article at https://realpython.com/python-flatten-list/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: How to Flatten a List of Lists in Python
In this quiz, you’ll test your understanding of how to flatten a list in Python.
You’ll write code and answer questions to revisit the concept of converting a multidimensional list, such as a matrix, into a one-dimensional list.
Django Weblog
DSF member of the month - Bhuvnesh Sharma
For May 2026, we welcome Bhuvnesh Sharma as our DSF member of the month! ⭐

Bhuvnesh has been a Django contributor since 2022 and was a Google Summer of Code (GSoC) participant for Django in 2023. He is now a mentor and an admin organizer of GSoC for the Django organization. He is the founder of Django Events Foundation India (DEFI) and the DjangoDay India conference, and has been a DSF member since July 2023. He is looking for new opportunities!
You can learn more about Bhuvnesh by visiting Bhuvnesh's website and his GitHub Profile.
Let’s spend some time getting to know Bhuvnesh better!
Can you tell us a little about yourself (hobbies, education, etc)
I’m Bhuvnesh (aka DevilsAutumn), a software developer from India. I graduated in 2024 from GL Bajaj Institute of technology and management, and most of my work has been around Python, Django and building backend systems. My journey with django started when I started contributing to Django core in 2022. I usually like working on things where there is an actual product involved, not just writing few APIs and closing the task. I like thinking about how the whole thing will work: models, permissions, background jobs, deployment, users, edge cases and all of that.
Apart from work, I like reading books around startups and entrepreneurship, watching movies, and honestly I overthink a lot about building products. Sometimes too much, but yeah that’s also how many ideas start for me. I’ve also been involved with the Django community through Django India, GSoC, Djangonaut Space and DjangoDay India, which has been a big part of my journey.
I'm curious: where does your nickname "DevilsAutumn" come from?
Haha, nice question. So, one of my friends used to write sci-fi novels. In 2022, I decided that I’d have one unique coding name for myself, and thinking that a friend who writes novels must have a great imagination, I went to him to ask for name ideas. One of the names he suggested was DevilsAutumn, and I’ve used it as my nickname ever since.
How did you start using Django?
When I was in my exploring phase, I was really curious and trying out different languages, frameworks etc. and I read a blog post from Instagram engineering team about Django being used at instagram. A framework which is a backbone of a product used by billions of users, will get anyone curious. From there I started exploring Django and I fell in love with it. The framework, the community, the documentation - all of it was amazing.
What other framework do you know and if there is anything you would like to have in Django if you had magical powers?
I have also worked with FastAPI and I find that really cool as well. But the calmness django has is unbeatable.
If I had magical powers, I’d be living on the moon. Just kidding. 😆
There are a couple of things that I would love in Django:
First is "modernising" the website which is already underway. The website feels very boring and outdated. I’d love to see a modern version.
Second, I would love to see Django have built-in support for creating REST APIs. DRF is amazing and it has done a lot for the Django ecosystem, but because it is still an external library, there are some rough edges. Sometimes serialization can feel a bit slow or heavy, the learning curve is different from regular Django, and you also depend on a separate package for something which has become a core need in modern web apps.
What projects are you working on now?
I am currently working on a project called Trevo, which helps people find activities happening around them which anyone can join and socialize with others in real life.
Apart from that, I am also working on an open source python library which is a migration safety toolkit for Django. It's called django-migrations-inspector. It helps you find problems in your migration files before they go into production.
Which Django libraries are your favorite (core or 3rd party)?
Although there is a long list, I’d probably say Django REST Framework (DRF), django-import-export, and django-debug-toolbar.
DRF is the obvious one because I’ve used it a lot for building APIs with Django. Even with some rough edges, it has been very important for the ecosystem 😛
I also really like django-import-export, mostly because in real projects you always end up needing some Excel/CSV import export kind of thing, and this just saves time.
And django-debug-toolbar because it has made debugging queries and performance issues much easier for me personally.
What are the top three things in Django that you like?
I think the first thing has to be the community. People in the Django community are genuinely nice and helpful, and the docs are also really good. A lot of times, when you are stuck, either the documentation has already explained it properly or someone has discussed the same thing before.
Second, I really like the ecosystem around Django. For most of the common things you need while building a product, there is usually already a good package available. And Django itself also gives you so much out of the box, so you don’t have to build every basic thing from scratch.
And third is Django admin. Honestly, I really like it. Some people may not think of it as a very exciting feature, but when you are building real products, having a working admin panel so quickly is super useful. It saves a lot of time.
You are one of the admin organizers of GSoC program for Django organization, thank you for helping. How is going for you? Do you need help?
It has been going well so far, thank you for asking. I’m really happy to help with organizing GSoC for Django. It’s always nice to see contributors getting involved and working on meaningful projects, I even posted about it on LinkedIn.
Everything is good for now, but I’ll reach out in case I need any help. In fact, we are also working on creating a GSoC working group to make things smoother in the future. I’m sure that is going to help us as well.
You have been part of Djangonaut Space program as a Navigator (Mentor) in the first session. How did you find the experience? What is your reflection on the program after so many times?
It was a great experience! I love to help people who are new to open-source and guide them just like I was guided by a mentor in my college days. I believe anyone can do great things in life if they are given proper mentorship. That's my motivation behind getting involved in Djangonaut Space.
Djangonaut Space program has created a strong community of developers from all background that love Django. A lot of people want to contribute to open source, but they don’t always know where to start, or they feel the project is too big for them. Djangonaut Space helped reduce that fear by giving people guidance, structure, and a friendly space to ask questions.
Even after all this time, I still feel it is one of the best community-led efforts around Django. It doesn’t just help people contribute code, it helps them feel that they belong in the community.
Do you have any advice for folks who would like to consider mentoring through GSoC or Djangonaut Space?
I just want to say that people who are experienced, who have been contributing to Django, or who maintain any 3rd party package, should consider mentoring through the GSoC or Djangonaut Space programs. It is one of the most impactful ways to contribute to open source, in my opinion, because you are not just guiding a few people; you might be guiding the next generation of mentors, Django maintainers, org admins, community leaders, or Djangonaut Space organizers.
And mentorship plays the most important role in maintaining the ecosystem that Django has built over the years.
You were previously a GSoC participant for the Django organization, and now you are an admin of the organization. That's great! How did you get to this point? Did you ever imagine you would end up here?
Haha honestly, no. I don’t think I ever imagined it would turn out this way. When I first got into GSoC with Django, I was just really happy to be there and contribute. At that time, I was mostly focused on learning, understanding the project better, and trying not to mess things up 😅
But after that I kind of stayed around. I kept contributing, stayed connected with the community, mentored in Djangonaut Space, then mentored in GSoC 2024, and slowly started getting more involved in the community and organizing side of things too.
So it was never like I had this clear plan that one day I’ll become an org admin. It just happened very naturally over time, mostly because I kept showing up and people trusted me with more responsibility.
Now being on this side feels a little unreal, but also very special. I know how it feels to be a contributor, how confusing and exciting it can be, so I really care about making the experience good for others too.
In a way, it feels like a full-circle moment, but also like there’s still a lot more to learn and do.
You are the founder of DjangoDay India and the Django Events Foundation India. Could you tell us a bit more about the event and what made you create this structure?
DjangoDay India started from a very simple thought: we should have a proper Django-focused event in India. There are a lot of people here using Django — developers, students, companies — but we didn’t really have one place where everyone could come together. It was really difficult to organize DjangoDay India in 2025 because it was the first Django event happening at that scale in India, but we still made it happen because of the amazing team.
Django Events Foundation India (DEFI) was created to give this some structure. I didn’t want DjangoDay India to become just a one-time thing, or something that only depends on me. Beyond that, I also want to support more local Django events happening around India through DEFI. The idea is to make it sustainable, community-first, and to slowly involve more people. For me, it is mainly about growing the Django ecosystem in India and giving people a space to speak, volunteer, sponsor, contribute, and maybe later lead as well.
Do you remember your first contribution to Django and in open source?
Yes. I was going through someone else’s PR that had been merged, and in it I found a small typo in a comment. I created a new PR to fix it, and that was my first contribution to Django.
As for my first open-source contribution overall, it was adding some phone number validation checks to the validatorjs library.
Is there anything else you’d like to say?
Nothing much, just thank you for having me here. If someone is thinking of contributing to Django but feels scared, please don’t worry. Most of us also started by staring at the codebase and pretending we understood what was happening. Just start small, ask questions, and slowly it starts making sense.
Thank you for doing the interview, Bhuvnesh!
Python Bytes
#479 Talking About Types
<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://tildeweb.nl/~michiel/httpxyz-one-month-in.html?featured_on=pythonbytes">httpxyz one month in</a></strong></li> <li><strong><a href="https://blog.geekuni.com/2026/04/python-concurrency.html?featured_on=pythonbytes">Learn concurrency - a deep dive into multithreading with Python</a></strong></li> <li><strong><a href="https://ichard26.github.io/blog/2026/04/whats-new-in-pip-26.1/?featured_on=pythonbytes">pip 26.1 - lockfiles and dependency cooldowns</a></strong></li> <li><strong><a href="https://peps.python.org/pep-0661/?featured_on=pythonbytes">Python 3.15 <code>sentinel</code> values from PEP 661</a></strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=3E3KPBAYkWo' style='font-weight: bold;' data-umami-event="Livestream-Past" data-umami-event-episode="479">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! 
Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a></li> </ul> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> (bsky)</li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually <strong>Monday</strong> at 11am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? 
Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</p> <p><strong>Michael #1: <a href="https://tildeweb.nl/~michiel/httpxyz-one-month-in.html?featured_on=pythonbytes">httpxyz one month in</a></strong></p> <ul> <li>First version of httpxyz contained just the fixes to get zstd working, and the fixes to get the test suite running on python 3.14, some ‘housekeeping’ changes related to the renaming</li> <li>End of March: a compatibility shim that allows you to use httpxyz even with third-party packages that import httpx themselves, as long as you import httpxyz first. <ul> <li>Importing <code>httpxyz</code> automatically registers it under the <code>httpx</code> name in <code>sys.modules</code> , see https://httpxyz.org/httpx-compatibility/</li> </ul></li> <li>Fixed a WHOLE bunch of performance related issues by forking httpcore</li> </ul> <p><strong>Brian #2: <a href="https://blog.geekuni.com/2026/04/python-concurrency.html?featured_on=pythonbytes">Learn concurrency - a deep dive into multithreading with Python</a></strong></p> <ul> <li>Nikos Vaggalis</li> <li>“Whenever you are trying to speed up code using multiple cores, always ask yourself: “Do these threads need to talk to each other right now?” If the answer is yes, it will be slow. 
The best parallel code splits a big job into completely isolated chunks, processes them separately, and merges the results at the finish line.”</li> <li>Good overview of thread concurrency with Python and how that’s been improved dramatically with free-threaded Python</li> <li>Defines lots of terms you come across, including “embarrassingly parallel multithreading”</li> <li>There’s a counter example that’s nice <ul> <li>Start with a shared resource, a counter, and multiple threads updating it</li> <li>Attempt to fix with <code>threading.Lock()</code>, which fixes it, but slows things down</li> <li>Good explanation of why</li> <li>Proper fix with <code>concurrent.futures</code> and separating the work of different threads so that they can be independent and their results can be combined when they’re all finished.</li> </ul></li> </ul> <p><strong>Michael #3: <a href="https://ichard26.github.io/blog/2026/04/whats-new-in-pip-26.1/?featured_on=pythonbytes">pip 26.1 - lockfiles and dependency cooldowns</a></strong></p> <ul> <li>Python 3.9 is no longer supported</li> <li>Experimental: installing from pylock files</li> <li>Dependency cooldowns (see <a href="https://mkennedy.codes/posts/python-supply-chain-security-made-easy/?featured_on=pythonbytes">my post about this</a>)</li> <li>Lifting several 2020 resolver limitations</li> </ul> <p><strong>Brian #4: <a href="https://peps.python.org/pep-0661/?featured_on=pythonbytes">Python 3.15 <code>sentinel</code> values from PEP 661</a></strong></p> <div class="codehilite"> <pre><span></span><code><span class="n">MISSING</span> <span class="o">=</span> <span class="n">sentinel</span><span class="p">(</span><span class="s2">"MISSING"</span><span class="p">)</span> <span class="k">def</span><span class="w"> </span><span class="nf">next_value</span><span class="p">(</span><span class="n">default</span><span class="p">:</span> <span class="nb">int</span> <span class="o">|</span> <span class="n">MISSING</span> <span class="o">=</span> 
<span class="n">MISSING</span><span class="p">):</span> <span class="o">...</span> <span class="k">if</span> <span class="n">default</span> <span class="ow">is</span> <span class="n">MISSING</span><span class="p">:</span> <span class="o">...</span> </code></pre> </div> <ul> <li>Take a name str as a constructor parameter</li> <li>Intended to be compared with <code>is</code> operator, similar to <code>None</code></li> <li>Sentinel objects can be used as a type, also similar to <code>None</code> <ul> <li>and can be combined with other types with <code>|</code>.</li> </ul></li> <li>Unlike <code>None</code>, sentinel values are truthy. (Ellipses <code>...</code> are also truthy) <ul> <li>This seems like a strange choice, but I guess it must have made sense to someone.</li> <li>It does force you to use <code>is</code> instead of depending on False-ness, so I guess it’ll make code using sentinels more readable.</li> </ul></li> <li>Interesting that the PEP was started in 2021, and we’re finally getting it this year.</li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li><a href="https://lucumr.pocoo.org/2026/4/28/before-github/?featured_on=pythonbytes">Before GitHub</a> - Armin Ronacher</li> <li><a href="https://tenacityaudio.org?featured_on=pythonbytes">tenacity</a> - cross-platform multi-track audio editor/recorder <ul> <li>learned about it from Armin’s article</li> </ul></li> </ul> <p><strong>Joke:</strong></p> <ul> <li>Joke option <a href="https://xkcd.com/3233/?featured_on=pythonbytes">Make it myself</a> <ul> <li>Seems similar to what people think about software now</li> </ul></li> </ul> <p>Links</p> <ul> <li><a href="https://tildeweb.nl/~michiel/httpxyz-one-month-in.html?featured_on=pythonbytes">httpxyz one month in</a></li> <li><a href="https://httpxyz.org/httpx-compatibility/?featured_on=pythonbytes">httpxyz.org/httpx-compatibility</a></li> <li><a href="https://blog.geekuni.com/2026/04/python-concurrency.html?featured_on=pythonbytes">Learn concurrency - a 
deep dive into multithreading with Python</a></li> <li><a href="https://ichard26.github.io/blog/2026/04/whats-new-in-pip-26.1/?featured_on=pythonbytes">pip 26.1 - lockfiles and dependency cooldowns</a></li> <li><a href="https://mkennedy.codes/posts/python-supply-chain-security-made-easy/?featured_on=pythonbytes">my post about this</a></li> <li><a href="https://peps.python.org/pep-0661/?featured_on=pythonbytes">Python 3.15 <code>sentinel</code> values from PEP 661</a></li> <li><a href="https://lucumr.pocoo.org/2026/4/28/before-github/?featured_on=pythonbytes">Before GitHub</a></li> <li><a href="https://tenacityaudio.org?featured_on=pythonbytes">tenacity</a></li> <li><a href="https://xkcd.com/3233/?featured_on=pythonbytes">Make it myself</a></li> </ul>
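The PEP 661 behavior described in the show notes (constructed from a name, compared with `is`, truthy, usable in type annotations) won’t ship until Python 3.15, but a rough approximation can be sketched on current Python. This is a hedged illustration, not the standard library API; the class and function names here are illustrative only.

```python
# Hedged sketch: approximates PEP 661 sentinel behavior on current Python.
# The real sentinel() arrives with Python 3.15; names here are illustrative.
class _Sentinel:
    def __init__(self, name: str):
        self._name = name

    def __repr__(self) -> str:
        return self._name
    # No __bool__ override: like PEP 661 sentinels, instances are truthy.


MISSING = _Sentinel("MISSING")


def next_value(default=MISSING):
    # Compare with `is`, just like None, rather than relying on falsiness.
    if default is MISSING:
        return "no default supplied"
    return default
```

Note that `next_value(0)` returns `0` here: because the check uses `is MISSING` rather than truthiness, falsy-but-real arguments survive.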
Python Software Foundation
Strategic Planning at the PSF
The Python Software Foundation (PSF) is excited to share that the PSF Board has been developing a strategic plan to guide the foundation's direction over the next five years. We are sharing the high-level goals today to collect feedback and commentary from the Python community. A full draft with detailed objectives will be published in early June for public feedback, and the board hopes to adopt the plan in July 2026, to be reviewed annually going forward.
Why now
The Python ecosystem is growing and changing fast. PyPI hosts over 800,000 projects and serves tens of billions of downloads per month. The Developers-in-Residence program has grown from a single role to a team spanning CPython development, security, and PyPI safety, proving that targeted investment in core infrastructure works. Last year's fundraiser showed that the community and sponsors are willing to support the PSF's mission when provided the opportunity.
The foundation also faces challenges. As we shared in November, the PSF's assets and yearly revenue have declined and costs have increased, while the demand for the foundation's work grows faster than its capacity. Last year we had to pause the Grants Program after reaching the budget cap earlier than expected. These pressures are part of why the board committed to a strategic plan: the foundation needs a clear framework for making hard choices about where to focus.
The PSF Board has discussed strategic planning over the years, including at the 2024 board retreat. This year, we committed to turning that discussion into a concrete plan. The process included numerous interviews with PSF Staff, community members, and participants across the Python ecosystem. After interviews, the PSF Board went through a prioritization exercise, followed by a series of dedicated and structured board discussions.
The direction
The plan has two parts:
I. Organizational Goals: How the PSF operates across all its activities, and
II. Program Goals: Where the PSF directs its work and resources.
We invite your feedback on all of the goals in both parts of the plan (See the “How to participate” section below).
I. Organizational Goals: How we operate
- Financial Sustainability: Diversify the PSF's revenue so the foundation is not dependent on any single source.
- Building a Resilient Foundation: Strengthen governance, financial oversight, and knowledge management so the organization can survive transitions and operate transparently.
- Diversity and Inclusion: D&I is not treated as a standalone effort. D&I is a lens for all PSF decisions and activities.
- Transparency and Community Trust: Increase visibility into how the PSF makes decisions and uses its resources, as the community's trust in its governance is the foundation of the PSF's credibility.
- Community Empowerment and Self-Sufficiency: Support Python communities in building their own capacity through collaboration and shared resources.
- Strong Partnerships and Collaboration: Partner with organizations that distribute, extend, and depend on Python, as well as with community groups across the open source ecosystem.
II. Program Goals: Where we focus our work
- Secure Python's Software Supply Chain and Distribution Infrastructure. PyPI is critical global infrastructure, and supply chain security goes beyond the index. Python reaches users through many channels beyond python.org and PyPI, which makes collaboration with distributors essential.
- Responsibly Grow and Advance Critical Python Infrastructure. The PSF stewards PyPI, CPython, python.org, pip, and more. Growth needs to match staffing capacity and sustainable funding.
- Foster a Thriving, Connected Global Python Community. Support the global Python community through events, grants, and working groups, while empowering regional communities to be self-sufficient.
- Develop the Next Generation of Python Developers. Make Python accessible to newcomers and remove barriers for underrepresented groups.
How the plan works
We developed this strategic plan to cover a five-year period. The board will review progress annually with community input, review whether priorities need to shift, and publish the results so the community can see how we are tracking. The intention is for the strategic plan to be flexible and adaptive, so that it can effectively guide the PSF’s priorities as the ecosystem continues to grow and evolve, rather than a static document that begins to collect dust on the shelf.
We developed the plan to set direction–not implementation details. How to carry it out is the job of PSF Staff, and the specifics will evolve as we learn what works. Once adopted, the plan will directly inform how the PSF allocates its budget and staff time and how it seeks funding.
How to participate
If any of these goals matter to you, or if you think we are missing something important, we want to hear from you.
We welcome you to email strategy@python.org to share your thoughts. This is the best way to reach us asynchronously.
You can also join the conversation with us at:
- PSF Board Office Hours on May 12 and June 9, on the PSF Discord. We hope to spend both of these sessions focused on discussing the strategic plan with people from the community.
- PyCon US 2026 at the Members Lunch and a dedicated Open Space session. We know only a small fraction of our community will be present at PyCon US this year, so we warmly welcome you to engage with us on Discuss and via the email address provided above.
- A Python Discuss thread is available for open community discussion. We welcome you to join in with feedback and comments.
A full draft with detailed objectives under each Program Goal will be published in early June for community feedback via this blog, Python Discuss under the PSF category, and social media. The feedback window for this year will close before the July 8th PSF Board meeting.
This plan will shape what the PSF does and how it spends its resources for the next five years. If you use Python, contribute to it, or participate in communities around it, you have a stake in shaping its future.
Jannis Leidel, PSF Board Chair, on behalf of the PSF Board of Directors
May 08, 2026
Real Python
The Real Python Podcast – Episode #294: Declarative Charts in Python & Discerning Iterators vs Iterables
What if you could build charts in Python by describing what your data means, instead of scripting every visual detail? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Memory Management in Python
In this quiz, you’ll test your understanding of Memory Management in Python.
By working through this quiz, you’ll revisit how Python handles memory allocation and freeing, the role of the Global Interpreter Lock, and how CPython organizes memory using arenas, pools, and blocks. Give it a shot!
Seth Michael Larson
Using Epilogue Retrace app with iPhone 13 Pro and Ubuntu
When Epilogue announced the Retrace app for iOS and Android I was over the moon excited. In theory this meant I could archive ROMs from the GB Operator directly to my iPhone, where I play the games with the Delta emulator. That meant I wouldn't need to ferry ROMs from the GB Operator to my laptop to my phone. Unfortunately, I ran into two hurdles with my plan. If you were able to get Retrace to work with a pre-USB-C iPhone, let me know.
Upgrading the GB Operator firmware
First I saw that the Retrace app required a new firmware version for the GB Operator (v10.0.10), so I set out to update the GB Operator firmware.
The documentation says to use Playback, so I went to update Playback. Previously Playback was distributed as an AppImage, but newer versions use Flatpak. So... I had to figure out how to install a Flatpak on Ubuntu.
I did that, got the new Playback app running on Ubuntu, and... the firmware update notification never appeared in the app. I contacted support and learned that the Linux versions of Playback apparently don't support updating the firmware... So I needed a Windows computer. My wife's laptop runs Windows, so I was able to update the firmware using her computer instead of my Ubuntu laptop.
Trying Retrace with an iPhone 13 Pro
GB Operator uses USB-C for power delivery and data transfer and comes with a high-quality USB-C cord. This is perfect for my laptop which only has USB-C ports.
Unfortunately, I would be using Retrace on an iPhone 13 Pro. The iPhone 13 Pro came before Apple was legally required to use USB-C on their phones in Europe, so the phone has a lightning port. I purchased a Lightning to USB-C adapter cord from the Apple Store.
But... that doesn't work with the GB Operator: it doesn't deliver power to the device. I was able to try with my wife's iPhone 15 Pro (which has USB-C), and power delivery worked like normal; the GB Operator turned on as usual. That's unfortunate.
In summary: if you want to use Epilogue Retrace you need a phone that supports USB-C and upgrading the GB Operator firmware requires either macOS or Windows... I guess I'll be using Playback on Ubuntu for the next five years now that I've just replaced my iPhone 13 Pro battery 😢
Thanks for keeping RSS alive! ♥
Armin Ronacher
Pushing Local Models With Focus And Polish
I really, really want local models to work.
I want them to work in the very practical sense that I can open my coding agent, pick a local model, and get something that feels competitive enough that I do not immediately switch back to a hosted API after five minutes. There are a lot of reasons why I want this, but the biggest quite frankly is that we’re so early with this stuff, and the thought of locking all the experimentation away from the average developer really upsets me.
Frustratingly, right now that is still much harder than it should be, but for reasons that have little to do with the complexity of the task or the quality of the models.
We have an enormous amount of activity around local inference, which is great. We have good projects, fast kernels, and people are doing great quantization work. A lot of very smart people are making all of this better, and yet the experience for someone trying to make this work with a coding agent is worse than it has any right to be.
Putting an API key into Pi and using a hosted model is a very boring operation. You select the provider, paste the key and then you are done thinking about how to get tokens. Doing the same thing locally, even when you have a high-end Mac with a lot of memory, is a completely different experience. You choose an inference engine, then a model, then a quantization, then a template, then a context size, then you’ve got to throw a bunch of JSON configs into different parts of the stack and then you discover that one of those choices quietly made the model worse or that something just does not work at all.
That is the gap I am interested in.
Runnable Is Not Finished
A lot of local model work optimizes for making models runnable. That is necessary, but it is not the same thing as making them feel finished. I give you a very basic example here to illustrate this gap: tool parameter streaming.
For whatever reason, most of the stuff you run locally does not support tool parameter streaming. I cannot quite explain it, but the consequences of that are actually surprisingly significant. If you are not familiar with how these APIs work, the simplest way to think about them is that they are emitting tokens as they become available. For text that is trivial, but for tool calls that is often not done, despite the completions API supporting this. As a result you only see what edits are being done on a file once the model has finished streaming the entire tool call.
This is bad for a lot of reasons:

- A dead connection is a weird connection: local models are slow, so when you don’t get any tokens for five minutes, you can’t tell whether the connection died or simply nothing has arrived yet. This forces you to raise inactivity timeouts to the point where they become pointless.
- You won’t see what is about to happen: if you are somewhat hands-on, not seeing what bash invocation the system is slowly concocting in the background means potentially wasted tokens, and it also means you can’t interrupt until way too late.
- It’s just not SOTA. We can do better, and we should aim for the best possible experience. Tool parameter streaming is as important as token streaming everywhere else.
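A minimal sketch of the difference, using hypothetical event shapes (no particular API is assumed): a client that surfaces tool-call argument deltas as they arrive gives the user something to watch and interrupt, instead of silence until the call completes.

```python
# Hedged sketch with hypothetical event shapes: surface tool-call argument
# deltas incrementally instead of waiting for the finished call.
def stream_tool_args(events):
    """Yield the growing argument string as each delta arrives."""
    buf = ""
    for event in events:
        if event.get("type") == "tool_call_delta":
            buf += event["delta"]
            yield buf  # a partial view the UI can render immediately


# Without delta streaming, the user sees nothing until the final event.
events = [
    {"type": "tool_call_delta", "delta": '{"command": "rm'},
    {"type": "tool_call_delta", "delta": ' -rf build"}'},
]
partials = list(stream_tool_args(events))
```

With streaming, the partially built `rm -rf` invocation is visible after the first delta, early enough to interrupt.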
Having a model spit out tokens doesn’t take long, but making the experience great end to end does take a lot more energy.
Fragmentation
The local stack is fragmented across many engines and layers. There is llama.cpp, Ollama, LM Studio, MLX, Transformers, vLLM, and many other pieces depending on hardware and taste. All of these are amazing projects! The problem is not that they exist or that there are that many of them (even though, quite frankly, I’m getting big old Python packaging vibes), the problem is that for a given model, the actual behavior you get depends on a long chain of small decisions that most users just don’t have the energy for.
Did the chat template render exactly right? Are the reasoning tokens handled in the intended way? Is the tool-call format translated correctly? Is the context window real? Are the KV caches actually working for a coding agent? Did I pick the right quantized model from Hugging Face? Are you accidentally leaving a lot of performance on the table because the model is just mismatched for your hardware? Does streaming usage work across all channels? Does the model need its previous reasoning content preserved in assistant messages? Is the coding agent set up correctly for it?
You also need to install many different things in addition to just your coding agent.
All of these things matter. They matter a lot.
The result is that people try a local model and get an experience that is neither a fair evaluation of the model nor a polished product. This leads both to people dismissing local models and to energy being spread across way too many separate efforts instead of one effort becoming great end to end.
This is a terrible way to build confidence.
Too Little Critical Mass
In line with our general “slow the fuck down” mantra, I want to reiterate once more how fast this industry is moving.
Every week there is a new model and a new vibeslopped thing. The attention immediately moves to making the next thing run instead of making one thing run really, really well in one harness. I get the excitement and dopamine hit, but it also means that too little critical mass accumulates behind any one model, hardware, inference engine, harness combo to find out how good it can really become when the entire stack is built around it.
Hosted model providers do not ship a bag of weights and ask you to figure out the rest, and we need to approach that line of thinking for local models too. I want someone to pick one model and pair it up with one serving path, directly within a coding agent. Initially just for one hardware configuration, then for more. Pick a winner, hard. If a tool call breaks, that is a product bug, and it gets fixed no matter where in the stack it failed. If the model’s reasoning stream is malformed, that is a product bug. If latency is much worse than it should be, that is a product bug. We need to start applying that mentality to local models too.
And not for every model! That is the point. Let’s pick one winner and polish the hell out of it. Learn what it takes to make that one configuration good, then take those learnings to the next config.
The DS4 Bet
This is why I am excited about ds4.c. It’s Salvatore Sanfilippo’s deliberately narrow inference engine for DeepSeek V4 Flash on Macs with 128GB+ of RAM only. It is not a generic GGUF runner and it is not trying to be a framework. It is a model-specific native engine with a Metal path, model-specific loading, prompt rendering, KV handling, server API glue, and tests.
DeepSeek V4 Flash is a good candidate for this kind of experiment because it has a combination of properties that are unusual for local use. It is large enough to feel meaningfully different from many smaller dense models, but sparse enough that the active parameter count makes it plausible to run. It has a very large context window. Since ds4.c targets Macs and Metal only, it can move KV caches into SSDs which greatly helps the kind of workloads we expect from coding agents.
To run ds4.c you don’t need MLX, Ollama or anything else. It’s the whole package.
Embedding It In Pi
That led me to build pi-ds4, a Pi extension that embeds the whole thing directly into Pi itself, taking what ds4 is and dogfooding the hell out of it with a coding agent and zero configuration. The goal is to answer the question: how good can the local model experience become if Pi treats this as a first-class provider rather than as a pile of manual configuration?
The extension registers ds4/deepseek-v4-flash, compiles and starts ds4-server on demand, downloads and builds the runtime if needed, chooses the quantization based on the machine, keeps a lease while Pi is using it, exposes logs, and shuts the server down again through a watchdog when no clients are left. It doesn’t even give you knobs right now, because I want to figure out how to set the knobs automatically.
This is not about hiding the fact that local inference is complicated. It is about putting the complexity in one place where it can be improved, because there is a lot that we need to improve along the stack to make it work better.
I think we can do better with caching and there is probably some performance that can be gained if we all put our heads together.
Focusing and Learning
The experiment I want to run is not “can a local model run?” because we already know that it can. I want to know if, for people with beefed-out Macs for a start, we can get as close as possible to the ergonomics of a hosted provider with decent tool-calling performance: how to get caches to work well, how to improve the way we expose tools in harnesses for these models, and then scale it gradually to more hardware configs and later models.
I also want everybody to have access to this. Engineers need hammers, and a hammer that’s locked behind a subscription in a data center in another country does not qualify. I know that the price tag on a Mac that can run this is itself astronomical, but I think prices are likely to come down. Even worse, due to the RAM shortage, Apple right now does not even sell the Mac Studio with that much RAM. So yes, ds4.c will start out with a select group of people.
But despite all of that, what matters is that a critical mass of people starts to focus their efforts on one thing, tinkers with it, and improves it: not locked away, but out in the open, and most importantly not limited by what the hyperscalers make available.
But if you have the right hardware and you care about local agents, I would love for you to try it within pi:
pi install https://github.com/mitsuhiko/pi-ds4
My hope is that this becomes a useful forcing function to really polish one coding agent experience. But really, the focal point should be ds4.c itself.
May 07, 2026
Real Python
Quiz: Qt Designer and Python: Build Your GUI Applications Faster
In this quiz, you’ll test your knowledge of Qt Designer and Python: Build Your GUI Applications Faster.
By working through this quiz, you’ll revisit how Qt Designer turns visual designs into .ui files, how layout managers control widget geometry, how signals and slots connect user actions to your code, and how to load .ui files into a PyQt application with pyuic5 or uic.loadUi().
PyCharm
Python Unplugged on PyTV: Key Takeaways From Our Community Conference
AI is changing how Python developers learn, build, and contribute to open source. At the same time, long-standing questions around community, sustainability, data workflows, and web development are becoming even more important.
Python Unplugged on PyTV, a free online conference hosted by JetBrains PyCharm, brought these conversations together in over seven hours of talks and discussions with developers, maintainers, educators, and tool builders from across the Python ecosystem.
Don’t have time to watch the full event? This blog post gives you a quick overview of what’s happening in Python today, based on talks from 13 Python experts – from AI-assisted development and open-source sustainability to modern data processing, Django, and community building.
Watch the recap video
Want to see the highlights from Python Unplugged on PyTV? Watch the full recap video below.
JetBrains’ Dr. Jodie Burchell, Data Scientist and Python Advocacy Team Lead; Cheuk Ting Ho, Data Scientist and Developer Advocate; and Will Vincent, Python Developer Advocate, discuss the key talking points from the day.
Need a quick overview? Here are the highlights
If you’d rather get the key takeaways in a written format, we’ve broken down the biggest insights from the day below. From the evolving role of AI to the importance of the Python community, these are the moments that stood out most from Python Unplugged on PyTV.
Highlight 1: Python goes beyond scripts and prototypes
Many developers first come to Python through a specific use case: automating tasks, building prototypes, learning data science, or experimenting with AI and machine learning. That accessibility is one of Python’s strengths, but it’s only the entry point.
In her session, AI Practitioners Are Only Getting Half the Goodness of Python, Deb Nicholson, Executive Director at the PSF, discussed how many AI and ML practitioners use Python mainly as a scripting or prototyping language. But Python is also used to build and maintain real-world software, supported by frameworks, data tools, testing workflows, packaging standards, and an active open-source community.
This broader context matters for learning, too. In his How to Learn Python session, Mark Smith, Head of Python Ecosystem at JetBrains, focused on what comes after the fundamentals: building real projects, reading other people’s code, and developing the habits needed to move past “tutorial hell.”
AI can help, but it shouldn’t replace hands-on practice. As Cheuk noted in the recap video, one useful tip from Mark’s talk was to turn off AI features while learning, so beginners still build the judgment needed to understand and improve the code they work with.
Highlight 2: The continuing role of community in Python
Python’s success has always been rooted in its community, and that remains as true as ever. Georgi Ker, Director and Fellow at the PSF; Una Galyeva, Head of AI at Geobear Global; and Jessica Greene, Senior ML Engineer at Ecosia, showcased this in their How PyLadies Is Shaping the Future of Python discussion.
PyLadies is an international mentorship group focused on helping more women become active participants and leaders in the Python community. The success of initiatives like PyLadies highlights how inclusive spaces can broaden participation and shape the future of the language.
As Will noted in our recap video, “Being part of the community is not just the code. It’s the conferences, it’s the people, it’s the live events – that’s what makes Python special.”
Python depends on a culture of shared responsibility, and contributors play a vital role. As AI brings more people into the ecosystem, preserving these values becomes even more important. Travis Oliphant, creator of NumPy, touched on this in his insightful session, Community is More Than Code: People Are What Make Python Thrive, and Why That Will Continue in an AI-Enabled Era.
There’s also a strong link between community and innovation, as Carol Willing, Core Developer at JupyterLab, explained in her session, Conversation, Computation, and Community: Key Principles for Solving Scientific Problems With Jupyter Notebooks and AI Tools. Tools like Jupyter have thrived in part because they enable conversation, collaboration, and knowledge sharing among people.
Highlight 3: AI poses both a threat and an opportunity for Python open source
AI is fundamentally changing how developers interact with open source.
On the positive side, AI coding tools lower the barrier to entry and allow more people to contribute. However, this increased accessibility comes with trade-offs. Maintainers are now dealing with a higher volume of contributions, many of which require significant review or refinement. Deb Nicholson, Executive Director at the PSF, discussed this trade-off in more detail in her session, AI Practitioners Are Only Getting Half the Goodness of Python.
This shift places additional pressure on those responsible for maintaining open-source projects. While AI can accelerate development, it also risks introducing poorly structured or low-quality code at scale.
Paul Everitt, Developer Advocate at JetBrains; Georgi Ker, Director and Fellow at the PSF; and Carol Willing, Core Developer at JupyterLab, pondered this in their Open Source in the Age of Coding Agents discussion. Ultimately, AI can’t replace the human systems that sustain open source. Trust, collaboration, and shared ownership remain essential, and arguably become even more important as contribution volumes increase. The real challenge lies in ensuring communities remain healthy and resilient as they scale.
Highlight 4: AI has also revolutionized how Python practitioners work
Beyond its impact on open source, AI is transforming day-to-day development workflows.
As Marlene Mhangami, Senior Developer Advocate at Microsoft Agentic, explained in her A Practical Guide to Agentic Coding session, agentic coding is emerging as a new paradigm in which developers delegate tasks to AI systems capable of planning, executing, and refining code. This means the developer’s role is moving toward orchestration and validation, requiring new skills in guiding and evaluating AI outputs.
At the same time, development is becoming more conversational and exploratory. In environments like Jupyter, AI tools help users iterate faster, test ideas more easily, and move more fluidly between thinking and coding.
AI is also having a tangible impact on frameworks like Django, as discussed by Sheena O’Connell, Board Member at the PSF, in her talk, Powering Up Django Development With Claude Code. AI tools can speed up development in Django by handling repetitive tasks such as boilerplate generation and debugging. However, this comes with a caveat – developers must remain critical and treat AI as a collaborator, not a source of truth.
For beginners, AI can be a powerful learning aid, but over-reliance can limit deeper understanding. Building projects, reading code, and actively solving problems remain essential for developing real expertise.
Highlight 5: The importance of open-source AI
The open-source AI ecosystem is expanding rapidly, bringing with it a growing landscape of models, datasets, and tools.
This openness drives collaboration, transparency, and innovation, making it easier for developers to experiment and build on existing work. At the same time, it introduces challenges around fragmentation and long-term sustainability.
As Merve Noyan, ML Engineer at Hugging Face, explained in her Open-Source AI Ecosystem session, platforms like Hugging Face play a key role in organizing this ecosystem and making it more accessible, while Python continues to connect tools, communities, and technologies.
Highlight 6: Context is key for effective AI agents
As AI systems become more advanced, the way they interact with their input data is becoming increasingly important. Tuana Çelik, Developer Relations Engineer at LlamaIndex, covered this in detail in her insightful Orchestrating Document-Centric Agents With LlamaIndex talk.
LlamaIndex enables developers to build document-centric AI agents that retrieve, index, and reason over large collections of information. By structuring how documents are ingested and queried, it provides the LLM with much more context for the text it is processing, helping produce more accurate, context-aware responses.
This is particularly valuable in knowledge bases and enterprise assistants, where understanding relationships between pieces of information is as important as accessing the data itself.
Highlight 7: How Polars is refining high-performance data processing
Polars is pushing Python data processing toward a more scalable, production-ready future, as Polars creator Ritchie Vink explained in his Towards Query Profiling in Polars session.
Its high-performance, lazy execution model allows queries to be optimized automatically behind the scenes. However, this level of abstraction can make it harder for developers to fully understand performance.
To address this, there’s a growing need for better tooling, particularly around query profiling. By exposing execution plans, memory usage, and bottlenecks, developers can make informed decisions and build more efficient data workflows.
With features like streaming execution, Polars is helping bridge the gap between local data processing and large-scale systems.
As Jodie highlighted in the recap discussion, this shift is bringing more advanced data concepts into everyday Python workflows. She commented, “It’s really interesting to see more big data ideas coming to local Python data processing.”
Highlight 8: The power of typing in modern Python
Typing in Python continues to evolve, with a growing focus on flexibility rather than rigid enforcement. Open-source Django projects creator Carlton Gibson shed more light on this during his talk, Static Islands, Dynamic Sea: Some Thoughts on Incremental Typing.
The talk highlighted how developers are increasingly adopting an incremental approach. By creating “static islands” within a dynamic codebase, they can improve reliability, maintainability, and tooling without sacrificing Python’s core strengths.
In our recap video, Will agreed with this sentiment, adding, “It doesn’t have to be all-or-nothing. We don’t have to turn Python into something that it’s not.”
This approach is particularly useful in large frameworks like Django, where typing can help define clearer boundaries while still preserving developer ergonomics.
Highlight 9: The Django renaissance: Debunking aging myths
Django remains a modern, actively developed framework, as Django Fellow Sarah Boyce revealed in her session, Django Has a Marketing Problem: Debunking the Myths That Won’t Die.
Many of the criticisms that it’s outdated or unscalable don’t reflect the current reality. In practice, Django continues to evolve and power a wide range of applications.
The challenge is less about Django’s capabilities and more about perception, as the Django community was called to champion its strengths, ongoing evolution, and real-world impact.
Shifting this narrative will be key to ensuring its continued relevance and adoption in the years ahead.
What’s next for Python Unplugged on PyTV?
Python Unplugged on PyTV was our first step in reimagining what a fully online community conference can look like, and the response was incredible.
Looking at the numbers, more than 5,500 people joined us during the livestream. Since then, we’ve had a further 110,000 watch the event recording, showing just how global and engaged the Python community really is.
We’d love to bring Python Unplugged on PyTV back next year. What would you like to see more of? Who should we invite as speakers? Are there topics we didn’t cover that you’d love to explore?
Drop your suggestions in the comments and help shape the future of Python Unplugged on PyTV.
Seth Michael Larson
Library dependency version specifiers aren't for fixing vulnerabilities
Let's say you are the maintainer of a Python library that depends on another Python library like “urllib3”. Because you want to make sure users receive a compatible version of urllib3, you add a version specifier that restricts the version to the current “major” version so users know that older versions aren't compatible. This is what your pyproject.toml might look like:
[project]
name = "example-library"
dependencies = [
    "urllib3>=2",
]
Now let's say that urllib3 publishes a vulnerability that affects “version 2.6.2 and earlier” and is fixed in version 2.6.3. Later you receive a pull request from a concerned user that changes the minimum version from 2 to 2.6.3 to “disallow installing a vulnerable version of urllib3”:
[project]
name = "example-library"
dependencies = [
-    "urllib3>=2",
+    "urllib3>=2.6.3",
]
You probably should not accept this pull request. Version ranges for libraries are meant to express compatibility, not to exclude security vulnerabilities. This is a key difference between libraries and applications: libraries should allow the widest version ranges that remain compatible, while applications should “allow” only a single version of each dependency by using a lock file (requirements.txt with --hash, pylock.toml, uv.lock).
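As a sketch of the application-side alternative (file contents and versions are illustrative, not from the post), the security fix lands in the application's own lock file rather than in the library's metadata:

```
# requirements.txt — the application's lock file, generated by a
# tool such as pip-compile --generate-hashes or uv lock
example-library==1.0.0
urllib3==2.6.3    # pinned by the application to the fixed release
```

Regenerating the lock file picks up urllib3 2.6.3 immediately, with no release required from example-library or any of the other libraries that depend on urllib3.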
It's not the responsibility of library maintainers to ensure their users are running secure versions of dependencies that the library doesn't directly manage (for example, by bundling them). That responsibility belongs to the users.
Why not?
If every library applied this strategy the result would be mass-toil both for users and maintainers.
urllib3 is directly depended on by over 10,000 other libraries on the Python Package Index. So a single vulnerability under this strategy would amplify to a new release for 10,000 projects. Projects like numpy (80,000), requests (72,000), and pandas (55,000) would have even more disastrous amplifications. There are a decent number of vulnerabilities published every day for open source libraries, so this would mean mass-releases, every day, forever: not good.
The much more efficient strategy is to allow users to manage their own application dependencies to ensure they are not affected by vulnerabilities.
Why... maybe?
You can imagine scenarios where a security vulnerability might affect compatibility, such as when a feature is removed or changed in a backwards-incompatible way. In that case, a version range update may be warranted.
Another scenario is where your library's version specifiers disallow upgrading to a version with a security fix, such as when a security fix is only available for urllib3 2.x but your library is only compatible with urllib3 1.x. In this case, you as a library maintainer may want to consider such a request so that your users can upgrade to secure versions more easily. However, even in this scenario, it is not a vulnerability in your library if your version specifiers don't allow an easy upgrade from a vulnerable version of a dependency to a fixed one.
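In that second scenario, the helpful change is the opposite of the pull request above: widening the range rather than raising the floor, once compatibility with the new major version has actually been verified (versions illustrative):

```
dependencies = [
-    "urllib3>=1.26,<2",
+    "urllib3>=1.26,<3",
]
```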
Thanks for keeping RSS alive! ♥
May 06, 2026
Talk Python to Me
#547: Parallel Python at Anyscale with Ray
When OpenAI trained GPT-3, they didn't roll their own orchestration layer. They used Ray, an open source Python framework born out of the same Berkeley research lab lineage that gave us Apache Spark. And here's the twist: Ray was originally built for reinforcement learning research, then quietly faded as RL hit a wall. Until ChatGPT showed up. Suddenly reinforcement learning was back, as the post-training step that turns a raw language model into something genuinely useful. <br/> <br/> Edward Oakes and Richard Liaw, two founding engineers behind Ray and Anyscale, join me on Talk Python to tell that story. We'll trace Ray from its RISE Lab origins at UC Berkeley to powering some of the largest training runs in the world. We'll talk about what Ray actually is, a distributed execution engine for AI workloads, and how a few lines of Python become work running across hundreds of GPUs. We'll cover Ray Data for multimodal pipelines, the dashboard, the VS Code remote debugger, KubRay for Kubernetes, and where Ray fits alongside Dask, multiprocessing, and asyncio. 
<br/> <br/> If you've ever stared at a single-machine Python script and thought, "there has to be a better way to scale this", this one's for you<br/> <br/> <strong>Episode sponsors</strong><br/> <br/> <a href='https://talkpython.fm/sentry'>Sentry Error Monitoring, Code talkpython26</a><br> <a href='https://talkpython.fm/agentfield-page'>AgentField AI</a><br> <a href='https://talkpython.fm/training'>Talk Python Courses</a><br/> <br/> <h2 class="links-heading mb-4">Links from the show</h2> <div><strong>Guests</strong><br/> <strong>Richard Liaw</strong>: <a href="https://github.com/richardliaw?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>Edward Oakes</strong>: <a href="https://github.com/edoakes?featured_on=talkpython" target="_blank" >github.com</a><br/> <br/> <strong>Ray</strong>: <a href="https://www.ray.io?featured_on=talkpython" target="_blank" >www.ray.io</a><br/> <strong>Example code (we used for walk-through)</strong>: <a href="https://docs.ray.io/en/latest/ray-overview/examples/e2e-audio/README.html?featured_on=talkpython" target="_blank" >docs.ray.io</a><br/> <strong>Getting Started with Ray</strong>: <a href="https://docs.ray.io/en/latest/ray-observability/getting-started.html?featured_on=talkpython" target="_blank" >docs.ray.io</a><br/> <strong>Ray Libraries</strong>: <a href="https://docs.ray.io/en/latest/ray-overview/ray-libraries.html?featured_on=talkpython" target="_blank" >docs.ray.io</a><br/> <strong>kuberay</strong>: <a href="https://github.com/ray-project/kuberay?featured_on=talkpython" target="_blank" >github.com</a><br/> <br/> <strong>Watch this episode on YouTube</strong>: <a href="https://www.youtube.com/watch?v=-pVs4-MHaTo" target="_blank" >youtube.com</a><br/> <strong>Episode #547 deep-dive</strong>: <a href="https://talkpython.fm/episodes/show/547/parallel-python-at-anyscale-with-ray#takeaways-anchor" target="_blank" >talkpython.fm/547</a><br/> <strong>Episode transcripts</strong>: <a 
href="https://talkpython.fm/episodes/transcript/547/parallel-python-at-anyscale-with-ray" target="_blank" >talkpython.fm</a><br/> <br/> <strong>Theme Song: Developer Rap</strong><br/> <strong>🥁 Served in a Flask 🎸</strong>: <a href="https://talkpython.fm/flasksong" target="_blank" >talkpython.fm/flasksong</a><br/> <br/> <strong>---== Don't be a stranger ==---</strong><br/> <strong>YouTube</strong>: <a href="https://talkpython.fm/youtube" target="_blank" ><i class="fa-brands fa-youtube"></i> youtube.com/@talkpython</a><br/> <br/> <strong>Bluesky</strong>: <a href="https://bsky.app/profile/talkpython.fm" target="_blank" >@talkpython.fm</a><br/> <strong>Mastodon</strong>: <a href="https://fosstodon.org/web/@talkpython" target="_blank" ><i class="fa-brands fa-mastodon"></i> @talkpython@fosstodon.org</a><br/> <strong>X.com</strong>: <a href="https://x.com/talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @talkpython</a><br/> <br/> <strong>Michael on Bluesky</strong>: <a href="https://bsky.app/profile/mkennedy.codes?featured_on=talkpython" target="_blank" >@mkennedy.codes</a><br/> <strong>Michael on Mastodon</strong>: <a href="https://fosstodon.org/web/@mkennedy" target="_blank" ><i class="fa-brands fa-mastodon"></i> @mkennedy@fosstodon.org</a><br/> <strong>Michael on X.com</strong>: <a href="https://x.com/mkennedy?featured_on=talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @mkennedy</a><br/></div>
Talk Python Blog
Announcing German Subtitles on Courses
If you’re a native German speaker who is taking our courses over at Talk Python Training, we have excellent news for you. We now offer German transcripts and subtitles for our course videos.

They’re incredibly easy to enable. Just click the CC (closed captions) button in the player and switch the language to Deutsch.
That’s it! Now you’re enjoying closed captions in your language.
What if I don’t speak German?
If you don’t speak German, well, maybe these subtitles are not for you. But there are still some very cool updates to the website as well.
Real Python
ChatterBot: Build a Chatbot With Python
The Python ChatterBot library lets you build a self-learning command-line chatbot with just a few lines of code. You’ll set up a basic bot, clean real WhatsApp conversation data with regular expressions, and train your chatbot on that custom corpus. You’ll also plug in a local LLM through Ollama to augment its replies with contextual knowledge.
By the end of this tutorial, you’ll understand that:
- ChatterBot is a Python library that combines text processing, machine learning, and a local database to generate chatbot replies.
- A minimal ChatterBot script instantiates ChatBot, collects user input in a loop, and returns matching responses through .get_response().
- Training with ListTrainer and default settings stores conversation pairs in a SQLite database that ChatterBot queries with Levenshtein distance to pick each reply.
- ChatterBot can call a local LLM through OllamaLogicAdapter, voting against other logic adapters with a confidence score.
- ChatterBot was revived in 2025 with spaCy-based NLP, CSV and JSON trainers, and experimental LLM support.
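The Levenshtein-distance matching mentioned above boils down to picking the stored statement with the smallest edit distance to the user's input. A rough stdlib-only illustration of that idea (this is not ChatterBot's actual code, and the sample statements are made up):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

# Pick the known statement closest to the user's input.
known = ["hello there", "how do I water a fern?", "goodbye"]
user = "how do i water my fern"
best = min(known, key=lambda s: levenshtein(user, s.lower()))
print(best)  # → "how do I water a fern?"
```

ChatterBot layers storage, training, and logic adapters on top of this kind of closest-match lookup.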
Along the way, you’ll move from a potted plant that can only echo hello to a chatbot that chats knowledgeably about houseplants. You can follow along with your own WhatsApp export or grab the provided sample data below.
Get Your Code: Click here to download the free sample code that you’ll use to build a chatbot with Python’s ChatterBot.
Take the Quiz: Test your knowledge with our interactive “ChatterBot: Build a Chatbot With Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
ChatterBot: Build a Chatbot With PythonTest your understanding of the ChatterBot Python library, from training a basic bot with ListTrainer to wiring in a local LLM through Ollama.
Preview the Chatbot
At the end of this tutorial, you’ll have a command-line chatbot that can respond to your inputs with semi-meaningful replies:
You’ll achieve that by preparing WhatsApp chat data and using it to train the chatbot. Beyond learning from your automated training, the chatbot will improve over time as it gets more exposure to questions and replies from user interactions.
Project Overview
The ChatterBot library combines text processing, machine learning algorithms, and data storage and retrieval to allow you to build flexible chatbots.
You can build an industry-specific chatbot by training it with relevant data. Additionally, the chatbot will remember user responses and continue building its internal graph structure to improve the responses that it can give.
Note: After a long hiatus, ChatterBot was revived in early 2025 with support for modern Python, new training formats for CSV and JSON data, and even experimental LLM integration. Under the hood, ChatterBot now uses spaCy for language processing, which gives it a more robust NLP pipeline than before.
If you want to develop an LLM-first chatbot, Real Python’s LLM Application Development With Python learning path takes you through the concepts and libraries step by step:
Learning Path
LLM Application Development With Python
13 Resources ⋅ Skills: OpenAI, Ollama, OpenRouter, Prompt Engineering, LangChain, LlamaIndex, ChromaDB, MarkItDown, RAG, Embeddings, Pydantic AI, LangGraph, MCP
In this tutorial, you’ll start with an untrained chatbot that’ll showcase how quickly you can create an interactive chatbot using Python’s ChatterBot. You’ll also notice how small the vocabulary of an untrained chatbot is.
Next, you’ll learn how you can train such a chatbot and check on the slightly improved results. The more plentiful and high-quality your training data is, the better your chatbot’s responses will be.
Therefore, you’ll either fetch the conversation history of one of your WhatsApp chats or use the provided chat.txt file that you can download here:
Get Your Code: Click here to download the free sample code that you’ll use to build a chatbot with Python’s ChatterBot.
It’s rare that input data comes exactly in the form you need, so you’ll clean the chat export data to get it into a useful input format. This process will show you some tools you can use for data cleaning, which may help you prepare other input data to feed to your chatbot.
After data cleaning, you’ll retrain your chatbot and give it another spin to experience the improved performance. Finally, you’ll hook a local LLM into your chatbot to augment the variety and contextual relevance of its responses.
When you work through this process from start to finish, you’ll get a good idea of how you can build and train a Python chatbot with the ChatterBot library so that it can provide an interactive experience with relevant replies.
Prerequisites
Before you get started, make sure that you have Python 3.10 or later installed, which is the minimum Python version that ChatterBot supports. If you need help setting up Python, check out Python 3 Installation & Setup Guide.
Read the full article at https://realpython.com/build-a-chatbot-python-chatterbot/ »
Mike Driscoll
Textual – An Intro to DOM Queries (Part II)
Last month, you learned the basics of Textual’s DOM queries. If you missed it, you can read the article now!
In this tutorial you will be learning about the following topics:
- The DOMQuery object
- Getting the first or last widget
- Query filters
- Query exclusions
- Other query methods
Let’s get started!
The DOMQuery Object
The DOMQuery object gets returned whenever you call query(). The DOMQuery object works just like a Python list and supports the same operations that you can do with a list:

- query[1]
- len(query)
- reversed(query)
- etc.
However, DOMQuery has additional methods of its very own. You can get a list of these methods using Python’s dir() function against the DOMQuery object.
To see this in action, create a new file named domquery_methods.py in your Python IDE and add this code:
# domquery_methods.py
from textual.app import App, ComposeResult
from textual.widgets import Button, Label


class QueryApp(App):
    def compose(self) -> ComposeResult:
        yield Label("Press a button", id="label")
        yield Button("Get DomQuery Methods", id="one")

    def on_button_pressed(self) -> None:
        widgets = self.query("Button")
        s = f"{type(widgets)}\n"
        for entry in dir(widgets):
            s += f"{entry}\n"
        label = self.query_one("#label")
        label.update(s)


if __name__ == "__main__":
    app = QueryApp()
    app.run()
Here, you loop over all the entries that dir() returns and add them to your string. Then, you update the Label as before.
When you run this code, you will see something like this:

Let’s spend a few moments learning about the most helpful methods from this list!
Getting the First or Last Widget
Textual’s DOMQuery object has a couple of handy methods you can use to get the first or last matching widget. You may have noticed them in the output from the previous code example. They are called: first() and last().
You can take the example code from earlier with three buttons and slightly update it to ask for the first and last widgets. For this example, you will name the file first_and_last.py and use this code:
# first_and_last.py
from textual.app import App, ComposeResult
from textual.widgets import Button, Label


class QueryApp(App):
    def compose(self) -> ComposeResult:
        yield Label("Press a button", id="label")
        yield Button("One", id="one")
        yield Button("Two", id="two")
        yield Button("Three", id="three")

    def on_button_pressed(self) -> None:
        widgets = self.query("Button")
        s = f"The first button: {widgets.first()}\n"
        s += f"The last button: {widgets.last()}"
        label = self.query_one("#label")
        label.update(s)


if __name__ == "__main__":
    app = QueryApp()
    app.run()
To make everything explicit, you add an id to all the Button widgets. Then, in on_button_pressed(), you grab the first and last widgets from the DOMQuery object and put them in the Label.
When you run this code and press any of the buttons, you will see something like this:

Of course, you don’t need to use first() and last() if you don’t want to because the DOMQuery object behaves like a list. You could use widgets[0] and widgets[-1] to get the first and last widgets if you want to instead.
However, there are benefits to using these methods. For example, you can pass an expect_type argument to first() and last(), making it explicit what type you expect the first or last widget to be.
Here’s an example that expects the last widget to be a Button:
last_button = self.query().last(Button)
If the last widget is not a Button, you will receive a WrongType exception.
Query Filters
Textual also provides a filter() method that you may use on your DOMQuery objects. As you might expect, filter() is useful for getting subsets of widgets from the query list.
Here is an example where you run a query to get all the widgets, and then you extract the Label and the Button widgets into new DOMQuery objects:
# Get all the widgets
widgets = self.query()
# Get the Label widgets
label_widgets = widgets.filter("Label")
# Get the Button widgets
button_widgets = widgets.filter("Button")
Query Exclusions
You may exclude widgets from a DOMQuery using the exclude() method. You can think of exclude() as the logical opposite of filter(). It will remove any widgets from the DOMQuery that match a CSS selector.
You can use the previous example to get all the widgets. However, this time, you want to exclude all the Button widgets.
Here’s one way you could do that:
# Get all the widgets
widgets = self.query()
# Exclude the buttons
non_buttons = widgets.exclude("Button")
With a little practice, you can soon filter and exclude as many or as few widgets as you want from your query set.
Other Query Methods
So far, you have seen how to get list-like DOMQuery objects that you can loop over, calling methods on each widget individually. However, the DOMQuery object also supports several methods that update all the matched widgets without needing to loop over them.
A good example in the Textual documentation mentions you can use add_class() to mark widgets as “started” or “ended”. Here’s an example:
self.query("Input").add_class("started")
Of course, you could add any other CSS class to all your widgets, too.
Here is a list of all the methods you can use that will affect everything in the query set:
- add_class – Adds a CSS class or classes.
- blur – Removes the focus from the widgets you matched.
- focus – Focuses the first matching widget. It’s kind of the opposite of blur.
- refresh – Need to refresh a set of widgets? Use this method!
- remove_class – Removes a CSS class. In other words, if you disabled your widgets with add_class(), you could re-enable them with this one.
- remove – Removes all the matched widgets from the DOM.
- set_class – Sets a CSS class or classes on your widget query set.
- set – Sets common attributes on your widgets. For example, you can set display, visible, disabled, or loading.
- toggle_class – Sets a CSS class or classes if it is not already set, or vice-versa.
You will need to spend some time trying each of these out. You can use the code examples in this chapter and try calling them against various DOMQuery objects. If you need a reminder of what pre-built classes are available to you, refer to chapter three under pseudo classes or check the Textual documentation.
Wrapping Up
Learning how to use DOM queries in Textual will help you whenever you need to do bulk updates to widgets in your application. You will be using some kind of query in most of the applications you build, so understanding how to use them can only help you be more effective.
Let’s review what you learned:
- The DOMQuery object
- Getting the first or last widget
- Query filters
- Query exclusions
- Other query methods
Now that you have a good grounding in working with the DOMQuery object, you should spend some time reviewing the code in this chapter and utilizing the methods you learned about. Give them a try, and you’ll be ready to make bulk updates to your user interface in no time!
Learn More
Want to learn more about Textual? Check out my book:
Purchase on Gumroad, Leanpub, or Amazon
The post Textual – An Intro to DOM Queries (Part II) appeared first on Mouse Vs Python.
Real Python
Quiz: Python & APIs: A Winning Combo for Reading Public Data
In this quiz, you’ll test your understanding of Python & APIs: A Winning Combo for Reading Public Data.
By working through this quiz, you’ll revisit how APIs send requests and responses, how the requests library works, what status codes and headers mean, and how to handle authentication, pagination, and rate limits in your own code.
Good luck!
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: How to Use OpenCode for AI-Assisted Python Coding
In this quiz, you’ll test your understanding of How to Use OpenCode for AI-Assisted Python Coding.
By working through these questions, you’ll revisit how to install OpenCode, connect it to an AI provider, configure project context with AGENTS.md, and take advantage of features like mid-session model switching and built-in language servers.
If you’d like a broader look at AI-assisted Python development, you can also follow the Python Coding With AI learning path.
Python GUIs
Sorting and Filtering a QTableView with QSortFilterProxyModel — Learn how to add interactive sorting and filtering to your PyQt/PySide table views without touching your underlying data
I'm using QTableView to show some data, which works well. But I would like to be able to sort the data by different columns. How can I do this without sorting the data manually?
If you've already built a QTableView with a custom model, you might be wondering how to let users sort columns by clicking headers or filter rows based on search input. The good news is that Qt provides a ready-made tool for this: QSortFilterProxyModel. It sits between your model and your view, rearranging and filtering the data without modifying the original source. Your data stays untouched — the proxy just changes how it's presented.
In this tutorial, we'll start with a simple table model and progressively add sorting, filtering, and then tackle some of the common pitfalls — like working with proxy indexes correctly and avoiding crashes when updating data.
A Simple Table Model
Let's begin with a basic QTableView displaying a list-of-lists. This is the same pattern used in the model/view architecture tutorial.
import sys
from PyQt6.QtCore import Qt, QAbstractTableModel
from PyQt6.QtWidgets import QApplication, QMainWindow, QTableView
class TableModel(QAbstractTableModel):
def __init__(self, data):
super().__init__()
self._data = data
self._headers = ["Name", "Age", "City"]
def data(self, index, role):
if role == Qt.ItemDataRole.DisplayRole:
return self._data[index.row()][index.column()]
def rowCount(self, index):
return len(self._data)
def columnCount(self, index):
return len(self._data[0])
def headerData(self, section, orientation, role):
if role == Qt.ItemDataRole.DisplayRole:
if orientation == Qt.Orientation.Horizontal:
return self._headers[section]
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.table = QTableView()
data = [
["Alice", 25, "New York"],
["Bob", 30, "Denver"],
["Charlie", 35, "Austin"],
["Diana", 28, "Denver"],
["Eve", 22, "Austin"],
]
self.model = TableModel(data)
self.table.setModel(self.model)
self.setCentralWidget(self.table)
self.setWindowTitle("QTableView — No Sorting Yet")
self.resize(500, 300)
app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
Run this and you'll see a plain table. Clicking the column headers does nothing, as sorting is off by default. Let's change that first.
Adding sorting with QSortFilterProxyModel
To add sorting, we insert a QSortFilterProxyModel between our TableModel and the QTableView. The proxy model wraps the source model and provides sorted (and later, filtered) access to the same data.
Here's what changes in the MainWindow.__init__ method:
from PyQt6.QtCore import Qt, QAbstractTableModel, QSortFilterProxyModel
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.table = QTableView()
data = [
["Alice", 25, "New York"],
["Bob", 30, "Denver"],
["Charlie", 35, "Austin"],
["Diana", 28, "Denver"],
["Eve", 22, "Austin"],
]
self.model = TableModel(data)
self.proxy_model = QSortFilterProxyModel()
self.proxy_model.setSourceModel(self.model)
self.table.setModel(self.proxy_model)
self.table.setSortingEnabled(True)
self.setCentralWidget(self.table)
self.setWindowTitle("QTableView — Sortable!")
self.resize(500, 300)
We've added the following steps:
- Create the QSortFilterProxyModel.
- Tell it which source model to wrap with setSourceModel().
- Give the proxy model to the view (not the source model), and call setSortingEnabled(True) on the view.
Now when you click a column header, the rows reorder. Click again to reverse the sort direction. The little arrow indicator on the header shows you which column is currently sorted and in which direction.
Notice that the underlying data list hasn't changed at all. The proxy model handles everything by remapping indexes.
Adding filtering
Filtering works through the same proxy model. You tell the proxy which column to look at and what pattern to match, and it hides rows that don't match.
Let's add a QLineEdit that filters rows as you type. We'll filter on the "City" column (column index 2). If you're interested in a more complete search bar implementation, see the widget search bar tutorial.
import sys
from PyQt6.QtCore import Qt, QAbstractTableModel, QSortFilterProxyModel
from PyQt6.QtWidgets import (
QApplication, QMainWindow, QTableView,
QVBoxLayout, QWidget, QLineEdit, QLabel,
)
class TableModel(QAbstractTableModel):
def __init__(self, data):
super().__init__()
self._data = data
self._headers = ["Name", "Age", "City"]
def data(self, index, role):
if role == Qt.ItemDataRole.DisplayRole:
return self._data[index.row()][index.column()]
def rowCount(self, index):
return len(self._data)
def columnCount(self, index):
return len(self._data[0])
def headerData(self, section, orientation, role):
if role == Qt.ItemDataRole.DisplayRole:
if orientation == Qt.Orientation.Horizontal:
return self._headers[section]
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
data = [
["Alice", 25, "New York"],
["Bob", 30, "Denver"],
["Charlie", 35, "Austin"],
["Diana", 28, "Denver"],
["Eve", 22, "Austin"],
]
self.model = TableModel(data)
self.proxy_model = QSortFilterProxyModel()
self.proxy_model.setSourceModel(self.model)
self.proxy_model.setFilterCaseSensitivity(
Qt.CaseSensitivity.CaseInsensitive
)
self.proxy_model.setFilterKeyColumn(2) # Filter on "City" column
self.table = QTableView()
self.table.setModel(self.proxy_model)
self.table.setSortingEnabled(True)
self.search_input = QLineEdit()
self.search_input.setPlaceholderText("Filter by city...")
self.search_input.textChanged.connect(
self.proxy_model.setFilterFixedString
)
layout = QVBoxLayout()
layout.addWidget(QLabel("Search:"))
layout.addWidget(self.search_input)
layout.addWidget(self.table)
container = QWidget()
container.setLayout(layout)
self.setCentralWidget(container)
self.setWindowTitle("QTableView — Sort & Filter")
self.resize(500, 400)
app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
Type "Denver" into the search box and the table instantly filters to show only the rows where the City column matches. Type "aus" and you'll see the Austin rows (because we set case-insensitive matching).
Let's recap what's happening:
- setFilterKeyColumn(2) tells the proxy to check column 2 ("City") when deciding which rows to show.
- setFilterFixedString performs a plain substring match. If the filter string appears anywhere in the cell value, the row is shown.
- We connected the textChanged signal from the QLineEdit directly to setFilterFixedString on the proxy model. Every time the user types, the filter updates automatically.
Filtering across all columns
If you want to search across every column instead of just one, set the filter key column to -1:
self.proxy_model.setFilterKeyColumn(-1) # Search all columns
Now typing "25" will match Alice's row (age 25), and typing "Den" will still match the Denver rows.
Other filter modes
setFilterFixedString is the simplest option — it does a plain text substring match. The proxy model also supports more powerful matching:
- Wildcard matching using setFilterWildcard() – supports * and ? patterns, like "D*ver".
- Regular expression matching using setFilterRegularExpression() – supports full regex patterns for complex matching needs.
For most interactive search boxes, setFilterFixedString with case-insensitive matching is exactly what you need.
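To get a feel for how the three modes differ, here are rough plain-Python analogues (illustrative only, not Qt code; Qt's own matching rules differ in details such as anchoring, but the matching idea is the same):

```python
import fnmatch
import re

cell = "Denver"

# setFilterFixedString("env"): plain substring containment.
fixed_match = "env" in cell

# setFilterWildcard("D*ver"): glob-style * and ? patterns.
wildcard_match = fnmatch.fnmatch(cell, "D*ver")

# setFilterRegularExpression(r"^D.+r$"): full regular expressions.
regex_match = re.search(r"^D.+r$", cell) is not None

print(fixed_match, wildcard_match, regex_match)
```

All three match "Denver" here; the difference is how much pattern power the user (or your code) gets to wield.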
Working with proxy indexes correctly
When a proxy model is active, the indexes your view reports are proxy indexes, not source model indexes. If you click on a row in the filtered/sorted view, the QModelIndex you receive refers to the proxy model's row numbering, which may not match the original data.
This matters when you need to do something with the underlying data — like reading values from the source model or modifying a specific row.
Consider this slot connected to the table's clicked signal:
self.table.clicked.connect(self.cell_clicked)
def cell_clicked(self, proxy_index):
# This gives the row number in the proxy (filtered/sorted) view
print(f"Proxy row: {proxy_index.row()}, column: {proxy_index.column()}")
# To get the corresponding row in the SOURCE model:
source_index = self.proxy_model.mapToSource(proxy_index)
print(f"Source row: {source_index.row()}, column: {source_index.column()}")
The method mapToSource() translates a proxy index back to the source model's coordinate system. There's also mapFromSource() for going the other direction — converting a source index into the proxy's index, which is useful if you need to select or highlight a specific row in the view programmatically.
If you forget this step and pass proxy indexes directly to your source model, you'll end up reading or modifying the wrong row. When filtering is active, the mismatch becomes obvious because the proxy's row 0 might correspond to row 3 in the source.
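A quick way to internalize the mapping: think of the proxy as holding an ordered list of the surviving source rows. This plain-Python analogy (not Qt code, and the names are made up for illustration) shows why proxy row 0 can point at a different source row:

```python
# Plain-Python analogy of a sort/filter proxy (illustrative, not Qt code).
source = ["Alice", "Bob", "Charlie", "Diana", "Eve"]

# Filter: keep names containing "a"; sort the survivors descending.
mapping = sorted(
    (i for i, name in enumerate(source) if "a" in name.lower()),
    key=lambda i: source[i],
    reverse=True,
)  # -> [3, 2, 0]: Diana, Charlie, Alice

def map_to_source(proxy_row):
    # The real mapToSource() plays this role.
    return mapping[proxy_row]

def map_from_source(source_row):
    # The real mapFromSource() plays this role.
    return mapping.index(source_row)

# Proxy row 0 is "Diana", which lives at source row 3, not source row 0.
print(map_to_source(0), map_from_source(0))
```

The real proxy maintains this mapping incrementally and per-index, but the lookup-table mental model is accurate.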
Reading data through the proxy
When you want to read data from a clicked row, you have two options:
def cell_clicked(self, proxy_index):
row = proxy_index.row()
# Option 1: Read through the proxy model (uses proxy indexes)
name = self.proxy_model.data(
self.proxy_model.index(row, 0),
Qt.ItemDataRole.DisplayRole,
)
# Option 2: Map to source and read from source model
source_index = self.proxy_model.mapToSource(proxy_index)
name = self.model.data(
self.model.index(source_index.row(), 0),
Qt.ItemDataRole.DisplayRole,
)
print(f"Clicked on: {name}")
Both options give you the same result. Option 1 is often simpler since you're already working with proxy indexes from the view. Option 2 is necessary when you need to interact directly with the source model — for example, to modify data or get the "real" row position in your data structure.
Avoiding crashes when updating the source model
If you're updating the source model's data while a proxy model and view are connected, you need to properly notify Qt's model/view framework about the changes. Without these notifications, you can get segmentation faults or corrupted displays.
The pattern for updating data looks like this inside your QAbstractTableModel subclass:
def update_data(self, new_data):
self.layoutAboutToBeChanged.emit()
self._data = new_data
self.layoutChanged.emit()
The layoutAboutToBeChanged signal tells the proxy model (and the view) that the structure of the data is about to change. After you've made your changes, layoutChanged tells everything to refresh. Skipping the layoutAboutToBeChanged signal is a common cause of crashes — the proxy model needs that heads-up to properly invalidate its internal mapping of indexes.
For smaller changes, like updating a single cell, you can use dataChanged instead:
def set_value(self, row, col, value):
self._data[row][col] = value
index = self.index(row, col)
self.dataChanged.emit(index, index)
And if you're adding or removing rows, use beginInsertRows/endInsertRows or beginRemoveRows/endRemoveRows:
def add_row(self, row_data):
row_position = len(self._data)
self.beginInsertRows(self.index(row_position, 0).parent(), row_position, row_position)
self._data.append(row_data)
self.endInsertRows()
Getting these signals right is what keeps the proxy model, the view, and your data all in sync.
Custom sorting behavior
The default sorting provided by QSortFilterProxyModel uses Qt.ItemDataRole.DisplayRole and works well for simple string and number comparisons. But sometimes you need more control — for example, sorting a column of dates that are stored as strings, or sorting with a custom priority order.
To customize sorting, subclass QSortFilterProxyModel and override the lessThan method. This method receives two QModelIndex objects (from the source model) and should return True if the left value should come before the right value.
class CustomProxyModel(QSortFilterProxyModel):
def lessThan(self, left, right):
left_data = self.sourceModel().data(left, Qt.ItemDataRole.DisplayRole)
right_data = self.sourceModel().data(right, Qt.ItemDataRole.DisplayRole)
# Example: sort numerically if both values are numbers
try:
return float(left_data) < float(right_data)
except (ValueError, TypeError):
# Fall back to string comparison
return str(left_data).lower() < str(right_data).lower()
Then use CustomProxyModel instead of QSortFilterProxyModel:
self.proxy_model = CustomProxyModel()
self.proxy_model.setSourceModel(self.model)
This is useful when your table has mixed data types or when the display representation doesn't sort the way you'd expect (like date strings in "MM/DD/YYYY" format).
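The comparison logic inside lessThan is plain Python, so you can sanity-check it outside Qt entirely. A minimal sketch of the same fallback behavior:

```python
def less_than(left_data, right_data):
    """Same ordering rule as the lessThan override above, minus Qt."""
    try:
        # Compare numerically when both values parse as numbers.
        return float(left_data) < float(right_data)
    except (ValueError, TypeError):
        # Otherwise fall back to a case-insensitive string comparison.
        return str(left_data).lower() < str(right_data).lower()

# Numeric strings sort by value, not lexicographically:
print(less_than("9", "10"))  # a plain string compare would say False
```

Testing the rule as a standalone function like this is also a handy way to pin down edge cases (empty cells, mixed types) before they surface as confusing sort orders in the view.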
Custom filtering behavior
Similarly, you can customize filtering by subclassing QSortFilterProxyModel and overriding filterAcceptsRow. This method is called for every row in the source model, and it should return True if the row should be visible.
Here's an example that filters to show only rows where the "Age" column (column 1) is above a certain threshold:
class AgeFilterProxyModel(QSortFilterProxyModel):
def __init__(self):
super().__init__()
self._min_age = 0
def set_min_age(self, age):
self._min_age = age
self.invalidateFilter() # Re-apply the filter
def filterAcceptsRow(self, source_row, source_parent):
index = self.sourceModel().index(source_row, 1, source_parent)
age = self.sourceModel().data(index, Qt.ItemDataRole.DisplayRole)
try:
return int(age) >= self._min_age
except (ValueError, TypeError):
return True
After changing your filter criteria, call invalidateFilter() to tell the proxy model to re-evaluate which rows should be shown.
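The row-acceptance rule is just as easy to exercise as a plain predicate before wiring it into a proxy. A small sketch using the same sample data, plus a made-up row ("Hank") to show the non-numeric fallback:

```python
def accepts_row(row, min_age):
    """Plain-Python version of the age check in filterAcceptsRow."""
    try:
        return int(row[1]) >= min_age
    except (ValueError, TypeError):
        return True  # keep rows whose age cell isn't numeric

rows = [
    ["Alice", 25, "New York"],
    ["Grace", 19, "Denver"],
    ["Hank", "unknown", "Austin"],  # hypothetical row with a bad age cell
]
visible = [r[0] for r in rows if accepts_row(r, 21)]
print(visible)  # ['Alice', 'Hank']
```

Grace is filtered out (19 < 21), while Hank survives because the predicate deliberately keeps unparseable values visible rather than silently hiding them.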
You can combine this with the text-based column filtering too. If you override filterAcceptsRow, you have full control — you can check multiple columns, apply complex logic, or combine several filter conditions:
def filterAcceptsRow(self, source_row, source_parent):
model = self.sourceModel()
# Check age filter
age_index = model.index(source_row, 1, source_parent)
age = model.data(age_index, Qt.ItemDataRole.DisplayRole)
if int(age) < self._min_age:
return False
# Check text filter on city column
city_index = model.index(source_row, 2, source_parent)
city = model.data(city_index, Qt.ItemDataRole.DisplayRole)
if self.filterRegularExpression().pattern():
if not self.filterRegularExpression().match(city).hasMatch():
return False
return True
Putting it all together
Here's a complete example that combines sorting, text filtering, and a numeric age filter using a custom proxy model:
import sys
from PyQt6.QtCore import Qt, QAbstractTableModel, QSortFilterProxyModel
from PyQt6.QtWidgets import (
QApplication, QMainWindow, QTableView,
QVBoxLayout, QHBoxLayout, QWidget,
QLineEdit, QLabel, QSpinBox,
)
class TableModel(QAbstractTableModel):
def __init__(self, data):
super().__init__()
self._data = data
self._headers = ["Name", "Age", "City"]
def data(self, index, role):
if role == Qt.ItemDataRole.DisplayRole:
return self._data[index.row()][index.column()]
def rowCount(self, index):
return len(self._data)
def columnCount(self, index):
return len(self._data[0])
def headerData(self, section, orientation, role):
if role == Qt.ItemDataRole.DisplayRole:
if orientation == Qt.Orientation.Horizontal:
return self._headers[section]
class CustomFilterProxyModel(QSortFilterProxyModel):
def __init__(self):
super().__init__()
self._min_age = 0
self._city_filter = ""
def set_min_age(self, age):
self._min_age = age
self.invalidateFilter()
def set_city_filter(self, text):
self._city_filter = text.lower()
self.invalidateFilter()
def filterAcceptsRow(self, source_row, source_parent):
model = self.sourceModel()
# Age filter (column 1)
age_index = model.index(source_row, 1, source_parent)
age = model.data(age_index, Qt.ItemDataRole.DisplayRole)
try:
if int(age) < self._min_age:
return False
except (ValueError, TypeError):
pass
# City text filter (column 2)
if self._city_filter:
city_index = model.index(source_row, 2, source_parent)
city = model.data(city_index, Qt.ItemDataRole.DisplayRole)
if self._city_filter not in str(city).lower():
return False
return True
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
data = [
["Alice", 25, "New York"],
["Bob", 30, "Denver"],
["Charlie", 35, "Austin"],
["Diana", 28, "Denver"],
["Eve", 22, "Austin"],
["Frank", 40, "New York"],
["Grace", 19, "Denver"],
]
self.model = TableModel(data)
self.proxy_model = CustomFilterProxyModel()
self.proxy_model.setSourceModel(self.model)
self.table = QTableView()
self.table.setModel(self.proxy_model)
self.table.setSortingEnabled(True)
self.table.clicked.connect(self.cell_clicked)
# City filter
self.city_input = QLineEdit()
self.city_input.setPlaceholderText("Filter by city...")
self.city_input.textChanged.connect(self.proxy_model.set_city_filter)
# Age filter
self.age_spin = QSpinBox()
self.age_spin.setRange(0, 100)
self.age_spin.setPrefix("Min age: ")
self.age_spin.valueChanged.connect(self.proxy_model.set_min_age)
filter_layout = QHBoxLayout()
filter_layout.addWidget(QLabel("City:"))
filter_layout.addWidget(self.city_input)
filter_layout.addWidget(self.age_spin)
layout = QVBoxLayout()
layout.addLayout(filter_layout)
layout.addWidget(self.table)
self.status_label = QLabel("Click a row to see details")
layout.addWidget(self.status_label)
container = QWidget()
container.setLayout(layout)
self.setCentralWidget(container)
self.setWindowTitle("QTableView — Sort & Custom Filter")
self.resize(500, 400)
def cell_clicked(self, proxy_index):
source_index = self.proxy_model.mapToSource(proxy_index)
row = proxy_index.row()
name = self.proxy_model.data(
self.proxy_model.index(row, 0),
Qt.ItemDataRole.DisplayRole,
)
age = self.proxy_model.data(
self.proxy_model.index(row, 1),
Qt.ItemDataRole.DisplayRole,
)
city = self.proxy_model.data(
self.proxy_model.index(row, 2),
Qt.ItemDataRole.DisplayRole,
)
self.status_label.setText(
f"Selected: {name}, age {age}, from {city} "
f"(source row: {source_index.row()})"
)
app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
Try playing with this example:
- Click column headers to sort by name, age, or city.
- Type in the city filter to narrow down results.
- Adjust the minimum age spinner to hide younger entries.
- Click on rows and notice how the source row number differs from the visible row number.
Summary
Here's a quick recap of everything we covered:
- QSortFilterProxyModel sits between your model and your view. It sorts and filters data without modifying the source.
- Call setSortingEnabled(True) on the view to let users sort by clicking column headers.
- Use setFilterKeyColumn() and setFilterFixedString() for quick text filtering. Set the column to -1 to search all columns.
- Always use mapToSource() when you need to convert a proxy index back to the source model's coordinate system.
- Emit layoutAboutToBeChanged and layoutChanged when replacing data in the source model to avoid crashes.
- Subclass QSortFilterProxyModel and override lessThan() for custom sorting or filterAcceptsRow() for custom filtering.
- Call invalidateFilter() after changing filter criteria in a custom proxy model.
The proxy model pattern is one of the most powerful parts of Qt's model/view architecture. Once you have it set up, you get flexible data presentation without ever duplicating or restructuring your underlying data. To explore model/view further, see how to display numpy and pandas data in a QTableView or learn about signals, slots, and events that power these interactions.
For an in-depth guide to building Python GUIs with PyQt6, see my book, Create GUI Applications with Python & Qt6.
Django Weblog
Announcing the Google Summer of Code 2026 contributors for Django
The Django Software Foundation is happy to share the contributors selected for Google Summer of Code 2026.
This year, we received over 200 proposals from contributors across the world. The level of detail and thought in these proposals made the selection process both exciting and challenging.
Accepted Projects
We’re pleased to announce the following projects:
Implementing an experimental API framework for Django core
Contributor: Praful Gulani
Mentor: Andrew Miller
This project explores an approach to introducing experimental APIs in Django by modernizing DEP 2 and defining an opt-in model.
Add support for table-valued expressions in the ORM
Contributor: p-r-a-v-i-n
Mentors: Bhuvnesh Sharma, Jacob Walls
This project develops a way to join against table-valued expressions such as Subquery() or PostgreSQL functions like generate_series() within the ORM.
Unified dark mode and UI consistency for Django’s issue tracker
Contributor: Keha Chandrakar
Mentors: Saptak S, Sarah A
This project adds dark mode support to Django’s issue tracker and brings it closer in visual consistency to the main Django website.
Switch to Playwright tests for integration testing
Contributor: Varun Kasyap Pentamaraju
Mentor: Sarah Boyce
This project focuses on improving Django’s browser integration testing by transitioning from Selenium to Playwright.
Each of these projects focuses on areas of Django that we’re looking to improve over the coming months. Contributors will work closely with their mentors, participate in regular check-ins, and engage with the broader Django community.
To everyone who applied
Thank you to everyone who submitted a proposal this year.
We know the effort it takes to explore ideas, write proposals, and engage with the community. Not being selected this time does not reflect the overall quality or potential of your work.
Given the number of applications and that the program is run by a small group of volunteers, we’re not able to provide individual feedback on proposals. Selections are based on a combination of factors including alignment with project goals, feasibility within the program timeline, prior contributions, and clarity of the proposal.
We encourage you to stay involved. Many contributors to Django started in similar positions. Keep building, keep contributing, and stay connected with the community. There will always be more opportunities.
What’s next
The community bonding period has begun, and contributors will soon start working on their projects. We’ll share updates as the program progresses and highlight the work along the way.
Please join us in welcoming the selected contributors and supporting them during the program.
May 05, 2026
PyCoder’s Weekly
Issue #733: marimo pair, Finding Bugs With LLMs, httpxy, and More (2026-05-05)
#733 – MAY 5, 2026
View in Browser »
Agentic Data Science Pair Programming With marimo pair
How do you add agent skills to your data science workflow? How can a coding agent assist with data wrangling and research? This week on the show, Trevor Manz from marimo joins us to discuss marimo pair.
REAL PYTHON podcast
Using LLMs to Find Python C-Extension Bugs
LLMs can be powerful tools for finding problems in code, but without a human in the loop things can go wrong. This post talks about how one coder is approaching the challenge.
JAKE EDGE
Positron: The Data Science IDE from Posit PBC
Positron is a free IDE built for Python data science. AI assistance, interactive data frames, Jupyter notebooks, and instant app deployment, all in one place. Stop context-switching. Start shipping. Download free.
POSIT PBC sponsor
httpxyz One Month In
httpxyz is a fork of httpx created one month ago. This blog post describes how the journey has gone so far and where they’re going in the future.
MICHIEL BEIJEN
Articles & Tutorials
Building the Async Python Task Queue I Wished Existed
If you’ve ever written async Python for your API and then switched to synchronous code for your background tasks, you know something is off. Repid v2 is an attempt to fix that - an async-first, AsyncAPI-native task queue built over two years of production use, countless rewrites, and one hand-written AMQP 1.0 implementation.
ALEKSUL.SPACE • Shared by Alex
Inverse Sapir-Whorf and Programming Languages
The Sapir-Whorf hypothesis is the idea that the languages you speak influence the thoughts you can have. The inverse is the idea that your language limits what you can't help but say. When applied to programming, this has subtle results, determining core ideas like execution order.
LUKE PLANT
B2B AI Agent Auth Support
Your users are asking if they can connect their AI agent to your product, but you want to make sure they can do it safely and securely. PropelAuth makes that possible →
PROPELAUTH sponsor
What’s New in Pip 26.1
pip 26.1 adds support for dependency cooldowns, experimental support for reading/installing from standard lockfiles (pylock.toml), fixes several long-standing limitations of the 2020 resolver, and drops support for Python 3.9.
RICHARD SI
Python Packaging Council Approved
PEP 772 established the Packaging Council, an elected group that sets direction for Python packaging standards and tools. The PEP was recently accepted, and this post talks about the group and what it took to get here.
JAKE EDGE
Choosing a Python Logging Library in 2026
This post compares Python’s standard logging module, structlog, and Loguru. It includes real benchmarks, OpenTelemetry integration paths, and framework-specific guidance for Django, FastAPI, and Flask.
AYOOLUWA ISAIAH
⏰ Last Chance: Claude Code Live Course May 6–7
Developers who learn to work with AI agents will have an edge that’s hard to close. This 2-day hands-on course takes you from zero to a working Python project built entirely with Claude Code.
REAL PYTHON sponsor
Before GitHub
There is a lot of talk on the net lately about the state of GitHub. This opinion piece by Armin talks about what Open Source was like before GitHub: it was reputation-driven and full of friction.
ARMIN RONACHER
It’s Time to Redesign djangoproject.com
The site that hosts the Django framework is overdue for a refresh. This post describes how they’re going through the process and what to expect next.
DJANGO SOFTWARE FOUNDATION
Testing Your Code With Python’s unittest
Learn how to use Python’s unittest framework to write unit tests for your code, including test cases, fixtures, and test suites.
REAL PYTHON course
AI Coding Agents Guide: A Map of the Four Workflow Types
AI coding agents come in four types: IDE, terminal, PR, and cloud. Learn how each workflow fits into modern Python development.
REAL PYTHON
Self Hosting Apps for Python People
Talk Python interviews Alex Kretzschmar and they talk about what it takes to move from the cloud to hosting things yourself.
TALK PYTHON podcast
Projects & Code
secure: HTTP Security Headers for Python Web Applications
GITHUB.COM/TYPEERROR • Shared by Caleb Kinney
Events
Canberra Python Meetup
May 7, 2026
MEETUP.COM
Sydney Python User Group (SyPy)
May 7, 2026
SYPY.ORG
PiterPy Meetup
May 12, 2026
PITERPY.COM
Leipzig Python User Group Meeting
May 12, 2026
MEETUP.COM
PyCon US 2026
May 13 to May 20, 2026
PYCON.ORG
Happy Pythoning!
This was PyCoder’s Weekly Issue #733.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Django Weblog
Django security releases issued: 6.0.5 and 5.2.14
In accordance with our security release policy, the Django team is issuing releases for Django 6.0.5 and Django 5.2.14. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.
CVE-2026-5766: Potential denial-of-service vulnerability in ASGI requests via file upload limit bypass
ASGI requests with a missing or understated Content-Length header could bypass the FILE_UPLOAD_MAX_MEMORY_SIZE limit, potentially loading large files into memory and causing service degradation.
As a reminder, Django expects a limit to be configured at the web server level rather than solely relying on FILE_UPLOAD_MAX_MEMORY_SIZE.
This issue has severity "low" according to the Django security policy.
This issue was originally highlighted by Kyle Agronick in Trac. Thanks to Jacob Walls for following up and reporting it.
CVE-2026-35192: Session fixation via public cached pages and SESSION_SAVE_EVERY_REQUEST
Response headers did not vary on cookies if a session was not modified, but SESSION_SAVE_EVERY_REQUEST was True. A remote attacker could steal a user's session after that user visits a cached public page.
This issue has severity "low" according to the Django security policy.
CVE-2026-6907: Potential exposure of private data due to incorrect handling of Vary: * in UpdateCacheMiddleware
Previously, django.middleware.cache.UpdateCacheMiddleware would erroneously cache requests where the Vary header contained an asterisk ('*'). This could lead to private data being stored and served.
This issue has severity "low" according to the Django security policy.
Thanks to Ahmad Sadeddin for the report.
Affected supported versions
- Django main
- Django 6.0
- Django 5.2
Resolution
Patches to resolve the issue have been applied to Django's main, 6.0, and 5.2 branches. The patches may be obtained from the following changesets.
CVE-2026-5766: Potential denial-of-service vulnerability in ASGI requests via file upload limit bypass
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
CVE-2026-35192: Session fixation via public cached pages and SESSION_SAVE_EVERY_REQUEST
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
CVE-2026-6907: Potential exposure of private data due to incorrect handling of Vary: * in UpdateCacheMiddleware
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
The following releases have been issued
The PGP key ID used for this release is Sarah Boyce: 3955B19851EA96EF
General notes regarding security reporting
As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.
Real Python
Use Codex CLI to Enhance Your Python Projects
After watching this video course, you’ll be able to use Codex CLI to add features to a Python project directly from your terminal. Codex CLI is an AI-powered coding assistant that runs inside your terminal. It understands your project structure, reads your files, and proposes multi-file changes using natural language instructions.
Instead of copying code from a browser or relying on an IDE plugin, you’ll use Codex CLI to implement a real feature in a multi-file Python project directly from your terminal.
In the following lessons, you’ll install and configure Codex CLI, use it to implement a deletion feature in a contact book app, and then refine that feature through iterative prompting.