
Planet Python

Last update: May 15, 2026 07:44 PM UTC

May 15, 2026


PyCharm

Pyrefly LSP Integration in PyCharm 2026.1.2

In PyCharm 2026.1.2, you can enable Pyrefly as an external type provider, dramatically increasing the speed of the IDE’s code insight features.

What is the Pyrefly LSP?

“LSP” stands for the Language Server Protocol – a standardized protocol that allows code editors and IDEs to communicate with language servers. The LSP enables language servers to provide code intelligence features such as code completion, diagnostics, go-to-definition, and hover documentation.

The key benefit of the LSP is that it allows a single language server to be used across multiple tools. This means that language-specific intelligence does not have to be implemented separately in every editor, IDE, or CI pipeline.
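To make that concrete, here is a rough sketch of a single LSP message as a client might frame it. The method and parameter names follow the LSP specification; the file URI and position are made-up examples.

```python
import json

# A hover request as a client sends it over the LSP's JSON-RPC transport.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/hover",
    "params": {
        "textDocument": {"uri": "file:///tmp/example.py"},
        "position": {"line": 3, "character": 7},  # zero-based, per the spec
    },
}

# Each message is framed with a Content-Length header before the JSON body.
body = json.dumps(request)
frame = f"Content-Length: {len(body)}\r\n\r\n{body}"
print(frame.splitlines()[0])  # the Content-Length header line
```

Every server that speaks this protocol – Pyrefly included – accepts the same framed messages, which is why one server can serve many editors.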

Pyrefly is Meta’s next-generation Python type checker, engineered from the ground up in Rust to replace its predecessor, Pyre (written in OCaml). With the move to Rust, Pyrefly achieves significantly faster performance and improved cross-platform portability. More than just a rewrite, it is designed to be more capable and robust, offering an efficient toolset for maintaining large-scale Python codebases with high precision and minimal overhead.

Pyrefly offers a range of benefits for day-to-day development.

Pyrefly is highly beneficial for projects and developers dealing with large, complex Python codebases that prioritize performance and robust typing. Integrating Pyrefly via the LSP is part of our ongoing work to enhance code insight performance in PyCharm.

Using Pyrefly in PyCharm

Once enabled, Pyrefly powers all code insight functionality in PyCharm, including type inference and type-related diagnostics, quick documentation, and inlay hints. Delegating analysis to this faster engine delivers significantly improved performance.

To start using Pyrefly in your PyCharm project, go to the Type widget at the bottom of the window. By default, the IDE uses the built-in type engine. Click on the widget and select the option to use Pyrefly. If you do not have Pyrefly installed yet, PyCharm will install it automatically. 

Once you’ve switched to the Pyrefly type engine, you will see a Pyrefly icon at the bottom, which you can hover over to check the version being used.

Please note that the integration currently works for local interpreter configurations. Support for Docker, Docker Compose, WSL, SSH, and multi-module projects is planned for future releases.

Pyrefly vs. the built-in type engine

Now let’s look at how Pyrefly and the built-in type engine behave in a complex Python project. In this FastAPI example, multiple files are typed, but one of them incorrectly types the variable ref, causing four errors. The built-in type engine detects that something is wrong, but it suggests running further analysis to pinpoint the problem, which requires an extra step.

With Pyrefly as the type engine, the IDE reports errors immediately and highlights where they originate. It is worth noting, however, that of the four errors in our example, Pyrefly picks up only three: it misses the one in self._storage[ref].
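The screenshots are not reproduced here, but a hypothetical snippet (invented for illustration, not the post’s actual FastAPI code) shows the kind of mismatch a type checker like Pyrefly flags: a dict keyed by int being indexed with a str-annotated variable.

```python
from typing import Dict

class Repository:
    """Hypothetical example: the annotation on `ref` contradicts the dict's key type."""

    def __init__(self) -> None:
        self._storage: Dict[int, str] = {}

    def get(self, ref: str) -> str:
        # A type checker flags this: str key used on Dict[int, str].
        # The code still imports and runs; only the types are inconsistent.
        return self._storage[ref]

repo = Repository()
repo._storage[1] = "first"
```

At runtime nothing fails until `get` is called with a bad key; a type checker surfaces the inconsistency before the code ever runs.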

Download the latest version of PyCharm and try it out

Ready to experience a dramatic leap in Python development performance? The Pyrefly LSP integration in PyCharm 2026.1.2 delivers the next generation of type checking. Engineered in Rust for unparalleled speed, it resolves files in as little as 0.5–1 seconds, significantly faster than the built-in engine. If you maintain large, complex Python codebases and prioritize robust typing, this feature is essential: it lets you delegate analysis to a faster engine and receive immediate type-related diagnostics. Download the latest version of PyCharm (2026.1.2) to unlock superior efficiency, scalability, and code insight.

May 15, 2026 03:31 PM UTC


Real Python

The Real Python Podcast – Episode #295: Agentic Architecture: Why Files Aren't Always Enough

What are the limitations of using a file-based agent workflow? Why do massive context windows tend to collapse? This week on the show, Mikiko Bazeley from MongoDB joins us to discuss agentic architecture and context engineering.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

May 15, 2026 12:00 PM UTC

Quiz: Python's Array: Working With Numeric Data Efficiently

In this quiz, you’ll test your understanding of Python’s Array: Working With Numeric Data Efficiently.

By working through this quiz, you’ll revisit the differences between Python’s array module and the built-in list, the meaning of type codes, how to create and manipulate arrays as mutable sequences, and the performance trade-offs of using a low-level numeric container.
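As a quick refresher on what the quiz covers, here is the array module in action: the type code fixes the element type, and the object otherwise behaves as a mutable sequence.

```python
from array import array

# A typed array of signed integers: type code "i" stores each item as a
# compact machine integer rather than a full Python object.
numbers = array("i", [10, 20, 30])
numbers.append(40)   # mutable-sequence API, just like list
numbers[0] = 5       # in-place item assignment

print(numbers.typecode)                 # "i"
print(numbers.tolist())                 # [5, 20, 30, 40]
print(numbers.itemsize * len(numbers))  # bytes used by the elements
```

Trying `numbers.append("oops")` raises a TypeError, which is exactly the trade-off the quiz explores: less flexibility than list, more compact storage.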



May 15, 2026 12:00 PM UTC


EuroPython

May Newsletter: Sessions, Speakers, Sprints

Hi all Pythonistas! 👋 

Hope you’ve been enjoying these last few weeks, and hopefully planning your trip to Kraków in July! With two months left before the conference, the EuroPython organising team has been firing on all cylinders to create a conference to remember. Here’s the latest from us:

📋 Session and Speaker Lists Are Available

Our Programme Team is busy preparing a detailed schedule for you. We plan to release it in the upcoming days, but in the meantime we’ve got the list of sessions and speakers for you to check out. It’s going to be an exciting conference!

Lists of sessions and speakers are available at https://ep2026.europython.eu/

👉 All conference sessions: https://ep2026.europython.eu/sessions/

👉 Speakers and tutorial leads: https://ep2026.europython.eu/speakers/ 

🗻 Language & Rust Summits

Summits are an opportunity for project contributors to come together during EuroPython. These are invite-only events with limited capacity at the venue, so registration is required.

🐍 Language Summit

The Python Language Summit is an event for the developers of Python implementations (CPython, PyPy, MicroPython, GraalPython, IronPython, and so on) to share information, discuss our shared problems, and — hopefully — solve them.

These issues might be related to the language itself, the standard library, the development process, the status of Python 3.15 (and plans for 3.16), the documentation, packaging, the website, and so forth. The Summit focuses on discussions and consensus-seeking, more than merely on presentations.

👉 Register for the Language Summit: https://ep2026.europython.eu/language-summit/

⚙️ Rust Summit

This full-day summit is dedicated to exploring the intersection of Rust and the Python ecosystem. Attendees can expect an intensive schedule focused specifically on integrating Rust into Python projects and the development of high-performance Python tools (e.g., using technologies like PyO3, Maturin, or writing performant native extensions). 

This summit is designed for developers who already possess some practical experience in these topics and are looking to deepen their expertise, share lessons learned, and contribute to the community’s collective knowledge.

👉 Register for the Rust Summit: https://ep2026.europython.eu/session/rust-summit-at-europython

🗣️ Keynote Speakers

We are excited to announce a new keynote: 

Leah Wasser will deliver a keynote at EuroPython 2026

Leah Wasser is the Executive Director and founder of pyOpenSci, a community of 400+ researchers, engineers, and maintainers working to make developing and maintaining research software more accessible, sustainable, and human. She organizes the Maintainers Summit at PyCon US and believes the communities behind research software matter as much as the code itself.

Leah has built nationally recognized programs at the National Ecological Observatory Network (NEON) and the University of Colorado Boulder. Leah holds a PhD in ecology and is an active open source maintainer.

✋ Upcoming Call for Volunteers

We’re opening our Call for Volunteers next week! Want to be part of the team and help make EuroPython 2026 awesome? Keep an eye on the website: the signup form drops in just a few days. We’ll be reviewing applications on a rolling basis, so don’t wait – apply as soon as it goes live! Whether you’re a first-timer or a returning volunteer, we’d love to have you.

In my opinion, volunteering enriches the enjoyment of the whole event even further. There are many different roles to suit different personalities and abilities — one of them could suit you very well. Also, volunteering is about the team; you will not be left alone in any case.

Jake Balas, Onsite Volunteers Team Lead at EuroPython 2025 and this year’s Operations Team Lead

💙 Read our full interview with Jake https://blog.europython.eu/humans-of-ep-jake/

💰 Sponsorship: Diamond, Platinum, Silver Available 

If you’re passionate about supporting EuroPython and helping make this conference accessible to a diverse, global Python community, consider becoming a sponsor or asking your employer to join us in this effort.

By sponsoring EuroPython, you’re not just backing an event – you’re gaining highly targeted visibility that will present your company or personal brand to one of the largest and most diverse Python communities in the world! Here’s what one of our sponsors said about their experience at EuroPython 2025:

The Apify team shares their experience sponsoring EuroPython 2025

We still have some Diamond, Platinum, and Silver slots available. Along with our main packages, there are optional add-ons and extras to craft your brand messaging in exactly the way that you need. 

👉 More information at: https://ep2026.europython.eu/sponsorship/sponsor/ 

👉 Contact us at sponsoring@europython.eu

🚧 Speaker Orientation

Anyone interested in receiving speaker training from our experienced mentors is invited to an online workshop on 3 June 2026 at 18:00 CEST. We’ve designed the session for people of all experience levels, from first-time speakers to seasoned presenters, and we still have spots for you.

👉 Register now to confirm your place: https://forms.gle/uZKwuAiBkUSmx7gn7

🤝 Community Partners

🇪🇸PyConES 

Barcelona is calling, Pythonistas! PyConES 2026 has extended its CFP. New deadline: 17 May, 23:59 CEST. If you’re still thinking about submitting a talk, workshop, or idea to the community that will meet up in that gorgeous city, these are the last days to do so.

👉 Submit your proposal for PyConES 2026: https://pretalx.com/pycones-2026/cfp 

🦬PyStok

PyStok #82 meetup lands on 20 May, 18:00 at Zmiana Klimatu in Białystok, Poland, and free registration is officially live. Grab your spot at https://pystok.org/najblizsze-wydarzenie to dive deep into RAG/LLM Wiki and the PLLuM (Polish Large Language Model) project. Between the "speed dating" networking, JetBrains giveaways and the legendary "Podlaskie afterparty", it’s the perfect spot to soak up those unique North-East Polish vibes and talk Python and AI with the local crowd.

📣 Community Outreach

🏖️PyCon US

Several members of the EuroPython Society have traveled across the ocean to join the biggest gathering of Pythonistas, which this year takes place in Long Beach, California. If you’re there this weekend, make sure to look up the EuroPython booth and say “hi” to the team!

🎁 Sponsor Spotlight

We’d like to thank Manychat for sponsoring EuroPython.

Manychat builds AI-powered chat automation for 1M+ creators and brands at real production scale.

View job openings at Manychat

👋 Stay Connected

Follow us on social media and subscribe to our newsletter for all the updates:

👉 Sign up for the newsletter: https://blog.europython.eu/portal/signup

We’ll be announcing more keynotes in the upcoming days, and the detailed schedule will be available soon, so you can plan your conference experience. Just eight weeks are left before we all meet in the City of Castles and Dragons. See you there! 🐍❤️

Cheers,

The EuroPython Team

May 15, 2026 06:00 AM UTC

May 14, 2026


Kay Hayen

Nuitka Release 4.1

This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler (“download now”).

This release adds many new features and corrections, with a focus on async code compatibility, missing generics features, Python 3.14 compatibility, and, once again, Python compilation scalability.

Bug Fixes

Package Support

New Features

Optimization

Anti-Bloat

Organizational

Tests

Cleanups

Summary

This release builds on the scalability improvements established in 4.0, with enhanced Python 3.14 support, expanded package compatibility, and significant optimization work.

The --project option seems usable now.

Python 3.14 support remains experimental; it only barely made the cut and will likely be completed in hotfixes. Some of the corrections came in so late before the release that it was not possible to feel confident declaring it fully supported just yet.

May 14, 2026 10:00 PM UTC


Real Python

Quiz: Cursor vs Windsurf: Which AI Code Editor Is Best for Python?

In this quiz, you’ll test your understanding of Cursor vs Windsurf: Which AI Code Editor Is Best for Python?

By working through these questions, you’ll revisit how the two editors differ across code completion, agentic multi-file editing, and debugging.

You’ll also reconnect with the audit points worth applying whenever an AI agent writes Python on your behalf.



May 14, 2026 12:00 PM UTC

Quiz: Python Metaclasses

In this quiz, you’ll test your understanding of Python Metaclasses.

Metaclasses sit behind every class you write in Python, and they’re one of the language’s deeper object-oriented concepts. By working through this quiz, you’ll revisit how classes are themselves objects, how type creates them, and how a custom metaclass lets you customize class creation.

You’ll also reflect on when a custom metaclass is actually the right tool and when a simpler technique does the job better.
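For a taste of what the quiz covers, here is a minimal custom metaclass. The registry idea is an illustrative example, not from the quiz: `type` builds every class, and subclassing it lets you hook class creation.

```python
# The metaclass records the name of every class it creates.
registry = []

class Registered(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        registry.append(cls.__name__)  # runs at class-definition time
        return cls

class Plugin(metaclass=Registered):
    pass

class JSONPlugin(Plugin):  # inherits the metaclass, so it is registered too
    pass

print(registry)      # ['Plugin', 'JSONPlugin']
print(type(Plugin))  # the metaclass, not `type`
```

Note that registration happens when the class statement executes, with no instances involved, which is exactly the "classes are themselves objects" point the quiz revisits.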



May 14, 2026 12:00 PM UTC


Python Engineering at Microsoft

PyCon US 2026

Come See Us at PyCon US 2026!

Microsoft and GitHub will be at PyCon US 2026, May 14–17 in Long Beach, CA. Stop by our booth, say hello, and tell us about your experience with our tools and services. We’d love to meet you.

Don’t miss the Meta booth on Saturday at 1 p.m., where we’ll be showing off the integration of Pylance with Meta’s new Pyrefly type checker. The integration is currently in early preview in our Insiders build, and we can’t wait to bring it to all our users later this year.

Hands-on Labs at the Booth

Drop in for 10-minute interactive labs.

Talks and Sessions

  • Wed, May 13 · 9:00 a.m.–12:30 p.m. · Room 101A: Build your first MCP server in Python (Pamela Fox)
  • Wed, May 13 · 1:30 p.m.–2:30 p.m. · Room 201B: Dungeons and Databases: Build NPC agents to work with data in DocumentDB and Postgres, Microsoft sponsor session (Marko Hotti, Patty Chow)
  • Thu, May 14 · 2:40 p.m.–3:05 p.m. · Room 104C: Education Summit: Big Lessons from Small Models, Teaching Python AI with SLMs (Gwyneth Peña-Siguenza)
  • Thu, May 14 · 3:40 p.m.–4:05 p.m. · Room 104C: Education Summit: Your Slides, But Faster, Building an AI-powered presentation workflow (Pamela Fox)
  • Fri, May 15 · 3:30 p.m.–4:00 p.m. · Room 104C: PyCharlas: Cómo pasé de perdida a enseñar Python + IA a miles, en un año (“How I went from lost to teaching Python + AI to thousands, in one year”) (Gwyneth Peña-Siguenza)
  • Sat, May 16 · 2:30 p.m.–3:45 p.m. · Room 201A: Maintainer Summit Tools Track: Dev Containers (Sarah Kaiser)
  • Sun, May 17 · 1:00 p.m.–1:30 p.m. · Grand Ballroom A: A bridge over (not) troubled waters: Collecting marine data from your couch (Sarah Kaiser)

Can’t wait to see you there!

The post PyCon US 2026 appeared first on Microsoft for Python Developers Blog.

May 14, 2026 12:18 AM UTC


Bob Belderbos

Learn agentic AI in Python with 10 small exercises

Most "build an AI agent" tutorials hand you a framework and skip the part where you actually understand what it's doing under the hood. When the abstraction breaks, you can't debug it because you never built the layer underneath. Juanjo and I think that gap is worth closing.

Yesterday we shipped 10 small browser-based exercises that walk through that layer one pattern at a time (more on how we run them in the browser with Pyodide here).

This article is the conceptual journey behind them: how you get from "I can call Claude" to a complete agent loop with a testable architecture and a human-in-the-loop workflow. Each stage builds on the previous one.

Stage 1: make a model reply (exercise 1)

Every agent app starts with the same 3-line skeleton. Build a client, call messages.create, read content[0].text. The shape doesn't change much. Only what wraps around it does.

import anthropic

client = anthropic.Anthropic()
msg = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=256,
    messages=[{"role": "user", "content": "Say hi"}],
)
print(msg.content[0].text)

Why content[0].text and not .text? Because content is a list of blocks (text, tool_use, and others). That list is how tool use plugs in later without breaking the response shape. Get this mental model before anything else.

Stage 2: make the reply machine-readable (exercises 2, 3)

Raw LLM strings are unreliable. The fix is two paired habits: a specific system prompt that locks the output shape, and a Pydantic model that validates it on the way back in.

from pydantic import BaseModel

class ExpenseResult(BaseModel):
    category: str
    confidence: float

result = ExpenseResult.model_validate_json(msg.content[0].text)

Treat the system prompt like an API contract. Say "JSON only", show the literal shape, forbid improvisation ("no punctuation, no explanation, nothing else"). The phrase "nothing else" is doing real work; without it, models love to append a friendly sentence that breaks your parser.
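As a dependency-free sketch of that contract idea, here is the same shape validated with the standard library instead of Pydantic. The prompt wording and field names mirror the expense example above; the exact phrasing is illustrative.

```python
import json

# The system prompt written as an API contract (wording is illustrative):
SYSTEM = (
    "You are an expense classifier. Reply with JSON only, exactly this shape: "
    '{"category": "<string>", "confidence": <float between 0 and 1>}. '
    "No punctuation, no explanation, nothing else."
)

def parse_reply(raw: str) -> dict:
    """Validate the model's reply against the contract; fail loudly otherwise."""
    data = json.loads(raw)  # raises ValueError if the model got chatty
    if set(data) != {"category", "confidence"}:
        raise ValueError(f"unexpected keys: {sorted(data)}")
    if not 0.0 <= float(data["confidence"]) <= 1.0:
        raise ValueError("confidence out of range")
    return data

print(parse_reply('{"category": "travel", "confidence": 0.92}'))
```

The failure modes are the point: a friendly sentence appended to the JSON, a missing key, or an out-of-range confidence all raise immediately instead of silently corrupting downstream logic.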

Stage 3: make it remember (exercise 4)

LLMs don't remember anything. They have no state, no memory, no context beyond the current call. The "conversation" is a fiction we create by sending the whole message history every time.

To get a continuous conversation, you keep the list of {"role": ..., "content": ...} dicts and send the whole thing every turn. Append the user message before the call, the assistant reply after. Roles must alternate.

history = []

def ask(user_msg):
    history.append({"role": "user", "content": user_msg})
    reply = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=512,
        messages=history,
    ).content[0].text
    history.append({"role": "assistant", "content": reply})
    return reply

State lives in your code, not the model. That single realization clears up most of the confusion students have about context windows and "memory."

Stage 4: give the model hands (exercise 5)

Tool use turns a chatbot into something that can act. The loop is dumber than people think:

while True:
    response = client.messages.create(..., tools=TOOLS, messages=messages)
    if response.stop_reason == "end_turn":
        return response.content[0].text
    # else: run the tool the model asked for, append the result, loop again

Two gotchas: append the full response.content as the assistant turn (it contains the tool_use blocks the model needs to see), and tool results come back wrapped in a user message, not assistant.
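Those two gotchas are easier to see as literal message dicts. In this minimal sketch, the block shapes follow Anthropic's Messages API, but the IDs, tool name, and values are made up.

```python
messages = [
    {"role": "user", "content": "What is 6 x 7?"},
    # Gotcha 1: the assistant turn is the FULL content list,
    # tool_use block included, so the model can see what it asked for.
    {
        "role": "assistant",
        "content": [
            {"type": "text", "text": "I'll multiply those."},
            {"type": "tool_use", "id": "toolu_01", "name": "multiply",
             "input": {"a": 6, "b": 7}},
        ],
    },
    # Gotcha 2: the tool result goes back as a *user* message, not assistant.
    {
        "role": "user",
        "content": [
            {"type": "tool_result", "tool_use_id": "toolu_01", "content": "42"},
        ],
    },
]

roles = [m["role"] for m in messages]
print(roles)  # ['user', 'assistant', 'user'] -- roles still alternate
```

Seen this way, the alternating-roles rule from Stage 3 still holds: the tool result simply rides inside the next user turn.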

Stage 5: make it swappable and testable (exercises 6, 7, 8)

By exercise 6 the chatbot works, but it is also often a highly coupled mess, importing external dependencies like anthropic and sqlite3 straight into the business logic. Time for three common patterns, applied to LLM apps.

That's the four-layer agent architecture, built piece by piece instead of dumped on you all at once.

Stage 6: keep a human in the loop (exercise 9)

When the model returns a confidence score, use it. Above the threshold: auto-accept. Below: show the suggestion and let the user confirm or override.

def process(result, threshold=0.8):
    if result.confidence >= threshold:
        return result.category
    answer = input(f"Accept '{result.category}'? (Enter to confirm): ").strip()
    return answer or result.category

Make the accept path the cheapest action (empty input or y). Users pay the manual handling cost only when overriding. This is what separates a trusted assistant from one that quietly mislabels things, and it's the gap between "AI demo" and production-ready workflow.

Stage 7: generalize the loop (exercise 10)

The agent is exercise 5 with one change: replace the hardcoded function call with a TOOL_FUNCTIONS[name] lookup.

TOOL_FUNCTIONS = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}
# inside the loop:
content = str(TOOL_FUNCTIONS[block.name](**block.input))

Now adding a tool is one schema entry plus one dict entry. Swap add/multiply for search_web, query_db, send_email and the loop is identical. Look at agent frameworks under the hood (LangChain, OpenAI Assistants) and you'll see this same pattern.

What the journey teaches

Frameworks make sense once you can write the layer underneath. Skip that, and you are stuck the first time the abstraction leaks. After coaching many developers through this, the dividing line is clear: have they ever written the loop themselves?

The 10 exercises are deliberately small. The arc matters more than any single one. Once you've done them, "agentic AI" stops being "magic" and starts being a loop, schema, and some patterns you might already know.

Try them out:

  1. In the browser: pythonagenticai.com/exercises. No install, no API key, no dependencies. Loads fast.
  2. Locally: clone the repo and work through them in your IDE.

Keep reading

May 14, 2026 12:00 AM UTC

May 13, 2026


"Michiel's Blog"

httpx2!

It’s six weeks after we forked httpx and named our package httpxyz. Yesterday, the Pydantic people started their own fork, httpx2.

TL;DR: while we think httpxyz was definitely needed, we welcome httpx2 and think it should be the ‘blessed’ fork.

httpxyz logo

About httpx2

Our fork

We did a bunch of work on httpx: merging old open pull requests, forking httpcore, and making serious improvements that fix performance and other issues.

The Pydantic fork

Straight after we made our fork, I contacted Kludex, who is, among other things, a maintainer of Starlette. He said that he had also been thinking about a fork but might prefer to do one himself, and that he thought ours could not become popular because it’s on Codeberg instead of GitHub.

I’m not really sure about that last one. While it’s true that there are no big examples yet of popular Python packages on Codeberg, more and more projects are moving there. Also, even though we are on Codeberg, we were still gaining ‘stars’ every single day, and had the Pydantic team backed our fork, their influence could definitely have made it a success. The majority of users don’t care which forge hosts the code; they install from PyPI, via pip or uv. Where the code is hosted is not really a factor in its popularity.

The way forward

The reason I started httpxyz was because of the impasse httpx was in, and that I felt something had to be done. It’s not that I wanted to be the maintainer of an HTTP library per se ;-)

So now that Pydantic, with their skillful team and their powerful ecosystem of packages, is creating their own fork, there is no point really in trying to compete with them. We’ll keep httpxyz up; but we will support httpx2 and will urge anyone who is trying to switch away from httpx to consider httpx2.

The current situation

As it stands, httpx2 lacks the performance improvements we added to httpxyz, but it will not be long before they add those, too.

They have also already made some smart decisions on points I had been unsure about.

I have great trust in their stewardship of the module. We don’t need ‘competing’ forks; we’ll fully support httpx2 and will encourage the community to do the same!

Thanks, and have fun!

Discussion on Hacker News

May 13, 2026 07:00 PM UTC


Python Software Foundation

PSF Welcomes Hudson River Trading (HRT) as a Visionary Sponsor

[May 13, 2026] – The Python Software Foundation (PSF) is excited to announce that Hudson River Trading (HRT), a global leader in quantitative trading, has made a commitment to support Python and the PSF as a Visionary Sponsor. 

HRT’s "Visionary" sponsorship—our highest tier—will help to support the foundation’s core work of advancing and protecting the Python programming language and supporting a diverse and international community of Python programmers. HRT is the first quantitative trading firm to become a PSF Visionary Sponsor, alongside companies including NVIDIA, Google, Fastly, Bloomberg, Meta, and Anthropic. Contributions at this level directly fund the critical work that keeps Python thriving, including:

A Shared Commitment to Python

Hudson River Trading is no stranger to the power of Python. As a leading multi-asset class quantitative trading firm, HRT relies on Python for research, data analysis, and engineering workflows. With this donation, HRT is giving back to the tools that empower their engineers and helping to ensure that Python remains flexible, effective, and welcoming in the ways that have made it one of the most popular programming languages in the world. Read more about Open Source at HRT on this page.

“Python is a cornerstone of HRT’s research and trading infrastructure. Our engineers use Python extensively to build cutting-edge tooling that enhances our developer workflows, and we believe strongly in contributing to the open source software that makes our work possible. We are proud to support the PSF as a Visionary Sponsor helping to safeguard Python as a robust, accessible, and community-driven language for years to come.”  – Prashant Lal, Partner at Hudson River Trading

“Part of HRT's edge is our engineering, and one of our core values is 'Make It Better'. Our support of the Python Software Foundation – alongside our contributions to many other open source projects – reflects our desire to remain active, collaborative participants in the OSS engineering community over the long term, for the benefit of all.” – Hashem, Lead Software Engineer at Hudson River Trading

“At HRT, we’ve always believed that the best way to advance Python is by working hand-in-hand with the community. Our internal work on lazy imports gave us deep expertise in the problem space, and we channeled that experience directly into open collaboration by contributing to the development of PEP 810. We pride ourselves on being exemplary participants in both the trading markets and the open source community, and our sponsorship of the Python Software Foundation reflects that genuine spirit of collaboration.” – Pablo Galindo Salgado, Lead Software Engineer at Hudson River Trading

As part of its ongoing participation in the Python ecosystem, HRT will be open sourcing some of its own projects and announcing additional OSS contributions later this year. To learn more about HRT’s open engineering, research, and data science roles, visit https://www.hudsonrivertrading.com/careers/. 

The PSF is grateful for Hudson River Trading’s support, alongside that of each of our Visionary Sponsors, and we hope you will join us in thanking them for their commitment to the PSF and the Python community!

About Hudson River Trading (HRT)

Hudson River Trading (HRT) is a leading quantitative trading firm at the forefront of technical innovation in global financial markets. Every day, we bring together the world’s sharpest minds to collaboratively solve challenging problems and build technology that will drive the future of trading. Leveraging one of the world’s most sophisticated computing environments for research and development, we trade across asset classes and time horizons on more than 200 markets worldwide. We are a leading voice advocating for fair and transparent markets everywhere and dedicated to creating a better trading landscape for all. For more information, visit www.hudsonrivertrading.com. 

About the Python Software Foundation (PSF)

The Python Software Foundation is a US non-profit whose mission is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers. The PSF supports the Python community using corporate sponsorships, grants, and donations. Are you interested in sponsoring or donating to the PSF so we can continue supporting Python and its community? Check out our sponsorship program, donate directly, or contact our team at sponsors@python.org!

May 13, 2026 05:19 PM UTC


Real Python

How to Use OpenCode for AI-Assisted Python Coding

OpenCode is an open-source AI coding agent that runs in your terminal and lets you analyze and refactor a Python project through conversational commands. In this guide, you’ll install it on your system, set it up with a free Google Gemini API key, and learn the basics of how to use it in your daily programming work.

Here’s what OpenCode’s main interface looks like:

OpenCode's Initial Screen

OpenCode works as a conversational assistant you explicitly direct. Ask it to analyze functions, refactor code, or explain issues. Press Enter to send your query, and you’ll get a response with full awareness of your project context. It supports more than seventy-five AI providers, including Anthropic, OpenAI, and Google Gemini.

If you’re a Python developer who prefers working in the terminal, OpenCode offers deliberate, context-aware assistance and a customizable AGENTS.md configuration file.
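The AGENTS.md file is free-form Markdown that the agent reads for project context, so its contents depend entirely on your project. A hypothetical sketch for a small project like the dice-rolling sample used in this guide might look like:

```markdown
# AGENTS.md (hypothetical example)

## Project
A dice-rolling CLI written in Python 3.11+.

## Conventions
- Format code with `black`; keep functions under 30 lines.
- Use `pytest` for tests and run them before proposing changes.
- Never modify files outside the project directory without asking.
```

The agent folds these instructions into its context on every request, which is how you steer its style and guardrails without repeating yourself in each prompt.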

Take the Quiz: Test your knowledge with our interactive “How to Use OpenCode for AI-Assisted Python Coding” quiz. You’ll receive a score upon completion to help you track your learning progress:



Quiz yourself on OpenCode: install it, connect an AI provider, and use it to analyze and refactor Python from your terminal.

Prerequisites

Before you start working with OpenCode, you’ll need to fulfill the following prerequisites regarding your current system and working environment:

  • Python 3.11 or higher for the sample project
  • A modern terminal emulator

You also need an AI provider account. In this guide, you’ll use Google AI Studio to get a free Gemini API key. The free Gemini tier lets you follow along without any additional costs. However, you can also use Anthropic, OpenAI, or GitHub Copilot if you already have subscriptions to those services.

This guide uses a sample project consisting of a dice-rolling script. You’ll find the full source code in a collapsible block at the start of Step 2. The download below includes the starting script and the final refactored version so you can compare your work when you’re done:

Get Your Code: Click here to download the free sample code you’ll use to learn about AI-assisted Python coding with OpenCode.

You’ll also need some background knowledge of Python programming and basic experience with your operating system’s terminal or command line.

Step 1: Install and Set Up OpenCode

It’s time to install OpenCode and get it talking to a model. You’ll install the tool on your system, authenticate with Gemini using a free API key, configure a default model, and verify that OpenCode responds correctly to your Python questions before you start coding with it.

Install and Launch OpenCode

The quickest way to install OpenCode is to use the official installation script, which you can do with the following command:

Language: Shell
$ curl -fsSL https://opencode.ai/install | bash

This script detects your platform, downloads the appropriate binary, installs the tool, and adds it to your PATH.

If you prefer a package manager, you can also install OpenCode with Homebrew on macOS or Linux:

Language: Shell
$ brew install anomalyco/tap/opencode

Note that the Homebrew team maintains the official formula and updates it less frequently than the installation script above.

Alternatively, you can install it as a Node.js package using npm if you already have this tool on your system:

Language: Shell
$ npm install -g opencode-ai

If you’re on Windows, the best experience comes from using WSL (Windows Subsystem for Linux). Set up WSL first by following Microsoft’s WSL installation guide, then open a WSL terminal and run the curl command above. For optimal performance, you should store your project within the WSL filesystem rather than on a Windows drive.

Read the full article at https://realpython.com/opencode-guide/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

May 13, 2026 02:00 PM UTC


PyCharm

Support for uv, Poetry, and Hatch Workspaces (Beta)

Workspaces are increasingly the go-to choice for companies and open-source teams aiming to manage shared code, enforce consistency, and simplify dependency management across multiple services. Working within massive codebases often means juggling many interdependent Python projects simultaneously.

To streamline this experience, PyCharm 2026.1.1 introduced built-in support for uv workspaces, as well as those managed by Poetry and Hatch. This new functionality – currently in Beta – allows the IDE to automatically manage dependencies and environments across your entire workspace.

Intelligent workspace detection

When you open a workspace, PyCharm can now derive its entire structure and all its dependencies directly from your pyproject.toml files. This allows the IDE to understand relationships between projects deeply, significantly reducing the amount of configuration you have to do manually.
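As a rough illustration of the metadata PyCharm reads (the project names and paths here are made up), a uv workspace root lists its members in pyproject.toml:

```toml
# pyproject.toml at the workspace root: declares the members
[project]
name = "acme-monorepo"
version = "0.1.0"

[tool.uv.workspace]
members = ["packages/*"]
```

A member package (say, packages/api) would then depend on a sibling via `[tool.uv.sources]` with an entry like `acme-core = { workspace = true }`. It is exactly this metadata that the IDE can use to derive the dependency graph without manual configuration.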

Because this is a fundamental change to how PyCharm handles your workspace, we’ve implemented it as an opt-in feature. Here is what you need to know about the transition:

Managing workspaces and their projects

PyCharm now provides an integrated experience that handles the complexities of multi-package setups in uv workspaces automatically. When you open a uv workspace, the IDE identifies the individual projects and their interdependencies, ensuring the project structure is ready for you to work with.

Visualizing workspace dependencies

Once the workspace is loaded, you can verify how your projects relate to one another. PyCharm presents these dependencies in Settings | Project Dependencies.

These relationships are derived directly from your configuration and are shown as read-only in the UI. To make changes to the dependency graph, you can edit the pyproject.toml file manually – PyCharm will then update its internal model.

Automatic environment configuration

PyCharm prioritizes a zero-config approach to your Python SDK. When you open a .py or pyproject.toml file within a project, the IDE performs an immediate check.

If a compatible environment already exists on your system, PyCharm automatically configures it as the SDK for that project. If no environment is detected, a file-level notification will appear suggesting that you create a new uv environment and install the necessary dependencies for that project.

Maintaining environment consistency

Beyond the initial setup, PyCharm continuously monitors the health of your environment to ensure it stays in sync with your defined requirements. 

If a dependency is not defined in your pyproject.toml file but is imported in your code, PyCharm will trigger a warning with a Sync project quick-fix to resolve these discrepancies.

Import management

PyCharm also assists when you are actively writing code by identifying gaps in your project configuration.

If you import a package that isn’t present in the environment and is not yet listed in the project’s pyproject.toml, the IDE will detect the omission. A quick-fix will suggest adding the package to the environment and updating the corresponding .toml file simultaneously.

Transparency via the Python Process Output tool window

While PyCharm automates the backend execution of commands – such as uv sync --all-packages – it remains fully transparent.

You can track all executed commands and their live output in the Python Process Output tool window. If synchronization fails for an environment, you can analyze the specific error logs to quickly identify the root cause.

Poetry and Hatch workspaces

The logic for Poetry and Hatch workspaces follows this exact same workflow. PyCharm detects projects via their pyproject.toml files and manages the environments with the same automated precision.

The only minor difference is in tool selection – the suggested environment tool is determined by what you have specified in your pyproject.toml. If no tool is specified, PyCharm will prioritize uv (if installed) or a standard virtual environment to get you up and running quickly.

Looking ahead

This Beta version of the functionality is just the beginning of our focus on supporting complex workspace structures. We are already working on expanding the UI to allow creating new projects, linking dependencies, and activating the terminal for specific projects.

As we refine these features, your feedback is our best guide – please share your thoughts or report any issues on our YouTrack issue tracker.

May 13, 2026 12:28 PM UTC


Python GUIs

How to Add Custom Widgets to Qt Designer — Use widget promotion to integrate your own Python widgets into Qt Designer layouts

Can I use custom widgets in Qt Designer?

When you're building Python GUI applications with PyQt6 and Qt Designer, you'll reach a point where the built-in widgets aren't enough. Maybe you've created a custom plotting widget or a specialized input control in Python, and you want to place it into your Qt Designer layouts alongside all the standard widgets.

The good news is that Qt Designer supports exactly this through a feature called widget promotion. In this tutorial, you'll learn how to take any custom Python widget and integrate it into your Qt Designer .ui files, so you can position and size it visually just like any built-in widget.

The bad news is that since Qt Designer is a C++ application, it can't run your Python code. That means you won't see your custom widget rendered in the Designer preview. Instead, you'll see a placeholder (the base widget type you promoted from). Once you load the .ui file in your running Python application, your custom widget appears in all its glory.

With that caveat aside, let's look at how we can use custom widgets in Qt Designer.

What is Widget Promotion?

Widget promotion is Qt Designer's way of letting you swap a standard widget for a custom one. You start by placing a regular widget on your form, a plain QWidget for example, and then tell Qt Designer: "When this UI is actually used, replace this placeholder with my custom widget class instead."

Behind the scenes, this adds some extra information to the .ui file. When you load that file in Python using uic.loadUi() or compile it with pyuic6, the loader knows to import your custom class and use it in place of the base widget.

Creating a Custom Widget

Before we get into Qt Designer, let's create a simple custom widget in Python. We'll make a basic colored widget that draws a gradient background—something you'd never get from a standard widget.

Create a new file called custom_widgets.py:

python
from PyQt6.QtWidgets import QWidget
from PyQt6.QtGui import QPainter, QLinearGradient, QColor
from PyQt6.QtCore import Qt


class GradientWidget(QWidget):
    """A custom widget that displays a gradient background."""

    def __init__(self, parent=None):
        super().__init__(parent)

    def paintEvent(self, event):
        painter = QPainter(self)
        gradient = QLinearGradient(0, 0, self.width(), self.height())
        gradient.setColorAt(0.0, QColor("#2c3e50"))
        gradient.setColorAt(1.0, QColor("#3498db"))
        painter.fillRect(self.rect(), gradient)
        painter.end()

This widget overrides paintEvent to draw a diagonal gradient from dark blue to lighter blue. It's a straightforward example, but the same promotion process works for any custom widget—complex plotting canvases, custom controls, or anything else you build by subclassing a Qt widget.

Setting Up Your Project Structure

For widget promotion to work, the Python file containing your custom widget needs to be importable when your application runs. The simplest way to achieve this is to keep everything in the same directory:

text
my_project/
├── custom_widgets.py      # Your custom widget classes
├── mainwindow.ui          # Your Qt Designer file
└── main.py                # Your application entry point

The file name and class name matter here—you'll need to tell Qt Designer both of these during the promotion step.

Promoting a Widget in Qt Designer

Now we can open Qt Designer and set up the promotion.

Place a base widget on your form

Open Qt Designer and create a new Main Window (or open your existing .ui file). From the widget box on the left, drag a plain Widget (QWidget) onto your form. Position and resize it however you like—this is where your custom widget will appear when the application runs.

You can use any base widget class as your starting point. If your custom widget subclasses QPushButton, promote a QPushButton. If it subclasses QLabel, promote a QLabel. For our GradientWidget, which subclasses QWidget, a plain QWidget is the right choice.

Open the Promote Widgets dialog

Right-click on the widget you just placed. In the context menu, select Promote to.... This opens the Promoted Widgets dialog.

Promote to option in Qt Designer context menu

Fill in the promotion details

In the dialog, you'll see fields for three pieces of information:

  • Base class name – the class of the widget you placed on the form (QWidget here).
  • Promoted class name – the name of your Python class: GradientWidget.
  • Header file – the module to import the class from: custom_widgets (the file name without .py).

Promoted Widgets dialog filled in

Leave the Global include checkbox unchecked.

Add and promote

Click Add to add your class to the list of known promoted widgets. Then, with your class selected in the list, click Promote. The dialog closes, and you'll notice the widget's class name in the Object Inspector (top-right panel) now shows GradientWidget instead of QWidget.

That's it for the Designer side. Save your .ui file.

Promoting additional widgets

Once you've added a promoted class through this dialog, it becomes available for reuse. The next time you want to promote a widget to GradientWidget, just right-click the widget and you'll see it listed directly in the Promote to submenu—no need to open the full dialog again.

Loading the UI in Python

Now let's write the Python code to load the .ui file and see our custom widget in action. Create main.py:

python
import sys
from PyQt6.QtWidgets import QApplication, QMainWindow
from PyQt6 import uic


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        uic.loadUi("mainwindow.ui", self)


app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()

When you run this, uic.loadUi() reads the .ui file and sees that one of the widgets has been promoted to GradientWidget from the custom_widgets module. It automatically does the equivalent of:

python
from custom_widgets import GradientWidget

...and creates an instance of GradientWidget wherever you placed that promoted widget in your layout. Instead of a blank QWidget, you'll see your gradient background.
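Conceptually, the loader just performs a dynamic import using the two strings from the promotion dialog. This sketch shows the mechanism with a stdlib class standing in for custom_widgets.GradientWidget, so it runs without PyQt6 installed (it is not PyQt6's actual implementation):

```python
import importlib

def load_promoted(module_name, class_name):
    # Roughly what the loader does for a promoted widget: import the
    # module named in the "header file" field, then look up the
    # promoted class by name.
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Stdlib stand-in for load_promoted("custom_widgets", "GradientWidget"):
cls = load_promoted("collections", "Counter")
```

Once the class is resolved, the loader instantiates it in place of the placeholder widget.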

Using Compiled UI Files

If you prefer to compile your .ui files to Python using pyuic6 rather than loading them at runtime, promotion works the same way. Run:

sh
pyuic6 mainwindow.ui -o ui_mainwindow.py

If you open the generated ui_mainwindow.py, you'll find an import line near the bottom:

python
from custom_widgets import GradientWidget

The compiled code creates your GradientWidget instance in the right place automatically. You can then use the generated file in your application:

python
import sys
from PyQt6.QtWidgets import QApplication, QMainWindow
from ui_mainwindow import Ui_MainWindow


class MainWindow(QMainWindow, Ui_MainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)


app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()

Both approaches—runtime loading and compiled files—handle promoted widgets in the same way.

A More Practical Example: Embedding PyQtGraph

One of the most common reasons to promote widgets is to embed third-party plotting libraries like PyQtGraph into your Designer layouts. PyQtGraph's PlotWidget is a subclass of QGraphicsView, so you'd promote a QGraphicsView in Designer.

Here's how you'd fill in the promotion dialog for PyQtGraph:

  • Base class name: QGraphicsView
  • Promoted class name: PlotWidget
  • Header file: pyqtgraph

That's all it takes. When your application runs, the placeholder QGraphicsView becomes a fully functional PlotWidget that you can plot data on.

python
import sys
from PyQt6.QtWidgets import QApplication, QMainWindow
from PyQt6 import uic


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        uic.loadUi("mainwindow.ui", self)

        # self.graphWidget is the promoted PlotWidget
        # (use the objectName you set in Designer)
        self.graphWidget.plot([1, 2, 3, 4, 5], [10, 20, 15, 30, 25])


app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()

Promoting Widgets from Submodules

If your custom widget lives in a submodule or package, you can use dotted import paths in the Header file field. For example, if your project structure looks like this:

text
my_project/
├── widgets/
│   ├── __init__.py
│   └── gradient.py    # contains GradientWidget
├── mainwindow.ui
└── main.py

You would enter widgets.gradient as the header file in the promotion dialog. The loader will then do:

python
from widgets.gradient import GradientWidget

This keeps things organized as your project grows.

Troubleshooting Common Issues

"No module named 'custom_widgets'" — This means Python can't find the file containing your custom widget class. Make sure the module file is in the same directory as your script (or somewhere on your Python path), and that the name in the promotion dialog matches the file name exactly (without .py).

The widget appears blank or as a plain QWidget — Double-check that the promoted class name matches your Python class name exactly, including capitalization. GradientWidget and gradientwidget are different classes as far as Python is concerned.

The widget doesn't resize properly — Make sure you've added the promoted widget to a layout in Qt Designer. Widgets outside of layouts won't resize with the window, regardless of whether they're promoted or not.

Changes to your custom widget don't appear in Designer — Remember, Qt Designer can't render Python widgets. You'll always see the base widget type in the Designer preview. Run your application to see your custom widget.

Summary

Widget promotion is a straightforward way to bridge the gap between Qt Designer's visual layout tools and your custom Python widgets. The process is always the same:

  1. Place a base widget of the appropriate type in Qt Designer.
  2. Right-click and promote it, specifying your custom class name and module path.
  3. Save the .ui file and load it in your Python application.

Your custom widget won't be visible in the Designer preview—that's expected. But when your application runs, the promoted widget is swapped in seamlessly, giving you the best of both worlds: visual layout design with the full power of custom Python widgets.

For an in-depth guide to building Python GUIs with PyQt6 see my book, Create GUI Applications with Python & Qt6.

May 13, 2026 06:00 AM UTC


Bob Belderbos

Coding exercises that run in the browser with Pyodide

I've built coding-exercise platforms before (Python, Rust). AWS API Gateway + Lambda, Docker, etc. It works great, but that's a lot of infrastructure to teach someone a four-line function.

For our new Agentic AI cohort I wanted a free warm-up: ten short Python exercises that introduce the AI vendor SDK patterns (in this case Anthropic). The hard constraint was that visitors should be able to click "Run" without signing up, without bringing an API key, and without complex third party infrastructure. As this site is built on Cloudflare Pages, that meant an in-browser Python runtime. Enter Pyodide ...

Unlike toy Python interpreters, Pyodide runs real CPython compiled to WebAssembly (listen to my interview with Elmer Bulthuis on why Wasm is cool), which enables broad compatibility with the Python ecosystem, including native extension packages.

Getting it working was easy with some Claude Code prototyping; the interesting part was the last 20%. Here are some of the challenges I faced and how I worked around them.

Mocked tests + a stubbed SDK

Every exercise has a solution.py and a test_exercise.py. The tests look like this:

from unittest.mock import MagicMock, patch
from solution import get_completion

def test_returns_text():
    mock_client = MagicMock()
    mock_client.messages.create.return_value.content = [MagicMock(text="Hello, Pythonista!")]
    with patch("solution.anthropic.Anthropic", return_value=mock_client):
        assert get_completion("Say hello") == "Hello, Pythonista!"

patch("solution.anthropic.Anthropic") replaces the class with a mock for the duration of the with block. The original Anthropic class is never instantiated. Which means the only thing the real SDK contributes is the name anthropic.Anthropic existing somewhere on the Python path.
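The same mechanism in miniature, with a made-up Greeter class standing in for the SDK client. The real __init__ would raise if it ever ran, which demonstrates that patch prevents instantiation entirely:

```python
import sys
from unittest.mock import patch

class Greeter:
    def __init__(self):
        # Under patch, this never runs.
        raise RuntimeError("the real class was instantiated!")

    def hello(self):
        return "real hello"

def get_greeting():
    # Code under test: looks up Greeter by name at call time,
    # just like solution.py looks up anthropic.Anthropic.
    return Greeter().hello()

# Replace Greeter on this module for the duration of the block:
with patch.object(sys.modules[__name__], "Greeter") as mock_cls:
    mock_cls.return_value.hello.return_value = "mocked hello"
    result = get_greeting()

assert result == "mocked hello"
```

Because the name is resolved when get_greeting() runs, the mock is what gets called, and the real class only needs to exist on the import path.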

So I don't install it. I write a tiny stub package straight to Pyodide's in-browser filesystem:

const ANTHROPIC_INIT = `
class Anthropic:
    def __init__(self, *args, **kwargs):
        pass
`;
const ANTHROPIC_TYPES = `
class TextBlock: ...
class MessageParam: ...
class ToolParam: ...
class ToolUseBlock: ...
`;

await pyodide.loadPackage(["pytest", "pydantic"]);
pyodide.FS.mkdirTree("/home/pyodide/anthropic");
pyodide.FS.writeFile("/home/pyodide/anthropic/__init__.py", ANTHROPIC_INIT);
pyodide.FS.writeFile("/home/pyodide/anthropic/types.py", ANTHROPIC_TYPES);

It's a package, not a single file, because some exercises also do from anthropic.types import TextBlock, which I also needed to satisfy ty type checks. Both modules exist only so the imports resolve. The bodies never execute under test, thanks to the mocking.

# Inside Pyodide, before running pytest:
sys.path.insert(0, "/home/pyodide")
# `import anthropic` finds the stub. `patch` replaces it. Tests run.
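A plain-Python version of the same trick, with a hypothetical fakesdk package instead of anthropic: write a stub package to disk, put its parent directory on sys.path, and the imports resolve without installing anything.

```python
import sys
import tempfile
from pathlib import Path

# Write a stub package (hypothetical name) to a temp dir -- the
# Python equivalent of writing the anthropic stub into Pyodide's
# in-browser filesystem.
stub_root = Path(tempfile.mkdtemp())
pkg = stub_root / "fakesdk"
pkg.mkdir()
(pkg / "__init__.py").write_text(
    "class Client:\n"
    "    def __init__(self, *args, **kwargs):\n"
    "        pass\n"
)
(pkg / "types.py").write_text("class TextBlock: ...\n")

# Make the stub importable, exactly like sys.path.insert in Pyodide:
sys.path.insert(0, str(stub_root))

import fakesdk
from fakesdk.types import TextBlock

# The imports resolve; the stub bodies never need to do anything real.
client = fakesdk.Client()
```

The stub only has to satisfy the import statements; under test, patch swaps the classes out before any method is called.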

That one decision cuts ~3 seconds and several megabytes off the boot. The real anthropic package pulls in pydantic-core, httpx, httpcore, anyio, sniffio, idna, distro, certifi, typing-extensions. Every byte of that is irrelevant to learning the pattern, because the test never lets the SDK run anyway.

If you've read build the data layer before you touch the LLM, this is the same strategy: cut the AI piece down to its smallest shape so the rest of the engineering is more flexible.

Lazy-loading the runtime

Pyodide is 5 MB+ over the network. I don't want it loading on the homepage, or even on the exercise index page. Even on an exercise page, visitors might skim and leave. So the pyodide.js script tag isn't in the HTML. The page ships a ~250-line runner.js, and that script injects Pyodide on demand:

// Module-level constants, defined once at the top of runner.js:
const PYODIDE_VERSION = "0.27.7";
const PYODIDE_URL = `https://cdn.jsdelivr.net/pyodide/v${PYODIDE_VERSION}/full/`;
const PYODIDE_JS_SRI = "sha384-90so5tCKvl0xs9agU29IMKlAVzhfzFX7QO//YxQkRhJG58bBZrFN+2ZTRB026X5X";

async function ensurePyodide() {
  if (pyodide) return pyodide;
  if (bootPromise) return bootPromise;
  bootPromise = (async () => {
    if (typeof loadPyodide !== "function") {
      await new Promise((resolve, reject) => {
        const s = document.createElement("script");
        s.src = PYODIDE_URL + "pyodide.js";
        s.integrity = PYODIDE_JS_SRI;
        s.crossOrigin = "anonymous";
        s.onload = resolve;
        s.onerror = () => reject(new Error("Failed to load pyodide.js"));
        document.head.appendChild(s);
      });
    }
    pyodide = await loadPyodide({ indexURL: PYODIDE_URL });
    await pyodide.loadPackage(["pytest", "pydantic"]);
    // write the anthropic stub here…
    return pyodide;
  })();
  return bootPromise;
}

Two triggers prewarm the runtime before the user clicks Run:

cm.on("focus", prewarm);
runBtn.addEventListener("mouseenter", prewarm, { once: true });

The moment they tab into the editor or hover the button, the 3-second cold start starts ticking. By the time they're done typing, the runtime is usually ready. The cached bootPromise deduplicates: focus and hover both await the same in-flight promise, never two parallel boots.

Tracking progress without a backend

No users, no database, no sessions, but I still wanted to track per-exercise progress and persist code drafts.

One localStorage key holds the whole state:

const STORAGE_KEY = "pyai_progress_v1";
// { "first-api-call": { passed: true, code: "...", lastRun: 1736... } }

Three operations carry the state: saveCode(slug, code) runs on every CodeMirror change, markPassed(slug) runs when pytest returns 0, and get(slug) reads on page load to restore drafts and badges.
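In Python terms, the store is just a JSON blob keyed by slug. A rough analogue of the three operations, with a plain dict standing in for the browser's localStorage (this mirrors the JS, it isn't the site's actual code):

```python
import json
import time

STORAGE_KEY = "pyai_progress_v1"
_storage = {}  # stand-in for the browser's localStorage


def _load():
    return json.loads(_storage.get(STORAGE_KEY, "{}"))


def _save(state):
    _storage[STORAGE_KEY] = json.dumps(state)


def save_code(slug, code):
    # Runs on every editor change: persist the draft.
    state = _load()
    entry = state.setdefault(slug, {"passed": False})
    entry["code"] = code
    entry["lastRun"] = time.time()
    _save(state)


def mark_passed(slug):
    # Runs when pytest returns 0.
    state = _load()
    state.setdefault(slug, {})["passed"] = True
    _save(state)


def get(slug):
    # Runs on page load to restore drafts and badges.
    return _load().get(slug, {"passed": False, "code": ""})


save_code("first-api-call", "print('hi')")
mark_passed("first-api-call")
```

One key, one JSON value, three small functions: that's the entire persistence layer.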

In a similar vein, the Solution tab stays locked until the tests pass. The point of an exercise is the struggle, not the answer.

Once markPassed(slug) writes to localStorage, it also fires a pyai:passed event, and a separate tabs.js listener flips the solution from <div data-solution-locked> to <div data-solution-revealed> and lazy-fetches solution.py for a side-by-side compare. No reload. One key, three consumers (runner, list page, solution tab).

And the key is versioned: pyai_progress_v1. The day I want to change the shape, I can bump it to _v2 and old state cleanly stops loading. No migration code, no schema check.

The list page reads the same store on render and walks the DOM:

document.querySelectorAll(".exercises-list-item").forEach((item) => {
  const slug = item.dataset.exerciseSlug;
  const { passed } = window.PyAIProgress.get(slug);
  if (passed) item.classList.add("is-passed");
});

When passedCount() >= total, a hidden next-step block flips visible. That's the whole mechanism: ten green checks reveal one element, all computed in the browser from that one localStorage key.

All static, all local

The whole thing is a static site. Cloudflare serves the HTML, JS, and the synced exercise files. The browser does the rest. Zero extra cost. It scales for free because the load is on the client, not a server.

For development, uv runs the end-to-end check with a single command:

uv run scripts/e2e_test.py

It walks every exercise in headless Chromium, pastes the reference solution, clicks Run, asserts the test suite passes. Ten exercises in ~22 seconds. Anytime the upstream content changes I know in under half a minute whether all ten warm-ups still pass end-to-end. I will save the details of this Playwright end-to-end testing for another article.

Starter code

The site this runs on is standalone, so I put together a single-file Pyodide starter gist of a mini coding platform experience: write code in the browser, click "Run tests", and pytest runs against your code, all client-side. Lazy boot and the Solution/Tests tabs are wired up. I left out the SDK stub and localStorage progress for simplicity, but the core Pyodide integration is there. You can download it and build on it if you want to try your hand at a browser-based Python coding experience.

Try it out

Back to the 10 exercises, you can try them out here. They cover the basics that show up in the typical production Agentic AI app: a first API call, structured outputs with Pydantic, system prompts, multi-turn state, tool use, then the architectural patterns (Protocol, Repository, Service layer, HITL, the agent loop).

Keep reading

One bigger lesson I'm taking away from this: every time I've built a thing server-side over the years, I was usually paying a complexity tax for flexibility I didn't need. Sometimes the right architecture is to push the work to the client, especially where modern browsers and Wasm can handle this performantly and securely.

May 13, 2026 12:00 AM UTC

May 12, 2026


PyCoder’s Weekly

Issue #734: Dunder-Gets, Django Tasks in Prod, Codex CLI, and More (2026-05-12)

#734 – MAY 12, 2026
View in Browser »

The PyCoder’s Weekly Logo


Do You Get It Now?

Learn about Python’s .__getitem__(), .__getattr__(), .__getattribute__(), and .__get__(): how they’re different and where to use them.
STEPHEN GRUPPETTA

Using Django Tasks in Production

Django added a generic API for dealing with concurrent tasks in version 6. This post talks about how it has been used in production.
TIM SCHILLING

Use Codex CLI to Enhance Your Python Projects

Learn how to use Codex CLI to add features to Python projects directly from your terminal, without needing a browser or IDE plugins.
REAL PYTHON course

Depot CI: Built for the Agent era

alt

Depot CI: A new CI engine. Fast by design. Your GitHub Actions workflows, running on a fundamentally faster engine — instant job startup, parallel steps, full debuggability, per-second billing. One command to migrate →
DEPOT sponsor

PEP 828: Supporting ‘Yield From’ in Asynchronous Generators (Deferred to 3.16)

PYTHON.ORG

PEP 797: Shared Object Proxies (Deferred to 3.16)

PYTHON.ORG

Django Security Releases: 6.0.5 and 5.2.14

DJANGO SOFTWARE FOUNDATION

Articles & Tutorials

Handling Schema Issues in Polars

You’ve got this great data pipeline going until one day it stops working. A schema error caused by a column upstream has stopped you in your tracks. This post talks about the four different causes of schema errors and what to do about them.
THIJS NIEUWDORP

Textual: An Intro to DOM Queries (Part II)

Textual is a TUI framework library for building terminal applications. It uses a DOM to represent the widgets in the application, and that DOM is queryable. This is part 2 in a series on how to find things in your Textual DOM.
MIKE DRISCOLL

Everything You Always Wanted to Know About PyCon Sprints!

PyCon US includes coding sprints to work on CPython itself, or projects in the ecosystem like Django, Flask, and BeeWare. This post tells you all about sprints and how you can join in on the fun.
DEB NICHOLSON

Why TUIs Are Back

Terminal User Interfaces are seeing a resurgence in the tools space. This opinion piece briefly talks about the history of interfaces and why we are where we are now.
ALCIDES FONSECA

Parallel Python at Anyscale With Ray

Talk Python interviews Richard Liaw and Edward Oakes. They talk about Ray, an open source Python framework and distributed execution engine for AI workloads.
TALK PYTHON podcast

Python 3.14.5 Release Candidate

Normally nobody fusses over a release candidate of a point release, but 3.14.5 includes a major change: the rollback of the incremental garbage collector.
HUGO VAN KEMENADE

Wagtail 7.4: Custom Page Explorer, Preview Checks & More

Between autosave improvements, new ways to sort your pages, and a content checker upgrade, you’ll have a lot of reasons to move to Wagtail 7.4.
MEAGEN VOSS

The Simplest MCP Example Possible in Python

This guide introduces you to connecting your code to a local LLM model. It covers Ollama and FastMCP and what you can do with these tools.
AL SWEIGART

ChatterBot: Build a Chatbot With Python

Build a Python chatbot with the ChatterBot library. Clean real conversation data, train on custom datasets, and add local AI with Ollama.
REAL PYTHON

Hardening Firefox With Claude Mythos Preview

New details about what Mozilla found and how agentic harnesses helped them reproduce real bugs and dismiss false positives.
MOZILLA

Projects & Code

pytest-fly: pytest Observer

GITHUB.COM/JAMESABEL

Pymetrica: A Codebase Analysis Tool

GITHUB.COM/JUANJFARINA • Shared by Juan José Farina

PyWry: Cross-Platform Rendering Engine and UI Toolkit

GITHUB.COM/DEELEERAMONE

secure: HTTP Security Headers for FastAPI, Flask, Django

GITHUB.COM/TYPEERROR • Shared by Caleb Kinney

Kirokyu: Modular Task Management System

GITHUB.COM/AMRYOUNIS • Shared by Amr Younis

Events

Weekly Real Python Office Hours Q&A (Virtual)

May 13, 2026
REALPYTHON.COM

PyCon US 2026

May 13 to May 20, 2026
PYCON.ORG

Python Atlanta

May 14 to May 15, 2026
MEETUP.COM

Chattanooga Python User Group

May 15 to May 16, 2026
MEETUP.COM

PyDelhi User Group Meetup

May 16, 2026
MEETUP.COM

PyData London

June 5 to June 7, 2026
PYDATA.ORG • Shared by Tomara Youngblood


Happy Pythoning!
This was PyCoder’s Weekly Issue #734.
View in Browser »

alt

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

May 12, 2026 07:30 PM UTC


Marcos Dione

Monitoring Apache with SQL and Grafana

Ever since my last job, I have been wanting to build this. I don't think it's the first time I've done it, but for one reason or another, this time I did it (again?) in only two evenings.

In that job we had an internet facing API with Apache as the router in front of several services. All our metrics and even our billing was based on the Apache logs. We had a system that ingested the logs into a PostgreSQL database, and we tried to create Grafana panels and alerts based on that info. At the same time, I wanted to reproduce awstats in Grafana, and found it was almost impossible.

Another problem is that the usual tools for this, Loki or Prometheus, struggle with data that is too arbitrary (think of the referer or user_agent columns) or whose value space is too big (client is an IPv4 address, with 4 billion possible values). They effectively suffer (in principle) from what they call a "cardinality bomb": since they build one time series database (TSDB) per combination of fields (they call them "labels"), storage use is big and aggregation operations across TSDBs are expensive.

Last night I sat down to reimplement the ingestion side. Instead of PostgreSQL I used SQLite, mostly because almost all of my services (with low traffic, and mostly only me as a user) already use it. To be fair, and one really can't expect anything else, the script is quite straightforward. It uses regexps to parse the logs, which for the moment is good enough. I'm "releasing" it as is, because I'm tired, but you'll find some surprises around parsing the request line (see request_re and its handling); some janky ways to convert from str to int or datetime; and an iteration trick to use dataclasses as execute() arguments. I omitted some comments and all the testing:

#! /usr/bin/env python3

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone, tzinfo
import pathlib
import re
import sqlite3
import sys

# I miss dinant

# no 0-255 range check since this is written by apache
# if the number is not in that range, we have bigger problems
octet_re = r'\d{1,3}'
ip_re = r'\.'.join([ octet_re ] * 4)

word_re = r'[^ ]+'
identd_user_re = word_re  # it can be '-'
user_id_re = word_re      # it can be '-'

month_names = [ 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec' ]

day_re = r'\d{1,2}'
month_re = f"(?:{'|'.join(month_names)})"
year_re = r'\d{4}'
date_re = f"({day_re})/({month_re})/({year_re})"

time_re = r'(\d{2}):(\d{2}):(\d{2})'

utc_offset_re = r'(?:\+|\-)\d{4}'  # no capture

# fscking double escaping :(
date_time_re = f"\\[{date_re}:{time_re} ({utc_offset_re})\\]"

method_re = word_re
url_re = word_re  # technically not a word, but word_re is too generic

proto_re = r'HTTP'      # who are we kidding
version_re = r'\d\.\d'  # who are we kidding
proto_and_version_re = f"({proto_re})/({version_re})"

# idiot skrip kidz send no method or proto/version!
# and re is silly? enough to produce empty matches for the ()s here
# oh, but re.compile().match().groups() returns things like
# (None, None, None, None, '', '\\x16\\x03\\x02\\x01o\\x01', '', '')
# so we gained nothing
request_re = f'"(?:({method_re}) ({url_re}) {proto_and_version_re}|()({url_re})()())"'

number_re = r'\d+'

http_status_re = number_re
bytes_rx_re = number_re
bytes_tx_re = number_re
ttfb_re = f"(?:{number_re}|-)"
response_time_re = number_re

double_quoted_text_re = r'"([^"]+)"'
referer_re = double_quoted_text_re
user_agent_re = double_quoted_text_re

log_line_re = re.compile(f"^({ip_re}) ({identd_user_re}) ({user_id_re}) {date_time_re} {request_re} ({http_status_re}) ({bytes_rx_re}) ({bytes_tx_re}) ({ttfb_re}) ({response_time_re}) {referer_re} {user_agent_re}$")


@dataclass
class LogRecord:
    client_ip: str         # 0
    indent_user: str
    user_id: str
    date_time: datetime
    method: str
    url: str               # 5
    protocol: str
    protocol_version: str  # could be float, but we don't really care; besides, x.y.z?
    status: int
    bytes_rx: int
    bytes_tx: int          # 10
    ttfb: int  # maybe -!
    response_time: int
    referer: str
    user_agent: str

    @classmethod
    def from_log_line(cls, line):
        match = log_line_re.match(line)

        if match is None:
            raise ValueError(f"Malformed line: {line.strip()}")

        data = list(match.groups())
        new_data = []

        group_index = 0
        for field_index, (name, field) in enumerate(cls.__dataclass_fields__.items()):
            if field.type == datetime:
                # [11/May/2026:20:15:28 +0200]

                # convert month str to number
                data[group_index+1] = month_names.index(data[group_index+1]) + 1

                # convert to ints
                data[group_index:group_index+6] = [ int(x) for x in data[group_index:group_index+6] ]

                new_data.append( datetime(data[group_index+2], data[group_index+1], data[group_index],
                                          data[group_index+3], data[group_index+4], data[group_index+5], 0,
                                          utc_offset2tzinfo(data[group_index+6])) )

                group_index += 7

                continue

            # handle ttfb as '-': fall back to the response time next to it
            if field_index == 11 and data[group_index] == '-':
                data[group_index] = data[group_index+1]

            if data[group_index:group_index+4] == [ None, None, None, None ]:
                if group_index in (10, 14):
                    # handle (None, None, None, None, '', '\\x16\\x03\\x02\\x01o\\x01', '', '')
                    # handle ('GET', '/', 'HTTP', '1.0', None, None, None, None)

                    # no need to add anything, it's handled by the fallback
                    # but we still need to skip this cruft
                    group_index += 4
                else:
                    raise ValueError(f"Got confused: {(field_index, field.type, group_index, data[group_index], new_data)}")

            # convert ints
            if field.type == int:
                data[group_index] = int(data[group_index])

            # fallback
            new_data.append(data[group_index])
            group_index += 1

        return cls(*new_data)


    # implement the iterator protocol so we can mostly be passed as argument to execute()
    def __iter__(self):
        for value in self.__dict__.values():
            # the whole protocol could be replaced with .__dataclass_fields__.values() :shrug:
            # but this way I can do further conversions
            if type(value) == datetime:
                value = int(value.timestamp())

            yield value


def utc_offset2tzinfo(offset: str) -> tzinfo:
    # +0200 or -0230; the sign applies to both the hours and the minutes
    sign = -1 if offset[0] == '-' else 1
    hours = int(offset[1:3])    # 02
    minutes = int(offset[3:])   # 00

    return timezone(sign * timedelta(hours=hours, minutes=minutes), offset)


def connect():
    # if we test after sqlite3.connect(), the file is already created
    create = not pathlib.Path('./apache_logs.db').exists()

    conn = sqlite3.connect('./apache_logs.db')

    if create:
        conn.cursor().execute('''
CREATE TABLE "logs" (
    "client"    TEXT,
    "indent_user"   TEXT,
    "user_id"   TEXT,
    "timestamp" INTEGER,
    "method"    TEXT,
    "url"   TEXT,
    "protocol"  TEXT,
    "protocol_version"  TEXT,
    "status"    INTEGER,
    "bytes_rx"  INTEGER,
    "bytes_tx"  INTEGER,
    "ttfb"  INTEGER,
    "response_time" INTEGER,
    "referer"   TEXT,
    "user_agent"    TEXT
);''')

    return conn



def main():
    conn = connect()
    cursor = conn.cursor()

    for line in sys.stdin:
        try:
            log_record = LogRecord.from_log_line(line)
        except ValueError as e:
            print(e.args[0])
            continue

        cursor.execute('''INSERT INTO logs VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''',
                       tuple(log_record))
        conn.commit()


if __name__ == '__main__':
    main()
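The request_re surprise is easy to reproduce in isolation. This sketch rebuilds a simplified version of the pattern above and shows the empty-vs-None group behavior for malformed request lines:

```python
import re

# simplified rebuild of request_re from the script above: either a normal
# "METHOD URL PROTO/VERSION" request, or, for malformed script-kiddie
# lines, a bare token padded with empty capture groups
word_re = r'[^ ]+'
request_re = re.compile(
    f'"(?:({word_re}) ({word_re}) (HTTP)/(\\d\\.\\d)|()({word_re})()())"'
)

print(request_re.match('"GET /index.html HTTP/1.0"').groups())
# ('GET', '/index.html', 'HTTP', '1.0', None, None, None, None)
print(request_re.match('"garbage"').groups())
# (None, None, None, None, '', 'garbage', '', '')
```

Groups in the unmatched alternative come back as None, while the empty () groups in the matched alternative come back as '' — which is exactly the mix the from_log_line() bookkeeping has to untangle.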

One of the things I didn't do was to further play with the URLs. One could make a list of different apps based on whether there is any routing to different services, as in the cases of my previous job and my own server; or even different subdivisions of a single app, like for NextCloud:

ocs/v2.php/apps/serverinfo
remote.php/dav/files/USER
remote.php/dav/calendars/USER/CALENDAR/

etc. I haven't really thought about it; it could be implemented either as more columns or extra tables.
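If it ended up as an extra column, the classification could be a simple prefix match. This is only a hypothetical sketch; the route prefixes and app names here are illustrative, not taken from any real config:

```python
# hypothetical: derive an "app" column from the URL path; the prefixes
# below are illustrative NextCloud-style routes, not a real route table
import re

APP_PATTERNS = [
    (re.compile(r'/ocs/v2\.php/apps/'), 'ocs-api'),
    (re.compile(r'/remote\.php/dav/files/'), 'dav-files'),
    (re.compile(r'/remote\.php/dav/calendars/'), 'dav-calendars'),
]

def classify(url: str) -> str:
    for pattern, app in APP_PATTERNS:
        if pattern.search(url):
            return app
    return 'other'

print(classify('/remote.php/dav/files/USER'))  # dav-files
print(classify('/index.php/login'))            # other
```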

Today I managed to finish the rest.

The next step is to install this so it runs constantly with the output of tail --follow=name --retry piped into its stdin [1]. Left as an exercise for the reader; use SystemD units :)

Next is installing Grafana's plugin to read SQLite and declare the new Grafana datasource.

The hard part was to query the data in a way that was useful for Grafana. I managed to get a query like:

-- round to the minute
SELECT timestamp/60*60 AS time, status, COUNT(status) as "count"
FROM logs
-- $__from and $__to are defined by Grafana based on the dashboard's time range
WHERE timestamp >= $__from / 1000 and timestamp < $__to / 1000
GROUP BY timestamp/60, status

to get the count of different status codes per minute [2]. But this returns a table that looks like:

      time | status | count
1778533620 |    200 |    30
1778533620 |    207 |     3
1778533620 |    403 |     1

while Grafana is expecting one line per sample (but remember we're aggregating data) and one column per data series:

      time | 200 | 207 | 403
1778533620 |  30 |   3 |   1

I read how to pivot this in SQL, but it mostly works only if you know the different values for the status column beforehand. This might be feasible with HTTP status codes (I count 63 standard ones, including the joke 418 I'm a teapot), but that would be impossible for the referer or user_agent columns.
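For comparison, the static pivot looks like this — a sketch over a stripped-down copy of the schema, run through sqlite3 so it's self-contained. It only works because the status values (200, 207, 403) are enumerated by hand in the query:

```python
# sketch: a manual SQL pivot over a minimal copy of the logs schema;
# it works only because the status values are listed by hand
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE logs ("timestamp" INTEGER, status INTEGER)')
conn.executemany('INSERT INTO logs VALUES (?, ?)',
                 [(1778533620, 200)] * 30 + [(1778533620, 207)] * 3
                 + [(1778533620, 403)])

rows = conn.execute('''
SELECT timestamp/60*60 AS time,
       SUM(CASE WHEN status = 200 THEN 1 ELSE 0 END) AS "200",
       SUM(CASE WHEN status = 207 THEN 1 ELSE 0 END) AS "207",
       SUM(CASE WHEN status = 403 THEN 1 ELSE 0 END) AS "403"
FROM logs
GROUP BY timestamp/60
''').fetchall()
print(rows)  # [(1778533620, 30, 3, 1)]
```

One CASE per known value — fine for a fixed enum, hopeless for open-ended columns like referer or user_agent.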

Thanks to iRobbery#postgresql@libera.chat I found out about Grafana's Partition by values data transformation. Applying it to the column that defines the time series (status, etc.), it gives us exactly what we want!

And one can even include a pure table with all the logs to inspect when one finds weird spikes or values. I built queries that would have been almost impossible before, like transferred bytes per URL, methods per client, and more! One missing piece, if possible, would be to implement histograms, like last time we looked into this.


  1. One could cite the UNIX philosophy, but seriously, who wants to reimplement all the corner cases of that tail invocation? See for instance the 113 bugs found in the coreutils Rust reimplementation.

  2. One could use a dashboard variable to control this arbitrarily. One could get granularity per second! 

May 12, 2026 06:07 PM UTC


Real Python

Building Type-Safe LLM Agents With Pydantic AI

Pydantic AI is a Python framework for building LLM agents that return validated, structured outputs using Pydantic models. Instead of parsing raw strings from LLMs, you get type-safe objects with automatic validation.

If you’ve used FastAPI or Pydantic before, then you’ll recognize the familiar pattern of defining schemas with type hints and letting the framework handle the type validation for you.

By the end of this video course, you’ll understand that:


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

May 12, 2026 02:00 PM UTC


Django Weblog

2026 Django Developers Survey

The Django Software Foundation is once again partnering with JetBrains to run the 2026 Django Developers Survey 🌈

It’s an important metric of Django usage and is immensely helpful to guide future technical and community decisions.

After the survey closes, we will publish the aggregated results. JetBrains will also randomly select 10 winners (from those who complete the survey in full with meaningful answers) who will each receive a $100 Amazon voucher or the equivalent in local currency.

How you can help

Once you’ve done the survey, take a moment to re-share on socials and with your communities. The more diverse the answers, the better the results for all of us.

Please use the following links:

May 12, 2026 12:00 PM UTC


Real Python

Quiz: Building Type-Safe LLM Agents With Pydantic AI

In this quiz, you’ll test your understanding of Building Type-Safe LLM Agents With Pydantic AI.

By working through this quiz, you’ll revisit how Pydantic AI returns structured outputs from LLMs, how validation retries improve reliability, how tools and function calling work, how dependency injection flows through RunContext, and what trade-offs to expect when running agents in production.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

May 12, 2026 12:00 PM UTC

Quiz: The LEGB Rule & Understanding Python Scope

In this quiz, you’ll test your understanding of The LEGB Rule & Understanding Python Scope.

By working through this quiz, you’ll revisit how Python resolves names using the LEGB rule, what the local, enclosing, global, and built-in scopes look like in practice, and how the global and nonlocal statements let you reach across scope boundaries.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

May 12, 2026 12:00 PM UTC


Python Software Foundation

Announcing PSF Community Service Award Recipients!

The PSF Community Service Awards (CSAs) are a formal way for the PSF Board of Directors to offer recognition of work which, in its opinion, significantly improves the Foundation's fulfillment of its mission to build a vibrant, welcoming, global Python community. These awards shine a light on the incredible people who are the heart and soul of our community – those whose dedication, creativity, and generosity help the PSF fulfill its mission. The PSF CSAs celebrate individuals who have been truly invaluable, inspiring others through their example and demonstrating that service to the Python community leads to recognition and reward. If you know of someone in the Python community deserving of a PSF CSA award, please submit them to the PSF Board via psf@python.org at any time. You can read more about PSF CSAs on our website.

The PSF Board is excited to announce 5 new CSAs, awarded to Inessa Pawson, Kafui Alordo, Kalyan Prasad, Maria Jose Molina Contreras, and Paul Everitt, for their contributions to the Python community. Read more about their work and impact below. 

Inessa Pawson 

Inessa Pawson has been a tireless and dedicated contributor to the Python ecosystem for over eight years. She has led the PyCon US Maintainers Summit since 2020, not only shaping the event but actively opening doors for others to participate–onboarding new contributors and supporting attendees with characteristic warmth and care. 
 
Beyond PyCon US, Inessa has spearheaded the Maintainers and Community Track, the mentorship program, and the Teen Track at the SciPy Conference, and co-founded the Contributor Experience project, reflecting her deep commitment to making the Python community more inclusive and accessible. She brings that same dedication to her roles on the NumPy Steering Committee, the scikit-learn survey team, and the SPEC (Scientific Python Ecosystem Coordination) Steering Committee. As a leader on the pyOpenSci Advisory Council, Inessa has been instrumental in advancing the organization's mission to support open and reproducible science.

Kafui Alordo

Kafui Alordo has spent years building and nurturing the Python community in Ho, in the Volta Region of Ghana. What began for Kafui as volunteer coaching at the first Django Girls Ho workshop grew into co-organizing the second and third editions, and eventually leading the workshop as its primary organizer, while also lending his expertise as a coach and co-organizer at Django Girls events across Ghana. Recognizing that sustainable community growth starts with welcoming total beginners, Kafui introduced a coding bootcamp initiative for his user group that has broadened participation and helped new learners find their footing in Python. 

Kafui’s landmark achievement came with the organization of PyHo, the first-ever regional Python conference in Ho, which drew attendees from diverse backgrounds across the country. His impact has also extended well beyond Ghana, most recently stepping into the role of remote chair on the PyCascades organizing team.

Kalyan Prasad

Kalyan Prasad's journey in the Python community began in 2019 as a volunteer with the Hyderabad Python User Group (HydPy), one of India's largest Python communities, and he has grown steadily into one of its most consequential leaders. His dedication to PyConf Hyderabad has been especially remarkable–contributing across the CFP, program, and sponsorship teams, serving as co-chair in 2022, and stepping up as chair in both 2025 and 2026, representing four consecutive years of conference leadership at the regional and national level. 

At the national scale, Kalyan also served as co-chair for PyCon India 2023. Kalyan's commitment extends well beyond India, as he actively contributes to the broader Python ecosystem as a reviewer, mentor, and program committee member for conferences around the world. His care for community safety is further reflected in two years of service on the NumFOCUS Code of Conduct squad, ensuring that Python spaces remain welcoming and respectful for everyone. Kalyan has also joined the PSF Diversity & Inclusion Working Group this year, contributing to inclusion efforts. 

Maria Jose Molina Contreras

Maria Jose Molina Contreras has been a dedicated and wide-ranging contributor to the Python community, with deep roots in both Spanish-language and PyLadies initiatives. She has been a core organizer of PyLadiesCon since its inaugural edition in 2023, serving as co-chair in 2024 and 2025, and her tireless leadership helped make the most recent edition the most successful in the conference's history, raising over $55,000 in funds to support PyLadies members and chapters around the world. 

Maria’s commitment to Spanish-speaking Pythonistas is equally impressive: she contributes to the Python Docs ES initiative, coordinates events for Python en Español on Discord, and co-founded the PyLadies en Español initiative, including leading the PyLadies presence at PyCon US. At EuroPython, Maria has volunteered since 2023 and taken on growing responsibility, leading community booths, PyLadies events, and community organizer efforts in 2024 and 2025. She has also served as a reviewer for PyCon US Charlas since 2020 and has been a speaker at numerous conferences including PyCon US, EuroPython, and PyConES, sharing her expertise with audiences across the global community. 

Paul Everitt

Paul Everitt's relationship with Python stretches back to the very beginning! Paul was present at the early PyCons and played a foundational role as an incorporating member and director on the PSF's first Board of Directors, helping to establish the organization that supports Python to this day. Decades later, his commitment to the community remains as strong as ever, demonstrated through his long tenure as a Developer Advocate at JetBrains/PyCharm, where he has championed the company's sustained investment in Python open source. 

Paul’s advocacy extends beyond any one project, as he has provided support to smaller but important ecosystem projects like HTMX and remained a regular, encouraging presence at Python conferences and on podcasts. Most recently, Paul proved that his contributions are not merely historical–he co-authored PEP 750, introducing template strings (t-strings) as a significant new feature in Python 3.14, demonstrating a continued willingness to roll up his sleeves and shape the language itself. Whether writing PEPs, giving conference talks, or simply championing the people who make Python great, Paul’s generous and enthusiastic spirit is an invaluable gift to the Python community. 

May 12, 2026 11:55 AM UTC


Bob Belderbos

A Race Condition Rust Wouldn't Have Let Me Write

A two-agent Python service ran fine in tests. Two concurrent users hit it and one user's search results showed up in the other user's response. The pattern looked safe. The Rust port doesn't compile.

The pattern that looked fine

A former student walked me through this one. It's another case of module-level globals biting in concurrent code.

The agent in this service had a tool-call budget per query. Five tool calls, then stop. The implementation was the kind of thing I see in a lot of Python codebases:

_call_count = 0
_sources: list[str] = []
_lock = threading.Lock()

def reset() -> None:
    global _call_count, _sources
    with _lock:
        _call_count = 0
        _sources = []

def _check_and_increment() -> int:
    global _call_count
    with _lock:
        if _call_count >= MAX_CALLS:
            raise ToolCallLimitExceeded()
        _call_count += 1
        return _call_count

def _add_source(source: str) -> None:
    with _lock:
        _sources.append(source)

Every operation locks. Looks safe. Locally it is.

The orchestrator runs the Cypher and Mongo agents in parallel via asyncio.gather. A single user's request is fine because the two agents touch different modules. Streamlit puts each session on its own thread, so when two users query at the same time, both threads share the same _call_count, _sources, and _lock. Because Python modules are cached in sys.modules, _call_count isn't just a variable; it's a piece of memory shared by every thread in that process.

The race

Two users, each plans four tool calls (within their own five-call budget). Output from the repro:

[userA] DONE: made=2 expected=4 | sources: 2 own + 2 foreign
[userA] LEAKED foreign sources: ['userB:q0', 'userB:q1']
[userB] DONE: made=3 expected=4 | sources: 3 own + 2 foreign
[userB] LEAKED foreign sources: ['userA:q0', 'userA:q1']

Two failures at once.

The shared counter hits 5 before either user finishes, so each one gets their budget eaten. And get_sources() returns whatever happens to be in the shared list, mixed across users.

A timeline makes the leak obvious:

T+0  userA: lock, count 0->1, unlock     # userA's call 1 of 4
T+1  userB: lock, count 1->2, unlock     # userB's call 1 of 4
T+2  userA: lock, count 2->3, unlock     # userA's call 2 of 4
T+3  userB: lock, count 3->4, unlock     # userB's call 2 of 4
T+4  userA: lock, count 4->5, unlock     # userA's call 3 of 4
T+5  userB: lock, sees 5 >= MAX, raises  # userB barely started, budget gone

userA looks at the counter after two of its own increments and sees 4. "Wait, why is my count already 4?" Because userB has been incrementing the same number.

The locks were doing their job. Each individual op is atomic. They don't give per-request isolation, because there is no per-request anything. The data is one global.
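A stripped-down repro makes this concrete. The names here are illustrative, simplified from the service described above: two threads, each planning four calls within a five-call budget, all against one module-level counter:

```python
# minimal repro: two "users" on separate threads sharing one
# module-level counter behind a lock
import threading

MAX_CALLS = 5
_call_count = 0
_lock = threading.Lock()

def _check_and_increment() -> int:
    global _call_count
    with _lock:
        if _call_count >= MAX_CALLS:
            raise RuntimeError('budget exceeded')
        _call_count += 1
        return _call_count

made = {'userA': 0, 'userB': 0}

def run(user: str) -> None:
    for _ in range(4):  # each user plans 4 calls, within a 5-call budget
        try:
            _check_and_increment()
        except RuntimeError:
            return
        made[user] += 1

threads = [threading.Thread(target=run, args=(u,)) for u in made]
for t in threads:
    t.start()
for t in threads:
    t.join()

# the two users made 5 calls combined, not the 8 they planned
print(made, 'total =', sum(made.values()))
```

Every individual increment is properly locked, yet the combined total is always five: the budget is per process, not per user.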

The fix: contextvars

contextvars.ContextVar was built for this. Each thread, and each asyncio Task, gets its own copy. Default values give every fresh context a clean slate.

This matters more in asyncio than in threads. threading.local would catch the threaded case, but every asyncio task runs on the same thread — they all share one threading.local. Picture two tasks on one event loop: task A sets foo = 2, hits await, the loop runs task B, B reads foo and sees 2. There's no isolation, because there's no separate thread to key off. ContextVar keys on context instead, and asyncio.Task copies the context when it's created, so each Task gets its own slot. A's set() is invisible to B even though they're on the same thread.
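The task-isolation claim is easy to check directly. This minimal sketch (not the service code) runs two tasks on a single event loop, in a single thread, each setting the same ContextVar:

```python
# two tasks on one event loop: asyncio.Task copies the context at
# creation, so one task's set() is invisible to the other
import asyncio
import contextvars

foo: contextvars.ContextVar[int] = contextvars.ContextVar('foo', default=0)

async def worker(value: int) -> int:
    foo.set(value)
    await asyncio.sleep(0)   # yield so the other task runs in between
    return foo.get()         # still our own value

async def main() -> list[int]:
    return await asyncio.gather(worker(1), worker(2))

print(asyncio.run(main()))  # [1, 2]
```

With threading.local instead of a ContextVar, both workers would share one slot (same thread), and the interleaved await would leak the second task's value into the first.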

import contextvars

_call_count: contextvars.ContextVar[int] = contextvars.ContextVar(
    "call_count", default=0
)
_sources: contextvars.ContextVar[tuple[str, ...]] = contextvars.ContextVar(
    "sources", default=()
)

def reset() -> None:
    _call_count.set(0)
    _sources.set(())

def _check_and_increment() -> int:
    n = _call_count.get()
    if n >= MAX_CALLS:
        raise ToolCallLimitExceeded()
    _call_count.set(n + 1)
    return n + 1

def _add_source(source: str) -> None:
    _sources.set(_sources.get() + (source,))

Same demo, fixed:

[userA] DONE: made=4 expected=4 | sources: 4 own + 0 foreign
[userB] DONE: made=4 expected=4 | sources: 4 own + 0 foreign

One subtle point: _sources is a tuple, not a list. With ContextVar(default=[]), every context that hasn't called set() shares the same default list object. A stray cv.get().append(x) would silently leak across contexts, mutating the default that every other context still points at. Tuples make that mistake non-expressible, which is close to Rust where immutable data is the default and mutable state has to be explicitly marked (mut).
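The mutable-default leak is worth seeing once; this sketch shows how a list default is one shared object that copy_context() does nothing to protect:

```python
# why default=() and not default=[]: a mutable default is a single
# object shared by every context that hasn't called set()
import contextvars

bad: contextvars.ContextVar[list] = contextvars.ContextVar('bad', default=[])

def sneaky_append(x):
    bad.get().append(x)   # mutates the shared default; no set() involved

ctx_a = contextvars.copy_context()
ctx_b = contextvars.copy_context()
ctx_a.run(sneaky_append, 'leak')
print(ctx_b.run(bad.get))  # ['leak'] -- the "isolated" context sees it
```

With a tuple default, the only way to "change" the value is to set() a new tuple, which lands in the current context and nowhere else.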

What would Rust make of this?

If you mostly write Python, the gist of Rust's model is: by default everything is immutable, and the type system tracks who is allowed to read or write each piece of memory. That tracking is what blocks the bug shape from existing. Porting the Python pattern naively, the compiler refuses it four different ways.

1. Module globals can't just exist.

The Python _call_count = 0 at module scope has no clean Rust equivalent. The closest thing is static mut CALL_COUNT: u32 = 0 (static is the Rust word for a true module-level value, mut opts into mutability), and every read or write of it requires an unsafe { ... } block. The compiler is flagging the same risk we hit in Python (module-level mutable state is shared by every thread), but it forces you to acknowledge it in the syntax. You cannot accidentally write the buggy pattern.

2. Two threads cannot share a &mut reference.

In Python you pass an object reference into a thread and trust that locks will sort it out at runtime. Rust tracks references at compile time. The rule is aliasing XOR mutability: a value is either readable by many or writable by one, never both at once. &mut T is the "writable by one" case — while it exists, no other reference of any kind is allowed. That single rule is what blocks the Python bug; two threads writing the same counter is exactly the case it forbids. Moving the same Tracker into two thread::spawn closures doesn't compile:

error[E0382]: use of moved value

The exact aliasing the Python bug relied on, two threads writing to one shared counter, is not a thing the type system will let you express.

3. Shared global state must be wrapped in a lock.

Try a global without a lock and the compiler refuses with a different error:

error: `Tracker` cannot be shared between threads safely

Rust calls this the Sync trait, "safe to access from multiple threads at once". Tracker doesn't qualify because mutating its fields would race. To opt in, you wrap it: Mutex<Tracker>, similar to a threading.Lock in Python but with a critical difference. The lock wraps the data, not the operations. In our Python version we had a _lock and three free functions that called it; nothing prevented a fourth function from forgetting. In Rust, the only way to read the counter is to call .lock() on the mutex first, because the counter lives inside it. The bug class of "I forgot to take the lock here" is structurally absent.

4. The idiomatic version doesn't share state at all.

The cleanest Rust port doesn't go anywhere near a global. Each thread owns its own Tracker on its own stack:

fn run_agent(user_id: String) {
    let mut tracker = Tracker::new();
    for i in 0..CALLS_PER_USER {
        call_tool(&mut tracker, &user_id, &format!("q{i}"));
    }
}

There is no global to reset. The Python reset() race, where userA's reset zeroes userB's mid-flight counter, has no syntax in this design. This is the kind of explicitness that I described in what Rust structs taught me about state ownership: the compiler refuses to let you store state in places it shouldn't live.

What Rust doesn't save you from

Worth pinning down two terms that get conflated:

Bug class      | What it is                                                             | Does Rust prevent it?
Data race      | Two threads touching the same memory without synchronization           | Yes, won't compile
Race condition | Logic that breaks because operations interleave in an unexpected order | No, even with Arc<Mutex<T>>

Wrap the tracker in Arc<Mutex<Tracker>> and share it across users, and the compiler is satisfied. Two pieces of jargon there, but they map directly onto Python ideas:

Arc<T> is Python's reference counting, made explicit. When you write data = [] in Python and pass it to two threads, both hold the same list — CPython tracks how many references exist and frees it when the count hits zero. That bookkeeping is automatic and invisible. Rust doesn't do it for you by default. When you genuinely want "many owners, last one out cleans up" across threads, you opt in with Arc (Atomic Reference Count). Same model as Python; you just ask for it by name.

Mutex<T> is threading.Lock, except the lock owns the data. In Python you write:

lock = threading.Lock()
data = []
# somewhere else, hopefully:
with lock:
    data.append(x)

Two separate objects, held together by convention. Nothing stops a caller from touching data without the with. In Rust the data lives inside the mutex:

let mutex = Mutex::new(Vec::new());
let mut guard = mutex.lock().unwrap();
guard.push(x);

The only way to reach the vec is to call .lock(), which hands back a guard that auto-releases when it goes out of scope. "I forgot the with lock:" doesn't compile.

Arc<Mutex<T>> is the two combined. Think of it as the Python idiom (threading.Lock(), shared_data) welded into one type, with the compiler enforcing that you never use the data without the lock.

No data race. But you have reintroduced the Python bug at a higher level, because userA's reset() still clobbers userB's counter under that same lock. Rust rules out memory unsafety. Per-request isolation is still your design decision.

The fix is the same in both languages: one tracker per request. Rust rules out the data-corruption variant at compile time.

Keep reading

The bug here is hard to see in tests. It needs concurrent traffic to fire, and that's the painful kind. Lesson: in Python you can guard against it, but it takes knowledge and discipline. In Rust, the compiler does more of the work: it makes illegal states unrepresentable. The more bugs you can design out of the syntax, the fewer you debug at runtime.

May 12, 2026 12:00 AM UTC

May 11, 2026


PyCon

Introducing the 8 Companies on Startup Row at PyCon US 2026

Each year at PyCon US, Startup Row highlights a select group of early-stage companies building ambitious products with Python at their core. The 2026 cohort reflects a rapidly evolving landscape, where advances in AI, data infrastructure, and developer tooling are reshaping how software is built, deployed, and secured.

This year’s companies aim to solve an evolving set of problems facing independent developers and large-scale organizations alike: securing AI-driven applications, managing multimodal data, orchestrating autonomous agents, automating complex workflows, and extracting insight from increasingly unstructured information. Across these domains, Python continues to serve as a unifying layer: encouraging experimentation, enabling systems built to scale, and connecting open-source innovation with real-world impact.

Startup Row brings these emerging teams into direct conversation with the Python community at PyCon US. Throughout the conference, attendees can meet founders, explore new tools, and see firsthand how these companies are applying Python to solve meaningful problems. For the startups in attendance, it’s an opportunity to share their work, connect with users and collaborators, and contribute back to the ecosystem that helped shape them. Register now to experience Startup Row and much more at PyCon US 2026.

Supporting Startups at PyCon US

There are many ways to support Startup Row companies, during PyCon US and long after the conference wraps:
  • Stop by Startup Row: Spend a few minutes with each team, ask what they’re building, and see their products in action. 
  • Try their tools: Whether it’s an open-source library or a hosted service, hands-on usage (alongside constructive feedback) is one of the most valuable forms of support. If a startup seems compelling, consider a pilot project and become a design partner.
  • Share feedback: Early-stage teams benefit enormously from thoughtful questions, real-world use cases, and honest perspectives from the community.
  • Contribute to their open source projects: Many Startup Row companies are deeply rooted in open source and welcome bug reports, documentation improvements, and pull requests. Contributions and constructive feedback are always appreciated.
  • Help spread the word: If you find something interesting, tell a friend, post about it, or share it with your team. (And if you're posting to social media, consider using tags like #PyConUS and #StartupRow to share the love.)
  • Explore opportunities to work together: Many of these companies are hiring, looking for design partners, or open to collaborations; don’t hesitate to ask.
  • But, most importantly, be supportive. Building a startup is hard, and every team is learning in real time. Curiosity, patience, and encouragement make a meaningful difference. 
Without further ado, let's...

Meet Startup Row at PyCon US 2026

We’re excited to introduce the companies selected for Startup Row at PyCon US 2026.

Arcjet

Embedding security directly into application code is fast becoming as indispensable as logging, especially as AI services open new attack surfaces. Arcjet offers a developer‑first platform that lets teams add bot detection, rate limiting and data‑privacy checks right where the request is processed.

The service ships open‑source JavaScript and Python SDKs that run a WebAssembly module locally before calling Arcjet’s low‑latency decision API, ensuring full application context informs every security verdict. Both SDKs are released under a permissive open‑source license, letting developers integrate the primitives without vendor lock‑in while scaling usage through Arcjet’s SaaS tiered pricing.

The JavaScript SDK alone has earned ≈1.7k GitHub stars, and the combined libraries have attracted over 1,000 developers protecting more than 500 production applications. Arcjet offers a free tier and usage‑based paid plans, mirroring Cloudflare’s model to serve startups and enterprises alike.

Arcjet is rolling out additional security tools and deepening integrations with popular frameworks such as FastAPI and Flask, aiming to broaden adoption across AI‑enabled services. In short, Arcjet aims to be the security‑as‑code layer every modern app ships with.

CapiscIO

As multi‑agent AI systems become the backbone of emerging digital workflows, developers lack a reliable way to verify agent identities and enforce governance. CapiscIO steps into that gap, offering an open‑core trust layer built for the nascent agent economy.

CapiscIO offers cryptographic Trust Badges, policy enforcement, and tamper‑evident chain‑of‑custody wrapped in a Python SDK. Released under Apache 2.0, it ships a CLI, LangChain integration, and an MCP SDK that let agents prove identity without overhauling existing infrastructure.
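The core idea behind a cryptographic trust badge can be sketched with nothing but the standard library: a registry signs an agent's identity claims, and any holder of the verification key can detect tampering. The names below are hypothetical illustrations, not CapiscIO's actual API, and a real system would use asymmetric keys rather than a shared HMAC secret.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"registry-secret"  # illustrative only; store real keys securely

def issue_badge(agent_id: str, permissions: list) -> dict:
    """Sign an agent's identity claims (hypothetical helper, not CapiscIO's API)."""
    claims = {"agent_id": agent_id, "permissions": permissions}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_badge(badge: dict) -> bool:
    """Return True only if the claims match the signature exactly."""
    payload = json.dumps(badge["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, badge["signature"])
```

Any edit to the claims (say, an agent granting itself extra permissions) changes the payload and invalidates the signature, which is what makes the badge tamper-evident.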

The capiscio‑core repository on GitHub hosts the core and SDKs, drawing early contributors building agentic pipelines.

Beon de Nood, Founder & CEO, brings two decades of enterprise development experience and a prior successful startup to the table. “AI governance should be practical, not bureaucratic. Organizations need visibility into what they have, confidence in what they deploy, and control over how agents behave in production,” he says.

CapiscIO is continuously adding new extensions, expanding its LangChain and MCP SDKs, and preparing a managed agent‑identity registry for enterprises. In short, CapiscIO aims to be the passport office of the agent economy, handing each autonomous component an unspoofable ID and clear permissions.

Chonkie

The explosion of retrieval‑augmented generation (RAG) is unlocking AI’s ability to reason over ever‑larger knowledge bases. Yet the first step of splitting massive texts into meaningful pieces still lags behind.

Chonkie offers an open‑core suite centered on Memchunk, a Python library with Cython acceleration that delivers up to 160 GB/s throughput and ten chunking strategies under a permissive license. It also ships Catsu, a unified embeddings client for nine providers, and a lightweight ingestion layer; the commercial Chonkie Labs service combines them into a SaaS that monitors the web and synthesizes insights.
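For readers new to RAG preprocessing, the simplest of the chunking strategies mentioned above, fixed-size windows with overlap, fits in a few lines. This is a generic sketch of the technique, not Chonkie's or Memchunk's actual API.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list:
    """Split text into fixed-size character chunks with overlapping edges.

    Overlap preserves context across chunk boundaries so that a sentence
    straddling two chunks still appears whole in at least one of them.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]
```

Production libraries add token-aware, sentence-aware, and semantic variants of this idea, which is exactly where the "ten chunking strategies" claim comes in.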

Co‑founder and CEO Shreyash Nigam, who grew up in India and met his business partner in eighth grade, reflects the team’s open‑source ethos, saying “It’s fun to put a project on GitHub and see a community of developers crowd around it.” That enthusiasm underpins Chonkie’s decision to release its core tooling openly while building a commercial deep‑research service.

Backed by Y Combinator’s Summer 2025 batch, Chonkie plans to grow from four to six engineers and launch the next version of Chonkie Labs later this year, adding real‑time web crawling and multi‑modal summarization. In short, Chonkie aims to be the Google of corporate intelligence.

Phemeral

Running production‑grade Python services used to mean wrestling with containers, VMs, or complex CI pipelines. 

Phemeral, launched in April 2026, offers Python developers a managed hosting platform that turns a GitHub repo into an instantly deployable, scale‑to‑zero backend.

Phemeral provides builds for popular frameworks (like Django, Flask, and FastAPI), integrations with popular package managers (e.g. uv, pip, and Poetry), and continuous deployment on every push, while charging only for actual request execution under a usage‑based model.

Founder & CEO Chinmaya Joshi says, "Building with Python is easier than ever, but hosting and deployment remain a pain. Phemeral is building the easiest way to deploy Python web apps."

Joshi is focused on expanding framework support and refining the platform so that Python developers (from vibe-coders and solo devs, to agencies and enterprises) can enjoy the same zero‑config experience modern front‑end platforms provide. 

Pixeltable

Multimodal generative AI is turning simple datasets into sprawling collections of video, images, audio and text, forcing engineers to stitch together ad‑hoc pipelines just to keep data flowing. That complexity has created a new bottleneck for teams trying to move from prototype to production.

The open‑source Python library from Pixeltable offers a declarative table API that lets developers store, query and version multimodal assets side by side while embedding custom Python functions. Built with incremental updates, combined lineage and schema tracking, and a development‑to‑production mirror, the platform also provides orchestration that keeps pipelines reproducible without rewriting code.
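The "declarative table with embedded Python functions" idea can be illustrated with a toy computed-column table: a column is declared as a function of other columns, and a refresh only recomputes rows added since the last refresh. This is a from-scratch sketch of the concept; the class and method names are invented and are not Pixeltable's real API.

```python
class ComputedTable:
    """Toy table with incrementally refreshed computed columns (illustrative only)."""

    def __init__(self):
        self.rows = []        # each row is a dict of column -> value
        self.computed = {}    # column name -> function of a row
        self._next = 0        # index of the first row not yet computed

    def add_computed_column(self, name, fn):
        self.computed[name] = fn
        self._next = 0        # a new column forces one full recomputation

    def insert(self, **values):
        self.rows.append(values)

    def refresh(self):
        # Incremental update: only rows inserted since the last refresh
        # (or all rows, right after a new column was declared) are touched.
        for row in self.rows[self._next:]:
            for name, fn in self.computed.items():
                row[name] = fn(row)
        self._next = len(self.rows)
```

Real systems like Pixeltable additionally persist the data, track lineage, and schedule the recomputation, but the incremental-update contract is the same: declared once, kept up to date automatically.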

The project has earned ≈1.6k GitHub stars and a growing contributor base, closed a $5.5 million seed round in December 2024, and is already used by early adopters such as Obvio and Variata to streamline computer‑vision workflows.

Co‑founder and CTO Marcel Kornacker, who created Apache Impala and co‑created Apache Parquet, says “Just as relational databases revolutionized web development, Pixeltable is transforming AI application development.”

The company's roadmap centers on launching Pixeltable Cloud, a serverless managed service that will extend the open core with collaborative editing, auto‑scaling storage and built‑in monitoring. In short, Pixeltable aims to be the relational database of multimodal AI data.

SubImage

The sheer complexity of modern multi‑cloud environments turns security visibility into a labyrinth, and SubImage offers a graph‑first view that cuts through the noise.

It builds an infrastructure graph using the open‑source Cartography library (Apache‑2.0, Python), then highlights exploit chains as attack paths and applies AI models to prioritize findings based on ownership and contextual risk.
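The attack-path idea reduces to graph search: model assets as nodes, "can reach or assume" relationships as directed edges, and look for a path from an internet-exposed node to a sensitive one. The sketch below is a generic breadth-first search, not SubImage's or Cartography's API; the example node names are invented.

```python
from collections import deque

def find_attack_path(graph, start, target):
    """Return the shortest path from start to target in a directed graph,
    or None if the target is unreachable. Graph is {node: [neighbors]}."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Because BFS finds shortest paths first, the result is the "easiest" chain in hop count, which is one simple way to surface the most pressing exposure before applying richer risk scoring.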

Cartography, originally developed at Lyft and now a Cloud Native Computing Foundation (CNCF) Sandbox project, has ≈3.7k GitHub stars and is used by over 70 organizations. SubImage’s managed service already protects security teams at Veriff and Neo4j, and the company closed a $4.2 million seed round in November 2025.

Co‑founder Alex Chantavy, an offensive‑security engineer, says “The most important tool was our internal cloud knowledge graph because it showed us a map of the easiest attack paths … One of the most effective ways to defend an environment is to see it the same way an attacker would.”

The startup is focusing on scaling its managed service and deepening AI integration as it targets larger enterprise customers. In short, SubImage aims to be the map of the cloud for defenders.

Tetrix

Private‑market data pipelines still rely on manual downloads and spreadsheet gymnastics, leaving analysts chasing yesterday’s numbers. Tetrix’s AI investment intelligence platform is part of a wave that brings automation to this lagging workflow.

Built primarily in Python, Tetrix automates document collection from fund portals and other sources, extracts structured data from PDFs and other unstructured sources using tool-using language models, then presents exposures, cash flows, and benchmarks through an interactive dashboard that also accepts natural‑language queries.

The company is growing quickly, doubling revenue quarter over quarter, and has so far maintained an impressive record of zero customer churn. Over the coming year, Tetrix plans to triple its headcount from fifteen to forty‑five employees.

TimeCopilot

Time‑series forecasting has long been a tangled mix of scripts, dashboards, and domain expertise, and the recent surge in autonomous agents is finally giving it a unified voice. Enter TimeCopilot, an open‑source framework that brings agentic reasoning to the heart of forecasting.

The platform, built in Python under a permissive open‑source license, lets users request forecasts in plain English. It automatically orchestrates more than thirty models from seven families, including Chronos and TimesFM, while weaving large language model reasoning into each prediction. Its declarative API was born from co‑founder Azul Garza‑Ramírez’s economics background and her earlier work on TimeGPT for Nixtla (featured SR'23), evolving from a weekend experiment started nearly seven years ago.

The TimeCopilot/timecopilot repository has amassed roughly 420 stars on GitHub, with the release of OpenClaw marking a notable spike in community interest.

Upcoming plans include a managed SaaS offering with enterprise‑grade scaling and support, the rollout of a benchmarking suite to measure agentic forecast quality, and targeted use cases such as predicting cloud‑compute expenses for AI workloads.

Thank-Yous and Acknowledgements

Startup Row is a volunteer-driven program, co-led by Jason D. Rowley and Shea Tate-Di Donna (SR'15; Zana, acquired Startups.com), in collaboration with the PyCon US organizing team. Thanks to everyone who makes PyCon US possible.

We also extend a gracious thank-you to all startup founders who submitted applications to Startup Row at PyCon US this year. Thanks again for taking the time to share what you're building. We hope to help out in whatever way we can.

Good luck to everyone, and see you in Long Beach, CA!

May 11, 2026 05:33 PM UTC


Talk Python to Me

#548: Event Sourcing Design Pattern

What if your database worked more like Git? Every change captured as an immutable event you can replay, instead of a single mutating row that quietly forgets its own history. That's event sourcing, and Chris May is back on Talk Python, fresh off our Datastar panel, to walk us through what it actually looks like in Python. We'll cover the core patterns, the libraries to reach for, when not to use it, and why event sourcing turns out to be a surprisingly good fit for AI-assisted coding.

Episode sponsors

  • Sentry Error Monitoring, Code talkpython26: https://talkpython.fm/sentry
  • Temporal: https://talkpython.fm/temporal
  • Talk Python Courses: https://talkpython.fm/training

Links from the show

  • Guest, Chris May: https://everydaysuperpowers.dev
  • Intro to event sourcing e-book: https://everydaysuperpowers.gumroad.com/l/es_intro
  • Domain-Driven Design: The Power of CQRS and Event Sourcing: https://ricofritzsche.me/cqrs-event-sourcing-projections/
  • DDD: https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215
  • Understanding Eventsourcing (Martin Dilger): https://www.amazon.com/Understanding-Eventsourcing-Planning-Implementing-Eventmodeling/dp/B0DNXQJM9Z
  • Event Sourcing Explained using Football (video): https://www.youtube.com/watch?v=xPmQxYIi5fA
  • Why I finally embraced event sourcing and why you should too (article): https://everydaysuperpowers.dev/articles/why-i-finally-embraced-event-sourcingand-why-you-should-too/
  • valkey: https://valkey.io/
  • diskcache (episode #534): https://talkpython.fm/episodes/show/534/diskcache-your-secret-python-perf-weapon
  • eventsourcing package: https://github.com/pyeventsourcing/eventsourcing
  • eventsourcing docs: https://eventsourcing.readthedocs.io/en/stable/topics/tutorial/part1.html
  • John Bywater: https://github.com/johnbywater
  • Datastar: https://data-star.dev/
  • Microconf: https://microconf.com/
  • Event Modeling & Event Sourcing Podcast: https://podcast.eventmodeling.org
  • Python Package Guides for AI Agents: https://github.com/mikeckennedy/python-package-guides-for-agents
  • Iodine tablets AI joke: https://x.com/pr0grammerhum0r/status/2046650199930458334
  • KurrentDb: https://www.kurrent.io
  • Watch this episode on YouTube: https://www.youtube.com/watch?v=s37d6yN2P70
  • Episode #548 deep-dive: https://talkpython.fm/episodes/show/548/event-sourcing-design-pattern#takeaways-anchor
  • Episode transcripts: https://talkpython.fm/episodes/transcript/548/event-sourcing-design-pattern
  • Theme Song: Developer Rap, 🥁 Served in a Flask 🎸: https://talkpython.fm/flasksong

Don't be a stranger

  • YouTube: https://talkpython.fm/youtube
  • Bluesky: https://bsky.app/profile/talkpython.fm
  • Mastodon: https://fosstodon.org/web/@talkpython
  • X.com: https://x.com/talkpython
  • Michael on Bluesky: https://bsky.app/profile/mkennedy.codes
  • Michael on Mastodon: https://fosstodon.org/web/@mkennedy
  • Michael on X.com: https://x.com/mkennedy
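The core pattern the episode describes, state rebuilt by replaying an append-only log of immutable events, fits in a few lines of plain Python. This is a minimal from-scratch sketch, not the API of the `eventsourcing` package mentioned above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """An immutable fact that happened; events are appended, never mutated."""
    kind: str
    amount: int

class Account:
    def __init__(self, history=None):
        self.balance = 0
        self.events = []
        # Replay: current state is derived purely from the event log.
        for event in history or []:
            self._apply(event)

    def _apply(self, event):
        if event.kind == "deposited":
            self.balance += event.amount
        elif event.kind == "withdrawn":
            self.balance -= event.amount
        self.events.append(event)

    def deposit(self, amount):
        self._apply(Event("deposited", amount))

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self._apply(Event("withdrawn", amount))
```

Because the log is the source of truth, a second `Account` constructed from the same history reproduces the same balance, and the full history of how it got there is never lost.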

May 11, 2026 04:36 PM UTC