
Planet Python

Last update: April 29, 2026 07:43 AM UTC

April 29, 2026


Python GUIs

Actions in one thread changing data in another — How to communicate between threads and windows in PyQt6

I have a main window that starts background threads (e.g., handling GPIO data). From the main window I open secondary windows using buttons. When I press a button in a secondary window, I can't change anything in the background threads. But if I press a button in the main window, everything works. How do I communicate between a secondary window and a thread that was started from the main window?

This is a common problem when building PyQt6 applications with multiple windows and background threads. The good news is that Qt's signal and slot system is designed to handle exactly this, and it works safely across threads.

The core idea is that your secondary window doesn't need direct access to the thread or the worker object. Instead the secondary window and the worker just need access to the same signals, and can then use them to communicate with one another. Qt handles the cross-thread communication automatically.

Why doesn't direct access work?

When you create a background thread from the main window, you'll often store a reference to that thread on the main window. If the main window then creates a sub-window, that sub-window doesn't have any access to the objects on its parent. Even if it did, calling methods on the thread directly is not usually the right approach.

You can access the attributes of a parent window using .parent(), but this is a bad habit because it tightly couples the parts of your application together. If you modify the structure of the parent window, you then also need to edit the sub-window. There are better ways that keep things nicely isolated.

The solution is to avoid calling methods directly across threads. Instead, use signals and slots. When a signal is emitted in one thread and connected to a slot in another, Qt automatically queues the call and delivers it safely.

Setting up a background worker

First, let's create a simple worker class that runs in a background thread. This worker simulates handling incoming data (like GPIO data) and also accepts commands from the GUI.

python
from PyQt6.QtCore import QCoreApplication, QObject, pyqtSignal, pyqtSlot
import time


class Worker(QObject):
    """A worker that runs in a background thread."""
    data_updated = pyqtSignal(str)

    def __init__(self):
        super().__init__()
        self.running = True
        self.current_value = 0

    @pyqtSlot()
    def run(self):
        """Simulate continuous data handling."""
        while self.running:
            self.current_value += 1
            self.data_updated.emit(f"Data: {self.current_value}")
            # Give this thread's event loop a chance to deliver queued
            # slot calls (such as set_value) while the loop is running.
            QCoreApplication.processEvents()
            time.sleep(1)

    @pyqtSlot(int)
    def set_value(self, value):
        """Receive a new value from the GUI."""
        self.current_value = value
        self.data_updated.emit(f"Value set to: {self.current_value}")

The set_value slot is what we'll trigger from the secondary window. Because it's a slot connected via a signal, Qt will deliver the call on the correct thread. One caveat: queued slot calls are only delivered when the worker thread's event loop gets a chance to run, so a busy loop in run() needs to yield to the event loop periodically (for example with QCoreApplication.processEvents()).

Creating the secondary window

The secondary window has a button and a spin box. When the user clicks the button, the window emits a signal carrying the new value. The secondary window doesn't know anything about the worker — it just emits a signal.

python
from PyQt6.QtWidgets import QWidget, QVBoxLayout, QPushButton, QSpinBox, QLabel
from PyQt6.QtCore import pyqtSignal


class SecondaryWindow(QWidget):
    """A secondary window that emits a signal when the user sets a value."""
    value_changed = pyqtSignal(int)

    def __init__(self):
        super().__init__()
        self.setWindowTitle("Secondary Window")

        layout = QVBoxLayout()

        self.label = QLabel("Set a new value for the worker:")
        layout.addWidget(self.label)

        self.spinbox = QSpinBox()
        self.spinbox.setRange(0, 1000)
        layout.addWidget(self.spinbox)

        self.button = QPushButton("Send to Worker")
        self.button.clicked.connect(self.send_value)
        layout.addWidget(self.button)

        self.setLayout(layout)

    def send_value(self):
        self.value_changed.emit(self.spinbox.value())

The value_changed signal is the only interface this window exposes. This keeps things clean and decoupled.

Wiring everything together in the main window

The main window is where all the connections happen. It creates the worker, starts the thread, opens the secondary window, and connects the secondary window's signal to the worker's slot.

python
from PyQt6.QtWidgets import QMainWindow, QVBoxLayout, QPushButton, QLabel, QWidget
from PyQt6.QtCore import QThread


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Main Window")

        # Set up the UI
        layout = QVBoxLayout()

        self.status_label = QLabel("Waiting for data...")
        layout.addWidget(self.status_label)

        self.open_button = QPushButton("Open Secondary Window")
        self.open_button.clicked.connect(self.open_secondary)
        layout.addWidget(self.open_button)

        container = QWidget()
        container.setLayout(layout)
        self.setCentralWidget(container)

        # Keep a reference to the secondary window
        self.secondary_window = None

        # Set up the background thread and worker
        self.thread = QThread()
        self.worker = Worker()
        self.worker.moveToThread(self.thread)

        # Connect signals
        self.thread.started.connect(self.worker.run)
        self.worker.data_updated.connect(self.update_status)

        # Start the thread
        self.thread.start()

    def update_status(self, text):
        self.status_label.setText(text)

    def open_secondary(self):
        if self.secondary_window is None:
            self.secondary_window = SecondaryWindow()

            # Connect the secondary window's signal to the worker's slot.
            # This is the connection that makes cross-window,
            # cross-thread communication work.
            self.secondary_window.value_changed.connect(self.worker.set_value)

        self.secondary_window.show()

    def closeEvent(self, event):
        self.worker.running = False
        self.thread.quit()
        self.thread.wait()
        super().closeEvent(event)

The line that connects everything together is:

python
self.secondary_window.value_changed.connect(self.worker.set_value)

This connects a signal from the secondary window (running in the main/GUI thread) to a slot on the worker (which has been moved to a background thread). Qt sees that the sender and receiver live in different threads, so it automatically uses a queued connection. The slot call is placed into the background thread's event queue and executed there.

Understanding why the main window worked but the secondary didn't

In the original question, buttons in the main window could affect the background threads, but buttons in a secondary window could not. This usually happens because:

  1. The main window had direct signal-slot connections to the worker (set up when both the worker and the connections were created).
  2. The secondary window was created later, and its signals were never connected to the worker.

The solution is to connect the secondary window's signals to the appropriate worker slots when you create it, just as you would for the main window. The worker doesn't care where a signal comes from; it just responds to whatever signals are connected to its slots. For more on managing multiple windows in PyQt6, see our tutorial on creating multiple windows.

A note about QThreadPool vs QThread

The original question mentions using QThreadPool. If you're using QRunnable with a QThreadPool, the pattern is slightly different because QRunnable doesn't inherit from QObject and can't have slots directly. In that case, you typically create a separate QObject-based signals class and attach it to your runnable. For a detailed walkthrough of that approach, see Multithreading PyQt6 applications with QThreadPool.

However, for long-running background tasks that need two-way communication with the GUI (like GPIO handling), QThread with moveToThread() is usually a better fit. It gives you a proper event loop in the background thread, which means signals and slots work naturally in both directions.

Complete working example

Here's everything in a single file you can copy, run, and experiment with. If you're new to PyQt6, you may want to start with creating your first window before diving in.

python
import sys
import time

from PyQt6.QtCore import QCoreApplication, QObject, QThread, pyqtSignal, pyqtSlot
from PyQt6.QtWidgets import (
    QApplication,
    QLabel,
    QMainWindow,
    QPushButton,
    QSpinBox,
    QVBoxLayout,
    QWidget,
)


class Worker(QObject):
    """A worker that runs in a background thread."""

    data_updated = pyqtSignal(str)

    def __init__(self):
        super().__init__()
        self.running = True
        self.current_value = 0

    @pyqtSlot()
    def run(self):
        """Simulate continuous data handling."""
        while self.running:
            self.current_value += 1
            self.data_updated.emit(f"Data: {self.current_value}")
            # Give this thread's event loop a chance to deliver queued
            # slot calls (such as set_value) while the loop is running.
            QCoreApplication.processEvents()
            time.sleep(1)

    @pyqtSlot(int)
    def set_value(self, value):
        """Receive a new value from the GUI."""
        self.current_value = value
        self.data_updated.emit(f"Value set to: {self.current_value}")


class SecondaryWindow(QWidget):
    """A secondary window that emits a signal when the user sets a value."""

    value_changed = pyqtSignal(int)

    def __init__(self):
        super().__init__()
        self.setWindowTitle("Secondary Window")

        layout = QVBoxLayout()

        self.label = QLabel("Set a new value for the worker:")
        layout.addWidget(self.label)

        self.spinbox = QSpinBox()
        self.spinbox.setRange(0, 1000)
        layout.addWidget(self.spinbox)

        self.button = QPushButton("Send to Worker")
        self.button.clicked.connect(self.send_value)
        layout.addWidget(self.button)

        self.setLayout(layout)

    def send_value(self):
        self.value_changed.emit(self.spinbox.value())


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Main Window")

        # Set up the UI
        layout = QVBoxLayout()

        self.status_label = QLabel("Waiting for data...")
        layout.addWidget(self.status_label)

        self.open_button = QPushButton("Open Secondary Window")
        self.open_button.clicked.connect(self.open_secondary)
        layout.addWidget(self.open_button)

        container = QWidget()
        container.setLayout(layout)
        self.setCentralWidget(container)

        # Keep a reference to the secondary window
        self.secondary_window = None

        # Set up the background thread and worker
        self.thread = QThread()
        self.worker = Worker()
        self.worker.moveToThread(self.thread)

        # Connect signals
        self.thread.started.connect(self.worker.run)
        self.worker.data_updated.connect(self.update_status)

        # Start the thread
        self.thread.start()

    def update_status(self, text):
        self.status_label.setText(text)

    def open_secondary(self):
        if self.secondary_window is None:
            self.secondary_window = SecondaryWindow()
            # Connect the secondary window's signal to the worker's slot
            self.secondary_window.value_changed.connect(
                self.worker.set_value
            )
        self.secondary_window.show()

    def closeEvent(self, event):
        self.worker.running = False
        self.thread.quit()
        self.thread.wait()
        super().closeEvent(event)


app = QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec())

When you run this, you'll see the main window counting up once per second. Click "Open Secondary Window", enter a number, and click "Send to Worker" — the worker's counter will jump to your chosen value and continue counting from there.

The secondary window communicates with the background thread entirely through signals and slots, with no direct method calls across threads. This pattern scales well — you can connect as many windows as you like to the same worker, or connect one window to multiple workers. As long as you use signals and slots for cross-thread communication, Qt handles the thread safety for you.

For an in-depth guide to building Python GUIs with PyQt6 see my book, Create GUI Applications with Python & Qt6.

April 29, 2026 06:00 AM UTC

April 28, 2026


Talk Python Blog

Introducing the new Talk Python web player

We expect that most people who listen to Talk Python do so through their podcast player apps on their phone or even on their laptops. But there are plenty of times that people end up on an episode page and would love to have a nice experience interacting with that episode as well. One really common example: you go back to an episode you discovered several years ago, and the chances it’s still on your device are low. Though we do keep our entire back catalog available in the RSS feed, most podcast players trim down what they keep locally.

April 28, 2026 07:40 PM UTC


PyCoder’s Weekly

Issue #732: Web Scraping, Altair Charts, OpenAI's API, and More (April 28, 2026)

#732 – APRIL 28, 2026
View in Browser »



browser-use vs. Playwright: Which to Pick for Web Scraping?

Follow along in this walk-through building a Hacker News synthesizer with browser-use, then see it fail on a harder Newegg scraping task. Includes a side-by-side comparison with Playwright and a breakdown of when each tool is the right call.
CODECUT.AI • Shared by Khuyen Tran

Altair: Declarative Charts With Python

Build interactive Python charts the declarative way with Altair. Map data to visual properties and add linked selections. No JavaScript required.
REAL PYTHON

Positron: The Data Science IDE from Posit PBC


Positron is a free IDE built for Python data science. AI assistance, interactive data frames, Jupyter notebooks, and instant app deployment, all in one place. Stop context-switching. Start shipping. Download free.
POSIT PBC sponsor

Leverage OpenAI’s API in Your Python Projects

Learn how to use the ChatGPT API with Python’s openai library to send prompts, control AI behavior with roles, and get structured outputs.
REAL PYTHON course

Quiz: Leverage OpenAI’s API in Your Python Projects

REAL PYTHON

Python Software Foundation Fellow Members for Q1 2026!

PYTHON SOFTWARE FOUNDATION

PEP 708: Extending the Repository API to Mitigate Dependency Confusion Attacks (Rejected)

PYTHON.ORG

PEP 806: Mixed Sync/Async Context Managers With Precise Async Marking (Rejected)

PYTHON.ORG

PEP 833: Freezing the HTML Simple Repository API (Draft)

PYTHON.ORG

Articles & Tutorials

Fixing a Memory “Leak” From Python 3.14’s Incremental Garbage Collection

Adam encountered an out-of-memory error while migrating a client project to Python 3.14. The issue occurred when running Django’s database migration command on a limited-resource server, and seemed to be caused by the new incremental garbage collection algorithm in Python 3.14.
ADAM JOHNSON

Logging to File and to Textual Console

When writing TUI applications in Textual you can’t just print out your debug info since the terminal is controlled by the framework. This article shows you how to log and use Textual’s built-in debug console.
MIKE DRISCOLL

Beyond Basic RAG: Build Persistent AI Agents

Master next-gen AI with Python notebooks for agentic reasoning, memory engineering, and multi-agent orchestration. Scale apps using production-ready patterns for LangChain, LlamaIndex, and high-performance vector search. Explore & Star on GitHub.
ORACLE sponsor

Read the Docs Now Supports uv Natively

Popular open source documentation site Read the Docs has announced that it now supports native uv in .readthedocs.yaml for Python dependency installation. Learn how to use it in your configurations.
READ THE DOCS

PyTexas 2026 Recap

Per-talk notes from PyTexas 2026 in Austin: Hynek on domain modeling, Dawn Wages on specialization, MCP security, PEP 810 lazy imports, free-threading, Ruff, ty, uv, supply chain.
BERNÁT GÁBOR

The Carbon Footprint of Wagtail AI

One of the package maintainers for Wagtail AI shares his method for measuring the carbon impact of the different AI tasks users can do and goes over the initial results.
WAGTAIL.ORG • Shared by Meagen Voss

Gemini CLI vs Claude Code: Which to Choose for Python Tasks

Gemini CLI vs Claude Code: compare setup, performance, code quality, and cost to find the right Python AI coding tool for your workflow.
REAL PYTHON

Learn the Agentic Coding Workflow That Actually Works on Real Projects

65% of Python developers are stuck using AI for small tasks that fall apart on anything real. This 2-day live course (May 6-7 via Zoom) walks you through building a complete Python CLI app with Claude Code, from an empty directory to a shipped project on GitHub.
REAL PYTHON

Implementing OpenTelemetry in FastAPI

Learn how you can observe your FastAPI web apps with OpenTelemetry, including how to integrate it and why it is important.
SIGNOZ.IO • Shared by Dhruv Ahuja

Building a Python Library in 2026

So you want to build a Python library in 2026? Here’s everything you need to know about the state of the art.
STEPHEN IF

Projects & Code

Local Usage PyPI Alternative With Vulnerability Scanning

Very interesting project
GITHUB.COM/RUSTEDBYTES • Shared by Yehor Smoliakov

typeform: Type-Safe UI/CLI Generator Powered by Pydantic

GITHUB.COM/STHITAPRAJNAS

vibescore: One-Command Quality Score for Any Python Project

GITHUB.COM/STEF41 • Shared by Anonymous

dash: Data Apps & Dashboards for Python

GITHUB.COM/PLOTLY

profiling-explorer: Table-Based Profile Exploration Tool

GITHUB.COM/ADAMCHAINZ

Events

Weekly Real Python Office Hours Q&A (Virtual)

April 29, 2026
REALPYTHON.COM

PyCamp Spain 2026

April 30 to May 4, 2026
PYCAMP.ES

PyDelhi User Group Meetup

May 2, 2026
MEETUP.COM

PyBodensee Monthly Meetup

May 4, 2026
PYBODENSEE.COM

IndyPy: Lightning Talks

May 5 to May 6, 2026
MEETUP.COM


Happy Pythoning!
This was PyCoder’s Weekly Issue #732.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

April 28, 2026 07:30 PM UTC


Django Weblog

Renew Your PyCharm License and Support Django

Only a few days remain to support the Django Software Foundation through our annual JetBrains fundraiser.

You can now use the offer for new purchases and annual renewals. If your PyCharm Professional subscription expires this year, this is a great time to renew or extend it for up to 12 months.

Get 30% off PyCharm Professional, and 100% of proceeds from qualifying purchases and renewals go to the DSF to help fund Django Fellows, community programs, events, and the future of Django.

👉 Offer ends May 1: Learn more about the fundraiser

👉 Claim 30% off here: Get the JetBrains offer

April 28, 2026 07:20 PM UTC


Mariatta

PyCascades 2026 Recap


PyCascades 2026 took place in Vancouver this year. I only got to attend the first day, because I had a 5 a.m. flight to Washington DC the morning after.

Still, the first day’s talks were all very insightful and interesting. I’m waiting for all the talks to be published so that I can catch up on the ones I missed.

Here are notes on the talks I got to see.

April 28, 2026 04:36 PM UTC


Real Python

Testing Your Code With Python's unittest

The Python standard library ships with a testing framework named unittest, which you can use to write automated tests for your code. The unittest package has an object-oriented approach where test cases derive from a base class, which has several useful methods.

The framework supports many features that will help you write consistent unit tests for your code. These features include test cases, fixtures, test suites, and test discovery capabilities.

In this video course, you’ll learn how to:

To get the most out of this video course, you should be familiar with some important Python concepts, such as object-oriented programming, inheritance, and assertions. Having a good understanding of code testing is a plus.
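As a taste of what these features look like, here is a minimal, self-contained example (the RectangleAreaTests class is purely illustrative): a test case derives from unittest.TestCase, a fixture is prepared in setUp(), and checks use the base class's assert* methods:

```python
import unittest


class RectangleAreaTests(unittest.TestCase):
    """Each test_* method runs against a fresh fixture from setUp()."""

    def setUp(self):
        self.width, self.height = 3, 4  # fixture shared by the tests

    def test_area(self):
        self.assertEqual(self.width * self.height, 12)

    def test_area_is_positive(self):
        self.assertGreater(self.width * self.height, 0)


# Load and run the tests programmatically; normally you'd use
# `python -m unittest` and let test discovery find them.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RectangleAreaTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```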


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 28, 2026 02:00 PM UTC

Quiz: Use Codex CLI to Enhance Your Python Projects

In this quiz, you’ll test your understanding of Use Codex CLI to Enhance Your Python Projects.

By working through this quiz, you’ll revisit how to install and configure Codex CLI, use Plan mode to review changes before they land, and refine features through iterative prompting in your terminal.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 28, 2026 12:00 PM UTC

Quiz: Testing Your Code With Python's unittest

In this quiz, you’ll test your understanding of Testing Your Code With Python’s unittest.

By working through this quiz, you’ll revisit key concepts like structuring tests with TestCase, using assertion methods, skipping tests conditionally, parameterizing with subtests, and preparing test data with fixtures.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 28, 2026 12:00 PM UTC


PyPy

PyPy v7.3.22 release

PyPy v7.3.22: release of python 2.7, 3.11

The PyPy team is proud to release version 7.3.22 of PyPy after the previous release on March 13, 2026. This is a bug-fix release that fixes several issues in the JIT. Among them is a long-standing JIT bug that only started appearing once some instance optimizations exposed it. We also cleaned up many of the remaining stdlib test suite failures, which improves CPython compatibility around line numbers in dis.dis, signatures and objclass attributes for builtins, and other quality of life features.

There is now an RPython _pickle module that mirrors the CPython one, greatly speeding up pickling operations. Where before PyPy was 5.7x slower than CPython on the pickle benchmark from the pyperformance benchmark suite, now it is only 1.6x slower [0]. We also added pypy pickler extensions to dump and load lists using list strategies, and enabled them in the ForkingPickler used by multiprocessing, speeding up cases where such objects are passed between PyPy multiprocessing instances.

We also added an RPython json encoder, speeding up json_bench from being 2.6x slower than CPython to being 0.7x (meaning faster).

The release includes two different interpreters:

  - PyPy2.7, an interpreter supporting the syntax and features of Python 2.7
  - PyPy3.11, an interpreter supporting the syntax and features of Python 3.11

The interpreters are based on much the same codebase, thus the double release. This is a micro release, all APIs are compatible with the other 7.3 releases.

We recommend updating. You can find links to download the releases here:

https://pypy.org/download.html

We would like to thank our donors for the continued support of the PyPy project. If PyPy is not quite good enough for your needs, we are available for direct consulting work. If PyPy is helping you out, we would love to hear about it and encourage submissions to our blog via a pull request to https://github.com/pypy/pypy.org

We would also like to thank our contributors and encourage new people to join the project. PyPy has many layers and we need help with all of them: bug fixes, PyPy and RPython documentation improvements, or general help with making RPython's JIT even better.

If you are a python library maintainer and use C-extensions, please consider making a HPy / CFFI / cppyy version of your library that would be performant on PyPy. In any case, cibuildwheel supports building wheels for PyPy.

Footnotes

[0]

Once a PR to pyperformance to use the _pickle module on PyPy is accepted

What is PyPy?

PyPy is a Python interpreter, a drop-in replacement for CPython. It's fast (see the PyPy and CPython performance comparison) due to its integrated tracing JIT compiler.

We also welcome developers of other dynamic languages to see what RPython can do for them.

We provide binary builds for:

PyPy supports Windows 32-bit, Linux PPC64 big- and little-endian, Linux ARM 32 bit, RISC-V RV64IMAFD Linux, and s390x Linux but does not release binaries. Please reach out to us if you wish to sponsor binary releases for those platforms. Downstream packagers provide binary builds for debian, Fedora, conda, OpenBSD, FreeBSD, Gentoo, and more.

What else is new?

For more information about the 7.3.22 release, see the full changelog.

Please update, and continue to help us make pypy better.

Cheers, The PyPy Team

April 28, 2026 10:00 AM UTC


Armin Ronacher

Before GitHub

GitHub was not the first home of my Open Source software. SourceForge was.

Before GitHub, I had my own Trac installation. I had Subversion repositories, tickets, tarballs, and documentation on infrastructure I controlled. Later I moved projects to Bitbucket, back when Bitbucket still felt like a serious alternative place for Open Source projects, especially for people who were not all-in on Git yet.

And then, eventually, GitHub became the place, and I moved all of it there.

It is hard for me to overstate how important GitHub became in my life. A large part of my Open Source identity formed there. Projects I worked on found users there. People found me there, and I found other people there. Many professional relationships and many friendships started because some repository, issue, pull request, or comment thread made two people aware of each other.

That is why I find what is happening to GitHub today so sad and so disappointing. I do not look at it as just the folks at Microsoft making product decisions I dislike. GitHub was part of the social infrastructure of Open Source for a very long time. For many of us, it was not merely where the code lived; it was where a large part of the community lived.

So when I think about GitHub’s decline, I also think about what came before it, and what might come after it. I have written a few times over the years about dependencies, and in particular about the problem of micro dependencies. In my mind, GitHub gave life to that phenomenon. It was something I definitely did not completely support, but it also made Open Source more inclusive. GitHub changed how Open Source feels, and later npm and other systems changed how dependencies feel. Put them together and you get a world in which publishing code is almost frictionless, consuming code is almost frictionless, and the number of projects in the world explodes.

That has many upsides. But it is worth remembering that Open Source did not always work this way.

A Smaller World

Before GitHub, Open Source was a much smaller world. Not necessarily in the number of people who cared about it, but in the number of projects most of us could realistically depend on.

There were well-known projects, maintained over long periods of time by a comparatively small number of people. You knew the names. You knew the mailing lists. You knew who had been around for years and who had earned trust. That trust was not perfect, and the old world had plenty of gatekeeping, but reputation mattered in a very direct way. We took pride (and got frustrated) when the Debian folks came and told us our licensing stuff was murky or the copyright headers were not up to snuff, because they packaged things up.

A dependency was not just a package name. It was a project with a history, a website, a maintainer, a release process, a lot of friction, and often a place in a larger community. You did not add dependencies casually, because the act of depending on something usually meant you had to understand where it came from.

Not all of this was necessarily intentional, but because these projects were comparatively large, they also needed to bring their own infrastructure. Small projects might run on a university server, and many of them were on SourceForge, but the larger ones ran their own show. They grouped together into larger collectives to make it work.

We Ran Our Own Infrastructure

My first Open Source projects lived on infrastructure I ran myself. There was a Trac installation, Subversion repositories, tarballs, documentation, and release files served from my own machines or from servers under my control. That was normal. If you wanted to publish software, you often also became a small-time system administrator. Georg and I ran our own collective for our Open Source projects: Pocoo. We shared server costs and the burden of maintaining Subversion and Trac, mailing lists and more.

Subversion in particular made this “running your own forge” natural. It was centralized: you needed a server, and somebody had to operate it. The project had a home, and that home was usually quite literal: a hostname, a directory, a Trac instance, a mailing list archive.

When Mercurial and Git arrived, they were philosophically the opposite. Both were distributed. Everybody could have the full repository. Everybody could have their own copy, their own branches, their own history. In principle, those distributed version control systems should have reduced the need for a single center. But despite all of this, GitHub became the center.

That is one of the great ironies of modern Open Source. The distributed version control system won, and then the world standardized on one enormous centralized service for hosting it.

What GitHub Gave Us

It is easy now to talk only about GitHub’s failures, of which there are currently many, but that would be unfair: GitHub was, and continues to be, a tremendous gift to Open Source.

It made creating a project easy and it made discovering projects easy. It made contributing understandable to people who had never subscribed to a development mailing list in their life. It gave projects issue trackers, pull requests, release pages, wikis, organization pages, API access, webhooks, and later CI. It normalized the idea that Open Source happens in the open, with visible history and visible collaboration. And it was an excellent and reasonable default choice for a decade.

But maybe the most underappreciated thing GitHub did was archival work: GitHub became a library. It became an index of a huge part of the software commons because even abandoned projects remained findable. You could find forks, and old issues and discussions all stayed online. For all the complaints one can make about centralization, that centralization also created discoverable memory. The leaders there once cared a lot about keeping GitHub available even in countries that were sanctioned by the US.

I know what the alternative looks like, because I was living it. Some of my earliest Open Source projects are technically still on PyPI, but the actual packages are gone. The metadata points to my old server, and that server has long stopped serving those files.

That was normal before the large platforms. A personal domain expired, a VPS was shut down, a developer passed away, and with them went the services they paid for. The web was once full of little software homes, and many of them are gone [1].

npm and the Dependency Explosion

The micro-dependency problem was not just that people published very small packages. The hosted infrastructure of GitHub and npm made it feel as if there was no cost to create, publish, discover, install, and depend on them.

In the pre-GitHub world, reputation and longevity were part of the dependency selection process almost by necessity, and it often required vendoring. Plenty of our early dependencies were just vendored into our own Subversion trees by default, in part because we could not even rely on other services being up when we needed them and because maintaining scripts that fetched them, in the pre-API days, was painful. The implied friction forced some reflection, and it resulted in different developer behavior. With npm-style ecosystems, the package graph can grow faster than anybody’s ability to reason about it.

The problems that this type of thinking created also meant that solutions had to be found along the way. GitHub helped compensate for the accountability problem, and it helped with licensing. At one point, the newfound influx of developers and merged pull requests left a lot of open questions about what the state of licenses actually was. GitHub even attempted to rectify this with their terms of service.

The thinking for many years was that if I am going to depend on some tiny package, I at least want to see its repository. I want to see whether the maintainer exists, whether there are issues, whether there were recent changes, whether other projects use it, whether the code is what the package claims it is. GitHub became part of the system that provides trust, and more recently it has even become one of the few systems that can publish packages to npm and other registries with trusted publishing.

That means when trust in GitHub erodes, the problem is not isolated to source hosting. It affects the whole supply chain culture that formed around it.

GitHub Is Slowly Dying

GitHub is currently losing some of what made it feel inevitable. Maybe that’s just the life and death of large centralized platforms: they always disappoint eventually. Right now people are tired of the instability, the product churn, the Copilot AI noise, the unclear leadership, and the feeling that the platform is no longer primarily designed for the community that made it valuable.

Obviously, GitHub also finds itself in the midst of the agentic coding revolution and that causes enormous pressure on the folks over there. But the site has no leadership! It’s a miracle that things are going as well as they are.

For a while, leaving GitHub felt like a symbolic move mostly made by smaller projects or by people with strong views about software freedom. I definitely cringed when Zig moved to Codeberg! But I now see people with real weight and signal talking about leaving GitHub. The most obvious one is Mitchell Hashimoto, who announced that Ghostty will move. Where it will move is not clear, but it’s a strong signal. But there are others, too. Strudel moved to Codeberg and so did Tenacity. Will they cause enough of a shift? Probably not, but I find myself on non-GitHub properties more frequently again compared to just a year ago.

One can argue that this is good: it is healthy for Open Source to stop pretending that one company should be the default home of everything. Git itself was designed for a world with many homes.

Dispersion Has a Cost

Going back to many forges, many servers, many small homes, and many independent communities will increase decentralization, and in many ways it will force systems to adapt. This can restore autonomy and make projects less dependent on the whims of Microsoft leadership. It can also allow different communities to choose different workflows. What's happening in Pi's issue tracker currently is largely a result of GitHub's product choices not working in the present-day world of Open Source. It was built for engagement, not for maintainer sanity.

It can also make the web forget again. I quite like software that forgets because it has a cleansing element. Maybe the real risk of loss will make us reflect more on actually taking advantage of a distributed version control system.

But if projects move to something more akin to self-hosted forges, to their own self-hosted Mercurial or cgit servers, we run the risk of losing things that we don’t want to lose. The code might be distributed in theory, but the social context often is not. Issues, reviews, design discussions, release notes, security advisories, and old tarballs are fragile. They disappear much more easily than we like to admit. Mailing lists, which carried a lot of this in earlier years, have not kept up with the needs of today, and are largely a user experience disaster.

We Need an Archive

As much as I like the idea of things fading out of existence, we absolutely need libraries and archives.

Regardless of whether GitHub is here to stay or projects find new homes, what I would like to see is some public, boring, well-funded archive for Open Source software. Something with the power of an endowment or public funding to keep it afloat. Something whose job is not to win the developer productivity market but just to make sure that the most important things we create do not disappear.

The bells and whistles can be someone else’s problem, but source archives, release artifacts, metadata, and enough project context to understand what happened should be preserved somewhere that is not tied to the business model or leadership mood of a single company.

GitHub accidentally became that archive because it became the center of Open Source activity. Once that no longer holds, we should not assume some magic archival function will emerge or that GitHub will continue to function as such. We have already seen what happens when project homes are just personal servers and good intentions, and we have seen what happened to Google Code and Bitbucket.

I hope GitHub recovers, I really do, in part because a lot of history lives there and because the people still working on it inherited something genuinely important. But I no longer think it is responsible to let the continued memory of Open Source depend on GitHub remaining a healthy product.

The world before GitHub had more autonomy and more loss, and in some ways, we’re probably going to move back there, at least for a while. Whatever people want to start building next should try to keep the memory and lose the dependence. It should be easier to move projects, easier to mirror their social context, easier to preserve releases, and harder for one company’s drift to become a cultural crisis for everyone else.

I do not want to go back to the old web of broken tarball links and abandoned Trac instances. I also do not want Open Source to pretend that the last twenty years were normal or permanent. GitHub wrote a remarkable chapter of Open Source, and if that chapter is ending, the next one should learn from it and also from what came before.

  1. This is also a good reminder that we rely so very much on the Internet Archive for many projects of the time.

April 28, 2026 12:00 AM UTC

April 27, 2026


Python Engineering at Microsoft

Python Environments Extension for VS Code - April Update

Python Environments — April 2026 Release

We’re excited to announce the latest update to the Python Environments extension for Visual Studio Code. This release focuses on startup performance, reliability, and quality-of-life improvements for terminals and package management.

Faster startup

Activation is now noticeably snappier, especially on remote and containerized workspaces. We made three key changes:

Lazy manager discovery. Pipenv, pyenv, and poetry environments are no longer discovered eagerly on startup. Instead, detection is deferred until you actually interact with one of those managers — for example, by opening a project that uses a Pipfile or pyproject.toml with a poetry backend. This eliminates unnecessary work for the majority of users who rely on venv, uv, or conda. (#1423, #1408)

Faster environment resolution. The path from “extension activated” to “interpreter ready” is shorter. Resolution during startup and interpreter selection now completes with less overhead. (#1419)

Narrower default workspace scanning. The default search pattern for virtual environments was ./**/.venv, which triggered a recursive scan of the entire workspace tree. On large projects — and especially over Remote-SSH — this could cause the Python Environment Tools (PET) process to hang for 30+ seconds during configuration, leading to cascading timeouts and restart loops (see #1460, #1434). The default is now .venv and */.venv, which covers the standard layout without deep traversal. If you have virtual environments nested more than one level deep, you can add custom paths via the python-envs.workspaceSearchPaths setting. (#1419)
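As a sketch of what that looks like in practice, a workspace `settings.json` entry along these lines should restore deeper scanning where you actually need it (the `services/*/.venv` glob here is a hypothetical example of a layout with environments nested two levels deep):

```jsonc
// .vscode/settings.json
{
  // Extra locations for the extension to scan for virtual environments.
  // Adjust these globs to your own project layout.
  "python-envs.workspaceSearchPaths": [
    ".venv",
    "*/.venv",
    "services/*/.venv" // hypothetical: environments two levels deep
  ]
}
```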

Improved reliability

PET crash recovery. When the PET process crashed mid-refresh, the extension could end up in a broken state with no environments visible. We now retry the refresh after a crash and handle empty or malformed responses defensively, so a transient PET failure no longer leaves you with a blank environment list. (#1442, #1447, #1444)

Conda base environment fix. After a window reload, the conda base environment could be incorrectly restored as a different named environment — making it appear that your interpreter selection had silently changed. This is now fixed. (#1412)

Environment updates and terminals

Auto-refreshing package lists. You no longer need to manually refresh the package view after running pip install or pip uninstall. The extension now watches for metadata changes in site-packages and updates the package list automatically. (#1420)

Multi-project terminal creation. In workspaces with multiple Python projects, creating a new terminal now prompts you to choose which project’s environment to activate, rather than picking one silently. (#1401)

PowerShell activation on Windows. Virtual environment activation via PowerShell could fail if the system execution policy blocked scripts. The extension now sets a process-scoped execution policy before running activation, so .ps1 activate scripts work out of the box without requiring system-wide policy changes. (#1414)


Try the update today and let us know how it works for you. If you run into issues, please file them on GitHub.

The post Python Environments Extension for VS Code - April Update appeared first on Microsoft for Python Developers Blog.

April 27, 2026 08:07 PM UTC


Talk Python to Me

#546: Self hosting apps for Python people

The cloud is convenient until it isn't. You upload your photos, sync your contacts, click through the cookie banners. Then prices go up again or you read about a family that lost their entire Google account over a medical photo sent to a doctor. At some point, the question shifts from "why would I run this myself?" to "why aren't I?" <br/> <br/> My guest this week is Alex Kretzschmar, head of DevRel at Tailscale, longtime host of the Self-Hosted podcast, and co-founder of Linuxserver.io. We cover what self-hosting really means in 2026, the apps worth running yourself like Immich and Home Assistant, why Docker Compose ties it all together, and how Tailscale lets you reach any of it from anywhere, without opening a single port. If you've been thinking about pulling your digital life back behind your own walls, this is your roadmap.<br/> <br/> <strong>Episode sponsors</strong><br/> <br/> <a href='https://talkpython.fm/temporal-replay'>Temporal</a><br> <a href='https://talkpython.fm/training'>Talk Python Courses</a><br/> <br/> <h2 class="links-heading mb-4">Links from the show</h2> <div><strong>Guest</strong><br/> <strong>Alex Kretzschmar</strong>: <a href="https://alex.ktz.me/?featured_on=talkpython" target="_blank" >alex.ktz.me</a><br/> <br/> <strong>Bitflip podcast</strong>: <a href="https://bitflip.show?featured_on=talkpython" target="_blank" >bitflip.show</a><br/> <strong>Self-Hosted podcast (Alex's previous show)</strong>: <a href="https://selfhosted.show?featured_on=talkpython" target="_blank" >selfhosted.show</a><br/> <strong>Perfect Media Server</strong>: <a href="https://perfectmediaserver.com?featured_on=talkpython" target="_blank" >perfectmediaserver.com</a><br/> <strong>KTZ Systems on YouTube</strong>: <a href="https://youtube.com/@ktzsystems" target="_blank" >youtube.com/@ktzsystems</a><br/> <strong>Linuxserver.io (co-founded by Alex)</strong>: <a href="https://linuxserver.io?featured_on=talkpython" target="_blank" >linuxserver.io</a><br/> <strong>"How 
Tailscale Works" blog post</strong>: <a href="https://tailscale.com/blog/how-tailscale-works?featured_on=talkpython" target="_blank" >tailscale.com/blog/how-tailscale-works</a><br/> <strong>https://tailscale.com/</strong>: <a href="https://tailscale.com/?featured_on=talkpython" target="_blank" >tailscale.com</a><br/> <br/> <strong>Self-hosted apps discussed</strong><br/> <strong>Awesome Self-Hosted (GitHub list)</strong>: <a href="https://github.com/awesome-selfhosted/awesome-selfhosted?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>Immich (Google Photos alternative)</strong>: <a href="https://immich.app?featured_on=talkpython" target="_blank" >immich.app</a><br/> <strong>Home Assistant</strong>: <a href="https://home-assistant.io?featured_on=talkpython" target="_blank" >home-assistant.io</a><br/> <strong>Open Home Foundation</strong>: <a href="https://openhomefoundation.org?featured_on=talkpython" target="_blank" >openhomefoundation.org</a><br/> <strong>Plausible Analytics</strong>: <a href="https://plausible.io?featured_on=talkpython" target="_blank" >plausible.io</a><br/> <strong>Umami Analytics</strong>: <a href="https://umami.is?featured_on=talkpython" target="_blank" >umami.is</a><br/> <strong>Python integration for umami</strong>: <a href="https://pypi.org/project/umami-analytics/?featured_on=talkpython" target="_blank" >pypi.org</a><br/> <strong>Pi-hole</strong>: <a href="https://pi-hole.net?featured_on=talkpython" target="_blank" >pi-hole.net</a><br/> <strong>AdGuard Home</strong>: <a href="https://adguard.com/adguard-home?featured_on=talkpython" target="_blank" >adguard.com</a><br/> <strong>NextDNS</strong>: <a href="https://nextdns.io?featured_on=talkpython" target="_blank" >nextdns.io</a><br/> <strong>Coolify</strong>: <a href="https://coolify.io?featured_on=talkpython" target="_blank" >coolify.io</a><br/> <strong>Docker + ufw</strong>: <a href="https://docs.docker.com/engine/network/packet-filtering-firewalls/#docker-and-ufw" 
target="_blank" >docs.docker.com</a><br/> <br/> <strong>Storage, backup &amp; filesystem</strong><br/> <strong>OpenZFS</strong>: <a href="https://openzfs.org?featured_on=talkpython" target="_blank" >openzfs.org</a><br/> <strong>ZFS.rent (offsite ZFS replication)</strong>: <a href="https://zfs.rent?featured_on=talkpython" target="_blank" >zfs.rent</a><br/> <strong>Backblaze</strong>: <a href="https://backblaze.com?featured_on=talkpython" target="_blank" >backblaze.com</a><br/> <strong>Hetzner Storage Box</strong>: <a href="https://hetzner.com/storage/storage-box?featured_on=talkpython" target="_blank" >hetzner.com</a><br/> <strong>DigitalOcean</strong>: <a href="https://digitalocean.com?featured_on=talkpython" target="_blank" >digitalocean.com</a><br/> <br/> <strong>Secrets management mentioned</strong><br/> <strong>OpenBao (open-source Vault fork)</strong>: <a href="https://openbao.org?featured_on=talkpython" target="_blank" >openbao.org</a><br/> <strong>HashiCorp Vault</strong>: <a href="https://hashicorp.com/products/vault?featured_on=talkpython" target="_blank" >hashicorp.com</a><br/> <strong>Bitwarden</strong>: <a href="https://bitwarden.com?featured_on=talkpython" target="_blank" >bitwarden.com</a><br/> <strong>1Password</strong>: <a href="https://1password.com?featured_on=talkpython" target="_blank" >1password.com</a><br/> <br/> <strong>Hardware mentioned</strong><br/> <strong>Proxmox VE</strong>: <a href="https://proxmox.com?featured_on=talkpython" target="_blank" >proxmox.com</a><br/> <strong>Minisforum MS01</strong>: <a href="https://minisforum.com?featured_on=talkpython" target="_blank" >minisforum.com</a><br/> <strong>Zima Board / Zima OS</strong>: <a href="https://zimaspace.com?featured_on=talkpython" target="_blank" >zimaspace.com</a><br/> <br/> <strong>Other references</strong><br/> <strong>Cory Doctorow on "enshittification" (Cory's blog where he coined the term)</strong>: <a href="https://pluralistic.net?featured_on=talkpython" target="_blank" 
>pluralistic.net</a><br/> <strong>Linus Tech Tips' WAN Show (Linus mentioned NAS-building going mainstream)</strong>: <a href="https://linustechtips.com?featured_on=talkpython" target="_blank" >linustechtips.com</a><br/> <br/> <strong>Watch this episode on YouTube</strong>: <a href="https://www.youtube.com/watch?v=1iAQRY7hiVA" target="_blank" >youtube.com</a><br/> <strong>Episode #546 deep-dive</strong>: <a href="https://talkpython.fm/episodes/show/546/self-hosting-apps-for-python-people#takeaways-anchor" target="_blank" >talkpython.fm/546</a><br/> <strong>Episode transcripts</strong>: <a href="https://talkpython.fm/episodes/transcript/546/self-hosting-apps-for-python-people" target="_blank" >talkpython.fm</a><br/> <br/> <strong>Theme Song: Developer Rap</strong><br/> <strong>🥁 Served in a Flask 🎸</strong>: <a href="https://talkpython.fm/flasksong" target="_blank" >talkpython.fm/flasksong</a><br/> <br/> <strong>---== Don't be a stranger ==---</strong><br/> <strong>YouTube</strong>: <a href="https://talkpython.fm/youtube" target="_blank" ><i class="fa-brands fa-youtube"></i> youtube.com/@talkpython</a><br/> <br/> <strong>Bluesky</strong>: <a href="https://bsky.app/profile/talkpython.fm" target="_blank" >@talkpython.fm</a><br/> <strong>Mastodon</strong>: <a href="https://fosstodon.org/web/@talkpython" target="_blank" ><i class="fa-brands fa-mastodon"></i> @talkpython@fosstodon.org</a><br/> <strong>X.com</strong>: <a href="https://x.com/talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @talkpython</a><br/> <br/> <strong>Michael on Bluesky</strong>: <a href="https://bsky.app/profile/mkennedy.codes?featured_on=talkpython" target="_blank" >@mkennedy.codes</a><br/> <strong>Michael on Mastodon</strong>: <a href="https://fosstodon.org/web/@mkennedy" target="_blank" ><i class="fa-brands fa-mastodon"></i> @mkennedy@fosstodon.org</a><br/> <strong>Michael on X.com</strong>: <a href="https://x.com/mkennedy?featured_on=talkpython" target="_blank" ><i 
class="fa-brands fa-twitter"></i> @mkennedy</a><br/></div>

April 27, 2026 07:53 PM UTC


EuroPython

Humans of EuroPython: Martin Borus

EuroPython wouldn’t exist if it weren’t for all the volunteers who put in countless hours to organize it. Whether it’s contracting the venue, ordering catering for a week-long conference, selecting and confirming talks & workshops, hundreds of hours of loving work have been put into making each edition the best one yet.

Today, we’d like to share an interview with Martin Borus, a member of the EuroPython 2025 Operations team and a returning conference contributor.

Thank you for making EuroPython such a welcoming conference, Martin!

Martin Borus, member of the Operations Team at EuroPython 2025 Prague & Remote

EP: What first inspired you to volunteer for EuroPython?

When visiting EuroPython - which was my first big Python conference - I got to know some volunteers. From the next year on, I gradually got into helping. It seemed like a good idea to help.

EP: How did contributing to EuroPython impact your relationships within the community?

It was an entry point into the Python community. I met a lot of people I would not have met otherwise, which led to a lot of interesting conversations and specific help on my journey into Python.

EP: Was there a moment when you felt your contribution really made a difference?

One of these moments comes from the Beginners’ Orientation sessions. I still remember the problems I had being alone on my first EuroPython that motivated me to give others a better start. I got feedback that this helped others to enjoy their first conference more.

EP: What's one thing you took away from contributing to EuroPython that you still use today?

The experiences gained in working with a team coming from all over Europe.

EP: If you could add one thing to make the volunteer experience even better, what would it be?

If there was a single thing, we’d have implemented it already, because each year the volunteers try to improve based on the experiences of the previous years.

EP: What tips do you have for people attending the conference?

For anybody coming to EuroPython, volunteer or attendee, I can highly recommend having a note on your phone about what topics you’re interested in. Collect questions in the weeks before the conference, so you can pull them out in conversations. I call this my “EuroPython wish list” and usually get large parts of it covered during the week.

EP: What would you say to someone considering volunteering at EuroPython but feeling hesitant?

Even if it’s at the cost of missing some of the talks, as a volunteer you are where the action is and you have a chance to get more experiences out of the conference.

EP: Thank you for your contribution, Martin!

April 27, 2026 07:02 PM UTC


PyCon

Asking the Key Questions: Q&A with the PyCon US 2026 keynote speaker Lin Qiao

This is a blog series where we're asking each of our PyCon US 2026 keynote speakers about their journey into tech, how excited they are for PyCon US, and any tips they can provide for an awesome conference experience! Here's our interview with Lin Qiao.




Without giving too many spoilers, tell us what your keynote is about?

Most AI products are built on rented land. If your competitor can make the same API call, you do not have a moat. I will break down what the teams pulling ahead are doing differently, with real examples from Cursor, Notion, and Vercel, and get into the hard tradeoffs nobody talks about enough.

How did you get started in tech/Python?

My path into tech started pretty naturally. I studied STEM all through high school and undergrad, so it was always the space I gravitated toward. Python specifically came later, during my PhD, where I started using it to run experiments and support my research papers.

What do you think the most important work you've ever done is?

Co-creating PyTorch was a defining chapter, because it became the foundation for how the world does AI research. But I think the most important work is really what I’m doing now. I founded Fireworks because I spent years watching companies outside Big Tech struggle to get AI into production. They had the ambition but not the infrastructure, and we’re changing that.

Have you been to PyCon US before? What are you looking forward to?

PyTorch was built on Python, so this community is close to my heart. I am most looking forward to the hallway conversations. The best ideas come from talking to people who are deep in the work.

Any advice for first-time conference goers?

Talk to people. The sessions are recorded, but the people are only there for a few days. Go to the hallway track, sit at lunch tables where you do not know anyone, and if a talk resonated with you, go tell the speaker. That’s how the best professional relationships start.

Can you tell us about an open source or open culture project that you think not enough people know about?

I am biased, but I think the broader open model ecosystem does not get the credit it deserves. Everyone knows the big names, but there is an incredible amount of work happening in specialized open models, evaluation frameworks, and fine-tuning tooling that is quietly making AI more accessible. The pattern I keep seeing is that the most impactful open-source projects are the ones that lower the barrier for the next person to build something better. That was true for PyTorch, and it is true today for the tools that help developers go from an off-the-shelf model to something truly customized for their use case.


April 27, 2026 04:11 PM UTC


Ari Lamstein

A Web App for Exploring Foreign‑Born Population Trends

I just created a web app for exploring trends in the foreign-born population in the United States. The app lets you pick a location and see how the size of the foreign-born population there has changed over time. More importantly, it gives people a way to track how the foreign‑born population shifts as the Trump administration’s immigration enforcement efforts unfold.

The app is built in Python with Streamlit, and the data comes from the American Community Survey (ACS) 1‑year estimates. Everything is powered by the acs‑nativity package I recently published to PyPI. The ACS currently covers 2005–2024, and the 2025 release is expected in September — I’ll update the app as soon as the new data becomes available. Data is available for the nation, all states, and any county or city with at least 65,000 residents.

Here’s a screenshot from the app providing data on Chicago, Illinois:

Chicago’s foreign‑born population has risen and fallen sharply at different points between 2005 and 2024. Last September President Trump launched an immigration enforcement action in the city called Operation Midway Blitz. When the 2025 ACS estimates come out in September, we’ll get the first chance to see whether that enforcement action shows up in the data – and how any change compares with the kinds of fluctuations Chicago has experienced in the past.

Exploratory Data Analysis

In addition to the graphs generated by the acs-nativity package, the app provides two additional tabs to help you explore nativity trends: the Table tab and the Compare Years tab.

Table Tab

The Table tab shows the full dataset for the selected geography level, and you can sort by any column. Sorting makes it easy to spot outliers. For example, in 2024 the location with the highest share of foreign‑born residents was Hialeah, Florida (77.1%), while the lowest was Muskingum County, Ohio (0.7%):

Compare Years Tab

The Compare Years tab lets you create a table showing how a demographic has changed between two years. This often surfaces surprising results. For example, between 2023 and 2024, New York City saw an estimated increase of 205,767 in the Native-born population – slightly larger than California’s increase of 204,056, despite California’s population being several times larger:

Try the App

If you’re interested in how these patterns play out in your own community, you can explore the app here.

April 27, 2026 04:00 PM UTC


Rodrigo Girão Serrão

TIL #143 – Resolve a lazy import manually

Learn how to work around the Python machinery to resolve an explicit lazy import manually.

A couple of articles ago I wrote about how you could inspect a lazy import.

Apparently, you can use a similar trick to check the attributes and methods that a lazy import has:

>>> lazy import json
>>> dir(globals()["json"])
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'resolve']

Apart from a large number of dunder methods and dunder attributes, you'll find the method resolve. You can run help(globals()["json"].resolve) to get the help text on that method:

Help on built-in function resolve:

resolve() method of builtins.lazy_import instance
    resolves the lazy import and returns the actual object

This shows that it's the method resolve that resolves a lazy import.

If you call the method, you can get access to the resolved module:

>>> lazy import json
>>> resolved_json = globals()["json"].resolve()
>>> resolved_json
<module 'json' from '/Users/rodrigogs/.local/share/uv/python/cpython-3.15.0a8-macos-aarch64-none/lib/python3.15/json/__init__.py'>

After calling resolve, the lazy module doesn't disappear automatically:

>>> globals()["json"]
<lazy_import 'json'>

Which shows that the mechanism that's responsible for reification most likely calls the method resolve and then reassigns the name of the module to the module returned by resolve. In a way, it's as if the reification process ran something like

globals()["json"] = globals()["json"].resolve()

In hindsight, this isn't too surprising. After all, Python tends to be very consistent. The only mystery that remains is what triggers the reification process. How is it that Python can detect when something touches the lazy import?
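One way to picture the mechanism is a proxy object that resolves itself on first attribute access. The sketch below is purely illustrative - the real lazy_import object is built into the interpreter, and this stand-in only mimics the resolve-then-reassign behavior described above:

```python
import importlib


class LazyModule:
    """Simplified stand-in for a lazy-import proxy.

    Illustrative only: not CPython's actual lazy_import machinery,
    just a sketch of the resolve-on-first-touch idea.
    """

    def __init__(self, name):
        self._name = name
        self._module = None

    def resolve(self):
        # Import the real module on first use and cache the result.
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return self._module

    def __getattr__(self, attr):
        # Any attribute access resolves the module, mimicking reification.
        return getattr(self.resolve(), attr)


json_proxy = LazyModule("json")
print(json_proxy.dumps({"a": 1}))  # the import of json happens here
```

Note that `__getattr__` only fires for attributes the proxy itself lacks, which is why touching `json_proxy.dumps` triggers the import while `json_proxy.resolve()` stays available as a plain method.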

April 27, 2026 03:18 PM UTC


Real Python

How to Conceptualize Python Fundamentals for Greater Mastery

Struggling to conceptualize Python fundamentals is a common problem learners face. If you’re unable to put a fundamental concept into perspective and form a clear mental picture of what it’s about, it’ll be difficult to understand and apply it.

In this guide, you’ll walk through a framework of steps to help you better conceptualize Python fundamentals. This process is helpful for Python developers and learners at any experience level, but especially for beginners. If you are just starting out, this guide will help you build a solid understanding of the basics.

You might want to set aside twenty minutes or so to read through the tutorial, and another thirty minutes to practice on a few key concepts. You should also gather a list of difficult topics, your preferred learning resources, and a note-taking app or pen and paper.

Click the link below to download a free cheat sheet that covers the framework steps you’ll walk through in this guide:

Get Your Cheat Sheet: Click here to download a free PDF that outlines the framework of steps for conceptualizing Python fundamentals.

Take the Quiz: Test your knowledge with our interactive “How to Conceptualize Python Fundamentals for Greater Mastery” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

How to Conceptualize Python Fundamentals for Greater Mastery

Check your understanding of a framework for conceptualizing Python fundamentals, from defining concepts to comparing similar ideas.

Step 1: Define the Concept in Your Own Words

Begin by briefly describing the concept in your own words. You can write your definition in the downloadable worksheet provided with this tutorial. Note that writing is a powerful tool for reinforcing learning, as educator and former Rutgers University professor Janet Emig asserted in her paper, Writing as a Mode of Learning.

Answer Key Questions for Defining a Concept

As a framework for your definition, consider these key questions:

  • What: What is a short description of the concept?
  • Why: Why is the concept important in the broader Python context?
  • How: How is the concept used in a Python program?

These questions will help you establish a core understanding of the concept you’re learning.

You might feel intimidated when you’re trying to define a Python concept. If you need help, there are many resources that can assist you. Real Python’s Reference section has concise definitions of Python keywords, built-in types, standard library modules, and more to help you build your own descriptions.

If you’re a visual learner, using an illustration can be a powerful way to enhance your understanding. In addition to a written definition, you can draw a picture or diagram to illustrate the concept. For example, the Variables in Python: Usage and Best Practices tutorial shows some example images of how you might picture variables. If you look at the Lists vs Tuples in Python tutorial, you can see a diagram of a Python list.

While pictures can be helpful, being able to conceptualize doesn’t necessarily mean you have to think visually. There are different thinking styles. Some researchers suggest that people can be visual or verbal thinkers. Pattern-based thinking is another style. Several of the tips in this tutorial encourage you to explore different aspects of these styles, depending on which works best for you.

View Examples of Concept Definitions

You might find a couple of examples helpful in understanding how to define difficult concepts. Suppose you’re studying variables. Here are possible responses to the key questions:

  • What: A variable is a name that points to an object stored in the program’s memory.
  • Why: Variables are key for data processing.
  • How: Assigning a value to a variable using the assignment operator (=) allows you to access your program’s data in a user-friendly way. You can then access and change the value by name throughout the program as needed.

This description provides a concise summary of what a variable is, why it matters, and how to use one. You can also include an example of variable usage as an addendum to your definition:

>>> age = 25

Here, you created a variable called age and assigned it a value of 25. From now on, you can use the variable name age to access, modify, or use the variable’s value.

Or, you might be learning about lists. Your definitions could look like this:

  • What: A list is a sequence of values or objects.
  • Why: Working with sequences of items is a common, foundational task in programming. Python lists make this important work easier.
  • How: You can create a list by writing a pair of square brackets, with a comma-separated sequence of items inside them. Assign the list to a variable to use it throughout your program.

Here’s a short Python list that demonstrates the points in the definitions above:
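The snippet itself isn't included in this excerpt; an illustrative list along the lines of the definitions above (not the tutorial's exact code) might look like:

```python
# A short list, built and used as the definitions above describe.
colors = ["red", "green", "blue"]   # square brackets, comma-separated items
colors.append("yellow")             # lists are mutable: add an item in place

print(colors[0])    # red   -- access an item by position
print(len(colors))  # 4
print(colors)       # ['red', 'green', 'blue', 'yellow']
```

Assigning the list to the name colors lets you access and modify it throughout the program, just as with the age variable earlier.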

Read the full article at https://realpython.com/conceptualize-python-fundamentals/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 27, 2026 02:00 PM UTC


Django Weblog

It's time to redesign djangoproject.com

If you've felt like djangoproject.com could use a refresh, you're not alone. The site has served the community well for a long time, and it's beloved by a lot of people, but it doesn't reflect where Django is today or who we want to reach. We've been working on a redesign behind the scenes, and we want to share where we're headed and how you can get involved.

Why a redesign

The case has been building for a while. The excellent user research report from 20tab documented in detail what current site users struggle with, and the more recent community discussion on homepage redesigns on the forum focuses on the image issue.

In her recent talk Debunking Django Myths, Sarah Boyce, one of our Django Fellows who helps maintain the project, walked through the gap between how Django is perceived and what it actually offers in 2026. Our website is one of the places where the gap is widest, and we need to close it.

Debunking Django Myths - Sarah Boyce @ Python Unplugged on PyTV

It’s harder than it looks on the surface, as it’s essential the site serves as a showcase of the value of Django for newcomers, a central information space for our users, an online and in-person community hub, and a fundraising and sustainability tool for our Django Software Foundation.

How we're approaching this

We're planning the work in three phases.

Discovery and groundwork. This is where we’re at right now. Before anything gets designed, we need clarity on what the site should communicate: Django's value, who we're speaking to, and what success looks like. That means a marketing strategy (at least bigger-picture). Possibly additional user research focused on new users. Definitely site analytics so we know how different aspects of the site are working. And a redesign brief we can share with UX and visual design experts. We also need to be building up capacity in UX, Information Architecture (IA), and marketing, since those areas of expertise are essential for the success of the website but not well represented in our working groups.

Design. From there we'll move into IA, mockups, and low-fidelity prototypes. We expect this visual work will be component-driven, producing a small design system and pattern library that can support a section-by-section rollout rather than a big-bang launch. The homepage is the most visible surface and a natural focus, but it might be easier for our volunteers to first look at more specific sections (docs, donation flows, community) before tackling the more complex multi-purpose areas.

Build. For that, we want to work with our existing volunteer contributors as much as possible, so implementation will be incremental against mockups that reflect the long-term goal. This keeps the site working and evolving while we make progress on the design.

Who's doing the work

We hope to do most of this with existing volunteers: the Website working group, the Accessibility team, and the Social Media working group. We'll work with paid contractors for specific tasks if Django Software Foundation finances allow. A project this size really needs both: the continuity of volunteers who know Django, our community, and our Foundation, and focused professional time for the pieces that need it.

Where you come in

If you have relevant experience in any of the following, we'd genuinely love to hear from you:

Check out the Django forum thread we’re using for ongoing updates, come say hi in DMs, or chime in on the tracking issue for this work. Our Discord server is a good place to reach out too.
And separately - a good redesign will cost real money. We'd like some of this work to be handled by paid contractors where it makes sense, and that depends on what the Foundation can afford. If you're in a position to support the DSF financially, it directly helps us make that possible. Thanks for caring about this! Let's make djangoproject.com as good as the framework and community it represents.

April 27, 2026 01:00 PM UTC


Caktus Consulting Group

Easily Stream LLM Responses with Django-Bolt and PydanticAI

I like how easy it is to create an async streaming endpoint with django-bolt and PydanticAI from scratch. With only a few commands you can set it up.

April 27, 2026 01:00 PM UTC


Real Python

Quiz: Python's __all__: Packages, Modules, and Wildcard Imports

In this quiz, you’ll test your understanding of Python’s __all__: Packages, Modules, and Wildcard Imports.

By working through this quiz, you’ll revisit how wildcard imports work, what role the __all__ variable plays in modules and packages, and how to define a clean public API for your Python code.

You’ll need to know the basics of Python modules and packages and the import system to get the most out of this quiz. Good luck!
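As a quick refresher on the concept being quizzed, __all__ lists the names that a wildcard import should pull into the importing namespace. A minimal illustration, using a throwaway in-memory module:

```python
import sys
import types

# Build a throwaway module in memory to show the effect of __all__ on `import *`.
mod = types.ModuleType("demo_mod")
exec(
    "__all__ = ['public_func']\n"
    "def public_func(): return 'public'\n"
    "def helper(): return 'helper'\n",
    mod.__dict__,
)
sys.modules["demo_mod"] = mod

namespace = {}
exec("from demo_mod import *", namespace)

print("public_func" in namespace)  # True: listed in __all__
print("helper" in namespace)       # False: omitted, even without a leading underscore
```

Without __all__, a wildcard import would instead grab every top-level name that doesn't start with an underscore.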



April 27, 2026 12:00 PM UTC


"Michiel's Blog"

httpxyz one month in

It has been roughly a month since we forked httpx and named our package httpxyz. For the reasons why I refer you to Why I forked httpx. In this post I will explain where we are now, one month ‘into’ having created the fork.

TL;DR: httpxyz now has many bug fixes and significantly better performance than httpx, and we encourage everyone to move to our version!

httpxyz logo

Where we are now

Initial version of fork

Our first version of httpxyz contained just the fixes to get zstd working, the fixes to get the test suite running on Python 3.14, some ‘housekeeping’ changes related to the renaming, and some minor, trivial fixes that had already been merged in httpx but not yet released.

End of March: bugfixes, and performance improvements

Then at the end of March we did another release of httpxyz, containing a compatibility shim that allows you to use httpxyz even with third-party packages that import httpx themselves, as long as you import httpxyz first. We’ve added a nice documentation page for this.
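The post doesn't show the shim's mechanics, but conceptually this kind of compatibility layer can be sketched with sys.modules aliasing (a hypothetical illustration, not httpxyz's actual shim code):

```python
import sys
import types

# Hypothetical sketch: register the fork under the original name in
# sys.modules, so that any later `import httpx` in third-party code
# resolves to the fork instead of the original package.
fork = types.ModuleType("httpxyz")
fork.__version__ = "0.31.0"

sys.modules.setdefault("httpx", fork)  # no-op if httpx was already imported

import httpx  # third-party code doing this now receives the fork

print(httpx is fork)
```

This is why the import order matters: the alias has to be registered before any third-party code imports httpx for the first time.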

This release also included a change that lazily imports the CLI for httpxyz, which is typically not used in your app; in my measurements this makes httpxyz import in half the time needed by httpx. The PR for this had been sitting idle at the httpx repository for over a year, and we've gladly adopted it; thanks to Nate Hardison for providing it!
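Lazy imports like this typically work by deferring the import into the function that needs it; a generic sketch of the pattern (not httpxyz's actual code, with argparse standing in for the CLI-only dependencies):

```python
def main(argv: list[str]) -> str:
    # Deferred import: the cost is paid only when the CLI actually runs,
    # not when the package is merely imported by an application.
    import argparse  # stand-in for heavier CLI-only dependencies

    parser = argparse.ArgumentParser(prog="httpxyz")
    parser.add_argument("url")
    args = parser.parse_args(argv)
    return args.url

print(main(["https://example.org"]))  # https://example.org
```

Applications that only use the library API never trigger the import, which is where the startup-time saving comes from.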

And apart from that we adopted a bunch of other smaller and bigger bug fixes and improvements in this release.

httpcorexyz!

But we realized pretty soon that some of the problems httpx was having were actually caused by the underlying ‘transport’, which is defined in a separate package, httpcore. That package also had no release in over a year, and there were a bunch of user-reported issues, some even with fixes, that were not landing. So we ended up forking httpcore as well, creating httpcorexyz.

Here we fixed a WHOLE bunch of performance related issues:

These are serious issues, and that's not even all of them. Almost all of these were already fixed by ‘the community’ and were living in unmerged pull requests. We code-reviewed them, fixed them up where needed, and adapted them to our code base. We also added a benchmarks section to the documentation; httpxyz is now MUCH faster than httpx in many serious use cases.

Then we released a new version of httpxyz earlier this week: 0.31.0, with this updated version of httpcorexyz and also more bug fixes of its own.

Where I previously stated that having the fork is about future-proofing and that I would not necessarily recommend people switch away from httpx, with this new version 0.31.0 I now feel that you should definitely move to httpxyz. With all the new fixes and performance improvements it now makes ‘business sense’ to switch from httpx to httpxyz; the ‘churn’ is worth it now!

Adoption

We’re happy to see projects moving to our package, even though there aren’t many yet. We encourage everyone to do so, and please tell others and convince them! If you find issues with your switchover, which I do not expect, please just open an issue and I’m sure we can figure it out!

Quotes

I'm a big user of httpx... Thanks for the fork. Here's to hoping it gained some traction. (Michael Kennedy, Founder, Talk Python)

Thank you for forking httpx. httpx is a joy to work with, but clearly a bit dead in the water. Really nice to see it moving forward again. (hhartzer)

If you want to have traction, you should probably move to GitHub instead of using Codeberg (Marcelo Trylesinski aka Kludex / FastAPI)

On Codeberg

We explained last time why we chose Codeberg; we’re not unhappy with our choice. We have 39 ‘stars’ so far on our Codeberg repository, and many of those are from ‘fresh’ accounts. If our being on Codeberg helps even a tiny little bit in making GitHub a bit less dominant, I think that is a good thing, and I hope it can inspire other projects to do the same. And after all, for most users, who would just pip install httpxyz or uv add httpxyz, it does not matter where the source is hosted.

Thanks, and have fun!

April 27, 2026 07:00 AM UTC


Seth Michael Larson

pip v26.1 adds support for relative dependency cooldowns

My work as the Security Developer-in-Residence at the Python Software Foundation is sponsored by Alpha-Omega. Thanks to Alpha-Omega for supporting security in the Python ecosystem.

I published a blog post two months ago about how to hack relative dependency cooldowns into pip v26.0 with crontab. Now with pip v26.1 available, this hack is no longer required! Time to upgrade my pip and delete that cron job...

Now in pip v26.1 you can use uploaded-prior-to in your ~/.config/pip/pip.conf file or --uploaded-prior-to= as a CLI option with relative RFC 3339 duration values. pip supports setting days using “PND” where N is the number of days.

For example, using the following as your ~/.config/pip/pip.conf file will only install packages that are at least 7 days old on the Python Package Index:

[install]
uploaded-prior-to = P7D

Because this setting is in your global pip config, it means that you won't have to remember to set the option when invoking pip install. Using a relative value also means you won't have to repeatedly set new dates to receive new releases of the packages you use.

Using relative dependency cooldowns means that installing directly from a public index such as the Python Package Index (PyPI) will benefit from manual malware reporting, triaging, and removal efforts. The vast majority of published malware and supply-chain attacks are detected and removed within hours of being uploaded to the index. A relative cooldown gives indexes time to respond to malicious software and keep you safe.

Reminder that dependency cooldowns should be paired with a dependency management strategy that prioritizes dependency releases that fix vulnerabilities. You don't want to be waiting for days for a dependency cooldown to clear while your service is vulnerable. Managing, reviewing, upgrading, and deploying vulnerability patches should be a deliberate task, not one that happens "on-accident" due to an upgrade-by-default installation strategy.

Andrew Nesbitt has published a comprehensive review of dependency cooldowns across many different package managers. Thanks to William Woodruff who originally published this approach.



Thanks for keeping RSS alive! ♥

April 27, 2026 12:00 AM UTC

April 26, 2026


Paolo Melchiorre

My DjangoCon Europe 2026

A timeline of my DjangoCon Europe 2026 journey, from Lecce to Bari and then Athens, told through the Mastodon posts I shared along the way.

April 26, 2026 10:00 PM UTC


death and gravity

reader 3.23 released – OPML, hosted reader intro

Hi there!

I'm happy to announce version 3.23 of reader, a Python feed reader library.

What's new? #

Here are the highlights since reader 3.22.

OPML support #

reader can finally import and export feeds as OPML subscription lists!

I initially wanted to use listparser, but I ended up writing my own reader.opml module, mainly to keep dependencies down; it's just ~100 lines of code, including the work-around for an xml.etree bug I found.

Protip

utf-8 is a valid XML encoding name, utf8 is not.
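A small illustration of why the spelling matters when serializing with the standard library (my reading of the protip, not the author's actual work-around): ElementTree copies whatever encoding name you pass straight into the XML declaration, so the Python codec alias "utf8" would end up verbatim in a declaration that stricter XML parsers can reject. Spell it "utf-8":

```python
import io
import xml.etree.ElementTree as ET

# A toy OPML document, serialized with the spec-blessed encoding name.
root = ET.Element("opml", version="2.0")
ET.SubElement(root, "body")

buf = io.BytesIO()
ET.ElementTree(root).write(buf, encoding="utf-8", xml_declaration=True)

declaration = buf.getvalue().split(b"?>")[0] + b"?>"
print(declaration.decode("ascii"))
```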

Here are some web app screenshots:

export feeds
import feeds (select)
import feeds (result)

Hosted reader status update #

As I said last time, I'm working on a hosted version of reader. It's still some ways off from a proper launch, but I have to start writing about it eventually, so it might as well be here.

Why another feed reader web app? #

While reader the library allows you to write your own feed reader, I don't expect most people to do that, nor should they; for reader to be truly useful, it needs to reach all the way to the end user.

But I am making the web app for myself anyway, why not share it with others?

Why not just self-host it? #

Because for most people, "self-host it" is not the answer – it takes knowledge, time, and if you don't already have a server you can use, it can cost a bit too.

If someone were to host reader for me, I'd gladly pay for it; what matters more is that it is possible to self-host, should the need arise; none of that sunsetting bullshit.

But I am self-hosting for myself anyway, so sharing it with others wouldn't be that big of a stretch. And while it is a stretch, it's going to make reader better overall.

OK, so what now? #

This is what is finished so far:

(More on architecture and so on in a dedicated future article.)

Remaining work to an MVP:

And then there's the promotional stuff:

Meanwhile, if this sounds like something you'd like to use, get in touch.


That's it for now. For more details, see the full changelog.

Want to contribute? Check out the docs and the roadmap.

Learned something new today? Share it with others, it really helps!

What is reader? #

reader takes care of the core functionality required by a feed reader, so you can focus on what makes yours different.

reader allows you to:

...all these with:

To find out more, check out the GitHub repo and the docs, or give the tutorial a try.

Why use a feed reader library? #

Have you been unhappy with existing feed readers and wanted to make your own, but:

Are you already working with feedparser, but:

... while still supporting all the feed types feedparser does?

If you answered yes to any of the above, reader can help.

The reader philosophy #

April 26, 2026 06:00 PM UTC

April 24, 2026


Real Python

The Real Python Podcast – Episode #292: Becoming a Better Python Developer Through Learning Rust

How can learning Rust help make you a better Python developer? How do techniques required by a compiled language translate to improving your Python code? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.



April 24, 2026 12:00 PM UTC