Planet Python
Last update: August 02, 2025 10:42 AM UTC
August 01, 2025
Real Python
The Real Python Podcast – Episode #259: Design Patterns That Don't Translate to Python
Do the design patterns learned in other programming languages translate to coding in Python? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
Zero to Mastery
[July 2025] Python Monthly Newsletter 🐍
68th issue of Andrei Neagoie's must-read monthly Python Newsletter: Useless Design Patterns, Django turns 20, 330× faster Python, and much more. Read the full newsletter to get up-to-date with everything you need to know from last month.
HoloViz
Plotting made easy with hvPlot: 0.12 release
meejah.ca
ShWiM: peer-to-peer terminal sharing
SHell WIth Me combines magic-wormhole and tty-share for e2ee, p2p terminal sharing
July 31, 2025
Python Morsels
Nested functions in Python
Functions in Python can be defined within another function.
A function defined within a function
Python's functions can be defined pretty much anywhere.
You can even define a function inside a function:
def greet_me(name="friend"):
    def greet():
        print("Hello", name)
    greet()
When we call this greet_me function, it defines a greet function and then calls that function:
>>> greet_me()
Hello friend
Note that the inner function is allowed to use the name
from the outer function.
A function returned from another function
Instead of calling our inner …
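The full article continues from there. As a rough sketch of the idea (my example, not the article's), the outer function can return the inner function instead of calling it, and the returned function still remembers the name it was defined with:
def make_greeter(name="friend"):
    def greet():
        print("Hello", name)
    return greet  # hand back the inner function instead of calling it

greet_ada = make_greeter("Ada")
greet_ada()  # prints: Hello Ada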
Read the full article: https://www.pythonmorsels.com/nested-functions/
Django Weblog
Djangonaut Space is looking for contributors to be mentors
Hello Django 🌌 Universe!
🛰️ This is Djangonaut Space phoning home about Session 5! We're recruiting technical mentors (Navigators) to join our next 🌟stellar🌟 mission.
👩🚀 We are looking for people who regularly contribute to Django or a Django-related package and want to mentor others. Our next session will be Oct-Nov.
🚀 Come join us and be a cosmic contributor! Express your interest to be a mentor here.
📚 Want to learn more about what it means to be a Navigator?
🤝 Interested people will have to complete a 30 minute meet & greet type interview with organizers.
✋ If you're interested in applying to be a Djangonaut, applications will open and close in September (dates to be determined). The latest information will be posted on our site, djangonaut.space. Please follow our social media accounts or subscribe to our newsletter for announcements.
☄️ We'll see you around the cosmos!
Djangonaut Space session organizers
PyCharm
The Bazel Plugin for IntelliJ IDEA Is Now Generally Available!
After much anticipation, we are finally ready to announce the general availability (GA) of the new Bazel plugin for IntelliJ IDEA, PyCharm, and GoLand – now developed by JetBrains! After months of focused development and valuable feedback from our EAP users, we’re officially launching our revamped Bazel experience.
While we’ve been shipping updates regularly, the leap to our 2025.2 GA release marks a major milestone. Even though our primary focus for this release was on creating the best experience we can for Java, Kotlin, and Scala developers, we also brought support for the Python and Go ecosystems, and we will continue to maintain and improve it in the coming releases.
If you’re migrating from the previous plugin originally released by Google, you’ll notice a more straightforward workflow that aligns with the standard JetBrains IDE experience you expect from other build tool integrations such as Maven and Gradle. Now, let’s dive into what’s new!
Key features in 2025.2

Bazel Query in Action
- Go is a go. We’re officially rolling out support for Go. You can now import your Go targets in Bazel projects into both IntelliJ IDEA (with the Go plugin) and GoLand. This brings the full IDE experience you rely on: code highlighting, completion, navigation, and the ability to run, debug, and get coverage for your tests.
- Built-in Bazel Query tool window: Go beyond sync and build with Bazel queries integrated directly into your IDE via their own dedicated tool window. Craft your queries with syntax completion and a helpful UI for flags to explore your project’s dependency graph without ever leaving the editor.
- Dramatically faster indexing: We’ve optimized indexing to get you to your code faster. You can now use the import_depth and import_ijars settings in your .bazelproject file to prevent the indexing of deep transitive dependencies and index only header jars instead of full jars. What’s more, only the files directly referenced in your .bazelproject view are fully indexed for code intelligence, which can slash indexing times and memory usage in large projects with many auxiliary files.
New plugin, new user experience
Back in December, we publicly announced the EAP (Early Access Program) version of our new plugin and defined what it would take to release it into GA, with an overview of the main differences between the original plugin and the new one.
Here’s a quick recap for those moving from the older plugin: We’ve smoothed out the rough edges to make Bazel feel like a natural part of the IDE.
- Simplified project import: The old import wizard is a thing of the past. Now, simply open a directory containing your MODULE.bazel or WORKSPACE file. For more control, you can open a specific .bazelproject view file. If you manage a large monorepo, you can provide a default template for your team by checking in a template at tools/intellij/.managed.bazelproject.
- Redesigned UI elements: The Bazel tool window is now your central hub for actions like resyncing your project (with a Build and Resync option for generating sources) and keeping track of targets in your working set. We’ve also added a widget listing all targets the currently opened file belongs to. It allows you to run actions on these targets (build / test / jump to BUILD file / copy target label).
- Reworked target mapping for JVM projects: A core improvement is the new internal representation for JVM targets, which mirrors the actual Bazel graph. This fundamental change enables more accurate highlighting, more accurate completions, and more reliable refactoring.
Improvements since 2025.1
Windows compatibility
We understand that development doesn’t just happen on one OS. That’s why we worked on making our plugin compatible with Microsoft Windows, bringing most of the feature set to our Windows-based users.
Enhanced Bazel configuration support
We believe editing your build files should be as easy as editing your source code, which is why we’ve improved the user experience for all Bazel-related configuration files.
Starlark (.bzl, BUILD)

Starlark Quick Documentation
- Quick documentation for Starlark rules: Hover over a Starlark rule or function to see its documentation directly in the editor. You’ll also get documentation as you type, guiding you through available parameters.
- Automatic formatting: If you have buildifier on your PATH, the plugin will now automatically format your Starlark files on save.
Bazel module configuration file (MODULE.bazel)
- Intelligent editing: The MODULE.bazel editor now offers smart completions for arguments and displays documentation as you edit.
Bazel project view file (.bazelproject)

.bazelproject view highlighting and completions
- Guided editing: Get completions for section names and known values. The editor will now highlight completely unsupported sections as errors and sections that are supported in the old plugin originally by Google (but not in the new one) as warnings.
- Manage directories from the Project view file tree: You can now right-click a directory in the project tree to add or remove it from your .bazelproject file, thus loading or unloading that directory in IntelliJ.
Bazelisk configuration file (.bazelversion):
- Stay up to date: The editor will now highlight outdated Bazel versions specified in your .bazelversion file and offer a quick-fix to update to the latest release.
Language ecosystem enhancements
- JVM:
- The underlying project model mapping has been further improved, resulting in better performance during sync and more reliable refactorings for targets where glob patterns match the whole directory.
- Scala:
- The Bazel plugin now respects the scalacopts parameter in your scala_* targets, which unlocks Scala 3 highlighting features with the -Xsource:3 flag. At the same time, we’ve updated the Scala plugin to provide native integration with the Bazel plugin out of the box.
- Python:
- Run from the gutter: py_test and py_binary targets now get the familiar green Run arrow in the editor gutter.
- Improved dependency resolution: Python dependencies are now resolved correctly, enabling code navigation and eliminating false error highlighting.
- Interpreter from MODULE.bazel: The plugin now sets the Python interpreter based on what is defined in MODULE.bazel. This includes support for hermetic toolchains downloaded by rules_python – meaning you don’t need to have Python installed locally on your machine.
- Debugging: You can now attach the debugger to py_test targets.
What happens to the Bazel plugin by Google?
The Bazel for IntelliJ plugin (also known as IJwB) by Google is being deprecated. Google has transferred code ownership and maintenance to JetBrains. Throughout 2025 we will provide only compatibility updates for new IntelliJ versions and critical fixes, and the plugin will be fully deprecated in 2026. All our development effort for IntelliJ IDEA, GoLand, and PyCharm is now focused on the new plugin.
The Bazel for CLion plugin (CLwB) has also been transferred to JetBrains, and will continue to be actively developed. Learn more in the post Enhancing Bazel Support for CLion on the CLion Blog.
Got feedback? We’re listening!
We’re committed to making this the best Bazel experience possible. Please report any issues, ideas, or improvements straight to our issue tracker.
Fixed the problem yourself? We accept PRs on our hirschgarten repository.
You’ll also find us on the Bazel Community Slack, in the #intellij channel.
Happy building!
Daniel Roy Greenfeld
Unpack for keyword arguments
Previously I wrote a TIL on how to better type annotate callables with *args and **kwargs - in essence you ignore the container and worry just about the content of the container. This makes sense, as *args is a tuple and **kwargs keys are strings.
Here's an example of that in action:
>>> def func(*args, **kwargs):
...     print(f'{args=}')
...     print(f'{kwargs=}')
...
>>> func(1, 2, 3, one=1, two=2)
args=(1, 2, 3)
kwargs={'one': 1, 'two': 2}
In fact, if you try to force **kwargs to accept a non-string key, Python stops you with a TypeError:
>>> func(**{1:2})
Traceback (most recent call last):
File "<python-input-9>", line 1, in <module>
func(**{1:2})
~~~~^^^^^^^^^
TypeError: keywords must be strings
This is all great, but what if you want your keyword arguments to consistently follow a particular pattern? That's where Unpack and a TypedDict come in, so that the following passes type checks:
from typing import TypedDict, Unpack

class Cheese(TypedDict):
    name: str
    price: int

def func(**cheese: Unpack[Cheese]) -> None:
    print(cheese)
Let's try it out:
>>> func(name='Paski Sir', price=30)
{'name': 'Paski Sir', 'price': 30}
Works great! Now let's break it by forgetting a keyword argument:
>>> func(name='Paski Sir')
{'name': 'Paski Sir'}
What? How about adding an extra keyword argument and replacing the int with a float:
>>> func(name='Paski Sir', price=30.5, country='Croatia')
{'name': 'Paski Sir', 'price': 30.5, 'country': 'Croatia'}
Still no errors? What gives? The answer is that type annotations are for type checkers and aren't enforced at runtime. See the note at the top of the core Python docs on typing:
Note The Python runtime does not enforce function and variable type annotations. They can be used by third party tools such as type checkers, IDEs, linters, etc.
For those times when we do need runtime evaluation of types, we lean on built-ins like isinstance and issubclass, which are quite separate from type hints and annotations.
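If you do want that kind of checking at runtime, one hypothetical approach (my sketch, not part of the original TIL) is to validate the received keyword arguments against the TypedDict's declared fields yourself:
from typing import TypedDict, Unpack

class Cheese(TypedDict):
    name: str
    price: int

def checked_func(**cheese: Unpack[Cheese]) -> None:
    # Hypothetical runtime guard: compare the received keyword arguments
    # against the fields declared on the TypedDict.
    expected = Cheese.__annotations__
    missing = expected.keys() - cheese.keys()
    extra = cheese.keys() - expected.keys()
    if missing or extra:
        raise TypeError(f"missing keys: {sorted(missing)}, unexpected keys: {sorted(extra)}")
    for key, expected_type in expected.items():
        if not isinstance(cheese[key], expected_type):
            raise TypeError(f"{key!r} should be {expected_type.__name__}")
    print(cheese)

checked_func(name="Paski Sir", price=30)     # fine
# checked_func(name="Paski Sir")             # TypeError: missing keys: ['price'], ...
# checked_func(name="Paski Sir", price=30.5) # TypeError: 'price' should be int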
Thanks to the astute Luke Plant for pointing out Unpack to me and sending me down a quite pleasant rabbit hole.
July 30, 2025
Test and Code
236: Git Tips for Testing - Adam Johnson
In this episode, host Brian Okken and guest Adam Johnson explore essential Git features, highlighted by Adam's updated book, "Boost Your Git DX."
Key topics include:
- "cherry picking" for selective commits
- "git stash" for managing in-progress work
- "git diff", and specifically its `--name-only` flag, which provides a streamlined method for developers to identify which files have changed, which can be used to determine which tests need to be run
- "git bisect" for efficiently pinpointing bugs.
This conversation offers valuable strategies for developers at any skill level to enhance their Git proficiency and optimize their coding workflows.
Links:
- Boost Your Git DX - Adam's book
Help support the show AND learn pytest:
- The Complete pytest course is now a bundle, with each part available separately.
- pytest Primary Power teaches the super powers of pytest that you need to learn to use pytest effectively.
- Using pytest with Projects has lots of "when you need it" sections like debugging failed tests, mocking, testing strategy, and CI
- Then pytest Booster Rockets can help with advanced parametrization and building plugins.
- Whether you need to get started with pytest today, or want to power up your pytest skills, PythonTest has a course for you.
Real Python
Python's asyncio: A Hands-On Walkthrough
Python’s asyncio library enables you to write concurrent code using the async and await keywords. The core building blocks of async I/O in Python are awaitable objects—most often coroutines—that an event loop schedules and executes asynchronously. This programming model lets you efficiently manage multiple I/O-bound tasks within a single thread of execution.
In this tutorial, you’ll learn how Python asyncio works, how to define and run coroutines, and when to use asynchronous programming for better performance in applications that perform I/O-bound tasks.
By the end of this tutorial, you’ll understand that:
- Python’s asyncio provides a framework for writing single-threaded concurrent code using coroutines, event loops, and non-blocking I/O operations.
- For I/O-bound tasks, async I/O can often outperform multithreading—especially when managing a large number of concurrent tasks—because it avoids the overhead of thread management.
- You should use asyncio when your application spends significant time waiting on I/O operations, such as network requests or file access, and you want to run many of these tasks concurrently without creating extra threads or processes.
Through hands-on examples, you’ll gain the practical skills to write efficient Python code using asyncio that scales gracefully with increasing I/O demands.
Get Your Code: Click here to download the free sample code that you’ll use to learn about async I/O in Python.
Take the Quiz: Test your knowledge with our interactive “Python's asyncio: A Hands-On Walkthrough” quiz. You’ll receive a score upon completion to help you track your learning progress:
A First Look at Async I/O
Before exploring asyncio, it’s worth taking a moment to compare async I/O with other concurrency models to see how it fits into Python’s broader, sometimes dizzying, landscape. Here are some essential concepts to start with:
- Parallelism consists of executing multiple operations at the same time.
- Multiprocessing is a means of achieving parallelism that entails spreading tasks over a computer’s central processing unit (CPU) cores. Multiprocessing is well-suited for CPU-bound tasks, such as tightly bound for loops and mathematical computations.
- Concurrency is a slightly broader term than parallelism, suggesting that multiple tasks have the ability to run in an overlapping manner. Concurrency doesn’t necessarily imply parallelism.
- Threading is a concurrent execution model in which multiple threads take turns executing tasks. A single process can contain multiple threads. Python’s relationship with threading is complicated due to the global interpreter lock (GIL), but that’s beyond the scope of this tutorial.
Threading is good for I/O-bound tasks. An I/O-bound job is dominated by a lot of waiting on input/output (I/O) to complete, while a CPU-bound task is characterized by the computer’s cores continually working hard from start to finish.
The Python standard library has offered longstanding support for these models through its multiprocessing, concurrent.futures, and threading packages.
Now it’s time to add a new member to the mix. In recent years, a separate model has been more comprehensively built into CPython: asynchronous I/O, commonly called async I/O. This model is enabled through the standard library’s asyncio package and the async and await keywords.
Note: Async I/O isn’t a new concept. It exists in—or is being built into—other languages such as Go, C#, and Rust.
The asyncio package is billed by the Python documentation as a library to write concurrent code. However, async I/O isn’t threading or multiprocessing. It’s not built on top of either of these.
Async I/O is a single-threaded, single-process technique that uses cooperative multitasking. Async I/O gives a feeling of concurrency despite using a single thread in a single process. Coroutines—or coro for short—are a central feature of async I/O and can be scheduled concurrently, but they’re not inherently concurrent.
To reiterate, async I/O is a model of concurrent programming, but it’s not parallelism. It’s more closely aligned with threading than with multiprocessing, but it’s different from both and is a standalone member of the concurrency ecosystem.
That leaves one more term. What does it mean for something to be asynchronous? This isn’t a rigorous definition, but for the purposes of this tutorial, you can think of two key properties:
- Asynchronous routines can pause their execution while waiting for a result and allow other routines to run in the meantime.
- Asynchronous code facilitates the concurrent execution of tasks by coordinating asynchronous routines.
Here’s a diagram that puts it all together. The white terms represent concepts, and the green terms represent the ways they’re implemented:

For a thorough exploration of threading versus multiprocessing versus async I/O, pause here and check out the Speed Up Your Python Program With Concurrency tutorial. For now, you’ll focus on async I/O.
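Before going further, here is a minimal sketch (mine, not from the tutorial; the fetch name and the delays are invented for illustration) of what this model looks like in code: two coroutines that each await a simulated I/O delay share one event loop, so the total runtime is roughly the longest delay rather than the sum.
import asyncio

async def fetch(label: str, delay: float) -> str:
    # Pretend this is a network call by awaiting a non-blocking sleep.
    await asyncio.sleep(delay)
    return f"{label} finished after {delay}s"

async def main() -> None:
    # Both coroutines are scheduled on the same event loop and overlap
    # their waiting time, so this takes about 2 seconds rather than 3.
    results = await asyncio.gather(fetch("first", 1), fetch("second", 2))
    for line in results:
        print(line)

asyncio.run(main())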
Async I/O Explained
Async I/O may seem counterintuitive and paradoxical at first. How does something that facilitates concurrent code use a single thread in a single CPU core? Miguel Grinberg’s PyCon talk explains everything quite beautifully:
Chess master Judit Polgár hosts a chess exhibition in which she plays multiple amateur players. She has two ways of conducting the exhibition: synchronously and asynchronously.
Assumptions:
- 24 opponents
- Judit makes each chess move in 5 seconds
- Opponents each take 55 seconds to make a move
- Games average 30 pair-moves (60 moves total)
Synchronous version: Judit plays one game at a time, never two at the same time, until the game is complete. Each game takes (55 + 5) * 30 == 1800 seconds, or 30 minutes. The entire exhibition takes 24 * 30 == 720 minutes, or 12 hours.
Asynchronous version: Judit moves from table to table, making one move at each table. She leaves the table and lets the opponent make their next move during the wait time. One move on all 24 games takes Judit 24 * 5 == 120 seconds, or 2 minutes. The entire exhibition is now cut down to 120 * 30 == 3600 seconds, or just 1 hour. (Source)
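As a rough, scaled-down sketch of the asynchronous version (my illustration, not Grinberg's code, with each "second" shrunk to milliseconds so it finishes quickly), each game is a coroutine and the opponent's thinking time is awaited:
import asyncio
import time

JUDIT_MOVE = 0.005     # 5 "seconds" per Judit move, scaled down 1000x
OPPONENT_MOVE = 0.055  # 55 "seconds" per opponent move, scaled down 1000x
PAIR_MOVES = 30
GAMES = 24

async def play_game() -> None:
    for _ in range(PAIR_MOVES):
        # Judit's thinking briefly blocks the loop: she can only study one board at a time.
        time.sleep(JUDIT_MOVE)
        # The opponent's thinking time is awaited, so Judit is free to visit other boards.
        await asyncio.sleep(OPPONENT_MOVE)

async def exhibition() -> None:
    start = time.perf_counter()
    await asyncio.gather(*(play_game() for _ in range(GAMES)))
    print(f"Exhibition took about {time.perf_counter() - start:.1f} scaled seconds")

asyncio.run(exhibition())
With the 1000x scaling, the printed total lands near 3.6 scaled seconds, which corresponds to the one-hour asynchronous exhibition in the analogy rather than the twelve-hour synchronous one.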
Read the full article at https://realpython.com/async-io-python/ »
Python Insider
Python 3.14 release candidate 1 is go!
It’s the first 3.14 release candidate!
https://www.python.org/downloads/release/python-3140rc1/
This is the first release candidate of Python 3.14
This release, 3.14.0rc1, is the penultimate release preview. Entering the release candidate phase, only reviewed code changes which are clear bug fixes are allowed between this release candidate and the final release. The second candidate (and the last planned release preview) is scheduled for Tuesday, 2025-08-26, while the official release of 3.14.0 is scheduled for Tuesday, 2025-10-07.
There will be no ABI changes from this point forward in the 3.14 series, and the goal is that there will be as few code changes as possible.
Call to action
We strongly encourage maintainers of third-party Python projects to prepare their projects for 3.14 during this phase, and where necessary publish Python 3.14 wheels on PyPI to be ready for the final release of 3.14.0, and to help other projects do their own testing. Any binary wheels built against Python 3.14.0rc1 will work with future versions of Python 3.14. As always, report any issues to the Python bug tracker.
Please keep in mind that this is a preview release and while it’s as close to the final release as we can get it, its use is not recommended for production environments.
Core developers: time to work on documentation now
- Are all your changes properly documented?
- Are they mentioned in What’s New?
- Did you notice other changes you know of to have insufficient documentation?
Major new features of the 3.14 series, compared to 3.13
Some of the major new features and changes in Python 3.14 are:
New features
- PEP 779: Free-threaded Python is officially supported
- PEP 649: The evaluation of annotations is now deferred, improving the semantics of using annotations.
- PEP 750: Template string literals (t-strings) for custom string processing, using the familiar syntax of f-strings.
- PEP 734: Multiple interpreters in the stdlib.
- PEP 784: A new module compression.zstd providing support for the Zstandard compression algorithm.
- PEP 758: except and except* expressions may now omit the brackets.
- Syntax highlighting in PyREPL, and support for color in unittest, argparse, json and calendar CLIs.
- PEP 768: A zero-overhead external debugger interface for CPython.
- UUID versions 6-8 are now supported by the uuid module, and generation of versions 3-5 is up to 40% faster.
- PEP 765: Disallow return/break/continue that exit a finally block.
- PEP 741: An improved C API for configuring Python.
- A new type of interpreter. For certain newer compilers, this interpreter provides significantly better performance. Opt-in for now, requires building from source.
- Improved error messages.
- Builtin implementation of HMAC with formally verified code from the HACL* project.
- A new command-line interface to inspect running Python processes using asynchronous tasks.
- The pdb module now supports remote attaching to a running Python process.
(Hey, fellow core developer, if a feature you find important is missing from this list, let Hugo know.)
For more details on the changes to Python 3.14, see What’s new in Python 3.14. The next pre-release of Python 3.14 will be the final release candidate, 3.14.0rc2, scheduled for 2025-08-26.
Build changes
- PEP 761: Python 3.14 and onwards no longer provides PGP signatures for release artifacts. Instead, Sigstore is recommended for verifiers.
- Official macOS and Windows release binaries include an experimental JIT compiler.
Incompatible changes, removals and new deprecations
- Incompatible changes
- Python removals and deprecations
- C API removals and deprecations
- Overview of all pending deprecations
Python install manager
The installer we offer for Windows is being replaced by our new install manager, which can be installed from the Windows Store or from its download page. See our documentation for more information. The JSON file available for download below contains the list of all the installable packages available as part of this release, including file URLs and hashes, but is not required to install the latest release. The traditional installer will remain available throughout the 3.14 and 3.15 releases.
More resources
- Online documentation
- PEP 745, 3.14 Release Schedule
- Report bugs at github.com/python/cpython/issues
- Help fund Python and its community
And now for something completely different
Today, 22nd July, is Pi Approximation Day, because 22/7 is a common approximation of π and closer to π than 3.14.
22/7 is a Diophantine approximation, named after Diophantus of Alexandria (3rd century CE), which is a way of estimating a real number as a ratio of two integers. 22/7 has been known since antiquity; Archimedes (3rd century BCE) wrote the first known proof that 22/7 overestimates π by comparing 96-sided polygons to the circle it circumscribes.
Another approximation is 355/113. In Chinese mathematics, 22/7 and 355/113 are respectively known as Yuelü (约率; yuēlǜ; “approximate ratio”) and Milü (密率; mìlǜ; “close ratio”).
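In that spirit, a quick back-of-the-envelope check in Python (not part of the announcement) shows how much closer the "close ratio" gets:
import math

for name, approx in [("22/7", 22 / 7), ("355/113", 355 / 113)]:
    error = abs(approx - math.pi)
    print(f"{name} = {approx:.10f}, off from pi by about {error:.1e}")

# 22/7 overshoots pi by roughly 1.3e-03, while 355/113 is within about 2.7e-07.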
Happy Pi Approximation Day!
Enjoy the new release
Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organisation contributions to the Python Software Foundation.
Regards from a Helsinki heatwave after an excellent EuroPython,
Your release team,
Hugo van Kemenade
Ned Deily
Steve Dower
Łukasz Langa
Mike Driscoll
Creating a Simple XML Editor in Your Terminal with Python and Textual
Several years ago, I created an XML editor with the wxPython GUI toolkit called Boomslang. I recently thought it would be fun to port that code to Textual so I could have an XML viewer and editor in my terminal as well.
In this article, you will learn how that experiment went and see the results. Here is a quick outline of what you will cover:
- Get the packages you will need
- Create the main UI
- Creating the edit XML screen
- The add node screen
- Adding an XML preview screen
- Creating file browser and warning screens
- Creating the file save screen
Let’s get started!
Getting the Dependencies
You will need Textual to be able to run the application detailed in this tutorial. You will also need lxml, which is a super fast XML parsing package. You can install Textual using pip or uv. You can probably use uv with lxml as well, but pip definitely works.
Here’s an example using pip to install both packages:
python -m pip install textual lxml
Once pip has finished installing Textual and the lxml package and all its dependencies, you will be ready to continue!
Creating the Main UI
The first step in creating the user interface is figuring out what it should look like. Here is the original Boomslang user interface that was created using wxPython:
You want to create something similar to this UI, but in your terminal. Open up your favorite Python IDE and create a new file called boomslang.py and then enter the following code into it:
from pathlib import Path

from .edit_xml_screen import EditXMLScreen
from .file_browser_screen import FileBrowser
from textual import on
from textual.app import App, ComposeResult
from textual.containers import Horizontal, Vertical
from textual.widgets import Button, Header, Footer, OptionList


class BoomslangXML(App):
    BINDINGS = [
        ("ctrl+o", "open", "Open XML File"),
    ]
    CSS_PATH = "main.tcss"

    def __init__(self) -> None:
        super().__init__()
        self.title = "Boomslang XML"
        self.recent_files_path = Path(__file__).absolute().parent / "recent_files.txt"
        self.app_selected_file: Path | None = None
        self.current_recent_file: Path | None = None

    def compose(self) -> ComposeResult:
        self.recent_files = OptionList("", id="recent_files")
        self.recent_files.border_title = "Recent Files"
        yield Header()
        yield self.recent_files
        yield Vertical(
            Horizontal(
                Button("Open XML File", id="open_xml_file", variant="primary"),
                Button("Open Recent", id="open_recent_file", variant="warning"),
                id="button_row",
            )
        )
        yield Footer()

    def on_mount(self) -> None:
        self.update_recent_files_ui()

    def action_open(self) -> None:
        self.push_screen(FileBrowser())

    def on_file_browser_selected(self, message: FileBrowser.Selected) -> None:
        path = message.path
        if path.suffix.lower() == ".xml":
            self.update_recent_files_on_disk(path)
            self.push_screen(EditXMLScreen(path))
        else:
            self.notify("Please choose an XML File!", severity="error", title="Error")

    @on(Button.Pressed, "#open_xml_file")
    def on_open_xml_file(self) -> None:
        self.push_screen(FileBrowser())

    @on(Button.Pressed, "#open_recent_file")
    def on_open_recent_file(self) -> None:
        if self.current_recent_file is not None and self.current_recent_file.exists():
            self.push_screen(EditXMLScreen(self.current_recent_file))

    @on(OptionList.OptionSelected, "#recent_files")
    def on_recent_files_selected(self, event: OptionList.OptionSelected) -> None:
        self.current_recent_file = Path(event.option.prompt)

    def update_recent_files_ui(self) -> None:
        if self.recent_files_path.exists():
            self.recent_files.clear_options()
            files = self.recent_files_path.read_text()
            for file in files.split("\n"):
                self.recent_files.add_option(file.strip())

    def update_recent_files_on_disk(self, path: Path) -> None:
        if path.exists() and self.recent_files_path.exists():
            recent_files = self.recent_files_path.read_text()
            if str(path) in recent_files:
                return
            with open(self.recent_files_path, mode="a") as f:
                f.write(str(path) + "\n")
            self.update_recent_files_ui()
        elif not self.recent_files_path.exists():
            with open(self.recent_files_path, mode="a") as f:
                f.write(str(path) + "\n")


def main() -> None:
    app = BoomslangXML()
    app.run()


if __name__ == "__main__":
    main()
That’s a good chunk of code, but it’s still less than a hundred lines. You will go over it in smaller chunks though. You can start with this first chunk:
from pathlib import Path

from .edit_xml_screen import EditXMLScreen
from .file_browser_screen import FileBrowser
from textual import on
from textual.app import App, ComposeResult
from textual.containers import Horizontal, Vertical
from textual.widgets import Button, Header, Footer, OptionList


class BoomslangXML(App):
    BINDINGS = [
        ("ctrl+o", "open", "Open XML File"),
    ]
    CSS_PATH = "main.tcss"

    def __init__(self) -> None:
        super().__init__()
        self.title = "Boomslang XML"
        self.recent_files_path = Path(__file__).absolute().parent / "recent_files.txt"
        self.app_selected_file: Path | None = None
        self.current_recent_file: Path | None = None
You need a few imports to make your code work. The first import comes from Python itself and gives your code the ability to work with file paths. The next two are for a couple of small custom files you will create later on. The rest of the imports are from Textual and provide everything you need to make a nice little Textual application.
Next, you create the BoomslangXML class, where you set up a keyboard binding and set which CSS file you will be using for styling your application.
The __init__() method sets the following:
- The title of the application
- The recent files path, which contains all the files you have recently opened
- The currently selected file or None
- The current recent file (i.e. the one you have open at the moment) or None
Now you are ready to create the main UI:
    def compose(self) -> ComposeResult:
        self.recent_files = OptionList("", id="recent_files")
        self.recent_files.border_title = "Recent Files"
        yield Header()
        yield self.recent_files
        yield Vertical(
            Horizontal(
                Button("Open XML File", id="open_xml_file", variant="primary"),
                Button("Open Recent", id="open_recent_file", variant="warning"),
                id="button_row",
            )
        )
        yield Footer()
To create your user interface, you need a small number of widgets:
- A header to identify the name of the application
- An OptionList which contains the recently opened files, if any, that the user can reload
- A button to load a new XML file
- A button to load from the selected recent file
- A footer to show the application’s keyboard shortcuts
Next, you will write a few event handlers:
    def on_mount(self) -> None:
        self.update_recent_files_ui()

    def action_open(self) -> None:
        self.push_screen(FileBrowser())

    def on_file_browser_selected(self, message: FileBrowser.Selected) -> None:
        path = message.path
        if path.suffix.lower() == ".xml":
            self.update_recent_files_on_disk(path)
            self.push_screen(EditXMLScreen(path))
        else:
            self.notify("Please choose an XML File!", severity="error", title="Error")
The code above contains the logic for three event handlers:
- on_mount() – After the application loads, it will update the OptionList by reading the text file that contains paths to the recent files.
- action_open() – A keyboard shortcut action that gets called when the user presses CTRL+O. It will then show a file browser to the user so they can pick an XML file to load.
- on_file_browser_selected() – Called when the user picks an XML file from the file browser and closes the file browser. If the file is an XML file, you will reload the screen to allow XML editing. Otherwise, you will notify the user to choose an XML file.
The next chunk of code is for three more event handlers:
@on(Button.Pressed, "#open_xml_file") def on_open_xml_file(self) -> None: self.push_screen(FileBrowser()) @on(Button.Pressed, "#open_recent_file") def on_open_recent_file(self) -> None: if self.current_recent_file is not None and self.current_recent_file.exists(): self.push_screen(EditXMLScreen(self.current_recent_file)) @on(OptionList.OptionSelected, "#recent_files") def on_recent_files_selected(self, event: OptionList.OptionSelected) -> None: self.current_recent_file = Path(event.option.prompt)
These event handlers use Textual’s handy @on decorator, which allows you to bind the event to a specific widget or widgets.
- on_open_xml_file() – If the user presses the “Open XML File” button, this method is called and it will show the file browser.
- on_open_recent_file() – If the user presses the “Open Recent” button, this method gets called and will load the selected recent file.
- on_recent_files_selected() – When the user selects a recent file in the OptionList widget, this method gets called and sets the current_recent_file variable.
You only have two more methods to go over. The first is for updating the recent files UI:
    def update_recent_files_ui(self) -> None:
        if self.recent_files_path.exists():
            self.recent_files.clear_options()
            files = self.recent_files_path.read_text()
            for file in files.split("\n"):
                self.recent_files.add_option(file.strip())
Remember, this method gets called by on_mount(), and it will update the OptionList if the file exists. The first thing this code will do is clear the OptionList in preparation for updating it. Then you will read the text from the file and loop over each path in that file.
As you loop over the paths, you add them to the OptionList. That’s it! You now have a recent files list that the user can choose from.
The last method to write is for updating the recent files text file:
    def update_recent_files_on_disk(self, path: Path) -> None:
        if path.exists() and self.recent_files_path.exists():
            recent_files = self.recent_files_path.read_text()
            if str(path) in recent_files:
                return
            with open(self.recent_files_path, mode="a") as f:
                f.write(str(path) + "\n")
            self.update_recent_files_ui()
        elif not self.recent_files_path.exists():
            with open(self.recent_files_path, mode="a") as f:
                f.write(str(path) + "\n")
When the user opens a new XML file, you want to add that file to the recent file list on disk so that the next time the user opens your application, you can show the user the recent files. This is a nice way to make loading previous files much easier.
The code above will verify that the file still exists and that your recent files file also exists. Assuming that they do, you will check to see if the current XML file is already in the recent files file. If it is, you don’t want to add it again, so you return.
Otherwise, you open the recent files file in append mode, add the new file to disk and update the UI.
If the recent files file does not exist, you create it here and add the new path.
Here are the last few lines of code to add:
def main() -> None:
    app = BoomslangXML()
    app.run()


if __name__ == "__main__":
    main()
You create a main() function to create the Textual application object and run it. You do this primarily to make the application runnable with uv, Python’s fastest package installer and resolver.
Now you’re ready to move on and add some CSS styling to your UI.
Your XML editor doesn’t require extensive styling. In fact, there is nothing wrong with being minimalistic.
Open up your favorite IDE or text editor and create a new file named main.tcss and then add the following code:
BoomslangXML {
    #button_row {
        align: center middle;
    }

    Horizontal {
        height: auto;
    }

    OptionList {
        border: solid green;
    }

    Button {
        margin: 1;
    }
}
Here you center the button row on your screen. You also set the Horizontal container’s height to auto, which tells Textual to make the container fit its contents. You also add a border to your OptionList and a margin to your buttons.
The XML editor screen is fairly complex, so that’s what you will learn about next.
Creating the Edit XML Screen
The XML editor screen is more complex than the main screen of your application and contains almost twice as many lines of code. But that’s to be expected when you realize that most of your logic will reside here.
As before, you will start out by writing the full code and then going over it piece-by-piece. Open up your Python IDE and create a new file named edit_xml_screen.py and then enter the following code:
import lxml.etree as ET
import tempfile
from pathlib import Path

from .add_node_screen import AddNodeScreen
from .preview_xml_screen import PreviewXMLScreen
from textual import on
from textual.app import ComposeResult
from textual.containers import Horizontal, Vertical, VerticalScroll
from textual.screen import ModalScreen
from textual.widgets import Footer, Header, Input, Tree
from textual.widgets._tree import TreeNode


class DataInput(Input):
    """
    Create a variant of the Input widget that stores data
    """

    def __init__(self, xml_obj: ET.Element, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)
        self.xml_obj = xml_obj


class EditXMLScreen(ModalScreen):
    BINDINGS = [
        ("ctrl+s", "save", "Save"),
        ("ctrl+a", "add_node", "Add Node"),
        ("p", "preview", "Preview"),
        ("escape", "esc", "Exit dialog"),
    ]
    CSS_PATH = "edit_xml_screens.tcss"

    def __init__(self, xml_path: Path, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.xml_tree = ET.parse(xml_path)
        self.expanded = {}
        self.selected_tree_node: None | TreeNode = None

    def compose(self) -> ComposeResult:
        xml_root = self.xml_tree.getroot()
        self.expanded[id(xml_root)] = ""
        yield Header()
        yield Horizontal(
            Vertical(Tree("No Data Loaded", id="xml_tree"), id="left_pane"),
            VerticalScroll(id="right_pane"),
            id="main_ui_container",
        )
        yield Footer()

    def on_mount(self) -> None:
        self.load_tree()

    @on(Tree.NodeExpanded)
    def on_tree_node_expanded(self, event: Tree.NodeExpanded) -> None:
        """
        When a tree node is expanded, parse the newly shown leaves and
        make them expandable, if necessary.
        """
        xml_obj = event.node.data
        if id(xml_obj) not in self.expanded and xml_obj is not None:
            for top_level_item in xml_obj.getchildren():
                child = event.node.add_leaf(top_level_item.tag, data=top_level_item)
                if top_level_item.getchildren():
                    child.allow_expand = True
                else:
                    child.allow_expand = False
            self.expanded[id(xml_obj)] = ""

    @on(Tree.NodeSelected)
    def on_tree_node_selected(self, event: Tree.NodeSelected) -> None:
        """
        When a node in the tree control is selected, update the right pane
        to show the data in the XML, if any
        """
        xml_obj = event.node.data
        right_pane = self.query_one("#right_pane", VerticalScroll)
        right_pane.remove_children()
        self.selected_tree_node = event.node
        if xml_obj is not None:
            for child in xml_obj.getchildren():
                if child.getchildren():
                    continue
                text = child.text if child.text else ""
                data_input = DataInput(child, text)
                data_input.border_title = child.tag
                container = Horizontal(data_input)
                right_pane.mount(container)
            else:
                # XML object has no children, so just show the tag and text
                if getattr(xml_obj, "tag") and getattr(xml_obj, "text"):
                    if xml_obj.getchildren() == []:
                        data_input = DataInput(xml_obj, xml_obj.text)
                        data_input.border_title = xml_obj.tag
                        container = Horizontal(data_input)
                        right_pane.mount(container)

    @on(Input.Changed)
    def on_input_changed(self, event: Input.Changed) -> None:
        """
        When an XML element changes, update the XML object
        """
        xml_obj = event.input.xml_obj
        # self.notify(f"{xml_obj.text} is changed to new value: {event.input.value}")
        xml_obj.text = event.input.value

    def action_esc(self) -> None:
        """
        Close the dialog when the user presses ESC
        """
        self.dismiss()

    def action_add_node(self) -> None:
        """
        Add another node to the XML tree and the UI
        """
        # Show dialog and use callback to update XML and UI
        def add_node(result: tuple[str, str] | None) -> None:
            if result is not None:
                node_name, node_value = result
                self.update_xml_tree(node_name, node_value)

        self.app.push_screen(AddNodeScreen(), add_node)

    def action_preview(self) -> None:
        temp_directory = Path(tempfile.gettempdir())
        xml_path = temp_directory / "temp.xml"
        self.xml_tree.write(xml_path)
        self.app.push_screen(PreviewXMLScreen(xml_path))

    def action_save(self) -> None:
        self.xml_tree.write(r"C:\Temp\books.xml")
        self.notify("Saved!")

    def load_tree(self) -> None:
        """
        Load the XML tree UI with data parsed from the XML file
        """
        tree = self.query_one("#xml_tree", Tree)
        xml_root = self.xml_tree.getroot()
        self.expanded[id(xml_root)] = ""
        tree.reset(xml_root.tag)
        tree.root.expand()

        # If the root has children, add them
        if xml_root.getchildren():
            for top_level_item in xml_root.getchildren():
                child = tree.root.add(top_level_item.tag, data=top_level_item)
                if top_level_item.getchildren():
                    child.allow_expand = True
                else:
                    child.allow_expand = False

    def update_tree_nodes(self, node_name: str, node: ET.SubElement) -> None:
        """
        When adding a new node, update the UI Tree element to reflect
        the new element added
        """
        child = self.selected_tree_node.add(node_name, data=node)
        child.allow_expand = False

    def update_xml_tree(self, node_name: str, node_value: str) -> None:
        """
        When adding a new node, update the XML object with the new element
        """
        element = ET.SubElement(self.selected_tree_node.data, node_name)
        element.text = node_value
        self.update_tree_nodes(node_name, element)
Phew! That seems like a lot of code if you are new to coding, but a hundred and seventy lines of code or so really isn’t very much. Most applications take thousands of lines of code.
Just the same, breaking the code down into smaller chunks will aid in your understanding of what’s going on.
With that in mind, here’s the first chunk:
import lxml.etree as ET
import tempfile
from pathlib import Path

from .add_node_screen import AddNodeScreen
from .preview_xml_screen import PreviewXMLScreen
from textual import on
from textual.app import ComposeResult
from textual.containers import Horizontal, Vertical, VerticalScroll
from textual.screen import ModalScreen
from textual.widgets import Footer, Header, Input, Tree
from textual.widgets._tree import TreeNode
You have more imports here than you did in the main UI file. Here's a brief overview:
- You import lxml to make parsing and editing XML easy.
- You use Python's tempfile module to create a temporary file for viewing the XML.
- The pathlib module is used the same way as before.
- You have a couple of custom Textual screens that you will need to code up and import.
- The last six lines are all Textual imports for making this editor screen work.
The next step is to subclass the Input widget in such a way that it will store XML element data:
class DataInput(Input):
    """
    Create a variant of the Input widget that stores data
    """

    def __init__(self, xml_obj: ET.Element, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)
        self.xml_obj = xml_obj
Here you pass in an XML object and store it off in an instance variable. You will need this to make editing and displaying the XML easy.
The second class you create is the EditXMLScreen:
class EditXMLScreen(ModalScreen):
    BINDINGS = [
        ("ctrl+s", "save", "Save"),
        ("ctrl+a", "add_node", "Add Node"),
        ("p", "preview", "Preview"),
        ("escape", "esc", "Exit dialog"),
    ]
    CSS_PATH = "edit_xml_screens.tcss"

    def __init__(self, xml_path: Path, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.xml_tree = ET.parse(xml_path)
        self.expanded = {}
        self.selected_tree_node: None | TreeNode = None
The EditXMLScreen is a new screen that holds your XML editor. Here you add four keyboard bindings, a CSS file path, and the __init__() method.
Your initialization method is used to create an lxml ElementTree instance. You also create an empty dictionary of expanded tree widgets and the selected tree node instance variable, which is set to None.
Now you’re ready to create your user interface:
    def compose(self) -> ComposeResult:
        xml_root = self.xml_tree.getroot()
        self.expanded[id(xml_root)] = ""
        yield Header()
        yield Horizontal(
            Vertical(Tree("No Data Loaded", id="xml_tree"), id="left_pane"),
            VerticalScroll(id="right_pane"),
            id="main_ui_container",
        )
        yield Footer()

    def on_mount(self) -> None:
        self.load_tree()
Fortunately, the user interface needed for editing XML is fairly straightforward:
- You create a new header to add a new title to the screen.
- You use a horizontally-oriented container to hold your widgets.
- Inside of the container, you have a tree control that holds the DOM of the XML on the left.
- On the right, you have a vertical scrolling container.
- Finally, you have a footer.
You also set up the first item in your “expanded” dictionary, which is the root node from the XML.
Now you can write your first event handler for this class:
    @on(Tree.NodeExpanded)
    def on_tree_node_expanded(self, event: Tree.NodeExpanded) -> None:
        """
        When a tree node is expanded, parse the newly shown leaves and
        make them expandable, if necessary.
        """
        xml_obj = event.node.data
        if id(xml_obj) not in self.expanded and xml_obj is not None:
            for top_level_item in xml_obj.getchildren():
                child = event.node.add_leaf(top_level_item.tag, data=top_level_item)
                if top_level_item.getchildren():
                    child.allow_expand = True
                else:
                    child.allow_expand = False
            self.expanded[id(xml_obj)] = ""
When the user expands a node in the tree control, the on_tree_node_expanded() method will get called. You will extract the node's data, if it has any. Assuming that there is data, you will then loop over any child nodes that are present.
For each child node, you will add a new leaf to the tree control. You check to see if the child has children too and set the allow_expand flag accordingly. At the end of the code, you then add the XML object to your dictionary.
The next method you need to write is an event handler for when a tree node is selected:
    @on(Tree.NodeSelected)
    def on_tree_node_selected(self, event: Tree.NodeSelected) -> None:
        """
        When a node in the tree control is selected, update the right pane
        to show the data in the XML, if any
        """
        xml_obj = event.node.data
        right_pane = self.query_one("#right_pane", VerticalScroll)
        right_pane.remove_children()
        self.selected_tree_node = event.node
        if xml_obj is not None:
            for child in xml_obj.getchildren():
                if child.getchildren():
                    continue
                text = child.text if child.text else ""
                data_input = DataInput(child, text)
                data_input.border_title = child.tag
                container = Horizontal(data_input)
                right_pane.mount(container)
            else:
                # XML object has no children, so just show the tag and text
                if getattr(xml_obj, "tag") and getattr(xml_obj, "text"):
                    if xml_obj.getchildren() == []:
                        data_input = DataInput(xml_obj, xml_obj.text)
                        data_input.border_title = xml_obj.tag
                        container = Horizontal(data_input)
                        right_pane.mount(container)
When the user selects a node in your tree, you need to update the right-hand pane with the node's contents. To do that, you once again extract the node's data, if it has any. If it does have data, you loop over its children and update the right-hand pane's UI. This entails grabbing the XML node's tags and values and adding a series of horizontal widgets to the scrollable container that makes up the right pane of your UI.
If the XML object has no children, you can simply show the top level node’s tag and value, if it has any.
The next two methods you will write are as follows:
    @on(Input.Changed)
    def on_input_changed(self, event: Input.Changed) -> None:
        """
        When an XML element changes, update the XML object
        """
        xml_obj = event.input.xml_obj
        xml_obj.text = event.input.value

    def on_save_file_dialog_dismissed(self, xml_path: str) -> None:
        """
        Save the file to the selected location
        """
        if not Path(xml_path).exists():
            self.xml_tree.write(xml_path)
            self.notify(f"Saved to: {xml_path}")
The on_input_changed() method deals with Input widgets, which are your special DataInput widgets. Whenever they are edited, you want to grab the XML object from the event and update the XML tag's value accordingly. That way, the XML will always be up-to-date if the user decides they want to save it.
You can also add an auto-save feature which would also use the latest XML object when it is saving, if you wanted to.
The second method here, on_save_file_dialog_dismissed(), is called when the user dismisses the save dialog that is opened when the user presses CTRL+S. Here you check to see if the file already exists. If not, you create it. You could spend some time adding another dialog here that warns when a file exists and gives the user the option to overwrite it or not.
Anyway, your next step is to write the keyboard shortcut action methods. There are four keyboard shortcuts that you need to create actions for.
Here they are:
    def action_esc(self) -> None:
        """
        Close the dialog when the user presses ESC
        """
        self.dismiss()

    def action_add_node(self) -> None:
        """
        Add another node to the XML tree and the UI
        """
        # Show dialog and use callback to update XML and UI
        def add_node(result: tuple[str, str] | None) -> None:
            if result is not None:
                node_name, node_value = result
                self.update_xml_tree(node_name, node_value)

        self.app.push_screen(AddNodeScreen(), add_node)

    def action_preview(self) -> None:
        temp_directory = Path(tempfile.gettempdir())
        xml_path = temp_directory / "temp.xml"
        self.xml_tree.write(xml_path)
        self.app.push_screen(PreviewXMLScreen(xml_path))

    def action_save(self) -> None:
        self.app.push_screen(SaveFileDialog(), self.on_save_file_dialog_dismissed)
The four keyboard shortcut event handlers are:
- action_esc() – Called when the user presses the “Esc” key. Exits the dialog.
- action_add_node() – Called when the user presses CTRL+A. Opens the AddNodeScreen. If the user adds new data, the add_node() callback is called, which will then call update_xml_tree() to update the UI with the new information.
- action_preview() – Called when the user presses the “p” key. Creates a temporary file with the current contents of the XML object. Then opens a new screen that allows the user to view the XML as a kind of preview.
- action_save() – Called when the user presses CTRL+S.
The next method you will need to write is called load_tree():
    def load_tree(self) -> None:
        """
        Load the XML tree UI with data parsed from the XML file
        """
        tree = self.query_one("#xml_tree", Tree)
        xml_root = self.xml_tree.getroot()
        self.expanded[id(xml_root)] = ""
        tree.reset(xml_root.tag)
        tree.root.expand()

        # If the root has children, add them
        if xml_root.getchildren():
            for top_level_item in xml_root.getchildren():
                child = tree.root.add(top_level_item.tag, data=top_level_item)
                if top_level_item.getchildren():
                    child.allow_expand = True
                else:
                    child.allow_expand = False
The method above will grab the Tree widget and the XML's root element and then load the tree widget with the data. You check if the XML root object has any children (which most do) and then loop over the children, adding them to the tree widget.
You only have two more methods to write. Here they are:
    def update_tree_nodes(self, node_name: str, node: ET.SubElement) -> None:
        """
        When adding a new node, update the UI Tree element to reflect
        the new element added
        """
        child = self.selected_tree_node.add(node_name, data=node)
        child.allow_expand = False

    def update_xml_tree(self, node_name: str, node_value: str) -> None:
        """
        When adding a new node, update the XML object with the new element
        """
        element = ET.SubElement(self.selected_tree_node.data, node_name)
        element.text = node_value
        self.update_tree_nodes(node_name, element)
These two methods are short and sweet:
- update_tree_nodes() – When the user adds a new node, you call this method, which will update the node in the tree widget as needed.
- update_xml_tree() – When a node is added, update the XML object and then call the UI updater method above.
The last piece of code you need to write is the CSS for this screen. Open up a text editor and create a new file called edit_xml_screens.tcss and then add the following code:
EditXMLScreen {
    Input {
        border: solid gold;
        margin: 1;
        height: auto;
    }

    Button {
        align: center middle;
    }

    Horizontal {
        margin: 1;
        height: auto;
    }
}
This CSS is similar to the other CSS file. In this case, you set the Input widget's height to auto. You also set the margin and border for that widget. For the buttons, you tell Textual to center all of them. Finally, you also set the margin and height of the horizontal container, just like you did in the other CSS file.
Now you are ready to learn about the add node screen!
The Add Node Screen
When the user wants to add a new node to the XML, you will show an “add node screen”. This screen allows the user to enter a node (i.e., tag) name and value. The screen will then pass that new data to the callback which will update the XML object and the user interface. You have already seen that code in the previous section.
To get started, open up a new file named add_node_screen.py and enter the following code:
from textual import on
from textual.app import ComposeResult
from textual.containers import Horizontal, Vertical
from textual.screen import ModalScreen
from textual.widgets import Button, Header, Footer, Input


class AddNodeScreen(ModalScreen):
    BINDINGS = [
        ("escape", "esc", "Exit dialog"),
    ]
    CSS_PATH = "add_node_screen.tcss"

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.title = "Add New Node"

    def compose(self) -> ComposeResult:
        self.node_name = Input(id="node_name")
        self.node_name.border_title = "Node Name"
        self.node_value = Input(id="node_value")
        self.node_value.border_title = "Node Value"

        yield Vertical(
            Header(),
            self.node_name,
            self.node_value,
            Horizontal(
                Button("Save Node", variant="primary", id="save_node"),
                Button("Cancel", variant="warning", id="cancel_node"),
            ),
            Footer(),
            id="add_node_screen_ui",
        )

    @on(Button.Pressed, "#save_node")
    def on_save(self) -> None:
        self.dismiss((self.node_name.value, self.node_value.value))

    @on(Button.Pressed, "#cancel_node")
    def on_cancel(self) -> None:
        self.dismiss()

    def action_esc(self) -> None:
        """
        Close the dialog when the user presses ESC
        """
        self.dismiss()
Following is an overview of each method of the code above:
- __init__() – Sets the title of the screen.
- compose() – Creates the user interface, which is made up of two Input widgets, a “Save” button, and a “Cancel” button.
- on_save() – Called when the user presses the “Save” button. This will save the data entered by the user into the two inputs, if any.
- on_cancel() – Called when the user presses the “Cancel” button. If pressed, the screen exits without saving.
- action_esc() – Called when the user presses the “Esc” key. If pressed, the screen exits without saving.
That code is concise and straightforward.
Next, open up a text editor or use your IDE to create a file named add_node_screen.tcss, which will contain the following CSS:
AddNodeScreen {
    align: center middle;
    background: $primary 30%;

    #add_node_screen_ui {
        width: 80%;
        height: 40%;
        border: thick $background 70%;
        content-align: center middle;
        margin: 2;
    }

    Input {
        border: solid gold;
        margin: 1;
        height: auto;
    }

    Button {
        margin: 1;
    }

    Horizontal {
        height: auto;
        align: center middle;
    }
}
Your CSS functions as a way to quickly style individual widgets or groups of widgets. Here you set it up to make the screen a bit smaller than the screen underneath it (80% x 40%) so it looks like a dialog.
You set the border, height, and margin on your inputs. You add a margin around your buttons to keep them slightly apart. Finally, you add a height and alignment to the container.
You can try tweaking all of this to see how it changes the look and feel of the screen. It’s a fun way to explore, and you can do this with any of the screens you create.
The next screen to create is the XML preview screen.
Adding an XML Preview Screen
The XML Preview screen allows the user to check that the XML looks correct before they save it. Textual makes creating a preview screen short and sweet.
Open up your Python IDE and create a new file named preview_xml_screen.py
and then enter the following code into it:
from textual import on
from textual.app import ComposeResult
from textual.containers import Center, Vertical
from textual.screen import ModalScreen
from textual.widgets import Button, Header, TextArea


class PreviewXMLScreen(ModalScreen):
    CSS_PATH = "preview_xml_screen.tcss"

    def __init__(self, xml_file_path: str, *args: tuple, **kwargs: dict) -> None:
        super().__init__(*args, **kwargs)
        self.xml_file_path = xml_file_path
        self.title = "Preview XML"

    def compose(self) -> ComposeResult:
        with open(self.xml_file_path) as xml_file:
            xml = xml_file.read()
        text_area = TextArea(xml)
        text_area.language = "xml"
        yield Header()
        yield Vertical(
            text_area,
            Center(Button("Exit Preview", id="exit_preview", variant="primary")),
            id="exit_preview_ui",
        )

    @on(Button.Pressed, "#exit_preview")
    def on_exit_preview(self, event: Button.Pressed) -> None:
        self.dismiss()
There’s not a lot here, so you will go over the highlights like you did in the previous section:
- __init__() – Initializes a couple of instance variables:
  - xml_file_path – A temporary file path to the XML file
  - title – The title of the screen
- compose() – The UI is created here. You open the XML file and read it in. Then you load the XML into a TextArea widget. Finally, you tell Textual to use a header, the text area widget, and an exit button for your interface.
- on_exit_preview() – Called when the user presses the “Exit Preview” button. As the name implies, this exits the screen.
The last step is to apply a little CSS. Create a new file named preview_xml_screen.tcss
and add the following snippet to it:
PreviewXMLScreen {
    Button {
        margin: 1;
    }
}
All this CSS does is add a margin to the button, which makes the UI look a little nicer.
There are three more screens yet to write. The first couple of screens you will create are the file browser and warning screens.
Creating the File Browser and Warning Screens
The file browser is what the user will use to find an XML file that they want to open. It is also nice to have a screen you can use for warnings, so you will create that as well.
For now, you will call this file file_browser_screen.py
but you are welcome to separate these two screens into different files. The first half of the file will contain the imports and the WarningScreen
class.
Here is that first half:
from pathlib import Path

from textual import on
from textual.app import ComposeResult
from textual.containers import Center, Grid, Vertical
from textual.message import Message
from textual.screen import Screen
from textual.widgets import Button, DirectoryTree, Footer, Label, Header


class WarningScreen(Screen):
    """Creates a pop-up Screen that displays a warning message to the user"""

    def __init__(self, warning_message: str) -> None:
        super().__init__()
        self.warning_message = warning_message

    def compose(self) -> ComposeResult:
        """Create the UI in the Warning Screen"""
        yield Grid(
            Label(self.warning_message, id="warning_msg"),
            Button("OK", variant="primary", id="ok_warning"),
            id="warning_dialog",
        )

    def on_button_pressed(self, event: Button.Pressed) -> None:
        """Event handler for when the OK button - dismisses the screen"""
        self.dismiss()
        event.stop()
The warning screen is made up of two widgets: a label that contains the warning message and an “OK” button. You also add a method to respond to the button being pressed. There, you dismiss the screen and stop the event from propagating up to the parent.
The next class you need to add to this file is the FileBrowser
class:
class FileBrowser(Screen):
    BINDINGS = [
        ("escape", "esc", "Exit dialog"),
    ]
    CSS_PATH = "file_browser_screen.tcss"

    class Selected(Message):
        """File selected message"""

        def __init__(self, path: Path) -> None:
            self.path = path
            super().__init__()

    def __init__(self) -> None:
        super().__init__()
        self.selected_file = Path("")
        self.title = "Load XML Files"

    def compose(self) -> ComposeResult:
        yield Vertical(
            Header(),
            DirectoryTree("/"),
            Center(
                Button("Load File", variant="primary", id="load_file"),
            ),
            id="file_browser_dialog",
        )

    @on(DirectoryTree.FileSelected)
    def on_file_selected(self, event: DirectoryTree.FileSelected) -> None:
        """Called when the FileSelected Message is emitted from the DirectoryTree"""
        self.selected_file = event.path

    def on_button_pressed(self, event: Button.Pressed) -> None:
        """Event handler for when the load file button is pressed"""
        event.stop()

        if self.selected_file.suffix.lower() != ".xml" and self.selected_file.is_file():
            self.app.push_screen(WarningScreen("ERROR: You must choose a XML file!"))
            return

        self.post_message(self.Selected(self.selected_file))
        self.dismiss()

    def action_esc(self) -> None:
        """Close the dialog when the user presses ESC"""
        self.dismiss()
The FileBrowser
class is more complicated because it does a lot more than the warning screen does. Here’s a listing of the methods:
- __init__() – Initializes the currently selected file to an empty path and sets the title for the screen.
- compose() – Creates the UI. This UI has a header, a DirectoryTree for browsing files, and a button for loading the currently selected file.
- on_file_selected() – When the user selects a file in the directory tree, you grab the path and set the selected_file instance variable.
- on_button_pressed() – When the user presses the “Load File” button, you check whether the selected file is the correct file type. If not, you show a warning screen. If the file is an XML file, then you post a custom message and close the screen.
- action_esc() – Called when the user presses the Esc key. Closes the screen.
The last item to write is your CSS file. As you might expect, you should name it file_browser_screen.tcss
. Then put the following CSS inside of the file:
FileBrowser {
    #file_browser_dialog {
        width: 80%;
        height: 50%;
        border: thick $background 70%;
        content-align: center middle;
        margin: 2;
        border: solid green;
    }

    Button {
        margin: 1;
        content-align: center middle;
    }
}
The CSS code here should look pretty familiar to you. All you are doing is making the screen look like a dialog and then adding a margin and centering the button.
The last step is to create the file save screen.
Creating the File Save Screen
The file save screen is similar to the file browser screen, with the main difference being that you supply a new file name to save your XML file to.
Open your Python IDE and create a new file called save_file_dialog.py
and then enter the following code:
from pathlib import Path

from textual import on
from textual.app import ComposeResult
from textual.containers import Vertical
from textual.screen import Screen
from textual.widgets import Button, DirectoryTree, Footer, Header, Input, Label


class SaveFileDialog(Screen):
    CSS_PATH = "save_file_dialog.tcss"

    def __init__(self) -> None:
        super().__init__()
        self.title = "Save File"
        self.root = "/"

    def compose(self) -> ComposeResult:
        yield Vertical(
            Header(),
            Label(f"Folder name: {self.root}", id="folder"),
            DirectoryTree("/"),
            Input(placeholder="filename.txt", id="filename"),
            Button("Save File", variant="primary", id="save_file"),
            id="save_dialog",
        )

    def on_mount(self) -> None:
        """Focus the input widget so the user can name the file"""
        self.query_one("#filename").focus()

    def on_button_pressed(self, event: Button.Pressed) -> None:
        """Event handler for when the load file button is pressed"""
        event.stop()
        filename = self.query_one("#filename").value
        full_path = Path(self.root) / filename
        self.dismiss(f"{full_path}")

    @on(DirectoryTree.DirectorySelected)
    def on_directory_selection(self, event: DirectoryTree.DirectorySelected) -> None:
        """Called when the DirectorySelected message is emitted from the DirectoryTree"""
        self.root = event.path
        self.query_one("#folder").update(f"Folder name: {event.path}")
The save file dialog code is currently less than fifty lines of code. Here is a breakdown of that code:
- __init__() – Sets the title of the screen and the default root folder.
- compose() – Creates the user interface, which consists of a header, a label (the root), the directory tree widget, an input for specifying the file name, and a “Save File” button.
- on_mount() – Called automatically by Textual after the compose() method. Sets the input widget as the focus.
- on_button_pressed() – Called when the user presses the “Save File” button. Grabs the filename and then creates the full path using the root + filename. Finally, you send that full path back to the callback function via dismiss() (see the sketch after this list).
- on_directory_selection() – Called when the user selects a directory. Updates the root variable to the selected path and updates the label so the user knows which path is selected.
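Here is a minimal sketch of how that dismissed value reaches a callback on the main app. The app class name, binding, and xml_tree attribute are hypothetical; only Textual's push_screen() API and the SaveFileDialog class from this section come from the article:

from textual.app import App

from save_file_dialog import SaveFileDialog


class BoomslangApp(App):  # hypothetical app class
    BINDINGS = [("ctrl+s", "save", "Save")]

    def action_save(self) -> None:
        def save_file(full_path: str | None) -> None:
            # Receives whatever the screen passed to dismiss()
            if full_path:
                self.xml_tree.write(full_path)  # hypothetical XML object on the app

        self.push_screen(SaveFileDialog(), save_file)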
The last item you need to write is the CSS file for this dialog. You will need to name the file save_file_dialog.tcss
and then add this code:
SaveFileDialog {
    #save_dialog {
        width: 80%;
        height: 50%;
        border: thick $background 70%;
        content-align: center middle;
        margin: 2;
        border: solid green;
    }

    Button {
        margin: 1;
        content-align: center middle;
    }
}
The CSS code above is almost identical to the CSS you used for the file browser code.
When you run the TUI, you should see something like the following demo GIF:
Wrapping Up
You have now created a basic XML editor and viewer using Python and Textual. There are lots of little improvements that you can add to this code. However, those updates are up to you to make.
Have fun working with Textual and create something new or contribute to a neat Textual project yourself!
Get the Code
The code in this tutorial is based on version 0.2.0 of BoomslangXML TUI. You can download the code from GitHub or from the following links:
The post Creating a Simple XML Editor in Your Terminal with Python and Textual appeared first on Mouse Vs Python.
Armin Ronacher
Agentic Coding Things That Didn’t Work
Using Claude Code and other agentic coding tools has become all the rage. Not only is it getting millions of downloads, but these tools are also gaining features that help streamline workflows. As you know, I got very excited about agentic coding in May, and I’ve tried many of the new features that have been added. I’ve spent considerable time exploring everything on my plate.
But oddly enough, very little of what I attempted I ended up sticking with. Most of my attempts didn’t last, and I thought it might be interesting to share what didn’t work. This doesn’t mean these approaches won’t work or are bad ideas; it just means I didn’t manage to make them work. Maybe there’s something to learn from these failures for others.
Rules of Automation
The best way to think about the approach that I use is:
- I only automate things that I do regularly.
- If I create an automation for something that I do regularly, but then I stop using the automation, I consider it a failed automation and I delete it.
Non-working automations turn out to be quite common. Either I can’t get myself to use them, I forget about them, or I end up fine-tuning them endlessly. For me, deleting a failed workflow helper is crucial. You don’t want unused Claude commands cluttering your workspace and confusing others.
So I end up doing the simplest thing possible most of the time: just talk to the machine more, give it more context, keep the audio input going, and dump my train of thought into the prompt. And that is 95% of my workflow. The rest might be good use of copy/paste.
Slash Commands
Slash commands allow you to preload prompts to have them readily available in a session. I expected these to be more useful than they ended up being. I do use them, but many of the ones that I added I ended up never using.
There are some limitations with slash commands that make them less useful than they could be. One limitation is that there’s only one way to pass arguments, and it’s unstructured. This proves suboptimal in practice for my uses. Another issue I keep running into with Claude Code is that if you do use a slash command, the argument to the slash command for some reason does not support file-based autocomplete.
To make them work better, I often ask Claude to use the current Git state to determine which files to operate on. For instance, I have a command in this blog that fixes grammar mistakes. It operates almost entirely from the current git status context because providing filenames explicitly is tedious without autocomplete.
Here is one of the few slash commands I actually do use:
## Context
- git status: !`git status`
- Explicitly mentioned file to fix: "$ARGUMENTS"
## Your task
Based on the above information, I want you to edit the mentioned file or files
for grammar mistakes. Make a backup (eg: change file.md to file.md.bak) so I
can diff it later. If the backup file already exists, delete it.
If a blog post was explicitly provided, edit that; otherwise, edit the ones
that have pending changes or are untracked.
My workflow now assumes that Claude can determine which files I mean from the Git status virtually every time, making explicit arguments largely unnecessary.
Here are some of the many slash commands that I built at one point but ended up not using:
- /fix-bug: I had a command that instructed Claude to fix bugs by pulling issues from GitHub and adding extra context. But I saw no meaningful improvement over simply mentioning the GitHub issue URL and voicing my thoughts about how to fix it.
- /commit: I tried getting Claude to write good commit messages, but they never matched my style. I stopped using this command, though I haven’t given up on the idea entirely.
- /add-tests: I really hoped this would work. My idea was to have Claude skip tests during development, then use an elaborate reusable prompt to generate them properly at the end. But this approach wasn’t consistently better than automatic test generation, which I’m still not satisfied with overall.
- /fix-nits: I had a command to fix linting issues and run formatters. I stopped using it because it never became muscle memory, and Claude already knows how to do this. I can just tell it “fix lint” in the CLAUDE.md file without needing a slash command.
- /next-todo: I track small items in a to-do.md file and had a command to pull the next item and work on it. Even here, workflow automation didn’t help much. I use this command far less than expected.
So if I’m using fewer slash commands, what am I doing instead?
- Speech-to-text. Cannot stress this enough but talking to the machine means you’re more likely to share more about what you want it to do.
- I maintain some basic prompts and context for copy-pasting at the end or the beginning of what I entered.
Copy/paste is really, really useful because of how fuzzy LLMs are. For instance, I maintain link collections that I paste in when needed. Sometimes I fetch files proactively, drop them into a git-ignored folder, and mention them. It’s simple, easy, and effective. You still need to be somewhat selective to avoid polluting your context too much, but compared to having it spelunk in the wrong places, more text doesn’t harm as much.
Hooks
I tried hard to make hooks work, but I haven’t seen any efficiency gains from them yet. I think part of the problem is that I use yolo mode. I wish hooks could actually manipulate what gets executed. The only way to guide Claude today is through denies, which don’t work in yolo mode. For instance, I tried using hooks to make it use uv instead of regular Python, but I was unable to do so. Instead, I ended up preloading executables on the PATH that override the default ones, steering Claude toward the right tools.
For instance, this is really my hack for making it use uv run python
instead
of python
more reliably:
#!/bin/sh
echo "This project uses uv, please use 'uv run python' instead."
exit 1
I really just have a bunch of these in .claude/interceptors
and preload that
folder onto PATH
before launching Claude:
CLAUDE_BASH_MAINTAIN_PROJECT_WORKING_DIR=1 \
PATH="`pwd`/.claude/interceptors:${PATH}" \
claude --dangerously-skip-permissions
I also found it hard to hook into the right moment. I wish I could run formatters at the end of a long edit session. Currently, you must run formatters after each Edit tool operation, which often forces Claude to re-read files, wasting context. Even with the Edit tool hook, I’m not sure if I’m going to keep using it.
I’m actually really curious whether people manage to get good use out of hooks. I’ve seen some discussions on Twitter that suggest there are some really good ways of making them work, but I just went with much simpler solutions instead.
Claude Print Mode
I was initially very bullish on Claude’s print mode. I tried hard to have Claude generate scripts that used print mode internally. For instance, I had it create a mock data loading script — mostly deterministic code with a small inference component to generate test data using Claude Code.
The challenge is achieving reliability, which hasn’t worked well for me yet. Print mode is slow and difficult to debug. So I use it far less than I’d like, despite loving the concept of mostly deterministic scripts with small inference components. Whether using the Claude SDK or the command-line print flag, I haven’t achieved the results I hoped for.
I’m drawn to Print Mode because inference is too much like a slot machine. Many programming tasks are actually quite rigid and deterministic. We love linters and formatters because they’re unambiguous. Anything we can fully automate, we should. Using an LLM for tasks that don’t require inference is the wrong approach in my book.
That’s what makes print mode appealing. If only it worked better. Use an LLM for the commit message, but regular scripts for the commit and gh pr commands. Make mock data loading 90% deterministic with only 10% inference.
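As a rough sketch of that split (my illustration, not Armin's code, and assuming the claude CLI's -p/--print flag writes a completion to stdout): the commit itself stays a plain git call, and only the message comes from inference.

import subprocess

# Deterministic part: gather the staged diff with plain git.
diff = subprocess.run(
    ["git", "diff", "--cached"], capture_output=True, text=True, check=True
).stdout

# Small inference part: ask Claude's print mode for a one-line commit message.
message = subprocess.run(
    ["claude", "-p", f"Write a one-line commit message for this diff:\n\n{diff}"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# Deterministic part again: make the commit with ordinary tooling.
subprocess.run(["git", "commit", "-m", message], check=True)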
I still use it, but I see more potential than I am currently leveraging.
Sub Tasks and Sub Agents
I use the task tool frequently for basic parallelization and context isolation. Anthropic recently launched an agents feature meant to streamline this process, but I haven’t found it easier to use.
Sub-tasks and sub-agents enable parallelism, but you must be careful. Tasks that don’t parallelize well — especially those mixing reads and writes — create chaos. Outside of investigative tasks, I don’t get good results. While sub-agents should preserve context better, I often get better results by starting new sessions, writing thoughts to Markdown files, or even switching to o3 in the chat interface.
Does It Help?
What’s interesting about workflow automation is that without rigorous rules that you consistently follow as a developer, simply taking time to talk to the machine and give clear instructions outperforms elaborate pre-written prompts.
For instance, I don’t use emojis or commit prefixes. I don’t enforce templates for pull requests either. As a result, there’s less structure for me to teach the machine.
I also lack the time and motivation to thoroughly evaluate all my created workflows. This prevents me from gaining confidence in their value.
Context engineering and management remain major challenges. Despite my efforts to help agents pull the right data from various files and commands, they don’t yet succeed reliably. They pull in too much or too little. Long sessions lead to forgotten context from the beginning. Whether done manually or with slash commands, the results feel too random. It’s hard enough with ad-hoc approaches, but static prompts and commands make it even harder.
The rule I have now is that if I do want to automate something, I must have done it a few times already, and then I evaluate whether the agent gets any better results through my automation. There’s no exact science to it, but I mostly measure that right now by letting it do the same task three times and looking at the variance manually as measured by: would I accept the result.
Keeping The Brain On
Forcing myself to evaluate the automation has another benefit: I’m less likely to just blindly assume it helps me.
Because there is a big hidden risk with automation through LLMs: it encourages mental disengagement. When you stop thinking like an engineer, quality drops, time gets wasted and you don’t understand and learn. LLMs are already bad enough as they are, but whenever I lean in on automation I notice that it becomes even easier to disengage. I tend to overestimate the agent’s capabilities with time. There are real dragons there!
You can still review things as they land, but it becomes increasingly harder to do so later. While LLMs are reducing the cost of refactoring, the cost doesn’t drop to zero, and regressions are common.
July 29, 2025
PyCoder’s Weekly
Issue #692: PyPI, pedalboard, Django URL Patterns, and More (July 29, 2025)
#692 – JULY 29, 2025
View in Browser »
Supporting the Python Package Index
What goes into supporting more than 650,000 projects and nearly a million users of the Python Package Index? This week on the show, we speak with Maria Ashna about her first year as the inaugural PyPI Support Specialist.
REAL PYTHON podcast
Python Audio Processing With pedalboard
“The pedalboard library for Python is aimed at audio processing of various sorts, from converting between formats to adding audio effects.” This post summarizes a PyCon US talk on pedalboard
and its uses.
JAKE EDGE
Introducing Pixeltable: Declarative Data Infrastructure for Multimodal AI Apps
Open-source Python library for multimodal AI data. Store videos, images, text & embeddings in one place. Transform with computed columns, index for similarity search, add custom UDFs. Built by Apache Parquet team. Stop fighting data infrastructure →
PIXELTABLE sponsor
Django: Iterate Through URL Patterns
Every once in a while you need to iterate through the URL patterns you’ve registered in your Django project. Adam’s write-up covers just how to go about it.
ADAM JOHNSON
Articles & Tutorials
3 pandas Workflows That Slowed on Large Datasets
Data ingest, joins, and groupby aggs slow to a grind when querying millions of rows of data; this post shows how a single cudf.pandas
import moves the work to a GPU and slashes runtimes on common workflows.
NVIDIA.COM • Shared by Jamil Semaan
Stories From Python History
Talk Python To Me interviews Barry Warsaw, Paul Everitt, Carol Willing, and Brett Cannon and they tell stories about Python over the years, including how the first PyCon was only 30 people.
KENNEDY ET AL podcast
Coverage 7.10.0: Patch
Coverage has a new release: 7.10 with some significant new features that have solved some long-standing problems. This post talks about what Ned added and why.
NED BATCHELDER
What Does isinstance()
Do in Python?
Learn what isinstance()
does in Python and how to use this built-in function to check an object’s type. Discover its practical uses along with key limitations.
REAL PYTHON
Python’s Requests Library (Guide)
The Requests library is the go-to tool for making HTTP requests in Python. Learn how to use its intuitive API to send requests and interact with the web.
REAL PYTHON
Checking Out CPython 3.14’s Remote Debugging Protocol
Python 3.14 adds new capabilities for interacting with a running interpreter paving the way for better remote debugging. This article shows you how.
RAPHAEL GASCHIGNARD
Python F-String Quiz
Test your knowledge of Python’s f-string formatting with this interactive quiz. How well do you know Python’s string formatting quirks?
FSTRINGS.WTF
Exploring Python Closures: Examples and Use Cases
Learn about Python closures: function-like objects with extended scope used for decorators, factories, and stateful functions.
REAL PYTHON course
Python Koan 2: The Tale of Two Scrolls
Understanding the difference between identity and equality, and why it matters more than it seems.
SUBSTACK.COM • Shared by Vivis Dev
Toad: A Universal UI for Agentic Coding in the Terminal
Toad is a new Textual based TUI program for interacting with your favorite AI interfaces.
WILL MCGUGAN
How the App and Request Contexts Work in Python Flask
Dive deep into contexts in Flask with some practical examples.
FEDERICO TROTTA • Shared by AppSignal
Projects & Code
AutStr: Infinite Data Structures in Python
GITHUB.COM/FARIEDABUZAID • Shared by Faried Abu Zaid
Events
Weekly Real Python Office Hours Q&A (Virtual)
July 30, 2025
REALPYTHON.COM
Melbourne Python Users Group, Australia
August 4, 2025
J.MP
PyBodensee Monthly Meetup
August 4, 2025
PYBODENSEE.COM
STL Python
August 7, 2025
MEETUP.COM
Canberra Python Meetup
August 7, 2025
MEETUP.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #692.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
PyCharm
Our goal is to make remote development with JetBrains IDEs feel just as reliable and consistent as working locally does. Recently, we’ve made progress towards this goal in core areas like the editor, tool windows, plugin behavior, and settings synchronization.
Editor

All the changes made to the editor in the 2025.2 release are based on a single underlying idea: to make remote editing feel as seamless and responsive as working locally.
We improved the way the editor opens in remote sessions. To reduce the perceived delay between user action and UI feedback, we’ve changed how fast different UI elements appear when opening a file. Now, when a file is opened in a remote session, the editor tab appears immediately, first with the tab frame and file name, followed by the file content as soon as it’s available.
We’re also experimenting with a skeleton view for cases where the editor content cannot be displayed fast enough, with the goal of making the UI feel more responsive. Once the data arrives, the skeleton is seamlessly replaced by the actual file content. Please share your feedback on this change!

To improve responsiveness, we’ve moved several basic editor actions to the frontend:
- The clipboard now reliably captures the correct selection when copy-pasting, even when actions happen quickly.
- Identifier highlighting under the caret now feels faster thanks to frontend caching. The first time something is highlighted, it’s calculated on the backend, but repeated highlights appear instantly while the file stays open.
Using the Java plugin as an example, we’ve made progress toward smarter frontend execution by moving more functionality to the frontend. This includes:
- Code selection and navigation (brace matching and word selection).
- Statement and element movement (up/down and left/right).
- Code formatting and indentation.
- Smart Enter processing.
- Simple highlighting.
Thanks to these changes, the caret now moves much more predictably, even after smart backend updates are applied.
We’ve also extended this approach to SQL, JSON, YAML, TOML, and Ruby, which are now all available in the released version.
There is more work ongoing for upcoming releases, including native-feeling remote development support for other programming languages.
Debugger
We’ve started rolling out a split frontend/backend architecture for the debugger. One of the biggest advantages is that the debugger is less sensitive to network delay. For instance, if you place or delete a breakpoint, it is applied immediately on the client and then synchronized with the backend. We’ve also added support for core debugging features like frames, variables, watches, and more, and we’re continuing to work on additional features.
While not all functionality is in place yet, the current implementation is fully usable, and the missing features don’t block core debugging workflows.
Terminal
The initial implementation of the split terminal was written between 2024.3 and 2025.1. We finally enabled it by default in 2025.2. The new release fixed many issues related to the previous version of the terminal, and the change was highly anticipated by many individual and corporate customers.
These updates improve the current experience and lay the foundation for future enhancements, ensuring new features will now work natively in remote development mode.
Platform functional windows
Popups with long or dynamic lists have historically performed poorly in remote development scenarios, especially over unstable or high-latency connections. The redesigned versions now provide the same native-level performance when working remotely as they do when working locally, offering smooth scrolling and instant selection, even on slower or less reliable networks.
- Search Everywhere: The most used features are now fully supported, and the popup performs smoothly in remote development scenarios.
- Find in Files: This popup now feels fast and responsive, and several long-standing issues have been resolved.
- Git Branch widget: We’ve improved this widget’s performance and responsiveness under high latency.


Plugin experience
With the latest release, we introduced synchronization of plugins between the client and the host. The IDE now installs the necessary plugins on the client machine based on the host’s setup and vice versa. This allows the development environment to remain consistent with minimal user involvement. The synchronization behavior can be configured depending on the security requirements in specific companies.
IDE settings
We fixed an issue where various project settings were lost between IDE restarts. Recent updates make sure that selected UI and project-specific settings are preserved so that you can resume work exactly where you left off.
Here’s what now persists correctly:
- IDE window size and position.
- Layout and state of tool windows.
- Open files and their order.
- Tool window-specific settings, such as appearance and behavior in the Project view.
Toolbox and remote development
Remote development support in Toolbox was released in April, and while there’s still room for improvement, early feedback has been very positive. Several companies have confirmed that using Toolbox significantly improves connection stability.
In synthetic tests, we observed connection performance improvements of 1.5x or more:

In addition to performance gains, Toolbox supports OpenSSH, works with any major remote host’s OS (not just Linux, but Windows and macOS, too), and lets you manage everything from setup to updates in the same way you handle your IDEs locally. This results in a smoother remote workflow that’s built for how you actually work. You can read more about remote development with Toolbox in our recent blog post.
We’ve also added a new feature: If Toolbox is running, you can now see remote projects in the Recent Projects popup, right alongside your local projects.
Other important improvements
This year, we focused on improving core functionality – frequently used windows, actions, and better separation of components and languages between the frontend and backend. Our goal is to build a unified architecture that works consistently in both monolith and remote development environments.
That said, there are still some tricky parts of the IDE stack to tackle, like syncing keymaps, color schemes, and other settings.
We’ve also fixed several bugs. Here are some of the most important ones:
- IJPL-168465/Client-forgets-keymap-periodically
- IJPL-167788/State-of-splitted-editors-isnt-restored-after-reconnecting-to-the-session
- IJPL-166434/Project-View-only-Project-and-Packages-views-are-available-in-RemDev
- IJPL-170464/Clicking-the-Select-opened-File-in-the-project-tree-on-a-file-of-an-external-lib-does-not-work
Real Python
Working With Python's Built-in Exceptions
Python has a complete set of built-in exceptions that provide a quick and efficient way to handle errors and exceptional situations in your code. Knowing the most commonly used built-in exceptions is key for you as a Python developer. This knowledge will help you debug code because each exception has a specific meaning that can shed light on your debugging process.
You’ll also be able to handle and raise most of the built-in exceptions in your Python code, which is a great way to deal with errors and exceptional situations without having to create your own custom exceptions.
In this video course, you’ll:
- Learn what errors and exceptions are in Python
- Understand how Python organizes the built-in exceptions within a class hierarchy
- Explore the most commonly used built-in exceptions
- Learn how to handle and raise built-in exceptions in your code
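For a quick taste of what handling and raising built-in exceptions looks like, here's a tiny illustrative snippet (not taken from the course):

def read_positive_int(text):
    try:
        value = int(text)  # int() raises ValueError for non-numeric input
    except ValueError:
        raise ValueError(f"not a number: {text!r}") from None
    if value <= 0:
        raise ValueError(f"expected a positive number, got {value}")
    return value

print(read_positive_int("42"))  # 42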
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
PyCharm
Faster Python: Unlocking the Python Global Interpreter Lock
What is Python’s Global Interpreter Lock (GIL)?
“Global Interpreter Lock” (or “GIL”) is a familiar term in the Python community. It is a well-known Python feature. But what exactly is a GIL?
If you have experience with other programming languages (Rust, for example), you may already know what a mutex is. It’s an abbreviation for “mutual exclusion”. A mutex ensures that data can only be accessed by one thread at a time. This prevents data from being modified by multiple threads at once. You might think of it as a type of “lock” – it blocks all threads from accessing data, except for the one thread that holds the key.
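As a minimal illustration of a mutex in Python itself (this is an ordinary threading.Lock, not the GIL):

import threading

counter = 0
lock = threading.Lock()  # a mutex: only one thread can hold it at a time

def increment():
    global counter
    for _ in range(100_000):
        with lock:  # acquire the lock, do the update, release the lock
            counter += 1

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 400000, because the lock prevents a race on counter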
The GIL is technically a mutex. It lets only one thread have access to the Python interpreter at a time. I sometimes imagine it as a steering wheel for Python. You never want to have more than one person in control of the wheel! Then again, a group of people on a road trip will often switch drivers. This is kind of like handing off interpreter access to a different thread.
Because of the GIL, Python threads cannot truly run in parallel. This has sparked debates in the past decade, and there have been many attempts to make Python faster by removing the GIL and allowing genuinely multithreaded execution. Recently, Python 3.13 introduced an option to use Python without the GIL, sometimes known as no-GIL or free-threaded Python. Thus begins a new era of Python programming.
Why was the GIL there in the first place?
If the GIL is so unpopular, why was it implemented in the first place? There are actually benefits to having a GIL. In other programming languages with true multithreading, sometimes issues are caused by more than one thread modifying data, with the final outcome depending on which thread or process finishes first. This is called a “race condition”. Languages like Rust are often hard to learn because programmers have to use mutexes to prevent race conditions.
In Python, all objects have a reference counter to keep track of how many other objects require information from them. If the reference counter reaches zero, since we know there is no race condition in Python due to the GIL, we can confidently declare that the object is no longer needed and can be garbage-collected.
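You can peek at an object's reference count with sys.getrefcount(). A small illustrative example follows; the exact numbers may differ between Python versions, and the call itself adds a temporary reference:

import sys

data = ["a", "b", "c"]
print(sys.getrefcount(data))  # e.g. 2: the `data` name plus the function argument

alias = data                  # another reference to the same list
print(sys.getrefcount(data))  # one higher than before

del alias                     # dropping the reference lowers the count again
print(sys.getrefcount(data))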
When Python was first released in 1991, most personal computers only had one core, and not many programmers were requesting support for multithreading. Having a GIL solves many problems in program implementation and can also make code easy to maintain. Therefore, a GIL was added by Guido van Rossum (the creator of Python) in 1992.
Fast-forward to 2025: Personal computers have multicore processors and thus far more computing power. We can take advantage of this extra power to achieve true concurrency without getting rid of the GIL.
Later in this post, we’ll break down the process of removing it. But for now, let’s look at how true concurrency works with the GIL in place.
Multiprocessing in Python
Before we take a deep dive into the process of removing the GIL, let’s have a look at how Python developers can achieve true concurrency using the multiprocessing library. The multiprocessing standard library offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. In this way, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine.
However, to perform multiprocessing, we’ll have to design our program a bit differently. Let’s have a look at the following example of how to use the multiprocessing library in Python.
Remember our async burger restaurant from part 1 of the blog series:
import asyncio
import time


async def make_burger(order_num):
    print(f"Preparing burger #{order_num}...")
    await asyncio.sleep(5)  # time for making the burger
    print(f"Burger made #{order_num}")


async def main():
    order_queue = []
    for i in range(3):
        order_queue.append(make_burger(i))
    await asyncio.gather(*(order_queue))


if __name__ == "__main__":
    s = time.perf_counter()
    asyncio.run(main())
    elapsed = time.perf_counter() - s
    print(f"Orders completed in {elapsed:0.2f} seconds.")
We can use the multiprocessing library to do the same, for example:
import multiprocessing
import time


def make_burger(order_num):
    print(f"Preparing burger #{order_num}...")
    time.sleep(5)  # time for making the burger
    print(f"Burger made #{order_num}")


if __name__ == "__main__":
    print("Number of available CPU:", multiprocessing.cpu_count())

    s = time.perf_counter()

    all_processes = []
    for i in range(3):
        process = multiprocessing.Process(target=make_burger, args=(i,))
        process.start()
        all_processes.append(process)

    for process in all_processes:
        process.join()

    elapsed = time.perf_counter() - s
    print(f"Orders completed in {elapsed:0.2f} seconds.")
As you may recall, a lot of the methods in multiprocessing are very similar to threading. To see the difference in multiprocessing, let’s explore a more complex use case:
import multiprocessing
import time
import queue


def make_burger(order_num, item_made):
    name = multiprocessing.current_process().name
    print(f"{name} is preparing burger #{order_num}...")
    time.sleep(5)  # time for making burger
    item_made.put(f"Burger #{order_num}")
    print(f"Burger #{order_num} made by {name}")


def make_fries(order_num, item_made):
    name = multiprocessing.current_process().name
    print(f"{name} is preparing fries #{order_num}...")
    time.sleep(2)  # time for making fries
    item_made.put(f"Fries #{order_num}")
    print(f"Fries #{order_num} made by {name}")


def working(task_queue, item_made, order_num, lock):
    break_count = 0
    name = multiprocessing.current_process().name
    while True:
        try:
            task = task_queue.get_nowait()
        except queue.Empty:
            print(f"{name} has nothing to do...")
            if break_count > 1:
                break  # stop if idle for too long
            else:
                break_count += 1
                time.sleep(1)
        else:
            lock.acquire()
            try:
                current_num = order_num.value
                order_num.value = current_num + 1
            finally:
                lock.release()
            task(current_num, item_made)
            break_count = 0


if __name__ == "__main__":
    print("Welcome to Pyburger! Please place your order.")
    burger_num = input("Number of burgers:")
    fries_num = input("Number of fries:")

    s = time.perf_counter()

    task_queue = multiprocessing.Queue()
    item_made = multiprocessing.Queue()
    order_num = multiprocessing.Value("i", 0)
    lock = multiprocessing.Lock()

    for i in range(int(burger_num)):
        task_queue.put(make_burger)
    for i in range(int(fries_num)):
        task_queue.put(make_fries)

    staff1 = multiprocessing.Process(
        target=working,
        name="John",
        args=(task_queue, item_made, order_num, lock),
    )
    staff2 = multiprocessing.Process(
        target=working,
        name="Jane",
        args=(task_queue, item_made, order_num, lock),
    )

    staff1.start()
    staff2.start()
    staff1.join()
    staff2.join()

    print("All tasks finished. Closing now.")
    print("Items created are:")
    while not item_made.empty():
        print(item_made.get())

    elapsed = time.perf_counter() - s
    print(f"Orders completed in {elapsed:0.2f} seconds.")
Here’s the output we get:
Welcome to Pyburger! Please place your order.
Number of burgers:3
Number of fries:2
Jane has nothing to do...
John is preparing burger #0...
Jane is preparing burger #1...
Burger #0 made by John
John is preparing burger #2...
Burger #1 made by Jane
Jane is preparing fries #3...
Fries #3 made by Jane
Jane is preparing fries #4...
Burger #2 made by John
John has nothing to do...
Fries #4 made by Jane
Jane has nothing to do...
John has nothing to do...
Jane has nothing to do...
John has nothing to do...
Jane has nothing to do...
All tasks finished. Closing now.
Items created are:
Burger #0
Burger #1
Fries #3
Burger #2
Fries #4
Orders completed in 12.21 seconds.
Note that there are some limitations in multiprocessing that lead to the above code being designed this way. Let’s go over them one by one.
First, remember that we previously had make_burger and make_fries functions to generate a function with the correct order_num:
def make_burger(order_num):
    def making_burger():
        logger.info(f"Preparing burger #{order_num}...")
        time.sleep(5)  # time for making burger
        logger.info(f"Burger made #{order_num}")
    return making_burger


def make_fries(order_num):
    def making_fries():
        logger.info(f"Preparing fries #{order_num}...")
        time.sleep(2)  # time for making fries
        logger.info(f"Fries made #{order_num}")
    return making_fries
We cannot do the same while using multiprocessing. An attempt to do so will give us an error along the lines of:
AttributeError: Can't get local object 'make_burger.<locals>.making_burger'
The reason is that multiprocessing uses pickle, which in general can only serialize functions defined at the top level of a module. This is one of the limitations of multiprocessing.
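A small demonstration of this pickle limitation (my illustration, not from the original post): a top-level function pickles fine, while a nested one fails.

import pickle

def top_level():
    return "ok"

def factory():
    def nested():
        return "nope"
    return nested

pickle.dumps(top_level)   # works: pickle stores the function by module and name
pickle.dumps(factory())   # fails with an AttributeError about a local object, like above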
Second, notice in the example code snippet above using multiprocessing, we do not use any global variables for shared data. For example, we can’t use global variables for item_made and order_num. To share data between different processes, special class objects like Queue and Value from the multiprocessing library are used and passed to the processes as arguments.
Generally speaking, sharing data and states between different processes is not encouraged, as it can cause a lot more issues. In our example above, we have to use a Lock to ensure the value of order_num can only be accessed and incremented by one process at a time. Without the Lock, the order number of the item can be messed up like this:
Items created are:
Burger #0
Burger #0
Fries #2
Burger #1
Fries #3
Here’s how you’d use a lock to avoid trouble:
lock.acquire()
try:
    current_num = order_num.value
    order_num.value = current_num + 1
finally:
    lock.release()

task(current_num, item_made)
To learn more about how to use the multiprocessing standard library, you can peruse the documentation here.
Removing the GIL
The removal of the GIL has been a topic for almost a decade. In 2016, at the Python Language Summit, Larry Hastings presented his thoughts on performing a “GIL-ectomy” on the CPython interpreter and the progress he’d made with this idea [1]. This was a pioneering attempt to remove the Python GIL. In 2021, Sam Gross reignited the discussion about removing the GIL [2], and that led to PEP 703 – Making the Global Interpreter Lock Optional in CPython in 2023.
As we can see, the removal of the GIL is by no means a rushed decision and has been the subject of considerable debate within the community. As demonstrated by the above examples of multiprocessing (and PEP 703, linked above), when the guarantee provided by the GIL is removed, things get complicated fast.
[1]: https://lwn.net/Articles/689548/
[2]: https://lwn.net/ml/python-dev/CAGr09bSrMNyVNLTvFq-h6t38kTxqTXfgxJYApmbEWnT71L74-g@mail.gmail.com/
Reference counting
When the GIL is present, reference counting and garbage collection are more straightforward. When only one thread at a time has access to Python objects, we can rely on straightforward non-atomic reference counting and remove the object when the reference count reaches zero.
Removing the GIL makes things tricky. We can no longer use non-atomic reference counting, as that does not guarantee thread safety. Things can go wrong if multiple threads are performing multiple increments and decrements of the reference on the Python object at the same time. Ideally, atomic reference counting would be used to guarantee thread safety. But this method suffers from high overhead, and efficiency is hampered when there are a lot of threads.
The solution is to use biased reference counting, which also guarantees thread safety. The idea is to bias each object towards an owner thread, which is the thread accessing that object most of the time. Owner threads can perform non-atomic reference counting on the objects they own, while other threads are required to perform atomic reference counting on those objects. This method is preferable to plain atomic reference counting because most objects are only accessed by one thread most of the time. We can cut down on execution overhead by allowing the owner thread to perform non-atomic reference counting.

In addition, some commonly used Python objects, such as True, False, small integers, and some interned strings, are made immortal. Here, “immortal” just means the objects will remain in the program for its entire lifetime, thus they don’t require reference counting.
Garbage collection
We also have to modify the way garbage collection is done. Instead of decreasing the reference count immediately when the reference is released and removing the object immediately when the reference count reaches zero, a technique called “deferred reference counting” is used.
When the reference count needs to be decreased, the object is stored in a table, which will be double-checked to see whether this decrement in the reference count is accurate or not. This avoids removing the object prematurely when it is still being referenced, which can happen without the GIL, since reference counting is not as straightforward as with the GIL. This complicates the garbage collection process, as garbage collection may need to traverse each thread’s stack for each thread’s own reference counting.
Another thing to consider: The reference count needs to be stable during garbage collection. If an object is about to be discarded but then suddenly gets referenced, this will cause serious issues. Because of that, during the garbage collection cycle, it will have to “stop the world” to provide thread-safety guarantees.
Memory allocation
When the GIL is there to ensure thread safety, the Python internal memory allocator pymalloc is used. But without the GIL, we’ll need a new memory allocator. Sam Gross proposed mimalloc in the PEP, which is a general-purpose allocator created by Daan Leijen and maintained by Microsoft. It’s a good choice because it’s thread-safe and has good performance on small objects.
Mimalloc fills its heap with pages, and each page contains blocks that are all the same size. By adding some restrictions on list and dict access, the garbage collector does not have to maintain a linked list to find all objects, and lists and dicts can be read without acquiring the lock.

There are more details about removing the GIL, but it is impossible to cover them all here. You can check out PEP 703 – Making the Global Interpreter Lock Optional in CPython for a complete breakdown.
Difference in performance with and without the GIL
As Python 3.13 provides a free-threaded option, we can compare the performance of the standard version of Python 3.13 to the free-threaded version.
Install free-threaded Python
We’ll use pyenv to install both versions: the standard (e.g. 3.13.5) and the free-threaded version (e.g. 3.13.5t).
Alternatively, you can also use the installers on Python.org. Make sure you select the Customize option during installation and check the additional box to install free-threaded Python (see the example in this blog post).
After installing both versions, we can add them as interpreters in a PyCharm project.
First, click on the name of your Python interpreter on the bottom right.

Select Add New Interpreter in the menu and then Add Local Interpreter.

Choose Select existing, wait for the interpreter path to load (which may take a while if you have a lot of interpreters like I do), and then select the new interpreter you just installed from the drop-down Python path menu.

Click OK to add it. Repeat the same steps for the other interpreter. Now, when you click on the interpreter name at the bottom right again, you will see multiple Python 3.13 interpreters, just like in the image above.
Testing with a CPU-bound process
Next, we need a script to test the different versions. Remember, we explained in part 1 of this blog post series that to speed up CPU-bound processes, we need true multithreading. To see if removing the GIL will enable true multithreading and make Python faster, we can test with a CPU-bound process on multiple threads. Here is the script I asked Junie to generate (with some final adjustments by me):
import time
import multiprocessing  # Kept for CPU count
from concurrent.futures import ThreadPoolExecutor
import sys


def is_prime(n):
    """Check if a number is prime (CPU-intensive operation)."""
    if n <= 1:
        return False
    if n <= 3:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True


def count_primes(start, end):
    """Count prime numbers in a range."""
    count = 0
    for num in range(start, end):
        if is_prime(num):
            count += 1
    return count


def run_single_thread(range_size, num_chunks):
    """Run the prime counting task in a single thread."""
    chunk_size = range_size // num_chunks
    total_count = 0
    start_time = time.time()
    for i in range(num_chunks):
        start = i * chunk_size + 1
        end = (i + 1) * chunk_size + 1 if i < num_chunks - 1 else range_size + 1
        total_count += count_primes(start, end)
    end_time = time.time()
    return total_count, end_time - start_time


def thread_task(start, end):
    """Task function for threads."""
    return count_primes(start, end)


def run_multi_thread(range_size, num_threads, num_chunks):
    """Run the prime counting task using multiple threads."""
    chunk_size = range_size // num_chunks
    total_count = 0
    start_time = time.time()
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        futures = []
        for i in range(num_chunks):
            start = i * chunk_size + 1
            end = (i + 1) * chunk_size + 1 if i < num_chunks - 1 else range_size + 1
            futures.append(executor.submit(thread_task, start, end))
        for future in futures:
            total_count += future.result()
    end_time = time.time()
    return total_count, end_time - start_time


def main():
    # Fixed parameters
    range_size = 1000000  # Range of numbers to check for primes
    num_chunks = 16       # Number of chunks to divide the work into
    num_threads = 4       # Fixed number of threads for multi-threading test

    print(f"Python version: {sys.version}")
    print(f"CPU count: {multiprocessing.cpu_count()}")
    print(f"Range size: {range_size}")
    print(f"Number of chunks: {num_chunks}")
    print("-" * 60)

    # Run single-threaded version as baseline
    print("Running single-threaded version (baseline)...")
    count, single_time = run_single_thread(range_size, num_chunks)
    print(f"Found {count} primes in {single_time:.4f} seconds")
    print("-" * 60)

    # Run multi-threaded version with fixed number of threads
    print(f"Running multi-threaded version with {num_threads} threads...")
    count, thread_time = run_multi_thread(range_size, num_threads, num_chunks)
    speedup = single_time / thread_time
    print(f"Found {count} primes in {thread_time:.4f} seconds (speedup: {speedup:.2f}x)")
    print("-" * 60)

    # Summary
    print("SUMMARY:")
    print(f"{'Threads':<10} {'Time (s)':<12} {'Speedup':<10}")
    print(f"{'1 (baseline)':<10} {single_time:<12.4f} {'1.00x':<10}")
    print(f"{num_threads:<10} {thread_time:<12.4f} {speedup:.2f}x")


if __name__ == "__main__":
    main()
To make it easier to run the script with different Python interpreters, we can add a custom run script to our PyCharm project.
At the top, select Edit Configurations… from the drop-down menu next to the Run button ().
Click on the + button in the top left, then choose Python from the Add New Configuration drop-down menu.

Choose a name that will allow you to tell which specific interpreter is being used, e.g. 3.13.5 versus 3.13.5t. Pick the right interpreter and add the name of the testing script like this:

Add two configurations, one for each interpreter. Then click OK.
Now we can easily select and run the test script with or without the GIL by selecting the configuration and clicking the Run button () at the top.
Comparing the results
This is the result I got when running the standard version 3.13.5 with the GIL:
Python version: 3.13.5 (main, Jul 10 2025, 20:33:15) [Clang 17.0.0 (clang-1700.0.13.5)]
CPU count: 8
Range size: 1000000
Number of chunks: 16
------------------------------------------------------------
Running single-threaded version (baseline)...
Found 78498 primes in 1.1930 seconds
------------------------------------------------------------
Running multi-threaded version with 4 threads...
Found 78498 primes in 1.2183 seconds (speedup: 0.98x)
------------------------------------------------------------
SUMMARY:
Threads      Time (s)     Speedup
1 (baseline) 1.1930       1.00x
4            1.2183       0.98x
As you see, there is no significant change in speed when running the version with 4 threads compared to the single-threaded baseline. Let’s see what we get when running the free-threaded version 3.13.5t:
Python version: 3.13.5 experimental free-threading build (main, Jul 10 2025, 20:36:28) [Clang 17.0.0 (clang-1700.0.13.5)]
CPU count: 8
Range size: 1000000
Number of chunks: 16
------------------------------------------------------------
Running single-threaded version (baseline)...
Found 78498 primes in 1.5869 seconds
------------------------------------------------------------
Running multi-threaded version with 4 threads...
Found 78498 primes in 0.4662 seconds (speedup: 3.40x)
------------------------------------------------------------
SUMMARY:
Threads      Time (s)     Speedup
1 (baseline) 1.5869       1.00x
4            0.4662       3.40x
This time, the speed was over 3 times as high. Notice that in both cases there is an overhead for multithreading. Therefore, even with true multithreading, the speed is not 4 times as high with 4 threads.
Conclusion
In part 2 of the Faster Python blog post series, we discussed the reason behind having the Python GIL in the past, side-stepping the limitation of the GIL using multiprocessing, and the process and effect of removing the GIL.
As of this blog post, the free-threaded version of Python is still not the default. However, with adoption by the community and third-party libraries, the free-threaded version of Python is expected to become the standard in the future. It has been announced that Python 3.14 will include a free-threaded version that is past the experimental stage but still optional.
PyCharm provides best-in-class Python support to ensure both speed and accuracy. Benefit from the smartest code completion, PEP 8 compliance checks, intelligent refactorings, and a variety of inspections to meet all of your coding needs. As demonstrated in this blog post, PyCharm provides custom settings for Python interpreters and run configurations, allowing you to switch between interpreters with only a few clicks, making it suitable for a wide range of Python projects.
Quansight Labs Blog
Learning from accessibility work
Years of accessibility work around Jupyter and thoughts on how to survive it in your own projects.
July 28, 2025
Ari Lamstein
Video: A Python App for Analyzing Immigration Enforcement Data
Last week I wrote a blog post about my latest open source project: an app that analyzes US Immigration Enforcement Data. I just released a video that walks through the app:
This video is in response to folks encouraging me to branch out beyond “just” having a blog. I’m still getting the hang of video creation, but I think this one is a big step forward from my last attempt.
If you enjoy it, I’d really appreciate a like on YouTube and a subscribe to the channel—both help the algorithm decide whether to show the video to others.
Thanks for watching and supporting the project.
Ned Batchelder
Coverage.py regex pragmas
Coverage.py lets you indicate code to exclude from measurement by adding comments to your Python files. But coverage implements them differently than other similar tools. Rather than having fixed syntax for these comments, they are defined using regexes that you can change or add to. This has been surprisingly powerful.
The basic behavior: coverage finds lines in your source files that match the regexes. These lines are excluded from measurement, that is, it’s OK if they aren’t executed. If a matched line is part of a multi-line statement the whole multi-line statement is excluded. If a matched line introduces a block of code the entire block is excluded.
At first, these regexes were just to make it easier to implement the basic “here’s the comment you use” behavior for pragma comments. But it also enabled pragma-less exclusions. You could decide (for example) that you didn’t care to test any `__repr__` methods. By adding `def __repr__` as an exclusion regex, all of those methods were automatically excluded from coverage measurement without having to add a comment to each one. Very nice.
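A configuration sketch of that idea, assuming a reasonably recent coverage.py that supports the `exclude_also` option in its report settings:

```ini
# .coveragerc
[report]
exclude_also =
    def __repr__
```

Older configurations use `exclude_lines`, which replaces the default patterns rather than adding to them, so `exclude_also` is usually the safer choice.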
Not only did this let people add custom exclusions in their projects, but it enabled third-party plugins that could configure regexes in other interesting ways:
- covdefaults adds a bunch of default exclusions, and also platform- and version-specific comment syntaxes.
- coverage-conditional-plugin gives you a way to create comment syntaxes for entire files, for whether other packages are installed, and so on.
Then about a year ago, Daniel Diniz contributed a change that amped up the power: regexes could match multi-line patterns. That might not sound like a large change, but it enabled much more powerful exclusions. As a sign of that, it made it possible to support four different feature requests.
To make it work, Daniel changed the matching code. Originally, it was a loop over the lines in the source file, checking each line for a match against the regexes. The new code uses the entire source file as the target string, and loops over the matches against that text. Each match is converted into a set of line numbers and added to the results.
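Here is a rough sketch of that idea, not coverage.py’s actual code: match each pattern against the whole file, then map every match span back to 1-based line numbers:

```python
import bisect
import re

def excluded_lines(source: str, patterns: list[str]) -> set[int]:
    """Map regex matches over the whole file back to 1-based line numbers."""
    # Character offset at which each line starts.
    line_starts = [0]
    for line in source.splitlines(keepends=True):
        line_starts.append(line_starts[-1] + len(line))

    lines: set[int] = set()
    for pattern in patterns:
        for match in re.finditer(pattern, source, re.MULTILINE):
            first = bisect.bisect_right(line_starts, match.start())
            last = bisect.bisect_right(line_starts, max(match.start(), match.end() - 1))
            lines.update(range(first, last + 1))
    return lines

# Example: the whole-file pragma discussed below excludes every line.
src = "x = 1\n# pragma: exclude file\ny = 2\n"
print(excluded_lines(src, [r"\A(?s:.*# pragma: exclude file.*)\Z"]))
# {1, 2, 3}
```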
The power comes from being able to use one pattern to match many lines. For example, one of the four feature requests was how to exclude an entire file. With configurable multi-line regex patterns, you can do this yourself:
\A(?s:.*# pragma: exclude file.*)\Z
With this regex, if you put the comment “# pragma: exclude file” in your source file, the entire file will be excluded. The `\A` and `\Z` match the start and end of the target text, which remember is the entire file. The `(?s:...)` means the s/DOTALL flag is in effect, so `.` can match newlines. This pattern matches the entire source file if the desired pragma is somewhere in the file.
Another requested feature was excluding code between two lines. We can use “# no cover: start” and “# no cover: stop” as delimiters with this regex:
# no cover: start(?s:.*?)# no cover: stop
Here `(?s:.*?)` means any number of any character at all, but as few as possible. A star in regexes means as many as possible, but star-question-mark means as few as possible. We need the minimal match so that we don’t match from the start of one pair of comments all the way through to the end of a different pair of comments.
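A quick way to see why the minimal match matters (a small sketch with made-up code between the markers):

```python
import re

source = """\
x = 1
# no cover: start
debug_helper()
# no cover: stop
y = 2
# no cover: start
other_debug()
# no cover: stop
"""

lazy = r"# no cover: start(?s:.*?)# no cover: stop"
greedy = r"# no cover: start(?s:.*)# no cover: stop"

print(len(re.findall(lazy, source)))    # 2: one match per delimited region
print(len(re.findall(greedy, source)))  # 1: swallows everything from the first
                                        #    start to the last stop, y = 2 included
```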
This regex approach is powerful, but is still fairly shallow. For example, either of these two examples would get the wrong lines if you had a string literal with the pragma text in it. There isn’t a regex that skips easily over string literals.
This kind of difficulty hit home when I added a new default pattern to exclude empty placeholder methods like this:
def not_yet(self): ...

def also_not_this(self):
    ...

async def definitely_not_this(
    self,
    arg1,
):
    ...
We can’t just match three dots, because ellipses can be used in other places than empty function bodies. We need to be more delicate. I ended up with:
^\s*(((async )?def .*?)?\)(\s*->.*?)?:\s*)?\.\.\.\s*(#|$)
This craziness ensures the ellipsis is part of an (async) def, that the ellipsis appears first in the body (but no docstring allowed, doh!), allows for a comment on the line, and so on. And even with a pattern this complex, it would incorrectly match this contrived line:
def f(): print("(well): ... #2 false positive!")
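You can check that pattern against the examples directly; a quick sketch, compiling it with MULTILINE so `^` and `$` anchor at line boundaries:

```python
import re

ELLIPSIS_BODY = re.compile(
    r"^\s*(((async )?def .*?)?\)(\s*->.*?)?:\s*)?\.\.\.\s*(#|$)",
    re.MULTILINE,
)

print(bool(ELLIPSIS_BODY.search("def not_yet(self): ...")))  # True
print(bool(ELLIPSIS_BODY.search("    ...")))                 # True: a bare body line
print(bool(ELLIPSIS_BODY.search(
    'def f(): print("(well): ... #2 false positive!")'
)))  # True: the contrived false positive
```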
So regexes aren’t perfect, but they’re a pretty good balance: flexible and powerful, and will work great on real code even if we can invent weird edge cases where they fail.
What started as a simple implementation expediency has turned into a powerful configuration option that has done more than I would have thought.
Real Python
Bitwise Operators in Python
Computers store all kinds of information as a stream of binary digits called bits. Whether you’re working with text, images, or videos, they all boil down to ones and zeros. Python’s bitwise operators let you manipulate those individual bits of data at the most granular level.
You can use bitwise operators to implement algorithms such as compression, encryption, and error detection, as well as to control physical devices in your Raspberry Pi project or elsewhere. Often, Python isolates you from the underlying bits with high-level abstractions. You’re more likely to find the overloaded flavors of bitwise operators in practice. But when you work with them in their original form, you’ll be surprised by their quirks!
By the end of this tutorial, you’ll understand that:
- Bitwise operators enable manipulation of individual bits, which is crucial for low-level data handling.
- You can read and write binary data in a platform-independent way using Python.
- Bitmasks pack and manipulate data efficiently within a single byte.
- Overloading bitwise operators allows custom data types to perform specific bitwise operations.
- You can embed secret messages in images using least-significant bit steganography.
To get the complete source code for the digital watermarking example you’ll use in this tutorial, and to extract a secret treat hidden in an image, click the link below:
Get Your Code: Click here to download the free sample code you’ll use to learn about bitwise operators in Python.
Take the Quiz: Test your knowledge with our interactive “Bitwise Operators in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz: Bitwise Operators in Python. Test your understanding of Python bitwise operators by revisiting core concepts like bitwise AND, OR, XOR, NOT, shifts, bitmasks, and their applications.
Overview of Python’s Bitwise Operators
Python comes with a few different kinds of operators such as the arithmetic, logical, and comparison operators. You can think of them as functions that take advantage of a more compact prefix and infix syntax.
Note: Python doesn’t include postfix operators like the increment (`i++`) or decrement (`i--`) operators available in C.
Bitwise operators look virtually the same across different programming languages:
| Operator | Example | Meaning |
|---|---|---|
| `&` | `a & b` | Bitwise AND |
| `\|` | `a \| b` | Bitwise OR |
| `^` | `a ^ b` | Bitwise XOR (exclusive OR) |
| `~` | `~a` | Bitwise NOT |
| `<<` | `a << n` | Bitwise left shift |
| `>>` | `a >> n` | Bitwise right shift |
As you can see, they’re denoted with strange-looking symbols instead of words. This makes them stand out in Python as slightly less verbose than what you might be used to. You probably wouldn’t be able to figure out their meaning just by looking at them.
Note: If you’re coming from another programming language such as Java, then you’ll immediately notice that Python is missing the unsigned right shift operator denoted by three greater-than signs (`>>>`).
This has to do with how Python represents integers internally. Since integers in Python can have an infinite number of bits, the sign bit doesn’t have a fixed position. In fact, there’s no sign bit at all in Python!
Most of the bitwise operators are binary, which means that they expect two operands to work with, typically referred to as the left operand and the right operand. Bitwise NOT (`~`) is the only unary bitwise operator since it expects just one operand.
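As a quick illustration (a small sketch, not from the tutorial), here are all six operators applied to two small integers:

```python
a, b = 0b1100, 0b1010   # 12 and 10

print(f"{a & b:04b}")   # 1000: bitwise AND
print(f"{a | b:04b}")   # 1110: bitwise OR
print(f"{a ^ b:04b}")   # 0110: bitwise XOR
print(f"{a << 2:b}")    # 110000: left shift by two
print(f"{a >> 2:04b}")  # 0011: right shift by two (zero-padded for display)
print(~a)               # -13: bitwise NOT; Python ints have no fixed width or sign bit
```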
All binary bitwise operators have a corresponding compound operator that performs an augmented assignment:
| Operator | Example | Equivalent to |
|---|---|---|
| `&=` | `a &= b` | `a = a & b` |
| `\|=` | `a \|= b` | `a = a \| b` |
| `^=` | `a ^= b` | `a = a ^ b` |
| `<<=` | `a <<= n` | `a = a << n` |
| `>>=` | `a >>= n` | `a = a >> n` |
These are shorthand notations for updating the left operand in place.
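For example, augmented assignments are a natural fit for updating bitmask-style flags in place; the flag names below are made up for illustration:

```python
# Hypothetical permission flags packed into a single integer.
READ, WRITE, EXEC = 0b100, 0b010, 0b001

perms = 0
perms |= READ | WRITE   # grant read and write -> 0b110
perms &= ~WRITE         # revoke write         -> 0b100
perms ^= EXEC           # toggle execute       -> 0b101

print(f"{perms:03b}")   # 101
```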
That’s all there is to Python’s bitwise operator syntax! Now you’re ready to take a closer look at each of the operators to understand where they’re most useful and how you can use them. First, you’ll get a quick refresher on the binary system before looking at two categories of bitwise operators: the bitwise logical operators and the bitwise shift operators.
Binary System in Five Minutes
Before moving on, take a moment to brush up your knowledge of the binary system, which is essential to understanding bitwise operators. If you’re already comfortable with it, then go ahead and jump to the Bitwise Logical Operators section below.
Why Use Binary?
There are an infinite number of ways to represent numbers. Since ancient times, people have developed different notations, such as Roman numerals and Egyptian hieroglyphs. Most modern civilizations use positional notation, which is efficient, flexible, and well suited for doing arithmetic.
A notable feature of any positional system is its base, which represents the number of digits available. People naturally favor the base-ten numeral system, also known as the decimal system, because it plays nicely with counting on fingers.
Computers, on the other hand, treat data as a bunch of numbers expressed in the base-two numeral system, more commonly known as the binary system. Such numbers are composed of only two digits—zero and one.
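Python makes it easy to hop between the two notations, which helps when following the examples in the full article (a quick illustration):

```python
print(bin(42))           # '0b101010'
print(f"{42:08b}")       # '00101010': zero-padded to eight binary digits
print(int("101010", 2))  # 42: parse a base-two string back into an integer
```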
Read the full article at https://realpython.com/python-bitwise-operators/ »
Quiz: Bitwise Operators in Python
In this quiz, you’ll test your understanding of the Bitwise Operators in Python.
By working through this quiz, you’ll revisit how to use Python’s bitwise AND (`&`), OR (`|`), XOR (`^`), NOT (`~`), left and right shifts (`<<`, `>>`), and bitmasks. You’ll also see practical examples for manipulating data at the bit level. Good luck!
PyCharm
Bringing Remote Closer to Local: 2025.2 Highlights