Planet Python
Last update: July 18, 2025 09:42 PM UTC
July 18, 2025
Mike Driscoll
Announcing Squall: A TUI SQLite Editor
Squall is a SQLite viewer and editor that runs in your terminal. Squall is written in Python and uses the Textual package. Squall allows you to view and edit SQLite databases using SQL. You can check out the code on GitHub.
Here is what Squall looks like using the Chinook database:
Currently, there is only one command-line option: `-f` or `--filename`, which allows you to pass a database path to Squall to load.
Example Usage:
squall -f path/to/database.sqlite
The instructions assume you have uv or pip installed. To install Squall, run one of the following commands:
uv tool install squall_sql
uv tool install git+https://github.com/driscollis/squall
If you want to upgrade to the latest version of Squall SQL, then you will want to run one of the following commands:
uv tool install git+https://github.com/driscollis/squall -U --force
pip install --upgrade squall-sql
If you have cloned the package and want to run Squall, one way to do so is to navigate to the cloned repository on your hard drive using your terminal. Then run the following command while inside the `src` folder:
python -m squall.squall
The post Announcing Squall: A TUI SQLite Editor appeared first on Mouse Vs Python.
The Python Coding Stack
Do You Really Know How `or` And `and` Work in Python?
Let's start with an easy question. Play along, please. I know you know how to use the `or` keyword, just bear with me for a bit…
Have you answered? If you haven't, please do, even if this is a simple question for you.
…Have you submitted your answer now?
I often ask this question when running live courses, and people are a bit hesitant to answer because it seems to be such a simple, even trivial, question. Most people eventually answer: `True`.
OK, let's dive further into how `or` works, and we'll also explore `and` in this article.
`or`
You may not have felt the need to cheat when answering the question above. But you could have just opened your Python REPL and typed in the expression. Let's try it:

Wait. What?!
The output is not `True`. Why `5`? Let's try it again with different operands:
Hmm?!
Truthy and Falsy
Let's review the concept of truthiness in Python. Every Python object is either truthy or falsy. When you pass a truthy object to the built-in `bool()`, you get `True`. And, you guessed it, you'll get `False` when you pass a falsy object to `bool()`.
In situations where Python is expecting a `True` or `False`, such as after the `if` or `while` keywords, Python will use the object's truthiness value if the object isn't a Boolean (`True` or `False`).
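You can explore truthiness directly in your REPL. Here's a minimal sketch; the objects chosen are just common examples of each kind:

```python
# Common falsy objects: bool() returns False for every one of them.
falsy_examples = [0, 0.0, "", [], {}, set(), None, False]

# Non-zero numbers, non-empty containers, and plain objects are truthy.
truthy_examples = [5, -1, "hello", [0], {"a": 1}, object()]

print(all(not bool(obj) for obj in falsy_examples))  # True
print(all(bool(obj) for obj in truthy_examples))     # True
```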
Back to `or`
Let's get back to the expression `5 or 0`. The integer `5` is truthy. You can confirm this by running `bool(5)`, which returns `True`. But `0` is falsy. In fact, `0` is the only falsy integer. Every other integer is truthy. Therefore, `5 or 0` should behave like `True`. If you write `if 5 or 0:`, you'll expect Python to execute the block of code after the `if` statement. And it does.
But you've seen that `5 or 0` evaluates to `5`. And `5` is not `True`. But it's truthy. So, the statement `if 5 or 0:` becomes `if 5:`, and since `5` is truthy, this behaves as if it were `if True:`.
But why does `5 or 0` give you `5`?
`or` Only Needs One Truthy Value
The `or` keyword looks at its two operands, the one before and the one after the `or` keyword. It only needs one of them to be true (by which I mean truthy) for the whole expression to be true (truthy).
So, what happens when you run the expression `5 or 0`? Python looks at the first operand, which is `5`. It's truthy, so the `or` expression simply gives back this value. It doesn't need to bother with the second operand because if the first operand is truthy, the value of the second operand is irrelevant. Recall that `or` only needs one operand to be truthy. It doesn't matter if only one or both operands are truthy.
So, what happens if the first operand is falsy?
The first of these expressions has one truthy and one falsy operand. But the first operand, `0`, is falsy. Therefore, the `or` expression must look at the second operand. It's truthy. The `or` expression gives back the second operand. Therefore, the output of the `or` expression is truthy. Great.
But the `or` expression doesn't return the second operand because the second operand is truthy. Instead, it returns the second operand because the first operand is falsy.
When the first operand in an `or` expression is falsy, the result of the `or` expression is determined solely by the second operand. If the second operand is truthy, then the `or` expression is truthy. But if the second operand is falsy, the whole `or` expression is falsy. Recall that the previous two sentences apply to the case when the first operand is falsy.
That's why the second example above, `0 or ""`, returns the empty string, which is the second operand. An empty string is falsy—try `bool("")` to confirm this. Any non-empty string is truthy.
So:
- `or` always evaluates to the first operand when the first operand is truthy
- `or` always evaluates to the second operand when the first operand is falsy
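These two rules can be summarised as an equivalence: `x or y` behaves like `x if x else y`, except that in the real `or` expression each operand is evaluated at most once. A small sketch to check this for a few operand pairs:

```python
def or_equivalent(x, y):
    # Mirrors the semantics of `x or y`: return the first operand
    # if it's truthy, otherwise return the second operand.
    return x if x else y

pairs = [(5, 0), ("hello", []), (0, 5), (0, ""), ("", "Unknown")]
for x, y in pairs:
    assert (x or y) == or_equivalent(x, y)

print(5 or 0)          # 5
print(repr(0 or ""))   # ''
```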
But there's more to this…
Lazy Evaluation • Short Circuiting
Let's get back to the expression `5 or 0`. The `or` looks at the first operand. It decides it's truthy, so its output is this first operand.
It never even looks at the second operand.
Do you want proof? Consider the following `or` expression:
What's bizarre about this code at first sight? The expression `int("hello")` is not valid since you can't convert the string `"hello"` to an integer. Let's confirm this:
But the `or` expression above, `5 or int("hello")`, didn't raise this error. Why?
Because Python never evaluated the second operand. Since the first operand, `5`, is truthy, Python decides to be lazy—it doesn't need to bother with the second operand. This is called short-circuit evaluation.
That's why `5 or int("hello")` doesn't raise the `ValueError` you might expect from the second operand.
However, if the first operand is falsy, then Python needs to evaluate the second operand:
In this case, you get the `ValueError` raised by the second operand.
Lazy is good (some will be pleased to read this). Python is being efficient when it evaluates expressions lazily. It saves time by avoiding the evaluation of expressions it doesn't need!
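You can make this laziness visible with a helper that records whether it was ever called. The `noisy` function below is hypothetical, just for illustration:

```python
calls = []

def noisy(value):
    # Record that the function actually ran, then return the argument.
    calls.append(value)
    return value

result = 5 or noisy("never evaluated")
print(result)  # 5
print(calls)   # [] -- the second operand was never evaluated

result = 0 or noisy("evaluated this time")
print(result)  # evaluated this time
print(calls)   # ['evaluated this time']
```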
`and`
How about the `and` keyword? The reasoning you need to use to understand `and` is similar to the one you used above when reading about `or`. But the logic is reversed. Let's try this out:
The `and` keyword requires both operands to be truthy for the whole expression to be true (truthy). In the first example above, `5 and 0`, the first operand is truthy. Therefore, `and` needs to also check the second operand. In fact, if the first operand in an `and` expression is truthy, the second operand will determine the value of the whole expression.
When the first operand is truthy, `and` always returns the second operand. In the first example, `5 and 0`, the second operand is `0`, which is falsy. So, the whole `and` expression is falsy.
But in the second example, `5 and "hello"`, the second operand is `"hello"`, which is truthy since it's a non-empty string. Therefore, the whole expression is truthy.
What do you think happens to the second operand when the first operand in an `and` expression is falsy?
The first operand is falsy. It doesn't matter what the second operand is, since `and` needs both operands to be truthy to evaluate to a truthy value.
And when the first operand in an `and` expression is falsy, Python's lazy evaluation kicks in again. The second operand is never evaluated. You have a short-circuit evaluation:
Once again, you use the invalid expression `int("hello")` as the second operand. This expression would raise an error when Python evaluates it. But, as you can see, the expression `0 and int("hello")` never raises this error since it never evaluates the second operand.
Let's summarise how `and` works:
- `and` always evaluates to the first operand when the first operand is falsy
- `and` always evaluates to the second operand when the first operand is truthy
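As with `or`, these rules can be written as an equivalence with the logic reversed: `x and y` behaves like `y if x else x` (again, with each operand of the real expression evaluated at most once). A quick sketch:

```python
def and_equivalent(x, y):
    # Mirrors `x and y`: return the first operand if it's falsy,
    # otherwise return the second operand.
    return y if x else x

pairs = [(5, 0), (5, "hello"), (0, 5), ("", "hello")]
for x, y in pairs:
    assert (x and y) == and_equivalent(x, y)

print(repr(5 and "hello"))  # 'hello'
print(repr(0 and 5))        # 0
```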
Compare this to the bullet point summary for the `or` expression earlier in this article.
Do you want to try video courses designed and delivered in the same style as these posts? You can get a free trial at The Python Coding Place and you also get access to a members-only forum.
More on Short-Circuiting
Here's code you may see that uses the `or` expression's short-circuiting behaviour:
Now, you're assigning the value of the `or` expression to a variable name, `person`. So, what will `person` hold?
Let's try this out in two scenarios:
In the first example, you type your name when prompted. Or you can type my name, whatever you want! Therefore, the call to `input()` returns a non-empty string, which is truthy. The `or` expression evaluates to this first operand, which is the return value of the `input()` call. So, `person` is the string returned by `input()`.
However, in the second example, you simply hit enter when prompted to type in a name. You leave the name field blank. In this case, `input()` returns the empty string, `""`. And an empty string is falsy. Therefore, `or` evaluates to the second operand, which is the string `"Unknown"`. This string is assigned to `person`.
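One caveat worth keeping in mind with this defaulting idiom: `or` swaps in the default for any falsy value, not just the empty string, so it can't distinguish "left blank" from a legitimately falsy value such as `0`. The function names below are hypothetical, just for illustration:

```python
def describe(count):
    # `or` replaces *any* falsy first operand with the default.
    return count or "no data"

print(describe(None))  # no data -- probably what you wanted
print(describe(0))     # no data -- but 0 may be a real, valid count!

def describe_explicit(count):
    # An explicit check only replaces the value you actually mean.
    return "no data" if count is None else count

print(describe_explicit(0))  # 0
```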
Final Words
So, `or` and `and` don't always evaluate to a Boolean. They'll evaluate to one of their two operands, which can be any object—any data type. Since all objects in Python are either truthy or falsy, it doesn't matter that `or` and `and` don't return Booleans!
Now you know!
Do you want to join a forum to discuss Python further with other Pythonistas? Upgrade to a paid subscription here on The Python Coding Stack to get exclusive access to The Python Coding Place's members' forum. More Python. More discussions. More fun.
And you'll also be supporting this publication. I put plenty of time and effort into crafting each article. Your support will help me keep this content coming regularly and, importantly, will help keep it free for everyone.
Image by Paolo Trabattoni from Pixabay
Code in this article uses Python 3.13
The code images used in this article are created using Snappify. [Affiliate link]
You can also support this publication by making a one-off contribution of any amount you wish.
For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!
Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.
And you can find out more about me at stephengruppetta.com
Further reading related to this article’s topic:
Appendix: Code Blocks
Code Block #1
5 or 0
# 5
Code Block #2
"hello" or []
# 'hello'
Code Block #3
0 or 5
# 5
0 or ""
# ''
Code Block #4
5 or int("hello")
# 5
Code Block #5
int("hello")
# Traceback (most recent call last):
# File "<input>", line 1, in <module>
# ValueError: invalid literal for int() with base 10: 'hello'
Code Block #6
0 or int("hello")
# Traceback (most recent call last):
# File "<input>", line 1, in <module>
# ValueError: invalid literal for int() with base 10: 'hello'
Code Block #7
5 and 0
# 0
5 and "hello"
# 'hello'
Code Block #8
0 and 5
# 0
Code Block #9
0 and int("hello")
# 0
Code Block #10
person = input("Enter name: ") or "Unknown"
Code Block #11
person = input("Enter name: ") or "Unknown"
# Enter name: >? Stephen
person
# 'Stephen'
person = input("Enter name: ") or "Unknown"
# Enter name: >?
person
# 'Unknown'
Talk Python to Me
#514: Python Language Summit 2025
Every year the core developers of Python convene in person to focus on high priority topics for CPython and beyond. This year they met at PyCon US 2025. Those meetings are closed door to keep focused and productive. But we're lucky that Seth Michael Larson was in attendance and wrote up each topic presented and the reactions and feedback to each. We'll be exploring this year's Language Summit with Seth. It's quite insightful to where Python is going and the pressing matters.<br/> <br/> <strong>Episode sponsors</strong><br/> <br/> <a href='https://talkpython.fm/seer'>Seer: AI Debugging, Code TALKPYTHON</a><br> <a href='https://talkpython.fm/sentryagents'>Sentry AI Monitoring, Code TALKPYTHON</a><br> <a href='https://talkpython.fm/training'>Talk Python Courses</a><br/> <br/> <h2 class="links-heading">Links from the show</h2> <div><strong>Seth on Mastodon</strong>: <a href="https://fosstodon.org/@sethmlarson" target="_blank" >@sethmlarson@fosstodon.org</a><br/> <strong>Seth on Twitter</strong>: <a href="https://twitter.com/sethmlarson?featured_on=talkpython" target="_blank" >@sethmlarson</a><br/> <strong>Seth on Github</strong>: <a href="https://github.com/sethmlarson?featured_on=talkpython" target="_blank" >github.com</a><br/> <br/> <strong>Python Language Summit 2025</strong>: <a href="https://pyfound.blogspot.com/2025/06/python-language-summit-2025.html?featured_on=talkpython" target="_blank" >pyfound.blogspot.com</a><br/> <strong>WheelNext</strong>: <a href="https://wheelnext.dev/?featured_on=talkpython" target="_blank" >wheelnext.dev</a><br/> <strong>Free-Threaded Wheels</strong>: <a href="https://hugovk.github.io/free-threaded-wheels/?featured_on=talkpython" target="_blank" >hugovk.github.io</a><br/> <strong>Free-Threaded Python Compatibility Tracking</strong>: <a href="https://py-free-threading.github.io/tracking/?featured_on=talkpython" target="_blank" >py-free-threading.github.io</a><br/> <strong>PEP 779: Criteria for supported status for free-threaded 
Python</strong>: <a href="https://discuss.python.org/t/pep-779-criteria-for-supported-status-for-free-threaded-python/84319/123?featured_on=talkpython" target="_blank" >discuss.python.org</a><br/> <strong>PyPI Data</strong>: <a href="https://py-code.org/?featured_on=talkpython" target="_blank" >py-code.org</a><br/> <strong>Senior Engineer tries Vibe Coding</strong>: <a href="https://www.youtube.com/watch?v=_2C2CNmK7dQ&ab_channel=Programmersarealsohuman" target="_blank" >youtube.com</a><br/> <strong>Watch this episode on YouTube</strong>: <a href="https://www.youtube.com/watch?v=t7Ov3ICo8Kc" target="_blank" >youtube.com</a><br/> <strong>Episode #514 deep-dive</strong>: <a href="https://talkpython.fm/episodes/show/514/python-language-summit-2025#takeaways-anchor" target="_blank" >talkpython.fm/514</a><br/> <strong>Episode transcripts</strong>: <a href="https://talkpython.fm/episodes/transcript/514/python-language-summit-2025" target="_blank" >talkpython.fm</a><br/> <strong>Developer Rap Theme Song: Served in a Flask</strong>: <a href="https://talkpython.fm/flasksong" target="_blank" >talkpython.fm/flasksong</a><br/> <br/> <strong>--- Stay in touch with us ---</strong><br/> <strong>Subscribe to Talk Python on YouTube</strong>: <a href="https://talkpython.fm/youtube" target="_blank" >youtube.com</a><br/> <strong>Talk Python on Bluesky</strong>: <a href="https://bsky.app/profile/talkpython.fm" target="_blank" >@talkpython.fm at bsky.app</a><br/> <strong>Talk Python on Mastodon</strong>: <a href="https://fosstodon.org/web/@talkpython" target="_blank" ><i class="fa-brands fa-mastodon"></i>talkpython</a><br/> <strong>Michael on Bluesky</strong>: <a href="https://bsky.app/profile/mkennedy.codes?featured_on=talkpython" target="_blank" >@mkennedy.codes at bsky.app</a><br/> <strong>Michael on Mastodon</strong>: <a href="https://fosstodon.org/web/@mkennedy" target="_blank" ><i class="fa-brands fa-mastodon"></i>mkennedy</a><br/></div>
Matt Layman
Enhancing Chatbot State Management with LangGraph
Picture this: it’s late and I’m deep in a coding session, wrestling with a chatbot that’s starting to feel more like a living thing than a few lines of Python. Today’s mission? Supercharge the chatbot’s ability to remember and verify user details like names and birthdays using LangGraph. Let’s unpack the journey, from shell commands to Git commits, and see how this bot got a memory upgrade. For clarity, this is my adventure running through the LangGraph docs.
July 17, 2025
Wingware
Wing Python IDE Version 11.0.2 - July 17, 2025
Wing Python IDE version 11.0.2 is now available. It improves source code analysis, avoids multiple duplicate evaluation of values in the Watch tool, fixes ruff as an external code checker in the Code Warnings tool, and makes a few other minor improvements.

Downloads
Wing 10 and earlier versions are not affected by installation of Wing 11 and may be installed and used independently. However, project files for Wing 10 and earlier are converted when opened by Wing 11 and should be saved under a new name, since Wing 11 projects cannot be opened by older versions of Wing.
New in Wing 11
Improved AI Assisted Development
Wing 11 improves the user interface for AI assisted development by introducing two separate tools AI Coder and AI Chat. AI Coder can be used to write, redesign, or extend code in the current editor. AI Chat can be used to ask about code or iterate in creating a design or new code without directly modifying the code in an editor.
Wing 11's AI assisted development features now support not just OpenAI but also Claude, Grok, Gemini, Perplexity, Mistral, Deepseek, and any other OpenAI completions API compatible AI provider.
This release also improves setting up AI request context, so that both automatically and manually selected and described context items may be paired with an AI request. AI request contexts can now be stored, optionally so they are shared by all projects, and may be used independently with different AI features.
AI requests can now also be stored in the current project or shared with all projects, and Wing comes preconfigured with a set of commonly used requests. In addition to changing code in the current editor, stored requests may create a new untitled file or run instead in AI Chat. Wing 11 also introduces options for changing code within an editor, including replacing code, commenting out code, or starting a diff/merge session to either accept or reject changes.
Wing 11 also supports using AI to generate commit messages based on the changes being committed to a revision control system.
You can now also configure multiple AI providers for easier access to different models.
For details see AI Assisted Development under Wing Manual in Wing 11's Help menu.
Package Management with uv
Wing Pro 11 adds support for the uv package manager in the New Project dialog and the Packages tool.
For details see Project Manager > Creating Projects > Creating Python Environments and Package Manager > Package Management with uv under Wing Manual in Wing 11's Help menu.
Improved Python Code Analysis
Wing 11 improves code analysis of literals such as dicts and sets, parametrized type aliases, typing.Self, type of variables on the def or class line that declares them, generic classes with [...], __all__ in *.pyi files, subscripts in typing.Type and similar, type aliases, and type hints in strings.
Updated Localizations
Wing 11 updates the German, French, and Russian localizations, and introduces a new experimental AI-generated Spanish localization. The Spanish localization and the new AI-generated strings in the French and Russian localizations may be accessed with the new User Interface > Include AI Translated Strings preference.
Improved diff/merge
Wing Pro 11 adds floating buttons directly between the editors to make navigating differences and merging easier, allows undoing previously merged changes, and does a better job managing scratch buffers, scroll locking, and sizing of merged ranges.
For details see Difference and Merge under Wing Manual in Wing 11's Help menu.
Other Minor Features and Improvements
Wing 11 also improves the custom key binding assignment user interface, adds a Files > Auto-Save Files When Wing Loses Focus preference, warns immediately when opening a project with an invalid Python Executable configuration, allows clearing recent menus, expands the set of available special environment variables for project configuration, and makes a number of other bug fixes and usability improvements.
Changes and Incompatibilities
Since Wing 11 replaced the AI tool with AI Coder and AI Chat, and AI configuration is completely different than in Wing 10, you will need to reconfigure your AI integration manually in Wing 11. This is done with Manage AI Providers in the AI menu. After adding the first provider configuration, Wing will set that provider as the default. You can switch between providers with Switch to Provider in the AI menu.
If you have questions, please don't hesitate to contact us at support@wingware.com.
July 16, 2025
Real Python
Python Scope and the LEGB Rule: Resolving Names in Your Code
The scope of a variable in Python determines where in your code that variable is visible and accessible. Python has four general scope levels: local, enclosing, global, and built-in. When searching for a name, Python goes through these scopes in order. It follows the LEGB rule, which stands for Local, Enclosing, Global, and Built-in.
Understanding how Python manages the scope of variables and names is a fundamental skill for you as a Python developer. It helps you avoid unexpected behavior and errors related to name collisions or referencing the wrong variable.
By the end of this tutorial, you’ll understand that:
- A scope in Python defines where a variable is accessible, following the local, enclosing, global, and built-in (LEGB) rule.
- A namespace is a dictionary that maps names to objects and determines their scope.
- The four scope levels—local, enclosing, global, and built-in—each control variable visibility in a specific context.
- Common scope-related built-in functions include `globals()` and `locals()`, which provide access to global and local namespaces.
To get the most out of this tutorial, you should be familiar with Python concepts like variables, functions, inner functions, exception handling, comprehensions, and classes.
Get Your Code: Click here to download the free sample code that you’ll use to learn about Python scope and the LEGB rule.
Understanding the Concept of Scope
In programming, the scope of a name defines the region of a program where you can unambiguously access that name, which could identify a variable, constant, function, class, or any other object. In most cases, you’ll only be able to access a name within its own scope or from an inner or nested scope.
Nearly all programming languages use the concept of scope to avoid name collisions and unpredictable behavior. Most often, you’ll distinguish between two main types of scope:
- Global scope: Names in this scope are available to all your code.
- Local scope: Names in this scope are only available or visible to the code within the scope.
Scope came about because early programming languages like BASIC only had global names. With this type of name, any part of the program could modify any variable at any time, making large programs difficult to maintain and debug. To work with global names, you’d need to keep all the code in mind to know what value a given name refers to at any time. This is a major side effect of not having scopes and relying solely on global names.
Modern languages, like Python, use the concept of variable scoping to avoid this kind of issue. When you use a language that implements scopes, you won’t be able to access all the names in a program from all locations. Instead, your ability to access a name depends on its scope.
Note: In this tutorial, you’ll be using the term name to refer to the identifiers of variables, constants, functions, classes, or any other object that can be assigned a name.
The names in your programs take on the scope of the code block in which you define them. When you can access a name from somewhere in your code, then the name is in scope. If you can’t access the name, then the name is out of scope.
Names and Scopes in Python
Because Python is a dynamically typed language, its variables come into existence when you first assign them a value. Similarly, functions and classes are available after you define them using `def` or `class`, respectively. Finally, modules exist after you import them into your current scope.
You can create names in Python using any of the following operations:
| Operation | Example |
|---|---|
| Assignment | `variable = value` |
| Import | `import module` or `from module import name` |
| Function definition | `def func(): pass` |
| Function argument | `func(value1, value2, ..., valueN)` |
| Class definition | `class DemoClass: pass` |
These are all ways to assign a value to either a variable, constant, function, class, instance, or module. In each case, you end up with a name that has a specific scope. This scope will depend on where in your code you’ve defined the name at hand.
Note: There’s an important difference between assignment operations and reference or access operations. When you assign a name, you’re either creating that name or making it reference a different object. When you reference a name, you’re retrieving the value that the name points to.
Python uses the location of a name definition to associate it with a particular scope. In other words, the place in which you define a name in your code determines the scope or visibility of that name.
For example, if you define a name inside a function, then that name will have a local scope. You can only access the name locally within the function implementation. In contrast, if you define a name at the top level of a module, then that name will have a global scope. You’ll be able to access it from anywhere in your code.
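A minimal sketch of the local vs global distinction described above (the names `language` and `framework` are just placeholders):

```python
language = "Python"  # defined at module level: global scope

def show():
    framework = "Django"  # defined inside the function: local scope
    # Both names are visible here: `framework` is resolved locally,
    # `language` is found in the global scope.
    return f"{framework} is written in {language}"

print(show())  # Django is written in Python

try:
    framework  # the local name doesn't leak out of the function
except NameError:
    print("framework is not accessible at module level")
```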
Scope vs Namespace in Python
The concept of scope is closely related to the concept of namespace. A scope determines the visibility and lifetime of names, while a namespace provides the place where those names are stored.
Python implements namespaces as dictionaries that map names to objects. These dictionaries are the underlying mechanism that Python uses to store names under a specific scope. You can often access them through the `.__dict__` attribute of the owning object.
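You can inspect these namespace dictionaries directly. A quick sketch using a toy class and function:

```python
x = 42
# globals() returns the module's namespace dictionary.
assert globals()["x"] == 42

class Demo:
    # Class attributes live in the class's own namespace,
    # exposed through .__dict__ (a read-only mapping proxy).
    attr = "hello"

assert Demo.__dict__["attr"] == "hello"

def func():
    y = 10
    # locals() returns a snapshot of the function's local namespace.
    return locals()

assert func() == {"y": 10}
print("each scope is backed by a namespace dictionary")
```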
Read the full article at https://realpython.com/python-scope-legb-rule/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Mike Driscoll
An Intro to Asciimatics – Another Python TUI Package
Text-based user interfaces (TUIs) have gained significant popularity in recent years. Even Rust has its own library called Ratatui after all. Python has several different TUI packages to choose from. One of those packages is called Asciimatics.
While Asciimatics is not as full-featured and slick as Textual is, you can do quite a bit with Asciimatics. In fact, there is a special kind of charm to the old-school flavor of the TUIs that you can create using Asciimatics.
In this tutorial, you will learn the basics of Asciimatics:
- Installation
- Creating a Hello World application
- Creating a form
The purpose of this tutorial is not to be exhaustive, but to give you a sense of how easy it is to create a user interface with Asciimatics. Be sure to read the complete documentation and check out their examples to learn more.
For now, let’s get started!
Installation
Asciimatics is a third-party Python package. What that means is that Asciimatics is not included with Python. You will need to install it. You should use a Python virtual environment for installing packages or creating new applications.
Whether you use the virtual environment or not, you can use pip to install Asciimatics:
python -m pip install asciimatics
Once Asciimatics is installed, you can proceed to creating a Hello World application.
Creating a Hello World Application
Creating a simple application is a concrete way to learn how to use an unfamiliar package. You will create a fun little application that “prints” out “Hello from Asciimatics” multiple times and in multiple colors.
Open up your favorite Python IDE or text editor, create a new file called `hello_asciimatics.py`, and then add the following code to it:
from random import randint

from asciimatics.screen import Screen


def hello(screen: Screen):
    while True:
        screen.print_at(
            "Hello from ASCIIMatics",
            randint(0, screen.width),
            randint(0, screen.height),
            colour=randint(0, screen.colours - 1),
            bg=randint(0, screen.colours - 1),
        )
        key = screen.get_key()
        if key in (ord("Q"), ord("q")):
            return
        screen.refresh()


Screen.wrapper(hello)
This code takes in an Asciimatics `Screen` object. You draw your text on the screen. In this case, you use the screen's `print_at()` method to draw the text. You use Python's handy `random` module to choose random coordinates in your terminal to draw the text, as well as to choose random foreground and background colors.
You run this inside an infinite loop. Since the loop runs indefinitely, the text gets drawn all over the screen, with each iteration writing over the top of text from previous iterations.
If the user presses the “Q” button on their keyboard, the application will break out of the loop and exit.
When you run this code, you should see something like this:
Isn’t that neat? Give it a try on your machine and verify that it works.
Now you are ready to create a form!
Creating a Form
When you want to ask the user for some information, you will usually use a form. You will find that this is true in web, mobile and desktop applications.
To make this work in Asciimatics, you will need a way to organize your widgets. To do that, you create a `Layout` object. You will find that Asciimatics follows a hierarchy of Screen -> Scene -> Effects, and then layouts and widgets.
All of this is kind of abstract, though. So to make it easier to understand, you will write some code. Open up your Python IDE and create another new file. Name this new file `ascii_form.py` and then add this code to it:
from asciimatics.exceptions import StopApplication
from asciimatics.scene import Scene
from asciimatics.screen import Screen
from asciimatics.widgets import Frame, Button, Layout, Text


class Form(Frame):
    def __init__(self, screen):
        super().__init__(
            screen,
            screen.height * 2 // 3,
            screen.width * 2 // 3,
            hover_focus=True,
            can_scroll=False,
            title="Contact Details",
            reduce_cpu=True,
        )
        layout = Layout([100], fill_frame=True)
        self.add_layout(layout)
        layout.add_widget(Text("Name:", "name"))
        layout.add_widget(Text("Address:", "address"))
        layout.add_widget(Text("Phone number:", "phone"))
        layout.add_widget(Text("Email address:", "email"))
        button_layout = Layout([1, 1, 1, 1])
        self.add_layout(button_layout)
        button_layout.add_widget(Button("OK", self.on_ok), 0)
        button_layout.add_widget(Button("Cancel", self.on_cancel), 3)
        self.fix()

    def on_ok(self):
        print("User pressed OK")

    def on_cancel(self):
        raise StopApplication("User pressed cancel. Quitting!")


def main(screen: Screen):
    while True:
        scenes = [Scene([Form(screen)], -1, name="Main Form")]
        screen.play(scenes, stop_on_resize=True, start_scene=scenes[0], allow_int=True)


Screen.wrapper(main, catch_interrupt=True)
The `Form` is a subclass of `Frame`, which is an `Effect` in Asciimatics. In this case, you can think of the frame as a kind of window or dialog within your terminal.
The frame will contain your form. Within the frame, you create a `Layout` object and tell it to fill the frame. Next you add the widgets to the layout, which will add the widgets vertically, from top to bottom.
Then you create a second layout to hold two buttons: “OK” and “Cancel”. The second layout is defined as having four columns with a size of one. You will then add the buttons and specify which column the button should be put in.
To show the frame to the user, you add the frame to a Scene and then you play() it.
When you run this code, you should see something like the following:
Pretty neat, eh?
Now this example is great for demonstrating how to create a more complex user interface, but it doesn’t show how to get the data from the user, as you haven’t written any code to grab the contents of the Text widgets. However, you did see that when you created the buttons, you bound them to specific methods that get called when the user clicks on those buttons.
Wrapping Up
Asciimatics makes creating simple and complex applications for your terminal easy. However, the applications have a distinctly retro look to them that is reminiscent of the 1980s or even earlier. The applications are appealing in their own way, though.
This tutorial only scratches the surface of Asciimatics. For full details, you should check out their documentation.
If you want to create a more modern-looking user interface, you might want to check out Textual instead.
Related Reading
Want to learn how to create TUIs the modern way? Check out my book: Creating TUI Applications with Textual and Python.
Available at the following:
The post An Intro to Asciimatics – Another Python TUI Package appeared first on Mouse Vs Python.
Python Software Foundation
Affirm Your PSF Membership Voting Status
Every PSF voting-eligible Member (Supporting, Contributing, and Fellow) needs to affirm their membership to vote in this year’s election.
If you wish to vote in this year’s PSF Board election, you must affirm your intention to vote no later than Tuesday, August 26th, 2:00 pm UTC. This year’s Board Election vote begins Tuesday, September 2nd, 2:00 pm UTC, and closes on Tuesday, September 16th, 2:00 pm UTC.
You should have received an email from "psf@psfmember.org <Python Software Foundation>" with the subject "[Action Required] Affirm your PSF Membership voting intention for 2025 PSF Board Election" that contains information on how to affirm your voting status. If you were expecting to receive the email but have not (make sure to check your spam!), please email psf-elections@pyfound.org, and we’ll assist you. Please note: If you opted out of emails related to your membership, you did not receive this email.
Need to check your membership status?
Log on to psfmember.org and visit your PSF Member User Information page to see your membership record and status. If you are a voting-eligible member (active Supporting, Contributing, and Fellow members of the PSF) and do not already have a login, please create an account on psfmember.org and then email psf-elections@pyfound.org so we can link your membership to your account. Please ensure you have an account linked to your membership so that we can have the most up-to-date contact information for you in the future.
How to affirm your intention to vote
You can affirm your voting intention by following the steps in our video tutorial:
- Log in to psfmember.org
- Check your eligibility to vote (You must be a Contributing, Supporting, or Fellow member)
- Choose “Voting Affirmation” at the top right
- Select your preferred intention for voting in 2025
- Click the “Submit” button
PSF Bylaws
Section 4.2 of the PSF Bylaws requires that “Members of any membership class with voting rights must affirm each year to the corporation in writing that such member intends to be a voting member for such year.”
Our motivation is to ensure that our elections can meet quorum as required by Section 3.9 of our bylaws. As our membership has grown, we have seen that an increasing number of Contributing and Fellow members with indefinite membership do not engage with our annual election, making quorum difficult to reach.
An election that does not reach quorum is invalid. This would cause the whole voting process to be re-held, resulting in fewer voters and an undue amount of effort on the part of PSF Staff.
Recent updates to membership and voting
If you were formerly a Managing member, your membership has been updated to Contributing as of June 25th, 2025, per last year’s Bylaw change that merged Managing and Contributing memberships.
Per another recent Bylaw change that allows for simplifying the voter affirmation process by treating past voting activity as intent to continue voting, if you voted last year, you will automatically be added to the 2025 voter roll. Please note: If you removed or changed your email on psfmember.org, you may not automatically be added to this year's voter roll.
What happens next?
You’ll get an email from OpaVote with a ballot on or right before September 2nd, and then you can vote!
Check out our PSF Membership page to learn more. If you have questions about membership, nominations, or this year’s Board election, please email psf-elections@pyfound.org or join the PSF Discord for the upcoming Board Office Hours on August 12th, 9 PM UTC. You are also welcome to join the discussion about the PSF Board election on our forum.
July 15, 2025
PyCoder’s Weekly
Issue #690: JIT, __init__, dis, and That's Not It (July 15, 2025)
#690 – JULY 15, 2025
View in Browser »
Reflections on 2 Years of CPython’s JIT Compiler
Ken is one of the contributors to CPython’s JIT compiler. This retrospective talks about what is going well and what Ken thinks could be better with the JIT.
KEN JIN
What Is Python’s __init__.py For?
Learn to declare packages with Python’s __init__.py, set package variables, simplify imports, and understand what happens if this module is missing.
REAL PYTHON
[Live Event] Debugging AI Applications with Sentry
Join the Sentry team for the latest Sentry Build workshop on Debugging with Sentry AI using Seer, MCP, and Agent Monitoring. In this hands-on session, you’ll learn how to debug AI-integrated applications and agents with full-stack visibility. Join live on July 23rd →
SENTRY sponsor
Disassembling Python Code Using the dis Module
Look behind the scenes to see what happens when you run your Python (CPython) code by using the tools in the dis module.
THEPYTHONCODINGSTACK.COM
Articles & Tutorials
Run Coverage on Tests
Code coverage tools tell you just what parts of your programs got executed during test runs. They’re an important part of your test suite; without them, you may miss errors in your tests themselves. This post has two quick examples of just why you should use a coverage tool.
HUGO VAN KEMENADE
Python Software Foundation Bylaws Change
To comply with a variety of data privacy laws in the EU, UK, and California, the PSF is updating section 3.8 of the bylaws which formerly allowed any voting member to request a list of all members’ names and email addresses.
PYTHON SOFTWARE FOUNDATION
Happy 20th Birthday Django!
July 13th was the 20th anniversary of the first public commit to the Django code repository. In celebration, Simon has reposted his talk from the 10th anniversary on the history of the project.
SIMON WILLISON
330× Faster: Four Different Ways to Speed Up Your Code
There are many approaches to speeding up Python code; applying multiple approaches can make your code even faster. This post talks about four different ways you can achieve speed-up.
ITAMAR TURNER-TRAURING
Thinking About Running for the PSF Board? Let’s Talk!
It is that time of year, the PSF board elections are starting. If you’re thinking about running or want to know more, consider attending the office hours session on August 12th.
PYTHON SOFTWARE FOUNDATION
How Global Variables Work in Python Bytecode
To better understand how Python handles globals, this article walks through dynamic name resolution, the global store, and how monkey patching works at the bytecode level.
FROMSCRATCHCODE.COM • Shared by Tyler Green
Building a JIT Compiler for CPython
Talk Python To Me interviews Brandt Bucher, and they talk about the upcoming JIT compiler for Python and how it is different from JITs in other languages.
KENNEDY & BUCHER podcast
International Travel to DjangoCon US 2025
DjangoCon US is in Chicago on September 8-12. If you’re travelling there from outside the US, this article has details that may be helpful to you.
DJANGOCON US
Using DuckDB With Pandas, Parquet, and SQL
Learn about DuckDB’s in-process architecture and SQL capabilities which can enhance performance and simplify data handling.
KHUYEN TRAN • Shared by Ben Portz
Exploring Protocols in Python
Learn how Python’s protocols improve your use of type hints and static type checkers in this practical video course.
REAL PYTHON course
How to Use MongoDB in Python Flask
This article explores the benefits of MongoDB and how to use it in a Flask application.
FEDERICO TROTTA • Shared by AppSignal
Open Source Security Work Isn’t “Special”
Seth gave a keynote talk at the OpenSSF Community Day NA and spoke about how, in many open source projects, security is thought of in isolation and can be overwhelming to maintainers. This post from Seth is a summary of the talk and proposes changes to how we approach the security problem in open source.
SETH LARSON
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
July 16, 2025
REALPYTHON.COM
PyData Bristol Meetup
July 17, 2025
MEETUP.COM
PyLadies Dublin
July 17, 2025
PYLADIES.COM
Chattanooga Python User Group
July 18 to July 19, 2025
MEETUP.COM
IndyPy X IndyAWS: Python-Powered Cloud
July 22 to July 23, 2025
MEETUP.COM
PyOhio 2025
July 26 to July 28, 2025
PYOHIO.ORG
Happy Pythoning!
This was PyCoder’s Weekly Issue #690.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Mike Driscoll
Creating TUI Applications with Textual and Python is Released
Learn how to create text-based user interfaces (TUIs) using Python and the amazing Textual package.
Textual is a rapid application development framework for your terminal or web browser. You can build complex, sophisticated applications in your terminal. While terminal applications are text-based rather than pixel-based, they still provide fantastic user interfaces.
The Textual package allows you to create widgets in your terminal that mimic those used in a web or GUI application.
The goal of Creating TUI Applications with Textual and Python is to teach you how to use Textual to make striking applications of your own. The book’s first half will teach you everything you need to know to develop a terminal application.
The book’s second half has many small applications you will learn how to create. Each chapter also includes challenges to complete to help cement what you learn or give you ideas for continued learning.
Here are some of the applications you will create:
- A basic calculator
- A CSV viewer
- A Text Editor
- An MP3 player
- An ID3 Editor
- A Weather application
- A TUI for pre-commit
- RSS Reader
Where to Buy
You can purchase Creating TUI Applications with Textual and Python on the following websites:
Calculator
CSV Viewer
MP3 Player
Weather App
The post Creating TUI Applications with Textual and Python is Released appeared first on Mouse Vs Python.
Ruslan Spivak
Book Notes: The Dark Art of Linear Algebra by Seth Braver — Chapter 1 Review
“Mathematics is the art of reducing any problem to linear algebra.” — William Stein
If you’ve ever looked at a vector and thought, “Just a column of numbers, right?”, this chapter will change that. The Dark Art of Linear Algebra (aka DALA) by Seth Braver opens with one of the clearest intros I’ve read. Not every part clicks on the first pass, but the effort pays off. Paired with the author’s videos, this is a strong starting point whether you’re learning math for the first time or coming back to it with purpose.
As I wrote in Unlocking AI with Math and [Book Notes] Infinitesimals, Derivatives, and Beer – Full Frontal Calculus (Ch. 1), I’m not learning math to pass a test. I’m learning it to understand the machinery behind AI and robotics, and eventually build machines of my own. (That would be fun, right?)
That goal needs a solid grasp of linear algebra. And it starts with understanding what a vector really is. Not just how to work with vectors algebraically, but how they behave in space and fit into a larger structure.
This chapter helped me sharpen that understanding.
Chapter Notes
What’s a Vector?
The book makes it clear that the answer to this question will evolve as you go deeper into linear algebra. But Chapter 1 starts simple: a vector is an arrow. A geometric object. A displacement.
In the video that comes with the chapter, the author even says to forget everything you think you know about vectors. He introduces them geometrically, which makes them feel tangible and helps you see familiar algebraic ideas in a visual, spatial way.
Vector Addition
The book introduces vector addition visually. Once you see vectors as displacements or moves through space, the addition feels natural. Almost obvious.
Image source: DALA Ch1
The text doesn’t focus on vector subtraction, but there’s an exercise on it. The companion video shows two methods. One of them is subtraction by addition: flip the direction of the vector you want to subtract, then add. It reminded me of that Office scene where Andy says “addition by subtraction,” and Michael asks, “What does that even mean?” In that context, it’s just a throwaway phrase. But in vector math, subtraction by addition is a real method. Flip the vector, then add. If you’ve done engineering, you’ve likely seen this before.
Vector addition also follows familiar rules like commutativity and associativity. If those sound fuzzy, the book and video prove them using triangles and parallelograms. No heavy algebra, just geometry.
One nice bonus is that the commutative proof gives you another way to add vectors. Place both tails at the same point, draw a parallelogram, and the diagonal gives the sum. It’s clean and easy to visualize:
Stretching Vectors
Scalar multiplication is introduced as a way to stretch, shrink, or flip a vector, not just multiply its components.
The author even explains where the word scalar comes from. Numbers are called scalars because they scale vectors. I liked that he doesn’t assume you already know this.
To stretch a vector, multiply by 3.
To flip it, multiply by –1.
To collapse it, multiply by 0.
It’s easier to remember when you learn it by drawing instead of just computing.
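The three operations above translate directly into code. Here’s a throwaway sketch of my own (not from the book) that scales a vector’s components:

```python
def scale(k, v):
    # Scalar multiplication: multiply each component of v by the scalar k.
    return [k * c for c in v]

print(scale(3, [1, 2]))   # [3, 6]   -- stretched
print(scale(-1, [1, 2]))  # [-1, -2] -- flipped
print(scale(0, [1, 2]))   # [0, 0]   -- collapsed to the zero vector
```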
Standard Basis Vectors
Only after you’ve built a solid geometric understanding does the author introduce the standard basis vectors: i, j, and k. By then, it’s clear that 2i + 3j + 5k is just a weighted sum of familiar directions.
The chapter shows how to express vectors in ℝ² and ℝ³ using these basis vectors, and how to rewrite them in column form.
Length of Vectors
Be sure to watch the videos that go with this chapter. They walk you through finding the length of a vector visually.
You’ll start with the Pythagorean theorem to calculate the length of a vector in ℝ³, then extend the idea to ℝⁿ. The chapter also proves the general length formula when a vector is written in Cartesian coordinates. Neat.
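As a quick sanity check, the ℝⁿ length formula is easy to compute directly. This is my own sketch, not code from the book:

```python
import math

def length(v):
    # |v| = sqrt(v1^2 + v2^2 + ... + vn^2), the general length formula
    return math.sqrt(sum(c * c for c in v))

print(length([3, 4]))     # 5.0 -- the classic 3-4-5 triangle
print(length([1, 2, 2]))  # 3.0 -- the same formula works in R^3
```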
The Dot Product
The chapter defines the dot product using the same geometric approach as earlier sections, and it makes sense. But for me, it really clicked in the physics example where work is defined using the dot product. The author’s video made it even clearer.
In the screenshot above, I underlined “Thus we see that work, viewed in a more general setting, is simply a dot product” and scribbled “watch the video” in the margin. Just a reminder that the video is a great companion to the chapter.
The text then walks through key properties: commutativity, dotting a vector with itself, the distributive property, a test for perpendicularity, and how to compute the dot product in ℝ².
You could memorize the formula. But it’s much more satisfying to understand the parts and derive it from scratch. Like Einstein said, “Any fool can know. The point is to understand.”
Here’s a step-by-step derivation, written out in my notes:
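For readers following along, the usual route is via the law of cosines. This is my own sketch of that argument, which may differ in details from the notes in the book:

```latex
% Law of cosines on the triangle with sides a, b, and a - b:
\|\mathbf{a}-\mathbf{b}\|^2
  = \|\mathbf{a}\|^2 + \|\mathbf{b}\|^2
    - 2\,\|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta

% Expanding the left-hand side in coordinates (in R^2):
\|\mathbf{a}-\mathbf{b}\|^2
  = (a_1-b_1)^2 + (a_2-b_2)^2
  = \|\mathbf{a}\|^2 + \|\mathbf{b}\|^2 - 2\,(a_1 b_1 + a_2 b_2)

% Equating the two and cancelling gives the coordinate formula:
\|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta
  = a_1 b_1 + a_2 b_2
  = \mathbf{a}\cdot\mathbf{b}
```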
Thoughts and Tips
Like Full Frontal Calculus did for derivatives, this chapter tears vectors down to the basics and builds them back up. It does that visually, intuitively, and from first principles. It starts with geometry, not formulas. By the end, it’s clear that coordinates are just a way to describe vectors. They are not the vectors themselves.
Verdict: Highly recommend if you want a clear, visual grasp of what vectors really are. Especially if linear algebra has ever felt abstract, dry, or overly symbolic.
If you plan to read the chapter, these tips helped me get the most out of it:
- Read slowly. Then read slowly again. The material is clear, but it rewards focused attention. Grab a paperback if you can. Write in the margins. Make the book your own.
- Watch the author’s YouTube videos. The book explains the idea. The video often makes it stick. If you’re reading any of Braver’s math books, don’t skip the videos. They’re short, clear, and worth it.
- Don’t worry about the proofs. They’re explained in plain language, supported by visuals, and still rigorous. You don’t need a separate book on how to follow them. They just make sense.
- Brush up on your trig. Knowing how cosine works pays off when finding angles between vectors. It’s a small part of the chapter, but if you’re rusty, check out the trig section in Precalculus Made Difficult by the same author.
- Do the exercises. The book includes answers, which makes it great for self-study. But like in Full Frontal Calculus, the solutions are compact. Use ChatGPT or Grok (xAI) to expand on them when needed.
- Use spaced repetition. For ideas that are hard to keep in memory, try active recall. I use Anki, but any similar tool should work.
- Check out the book sample. The author offers a sample on his site. If you’re on the fence, it gives you a solid feel for the writing and style.
These pages and videos are exactly what I wish I had the first time I saw vectors. They make the concept click and give you a foundation you can build on, whether you’re starting fresh or coming back to review.
More to come. Stay tuned.
Originally published in my newsletter Beyond Basics. If you’d like to get future posts like this by email, you can subscribe here.
P.S. I’m not affiliated with the author. I just really enjoy his books and wanted to share that.
Real Python
Getting Started With marimo Notebooks
marimo notebooks redefine the notebook experience by offering a reactive environment that addresses the limitations of traditional linear notebooks. With marimo, you can seamlessly reproduce and share content while benefiting from automatic cell updates and a correct execution order. Discover how marimo’s features make it an ideal tool for documenting research and learning activities.
By the end of this video course, you’ll understand that:
- marimo notebooks automatically update dependent cells, ensuring consistent results across your work.
- Reactivity allows marimo to determine the correct running order of cells using a directed acyclic graph (DAG).
- Sandboxing in marimo creates isolated environments for notebooks, preventing package conflicts and ensuring reproducibility.
- You can add interactivity to marimo notebooks with UI elements like sliders and radio buttons.
- Traditional linear notebooks have inherent flaws, such as hidden state issues, that marimo addresses with its reactive design.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Ned Batchelder
2048: iterators and iterables
I wrote a low-tech terminal-based version of the classic 2048 game and had some interesting difficulties with iterators along the way.
2048 has a 4×4 grid with sliding tiles. Because the tiles can slide left or right and up or down, sometimes we want to loop over the rows and columns from 0 to 3, and sometimes from 3 to 0. My first attempt looked like this:
N = 4
if sliding_right:
cols = range(N-1, -1, -1) # 3 2 1 0
else:
cols = range(N) # 0 1 2 3
if sliding_down:
rows = range(N-1, -1, -1) # 3 2 1 0
else:
rows = range(N) # 0 1 2 3
for row in rows:
for col in cols:
...
This worked, but those counting-down ranges are ugly. Let’s make it nicer:
cols = range(N) # 0 1 2 3
if sliding_right:
cols = reversed(cols) # 3 2 1 0
rows = range(N) # 0 1 2 3
if sliding_down:
rows = reversed(rows) # 3 2 1 0
for row in rows:
for col in cols:
...
Looks cleaner, but it doesn’t work! Can you see why? It took me a bit of debugging to see the light.
range() produces an iterable: something that can be iterated over. Similar but different is that reversed() produces an iterator: something that is already iterating. Some iterables (like ranges) can be used more than once, creating a new iterator each time. But once an iterator like reversed() has been consumed, it is done. Iterating it again will produce no values.
If “iterable” vs “iterator” is already confusing here’s a quick definition: an iterable is something that can be iterated, that can produce values in a particular order. An iterator tracks the state of an iteration in progress. An analogy: the pages of a book are iterable; a bookmark is an iterator. The English hints at it: an iter-able is able to be iterated at some point, an iterator is actively iterating.
The outer loop of my double loop was iterating only once over the rows, so the row iteration was fine whether it was going forward or backward. But the columns were being iterated again for each row. If the columns were going forward, they were a range, a reusable iterable, and everything worked fine.
But if the columns were meant to go backward, they were a one-use-only iterator made by reversed(). The first row would get all the columns, but the other rows would try to iterate using a fully consumed iterator and get nothing.
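The exhaustion is easy to demonstrate in isolation (a quick REPL-style sketch of my own):

```python
cols = reversed(range(4))  # a one-shot iterator, not a reusable iterable
print(list(cols))  # first pass consumes it: [3, 2, 1, 0]
print(list(cols))  # second pass gets nothing: []
```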
The simple fix was to use list() to turn my iterator into a reusable iterable:
cols = list(reversed(cols))
The code was slightly less nice, but it worked. An even better fix was to change my doubly nested loop into a single loop:
for row, col in itertools.product(rows, cols):
That also takes care of the original iterator/iterable problem, so I can get rid of that first fix:
cols = range(N)
if sliding_right:
cols = reversed(cols)
rows = range(N)
if sliding_down:
rows = reversed(rows)
for row, col in itertools.product(rows, cols):
...
Once I had this working, I wondered why product() solved the iterator/iterable problem. The docs have a sample Python implementation that shows why: internally, product() is doing just what my list() call did: it makes an explicit iterable from each of the iterables it was passed, then picks values from them to make the pairs. This lets product() accept iterators (like my reversed range) rather than forcing the caller to always pass iterables.
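Boiled down for the two-iterable case, the idea behind the docs’ sample looks something like this (my simplified sketch, not the actual implementation):

```python
import itertools

def product2(iterable_a, iterable_b):
    # Materialize both inputs up front -- exactly what list() did above --
    # so even use-once iterators can be looped over repeatedly.
    pool_a = tuple(iterable_a)
    pool_b = tuple(iterable_b)
    for a in pool_a:
        for b in pool_b:
            yield (a, b)

pairs = list(product2(range(2), reversed(range(2))))
print(pairs)  # [(0, 1), (0, 0), (1, 1), (1, 0)]
```
The inner pool is reused for every value of the outer pool, which is why a use-once iterator like a reversed range is safe to pass in.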
If your head is spinning from all this iterable / iterator / iteration talk, I don’t blame you. Just now I said, “it makes an explicit iterable from each of the iterables it was passed.” How does that make sense? Well, an iterator is an iterable. So product() can take either a reusable iterable (like a range or a list) or it can take a use-once iterator (like a reversed range). Either way, it populates its own reusable iterables internally.
Python’s iteration features are powerful but sometimes require careful thinking to get right. Don’t overlook the tools in itertools, and mind your iterators and iterables!
• • •
Some more notes:
1: Another way to reverse a range: you can slice them!
>>> range(4)
range(0, 4)
>>> range(4)[::-1]
range(3, -1, -1)
>>> reversed(range(4))
<range_iterator object at 0x10307cba0>
It didn’t occur to me to reverse-slice the range, since reversed is right there, but the slice gives you a new reusable range object while reversing the range gives you a use-once iterator.
2: Why did product() explicitly store the values it would need but reversed did not? Two reasons: first, reversed() depends on the __reversed__ dunder method, so it’s up to the original object to decide how to implement it. Ranges know how to produce their values in backward order, so they don’t need to store them all. Second, product() is going to need to use the values from each iterable many times and can’t depend on the iterables being reusable.
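To see the __reversed__ hook in action, here’s a toy class of my own (not from the post) that, like range, can produce its values backward without buffering them:

```python
class Countdown:
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return iter(range(self.n))

    def __reversed__(self):
        # reversed() calls this, so no values need to be stored up front.
        return iter(range(self.n - 1, -1, -1))

print(list(Countdown(4)))            # [0, 1, 2, 3]
print(list(reversed(Countdown(4))))  # [3, 2, 1, 0]
```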
death and gravity
Inheritance over composition, sometimes
In ProcessThreadPoolExecutor: when I/O becomes CPU-bound, we built a hybrid concurrent.futures executor that runs tasks in multiple threads on all available CPUs, bypassing Python's global interpreter lock.
Here's some interesting reader feedback:
Currently, the code is complex due to subclassing and many layers of delegation. Could this solution be implemented using only functions, no classes? Intuitively I feel classes would be hell to debug.
Since a lot of advanced beginners struggle with structuring code, we'll implement the same executor using inheritance, composition, and functions only, compare the solutions, and reach some interesting conclusions. Consider this a worked example.
Note
Today we're focusing on code structure. While not required, reading the original article will give you a better idea of why the code does what it does.
Requirements #
Before we delve into the code, we should have some understanding of what we're building. The original article sets out the following functional requirements:
- Implement the Executor interface; we want a drop-in replacement for existing concurrent.futures executors, so that user code doesn't have to change.
- Spread the work to one worker process per CPU, and then further to multiple threads inside each worker, to work around CPU becoming a bottleneck for I/O.
Additionally, we have two implicit non-functional requirements:
- Use the existing executors where possible (less code means fewer bugs).
- Only depend on stable, documented features; we don't want our code to break when concurrent.futures internals change.
concurrent.futures #
Since we're building on top of concurrent.futures, we should also get familiar with it; the docs already provide a great introduction:
The concurrent.futures module provides a high-level interface for asynchronously executing callables. [...this] can be performed with threads, using ThreadPoolExecutor, or separate processes, using ProcessPoolExecutor. Both implement the same interface, which is defined by the abstract Executor class.
Let's look at the classes in more detail.
Executor is an abstract base class1 defined in concurrent.futures._base. It provides dummy submit() and shutdown() methods, a concrete map() method implemented in terms of submit(), and context manager methods that shutdown() the executor on exit. Notably, the documentation does not mention the concrete methods, instead saying that the class "should not be used directly, but through its concrete subclasses".
The first subclass, ThreadPoolExecutor, is defined in concurrent.futures.thread; it implements submit() and shutdown(), inheriting map() unchanged.
The second one, ProcessPoolExecutor, is defined in concurrent.futures.process; as an optimization, it overrides map() to chop the input iterables and pass the chunks to the superclass method with super().
Three solutions #
Now we're ready for code.
Inheritance #
First, the original implementation,2 arguably a textbook example of inheritance.
We override __init__, submit(), and shutdown(), and do some extra stuff on top of the inherited behavior, which we access through super(). We inherit the context manager methods, map(), and any public methods ProcessPoolExecutor may get in the future, assuming they use only other public methods (more on this below).
class ProcessThreadPoolExecutor(concurrent.futures.ProcessPoolExecutor):
def __init__(self, max_threads=None, initializer=None, initargs=()):
self.__result_queue = multiprocessing.Queue()
super().__init__(
initializer=_init_process,
initargs=(self.__result_queue, max_threads, initializer, initargs)
)
self.__tasks = {}
self.__result_handler = threading.Thread(target=self.__handle_results)
self.__result_handler.start()
def submit(self, fn, *args, **kwargs):
outer = concurrent.futures.Future()
task_id = id(outer)
self.__tasks[task_id] = outer
outer.set_running_or_notify_cancel()
inner = super().submit(_submit, task_id, fn, *args, **kwargs)
return outer
def __handle_results(self):
for task_id, ok, result in iter(self.__result_queue.get, None):
outer = self.__tasks.pop(task_id)
if ok:
outer.set_result(result)
else:
outer.set_exception(result)
def shutdown(self, wait=True):
super().shutdown(wait=wait)
if self.__result_queue:
self.__result_queue.put(None)
if wait:
self.__result_handler.join()
self.__result_queue.close()
self.__result_queue = None
Because we're subclassing a class with private, undocumented attributes, our private attributes have to start with double underscores to avoid clashes with superclass ones (such as _result_queue).
In addition to the main class, there are some global functions used in the worker processes which remain unchanged regardless of the solution:
# this code runs in each worker process
_executor = None
_result_queue = None
def _init_process(queue, max_threads, initializer, initargs):
global _executor, _result_queue
_executor = concurrent.futures.ThreadPoolExecutor(max_threads)
_result_queue = queue
if initializer:
initializer(*initargs)
def _submit(task_id, fn, *args, **kwargs):
task = _executor.submit(fn, *args, **kwargs)
task.task_id = task_id
task.add_done_callback(_put_result)
def _put_result(task):
if exception := task.exception():
_result_queue.put((task.task_id, False, exception))
else:
_result_queue.put((task.task_id, True, task.result()))
Composition #
OK, now let's use composition – instead of being a ProcessPoolExecutor, our ProcessThreadPoolExecutor has one. At a first glance, the result is the same as before, with super() changed to self._inner:
class ProcessThreadPoolExecutor:
def __init__(self, max_threads=None, initializer=None, initargs=()):
self._result_queue = multiprocessing.Queue()
self._inner = concurrent.futures.ProcessPoolExecutor(
initializer=_init_process,
initargs=(self._result_queue, max_threads, initializer, initargs)
)
self._tasks = {}
self._result_handler = threading.Thread(target=self._handle_results)
self._result_handler.start()
def submit(self, fn, *args, **kwargs):
outer = concurrent.futures.Future()
task_id = id(outer)
self._tasks[task_id] = outer
outer.set_running_or_notify_cancel()
inner = self._inner.submit(_submit, task_id, fn, *args, **kwargs)
return outer
def _handle_results(self):
for task_id, ok, result in iter(self._result_queue.get, None):
outer = self._tasks.pop(task_id)
if ok:
outer.set_result(result)
else:
outer.set_exception(result)
def shutdown(self, wait=True):
self._inner.shutdown(wait=wait)
if self._result_queue:
self._result_queue.put(None)
if wait:
self._result_handler.join()
self._result_queue.close()
self._result_queue = None
Except, we need to implement the context manager protocol ourselves:
def __enter__(self):
# concurrent.futures._base.Executor.__enter__
return self
def __exit__(self, exc_type, exc_val, exc_tb):
# concurrent.futures._base.Executor.__exit__
self.shutdown(wait=True)
return False
...and we need to copy map() from Executor, since it should use our submit():
def _map(self, fn, *iterables, timeout=None, chunksize=1):
# concurrent.futures._base.Executor.map
if timeout is not None:
end_time = timeout + time.monotonic()
fs = [self.submit(fn, *args) for args in zip(*iterables)]
def result_iterator():
try:
fs.reverse()
while fs:
if timeout is None:
yield _result_or_cancel(fs.pop())
else:
yield _result_or_cancel(fs.pop(), end_time - time.monotonic())
finally:
for future in fs:
future.cancel()
return result_iterator()
...and the chunksize optimization from its ProcessPoolExecutor version:
    def map(self, fn, *iterables, timeout=None, chunksize=1):
        # concurrent.futures.process.ProcessPoolExecutor.map
        if chunksize < 1:
            raise ValueError("chunksize must be >= 1.")
        results = self._map(partial(_process_chunk, fn),
                            itertools.batched(zip(*iterables), chunksize),
                            timeout=timeout)
        return _chain_from_iterable_of_lists(results)
def _result_or_cancel(fut, timeout=None):
    # concurrent.futures._base._result_or_cancel
    try:
        try:
            return fut.result(timeout)
        finally:
            fut.cancel()
    finally:
        del fut

def _process_chunk(fn, chunk):
    # concurrent.futures.process._process_chunk
    return [fn(*args) for args in chunk]

def _chain_from_iterable_of_lists(iterable):
    # concurrent.futures.process._chain_from_iterable_of_lists
    for element in iterable:
        element.reverse()
        while element:
            yield element.pop()
And, when the Executor interface gets new methods, we'll need to at least forward them to the inner executor, although we may have to copy those too.
On the upside, no base class means we can name attributes however we want.
But this is Python, why do we need to copy stuff? In Python, methods are just functions, so we could almost get away with this:
class ProcessThreadPoolExecutor:
    ...  # __init__, submit(), and shutdown() just as before

    __enter__ = ProcessPoolExecutor.__enter__
    __exit__ = ProcessPoolExecutor.__exit__
    map = ProcessPoolExecutor.map
Alas, it won't work – ProcessPoolExecutor's map() calls super().map(), and object, the superclass of our executor, has no such method, which is why we had to change it to self._map() in our copy in the first place.
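The problem is easy to reproduce with a toy example (hypothetical names): a borrowed method that uses zero-argument super() stays bound to the class it was defined in, so calling it on an instance of an unrelated class fails at runtime.

```python
class Parent:
    def map(self):
        # zero-argument super() is compiled against Parent's __class__ cell
        return super().map()

class Unrelated:
    map = Parent.map  # borrow the function, as we tried above

caught = None
try:
    Unrelated().map()
except TypeError as e:  # obj must be an instance or subtype of Parent
    caught = e
print(type(caught).__name__)  # TypeError
```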
Functions #
Can this be done using only functions, though?
Theoretically no, since we need to implement the executor interface. Practically yes, since this is Python, where an "interface" just means having specific attributes, usually functions with specific signatures. For example, a module like this:
def init(max_threads=None, initializer=None, initargs=()):
    global _result_queue, _inner, _tasks, _result_handler
    _result_queue = multiprocessing.Queue()
    _inner = concurrent.futures.ProcessPoolExecutor(
        initializer=_init_process,
        initargs=(_result_queue, max_threads, initializer, initargs)
    )
    _tasks = {}
    _result_handler = threading.Thread(target=_handle_results)
    _result_handler.start()

def submit(fn, *args, **kwargs):
    outer = concurrent.futures.Future()
    task_id = id(outer)
    _tasks[task_id] = outer
    outer.set_running_or_notify_cancel()
    inner = _inner.submit(_submit, task_id, fn, *args, **kwargs)
    return outer

def _handle_results():
    for task_id, ok, result in iter(_result_queue.get, None):
        outer = _tasks.pop(task_id)
        if ok:
            outer.set_result(result)
        else:
            outer.set_exception(result)

def shutdown(wait=True):
    global _result_queue
    _inner.shutdown(wait=wait)
    if _result_queue:
        _result_queue.put(None)
        if wait:
            _result_handler.join()
        _result_queue.close()
        _result_queue = None
We copy map() over too, with minor tweaks:
def _map(fn, *iterables, timeout=None, chunksize=1):
    # concurrent.futures._base.Executor.map
    if timeout is not None:
        end_time = timeout + time.monotonic()
    fs = [submit(fn, *args) for args in zip(*iterables)]

    def result_iterator():
        try:
            fs.reverse()
            while fs:
                if timeout is None:
                    yield _result_or_cancel(fs.pop())
                else:
                    yield _result_or_cancel(fs.pop(), end_time - time.monotonic())
        finally:
            for future in fs:
                future.cancel()

    return result_iterator()

def map(fn, *iterables, timeout=None, chunksize=1):
    # concurrent.futures.process.ProcessPoolExecutor.map
    if chunksize < 1:
        raise ValueError("chunksize must be >= 1.")
    results = _map(partial(_process_chunk, fn),
                   itertools.batched(zip(*iterables), chunksize),
                   timeout=timeout)
    return _chain_from_iterable_of_lists(results)
Behold, we can use the module itself as an executor:
>>> ptpe.init()
>>> ptpe.submit(int, '1').result()
1
Of note, everything that was an instance variable before is now a global variable; as a consequence, only one executor can exist at any given time, since there's only the one module.3 But it gets worse – calling init() a second time will clobber the state of the first executor, leading to all sorts of bugs; if we were serious, we'd prevent it somehow.
Also, some interfaces are more complicated than having the right functions; defining __enter__ and __exit__ is not enough to use a module in a with statement, since the interpreter looks them up on the class of the object, not on the object itself. We can work around this with an alternate "constructor" that returns a context manager:
@contextmanager
def init_cm(*args, **kwargs):
    init(*args, **kwargs)
    try:
        yield sys.modules[__name__]
    finally:
        shutdown()
>>> with ptpe.init_cm() as executor:
...     assert executor is ptpe
...     ptpe.submit(int, '2').result()
...
2
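The class-level lookup is easy to verify on any object (a minimal sketch; the exact exception type varies between Python versions):

```python
class Plain:
    pass

obj = Plain()
# instance attributes are ignored for special-method lookup
obj.__enter__ = lambda: obj
obj.__exit__ = lambda *exc: False

caught = None
try:
    with obj:
        pass
except (AttributeError, TypeError) as e:  # AttributeError before 3.11, TypeError after
    caught = e
print(type(caught).__name__)
```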
Comparison #
So, how do the solutions stack up? Here's a summary:
| pros | cons |
---|---|---|
inheritance | minimal code – we only override what differs | relies on ProcessPoolExecutor internals calling public methods |
composition | no base class, so attributes can be named freely; no reliance on internals | must copy __enter__, __exit__, map() and its helpers, keep them in sync, and forward any future Executor methods |
functions | ? | all the downsides of composition, plus global state: only one executor at a time, and a second init() clobbers the first |
I may be a bit biased, but inheritance looks like a clear winner.
Composition over inheritance #
Given that favoring composition over inheritance is usually a good practice, it's worth discussing why inheritance won this time. I see three reasons:
- Composition helps most when you have unrelated components that need to be flexible in response to an evolving business domain; that's not the case here, so we get all the drawbacks with none of the benefits.
- The existing code is designed for inheritance.
- We have a true is-a relationship – ProcessThreadPoolExecutor really is a ProcessPoolExecutor with extra behavior, and not just part of an arbitrary hierarchy.
For a different line of reasoning involving subtyping, check out Hillel Wayne's When to prefer inheritance to composition; he offers this rule of thumb:
So, here's when you want to use inheritance: when you need to instantiate both the parent and child classes and pass them to the same functions.
Forward compatibility #
The inheritance solution assumes map() and any future public ProcessPoolExecutor methods are implemented only in terms of other public methods. This assumption introduces a risk that updates may break our executor; this is lowered by two things:
- concurrent.futures is in the standard library, which rarely does major rewrites of existing code, and never within a minor (X.Y) version; concurrent.futures exists in its current form since Python 3.2, released in 2011.
- concurrent.futures is clearly designed for inheritance, even if mainly to enable internal reuse, and not explicitly documented.
As active mitigations, we can add a basic test suite (which we should do anyway), and document the supported Python versions explicitly (which we should do anyway if we were to release this on PyPI).
If concurrent.futures were not in the standard library, I'd probably go with the composition version instead, although as already mentioned, this wouldn't be free from upkeep either. Another option would be to upstream ProcessThreadPoolExecutor, so that it is maintained together with the code it depends on.
Global state #
The functions-only solution is probably the worst of the three, since it has all the downsides of composition, and significant limitations due to its use of global state.
We could avoid using globals by passing the state (process pool executor instance, result queue, etc.) as function arguments, but this breaks the executor interface, and makes for an awful user experience. We could group common arguments into a single object so there's only one argument to pass around; if you call that argument self, it becomes obvious that's just a class instance with extra steps.
Having to keep track of a bunch of related globals has enough downsides that even if you do want a module-level API, it's still worth using a class to group them, and exposing the methods of a global instance at module-level (like so); Brandon Rhodes discusses this at length in The Prebound Method Pattern.
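The pattern looks roughly like this (a generic sketch with made-up names, not the executor code):

```python
class _Registry:
    """Groups what would otherwise be loose module-level globals."""

    def __init__(self):
        self._items = []

    def register(self, item):
        self._items.append(item)

    def all_items(self):
        return list(self._items)

# one prebound global instance; only its bound methods are exposed
_instance = _Registry()
register = _instance.register
all_items = _instance.all_items

register("a")
register("b")
print(all_items())  # ['a', 'b']
```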
Complexity #
While the code is somewhat complex, that's mostly intrinsic to the problem itself (what runs in the main vs. worker processes, passing results around, error handling, and so on), rather than due to our use of classes, which only affects how we refer to ProcessPoolExecutor methods and how we store state.
One could argue that copying a bunch of code doesn't increase complexity, but if you factor in keeping it up to date and tested, it's not exactly free either.
One could also argue that building our executor on top of ProcessPoolExecutor is increasing complexity, and in a way that's true – for example, we have two result queues and had to deal with dead workers too, which wouldn't be the case if we wrote it from scratch; but in turn, that would come with having to understand, maintain, and test 800+ lines of low-level process management code. Sometimes, complexity I have to care about is more important than total complexity.
Debugging #
I have to come clean at this point – I use print debugging a lot 🙀 (especially if there are no tests yet, and sometimes from tests too); when that doesn't cut it, IPython's embed() usually provides enough interactivity to figure out what's going on.4
With the minimal test at the end of the file driving the executor, I used temporary print() calls in _submit(), _put_result(), and __handle_results() to check that data is making its way through properly; if I expected the code to change more often, I'd replace them with permanent logging calls.

In addition, there were two debugging scripts in the benchmark file that I didn't show: one to automate killing workers at the right time, and one to make sure shutdown() waits for pending tasks.
So, does how we wrote the code change any of this? Not really, no; all the techniques above (and using a debugger too) apply equally well. If anything, using classes makes interactive debugging easier, since it's easier to discover state via autocomplete (with functions only, you have to know to look it up on the module).
Try it out #
As I've said before, try it out – it only took ~10 minutes to convert the initial solution to the other two. In part, the right code structure is a matter of feeling and taste, and both are educated by reading and writing lots of code. If you think there's a better way to do something, do it and see how it looks; it's a sort of deliberate practice.
Learned something new today? Share this with others, it really helps!
Want to know when new articles come out? Subscribe here to get new stuff straight to your inbox!
Executor is an abstract base class only by convention: it is a base class (other classes are supposed to subclass it), and it is abstract (other classes are supposed to provide concrete implementations for some methods).
Python also allows formalizing abstract base classes using the abc module; see When to use classes in Python? When you repeat similar sets of functions for an example of this and other ways of achieving the same goal. [return]
For brevity, I'm using the version before dealing with dead workers; the final code is similar, but with a more involved __handle_results. [return]

This is almost true – we could "this is Python" our way deeper and reload the module while still keeping a reference to the old one, but that's just a round-about, unholy way of emulating class instances. [return]
Pro tip: you can use embed() as a breakpoint() hook: PYTHONBREAKPOINT=IPython.embed python myscript.py. [return]
Python Bytes
#440 Can't Register for VibeCon
Topics covered in this episode:
- Switching to direnv, Starship, and uv
- rqlite - Distributed SQLite DB
- A Python dict that can report which keys you did not use
- Some Markdown Stuff
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=AXcQsRZRd8k

About the show

Sponsored by PropelAuth: pythonbytes.fm/propelauth77

Connect with the hosts:
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Brian #1: Switching to direnv, Starship, and uv
- Last week I mentioned that I'm ready to try direnv again, but secretly, I still had some worries about the process. Thankfully, Trey has a tutorial to walk me past the troublesome parts.
- direnv - an extension for your shell. It augments existing shells with a new feature that can load and unload environment variables depending on the current directory.
- Switching from virtualenvwrapper to direnv, Starship, and uv - Trey Hunner
  - Trey has solved a bunch of the problems I had when I tried direnv before:
    - Show the virtual environment name in the prompt
    - Place new virtual environments in local .venv instead of in .direnv/python3.12
    - Silence all of the "loading", "unloading" statements every time you enter a directory
    - Have a script called venv to create an environment, activate it, and create a .envrc file (I'm more used to a create script, so I'll stick with that name and Trey's contents)
    - A workon script to be able to switch around to different projects (a carry-over from virtualenvwrapper, but seems cool; I'll take it)
    - Adding uv to the mix for creating virtual environments, interestingly including --seed which, for one, installs pip in the new environment (some tools need it, even if you don't)
  - Starship
    - Trey also has some setup for Starship. But I'll get through the above first, then MAYBE try Starship again.
    - Some motivation: Trey's setup is pretty simple (maybe I was trying to get too fancy before); Starship config lives in toml files that can be loaded with direnv and be different for different projects (neato); and Trey mentions his dotfiles repo, a cool idea that I've been meaning to do for a long time.
- See also: It's Terminal - Bootstrapping With Starship, Just, Direnv, and UV - Mario Munoz

Michael #2: rqlite - Distributed SQLite DB
- via themlu, thanks!
- rqlite is a lightweight, user-friendly, distributed relational database built on SQLite.
- Built on SQLite, the world's most popular database
- Supports full-text search, Vector Search, and JSON documents
- Access controls and encryption for secure deployments

Michael #3: A Python dict that can report which keys you did not use
- by Peter Bengtsson
- Very cool for testing that a dictionary has been used as expected (e.g. all data has been sent out via an API or report).
- Note: It does NOT track d.get(), but it's easy to just add it to the class in the post.
- Maybe someone should polish it up and put it on pypi (that person is not me :) ).

Brian #4: Some Markdown Stuff
- Textual 4.0.0 adds Markdown.append, which can be used to efficiently stream markdown content.
  - The reason for the major bump is an interface change to Widget.anchor.
  - Refreshing to see a semantic change cause a major version bump.
- html-to-markdown
  - Converts html to markdown
  - A complete rewrite fork of markdownify
  - Lots of fun features like "streaming support" (curious if it can stream to Textual's Markdown.append method, hmmm)

Joke: Vibecon is hard to attend
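The unused-keys dict from Michael's third item can be sketched in a few lines (a rough re-implementation of the idea, not Peter Bengtsson's actual code, with the suggested .get() tracking added):

```python
class TrackingDict(dict):
    """A dict that remembers which keys were read."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._accessed = set()

    def __getitem__(self, key):
        self._accessed.add(key)
        return super().__getitem__(key)

    def get(self, key, default=None):
        # the original post doesn't track .get(); adding it is this easy
        self._accessed.add(key)
        return super().get(key, default)

    def unused_keys(self):
        return set(self) - self._accessed

d = TrackingDict({"a": 1, "b": 2, "c": 3})
d["a"]
d.get("b")
print(d.unused_keys())  # {'c'}
```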
Programiz
Getting Started with Python
In this tutorial, you will learn to write your first Python program.
Seth Michael Larson
Email has algorithmic curation, too
Communication technologies should optimally be reliable, especially when both parties have opted-in to consistent reliable delivery. I don't want someone else to decide whether I receive a text message or email from a friend.
I associate "algorithmic curation" with social media platforms like TikTok, YouTube, Twitter, or Instagram. I don't typically think about email as a communication technology that contains algorithmic curation. Maybe that thinking should change?
Email for most people has algorithmic curation applied by their email provider. Email providers like Gmail automatically filter the email and decide which "category" the email ends up in, regardless of how much you trust the sender or if you have opted-in to their emails. Some of these categories are harmless, like "Social", where social media updates will be filtered into its own category but not hidden in any meaningful way.
The category that is destructive is one we know and love: "Spam". Spam filtering is usually a good thing, if you've ever looked in the folder you understand why it exists. However, many email providers don't give a way to opt-out of spam filtering, even for senders that have sent you hundreds of high-quality opted-in emails.
Where this is relevant is for email newsletters. I publish an email newsletter for this blog, and yet I would prefer you not use the newsletter and instead use RSS. If you enjoy the blog's content enough to get a notification when there's more, then you probably want delivery to be reliable.
My previous email was sent to the Spam folder for at least Gmail, and from reading the email I am not sure why this would be the case. The language isn't any different from the rest of my emails, and yet the number of deliveries and opens is less than half of a typical email.
As someone trying to communicate to readers, what am I supposed to learn or do in this situation? Just like with other algorithmically curated platforms, I feel like I'm at the mercy of a process that isn't understandable and prone to change without warning.
Reliable communication technologies like RSS are the answer. If you're a regular consumer of internet content I highly recommend installing an RSS feed reader. My personal recommendation (that I use and pay for) is Inoreader. You'd be surprised which platforms offer RSS as a reliable alternative to their typical curation approach, for example YouTube offers RSS feeds for channels.
As a web surfer I hope this article inspires you to choose a reliable communication technology like RSS when "subscribing" to internet creatives so you never miss another publication. If you're a publisher, providing your content through a reliable opt-in medium like RSS, Patreon, or even Discord means only you and your readers are in control of who sees your content.
July 14, 2025
Real Python
How to Debug Common Python Errors
Python debugging involves identifying and fixing errors in your code using tools like tracebacks, print() calls, breakpoints, and tests. In this tutorial, you’ll learn how to interpret error messages, use print() to track variable values, and set breakpoints to pause execution and inspect your code’s behavior. You’ll also explore how writing tests can help prevent errors and ensure your code runs as expected.
By the end of this tutorial, you’ll understand that:
- Debugging means identifying, analyzing, and resolving issues in your Python code using systematic approaches.
- Tracebacks are messages that help you pinpoint where errors occur in your code, allowing you to resolve them effectively.
- Using print() helps you track variable values and understand code flow, aiding in error identification.
- Breakpoints let you pause code execution to inspect and debug specific parts, improving error detection.
- Writing and running tests before or during development aids in catching errors early and ensures code reliability.
Understanding these debugging techniques will empower you to handle Python errors confidently and maintain efficient code.
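As a small taste of the breakpoint technique, here's a minimal sketch (the breakpoint() call is commented out so the script runs non-interactively):

```python
def average(numbers):
    total = sum(numbers)
    # breakpoint()  # uncomment to pause here and inspect `total` in pdb
    return total / len(numbers)

print(average([2, 4, 6]))  # 4.0
```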
Get Your Code: Click here to download the free sample code that shows you how to debug common Python errors.
Take the Quiz: Test your knowledge with our interactive “How to Debug Common Python Errors” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz: How to Debug Common Python Errors
Take this quiz to review core Python debugging techniques like reading tracebacks, using print(), and setting breakpoints to find and fix errors.
How to Get Started With Debugging in Python
Debugging means to unravel what is sometimes hidden. It’s the process of identifying, analyzing, and resolving issues, errors, or bugs in your code.
At its core, debugging involves systematically examining code to determine the root cause of a problem and implementing fixes to ensure the program functions as intended. Debugging is an essential skill for you to develop.
Debugging often involves using tools and techniques such as breakpoints, logging, and tests to achieve error-free and optimized performance of your code. In simpler terms, to debug is to dig through your code and error messages in an attempt to find the source of the problem, and then come up with a solution to the problem.
Say you have the following code:
cat.py
print(cat)
The code that prints the variable cat is saved in a file called cat.py. If you try to run the file, then you’ll get a traceback error saying that it can’t find the definition for the variable named cat:
$ python cat.py
Traceback (most recent call last):
File "/path_to_your_file/cat.py", line 1, in <module>
print(cat)
^^^
NameError: name 'cat' is not defined
When Python encounters an error during execution, it prints a traceback, which is a detailed message that shows where the problem occurred in your code. In this example, the variable named cat can’t be found because it hasn’t been defined.
Here’s what each part of this Python traceback means:
Part | Explanation
---|---
Traceback (most recent call last) | A generic message sent by Python to notify you of a problem with your code.
File "/path_to_your_file/cat.py" | This points to the file where the error originated.
line 1, in <module> | Tells you the exact line in the file where the error occurred.
print(cat) | Shows you the line of Python code that caused the error.
NameError | Tells you the kind of error it is. In this example, you have a NameError.
name 'cat' is not defined | This is the specific error message that tells you a bit more about what’s wrong with the piece of code.
In this example, the Python interpreter can’t find any prior definition of the variable cat and therefore can’t provide a value when you call print(cat). This is a common Python error that can happen when you forget to define variables with initial values.
To fix this error, you’ll need to take a step-by-step approach by reading the error message, identifying the problem, and testing solutions until you find one that works.
In this case, the solution would be to assign a value to the variable cat before the print call. Here’s an example:
cat.py
cat = "Siamese"
print(cat)
Notice that the error message disappears when you rerun your program, and the following output is printed:
$ python cat.py
Siamese
The text string stored in cat is printed as the code output. With this error resolved, you’re well on your way to quickly debugging errors in Python.
In the next sections, you’ll explore other approaches to debugging, but first, you’ll take a closer look at using tracebacks.
Read the full article at https://realpython.com/debug-python-errors/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: How to Debug Common Python Errors
In this quiz, you’ll test your understanding of How to Debug Common Python Errors.
Debugging means identifying, analyzing, and resolving issues in your Python code. You’ll revisit reading tracebacks, using print() for value tracking, setting breakpoints to pause execution, and writing tests to catch errors. Good luck!
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python Engineering at Microsoft
Announcing Full Cross-Platform Support for the mssql-python Driver
After the successful Public Preview release of the mssql-python driver, we’re thrilled to announce a major milestone: full support for all three major operating systems—Windows, macOS, and Linux. This release marks a significant leap forward in our mission to provide seamless, performant, and Pythonic connectivity to Microsoft SQL Server and the Azure SQL family.
Try it here: mssql-python
Linux Joins the Party
With this release, Linux support is officially live, completing our cross-platform vision. Whether you’re developing on Ubuntu, Red Hat or Debian, the mssql-python driver now offers native compatibility and a streamlined installation experience. This was made possible through deep integration work and iterative testing across distros.
Note:
- Support for other distros (Alpine and SUSE Linux) is going to be released soon.
- Support for server editions of Linux OSs will also be released soon!
Connection Pooling for All Platforms
We’ve also rolled out Connection Pooling support across Windows, macOS, and Linux. This feature dramatically improves performance and scalability by reusing active database connections. It’s enabled by default and has already shown significant gains in internal benchmarks.
Important:
Our latest performance benchmark results show mssql-python outperforming pyodbc by up to 2.2× across core SQL operations, fetch patterns, and connection pooling—stay tuned for a deep dive into the numbers and what’s driving this performance leap in our upcoming blogs!

EntraID Support for MacOS and Linux
EntraID authentication is now fully supported on MacOS and Linux but with certain limitations as mentioned in the table:
Authentication Method | macOS/Linux Support | Notes
---|---|---
ActiveDirectoryPassword | Supported | Username/password-based authentication
ActiveDirectoryInteractive | Not supported | Only works on Windows
ActiveDirectoryMSI (Managed Identity) | Supported | For Azure VMs/containers with managed identity
ActiveDirectoryServicePrincipal | Supported | Use client ID and secret or certificate
ActiveDirectoryIntegrated | Not supported | Only works on Windows (requires Kerberos/SSPI)
Note:
ActiveDirectoryInteractive for Linux and MacOS will be supported in future releases of the driver. Please stay tuned!

Unified Codebase, Smarter Engineering
Behind the scenes, we’ve unified the mssql-python driver’s codebase across platforms. This includes hardened DDBC bindings using smart pointers for better memory safety and maintainability, which also makes it easier for community members to help us grow the driver. These efforts ensure that the driver behaves consistently across environments and is easier to maintain and extend.
Backward Compatibility with Python ≥ 3.10
All three platforms now support Python versions starting from 3.10, ensuring backward compatibility and broader adoption. Whether you’re running legacy scripts or modern workloads, the driver is ready to support your stack.
Seamless Installation
Thanks to our recent work on packaging and dependency management, installing the mssql-python driver is now simpler than ever. Users can get started with a single pip install command—no admin privileges or pre-installed driver manager required.
Windows and Linux: mssql-python can be installed with pip:
pip install mssql-python
MacOS: For MacOS, the user must install openssl before mssql-python can be installed with pip:
brew install openssl
pip install mssql-python
Who Benefits — Explained by Scenario
Audience | How They Benefit | Scenario |
---|---|---|
Python Developers | Seamless setup and consistent behavior across Windows, macOS, and Linux | A developer working on a cross-platform data ingestion tool can now use the same driver codebase without OS-specific tweaks. |
Data Engineers & Analysts | Connection pooling and EntraID support improve performance and security | A data engineer running ETL jobs on Azure VMs can authenticate using managed identity and benefit from faster connection reuse. |
Open Source Contributors | Unified codebase makes it easier to contribute and maintain | A contributor can now submit a patch without worrying about platform-specific regressions. |
Enterprise Teams | Backward compatibility and secure authentication options | A team migrating legacy Python 3.10 scripts to Azure SQL can do so without rewriting authentication logic. |
PyODBC Users | Frictionless migration path to a modern, actively maintained driver | A team using PyODBC can switch to mssql-python with minimal changes and gain performance, security, and cross-platform benefits. |
Why It Matters — Impact Highlights
Impact Area | Why It Matters | Real-World Value |
---|---|---|
Cross-Platform Development | Eliminates OS-specific workarounds | Teams can standardize their SQL connectivity stack across dev, test, and prod environments. |
Enterprise Readiness | EntraID support and connection pooling are built-in | Organizations can deploy secure, scalable apps with minimal configuration. |
Community Growth | Easier onboarding and contribution pathways | New contributors can quickly understand and extend the driver, accelerating innovation. |
Performance & Scalability | Connection reuse reduces latency and resource usage | Apps with high query volumes see measurable performance improvements. |
Migration Enablement | Supports drop-in replacement for PyODBC and other drivers | Developers can modernize their stack without rewriting business logic. |
What’s Next
Here’s a sneak peek at what we’re working on for upcoming releases:
- Linux Support – additional distros (Alpine and SUSE) will be supported in the next few releases.
- Support for Bulk Copy for accelerated data transfer
- Support for complex SQL Server data types
Try It and Share Your Feedback!
Ready to test the latest features? We invite you to:
- Try it out: Check out the mssql-python driver and integrate it into your projects.
- Share your thoughts: Open issues, suggest features, and contribute to the project.
- Join the conversation: GitHub Discussions | SQL Server Tech Community.
Use Python Driver with Free Azure SQL Database
You can use the Python Driver with the free version of Azure SQL Database!
- Deploy Azure SQL Database for free
- Deploy Azure SQL Managed Instance for free
Perfect for testing, development, or learning scenarios without incurring costs.
We look forward to your feedback and collaboration!
The post Announcing Full Cross-Platform Support for the mssql-python Driver appeared first on Microsoft for Python Developers Blog.
Talk Python to Me
#513: Stories from Python History
Why do people listen to this podcast? Sure, they're looking for technical explorations of new libraries and ideas. But often it's to hear the story behind them. If that speaks to you, then I have the perfect episode lined up. I have Barry Warsaw, Paul Everitt, Carol Willing, and Brett Cannon all back on the show to share stories from the history of Python. You'll hear about how import this came to be and how the first PyCon had around 30 attendees (two of whom are guests on this episode!). Sit back and enjoy the humorous stories from Python's past.
Episode sponsors:
- Posit: talkpython.fm/connect-cloud
- Agntcy: talkpython.fm/agntcy
- Talk Python Courses: talkpython.fm/training
Links from the show:
- Barry's Zen of Python song: youtube.com
- Jake Vanderplas - Keynote - PyCon 2017: youtube.com
- Why it's called "Python" (Monty Python fan-reference): geeksforgeeks.org
- import antigravity: python-history.blogspot.com
- NIST Python Workshop Attendees: legacy.python.org
- Paul Everitt open-sources Zope: old.zope.dev
- Carol Willing wins ACM Software System Award: awards.acm.org
- Watch this episode on YouTube: youtube.com
- Episode #513 deep-dive: talkpython.fm/513
- Episode transcripts: talkpython.fm
- Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong
Stay in touch with us:
- Subscribe to Talk Python on YouTube: talkpython.fm/youtube
- Talk Python on Bluesky: @talkpython.fm at bsky.app
- Talk Python on Mastodon: @talkpython at fosstodon.org
- Michael on Bluesky: @mkennedy.codes at bsky.app
- Michael on Mastodon: @mkennedy at fosstodon.org
July 13, 2025
Django Weblog
Happy 20th birthday Django!
On July 13th 2005, Jacob Kaplan-Moss made the first commit to the public repository that would become Django. Twenty years and 400+ releases later, here we are – Happy 20th birthday Django! 🎉
Join the celebrations
We want to share this special occasion with you all! Our new 20-years of Django website showcases all online and local events happening around the world, through all of 2025. As well as other opportunities to celebrate!
- Expect birthday cake 🎂 and singing Happy Birthday
- A special quiz or two? See who knows all about Django trivia
- Showcase of great community achievements
View our 20th birthday website
Support Django
As a birthday gift of sorts, consider whether you or your employer can support the project via donations to our non-profit Django Software Foundation. For this special event, we want to set a special goal!
Over the next 20 days, we want to see 200 new donors, supporting Django with $20 or more, with at least 20 monthly donors. Help us make this happen:
- Donate on the Django website
- Donate on GitHub sponsors
- Or check out how to become a Corporate Member
Once you’ve done it, post with #DjangoBirthday and tag us on Mastodon / on Bluesky / on X / on LinkedIn so we can say thank you!
Of our US $300,000.00 goal for 2025, as of July 13th, 2025, we are at:
- 25.6% funded
- $76,707 donated
The next 20 years
20 years is a long time in open source – and we want to keep Django thriving for many more, so it keeps on being the web framework for perfectionists with deadlines as the industry evolves. We don’t know how the web will change in that time, but from Django, you can expect:
- Many new releases, each with years of support
- Thousands more packages in our thriving ecosystem
- An inclusive and supportive community with hundreds of thousands of developers
Happy 20th birthday, Django!
Michael Droettboom
How to think about LLMs for OSS development
In July, I had the honor of giving a keynote talk at PyCon Colombia 2025. This isn't exactly what I said on stage, but it is the script I was working from. Since some people prefer to read rather than watch a long video, I thought I would share it.
The full title of this talk is: "How to Think About Large Language Models for Open Source Software Development: A hype-free approach: That should still be relevant in a few months: That should work for most people"
Prologue
Disclaimer #1
Opinions are my own. Biases are my own, too. I know everyone’s experience in the world is different, and what works for me may not work for you.
Disclaimer #2
This is a “how to think about” talk, not a “how to”. There are plenty of “how to’s” out there – I don’t need to create another one. I want to leave you with some good questions, and seed some good hallway conversations. And maybe have a bit of a call to action.
But the other reason this talk is not a “how to” is that...
Disclaimer #3
This space is changing too rapidly to effectively give a keynote about. It seems like there is a new model or new layer on top or a new startup with a new solution every single day. I can’t possibly be on top of all the things people are doing in this space – if I say “we need a solution to X” in this talk, chances are someone already is working on that, I’m just not aware of them yet.
Thankfully I think I’ve found a framing to address this kind of rapid change. Back in the 90’s when I was studying computer science, the big battlefield was programming languages. There were all these languages that seemed to come and go – how do we know which ones to teach? The answer was – none of them. Focus on the things that are more fundamental – algorithms, data structures, distributed systems theory, etc – and use “teaching languages”, like “Turing, Eiffel, Standard ML of NJ” (anyone heard of those?) rather than the fads of the day. Students are left “learning how to learn”. I still think this was a good approach.
By treating LLMs as an unknown value, we can talk about general principles that are likely to remain relevant and ignore “news of the day” thinking. And whether LLMs improve or devolve, hopefully you are left understanding how to make decisions on your own to apply them to your own day-to-day work.
Disclaimer #4
I am not an expert in using LLMs for open source software development. Let’s break this one down:
First, what do I mean by “expert”? Anyone can become an “expert” in something with the right conditions. No one was born knowing “software development” – they just had the opportunity to spend time learning about it and doing it. That point may seem obvious, but there are a couple of decades of good research saying that most people believe that expertise is a fixed thing. And counter to that, it’s been shown that SWE teams that believe in the “growth mindset” – i.e. that anyone can become an expert in something with the right conditions – tend to perform much better than teams that believe in the “immutable” view of expertise.
Secondly, I say “LLMs” here because I want to make it clear that what I’m talking about are these generative tools based on large language models, plus the scaffolding like agents added on the side. I’m talking about this current new wave of things as distinct from the broader world of Data Science and Machine Learning. In particular, Artificial Intelligence is too overloaded, in my humble opinion.
And… I’m reducing the scope way down here to just open source software development. I’m not talking about drawing cartoons, doing drug discovery or any of the other things that generative AI may be used for. I’m also not talking about building software products that have LLMs inside them – that’s a whole other interesting space, but outside scope for today.
And I’m also talking about open source specifically, which is a bit different from, say, enterprise software development or other kinds of technical engineering. This is something I’ve been doing for over 25 years, so maybe I am an expert on this part, and based on that, I would argue that open source software development is actually a very complex set of social dynamics more than it is a technical activity. More on that later.
And when you put this all together, it’s clear that the number of experts who understand these very complex probabilistic tools (LLMs) and how they interact with these very complex social systems (OSS) is vanishingly small. In general, the intersection of software engineering and social or psychological systems has historically been extremely poorly studied (though there are some standout researchers in that area, such as Dr. Nadia Eghbal and Dr. Catherine Hicks), but when you are talking about tools that have only been widely available for, say, 2-3 years – at scale:
Experience in AI for software development is the new 20 years of Java experience: Meaning most of us are still figuring this out and working to “become” experts.
Disclaimer #5
LLM discourse is mired in anecdotalism and polarization: In software, it’s long been understood that “Works For Me” isn’t good enough. This is just a really, really old core scientific principle. It isn’t enough to say that “I took echinacea and my cold got better”, you would need to actually study the effects of echinacea on a larger population to ascribe any causal link that it’s a cure for the common cold. So it’s surprising to me that so much of the discourse around LLMs is still at the level of “I did something” and “it was amazing” or “it’s garbage”. Assuming good faith actors, both of those experiences are valid, but it doesn’t actually tell us much about how these systems really work when confronted with the real world at large.

So, let’s suppose this is the universe of opinions or experiences with LLMs, plotted in a 2D space: on one dimension you have a person’s belief in its effectiveness and on the other how beneficial it will be. You might have one of these “anecdotes”, like “AI is just a fad”, “I built my app in a weekend” (this is the vibe coder – which I sort of take to be a straw man).
And over here you have “AI will replace all SWE jobs 😦”, and “AI will replace all SWE jobs 😃”. (The emoji are doing a lot of work here.)
But these are all extreme opinions. The really interesting ones are in this middle part, because this is where you can have conversations about moving the field forward. For my part, I think there is something useful here, and LLMs are here to stay, but I don’t yet know where it will end up for my own work or the world at large. But if we want to make them better and more useful, and work for US, specifically for open source software development, we need to be having conversations in HERE. One of the problems is that social media is designed to amplify the outlier opinions. LLMs are probably the biggest technological advance we have seen since the beginning of the “social media” era, and that’s not doing us any favors.
The other reason anecdotalism is so sticky is:
Disclaimer #6
LLM evaluation is really hard: The solution space of LLMs is just so large that it’s beyond any direct means to evaluate it at scale.

When I first started doing machine learning back in the late 90’s, the general approach was that you took a data set (like a set of hand-written digits) and you trained your model on half of it, then you used the other half to evaluate how it was doing. (I’m hand waving over a lot of the detail here.) This was all very straightforward and easy to understand, and, as long as you were ethical, hard to game. But with LLMs the expected solution space is so large, you can’t feasibly evaluate it in the usual way. Often people resort to horrific distortions of the scientific process like using one model to evaluate another model. A lot of very smart people are working on making this better – but the state-of-the-art benchmark for software engineering tasks, SWE Bench, remains controversial, not just because there may be shenanigans going on by some players, but because the sophistication required to evaluate a model is equal to the sophistication required to create it in the first place.
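The train-on-half, evaluate-on-the-held-out-half protocol described above can be sketched with a toy stand-in for the digits data set. Everything here – the synthetic two-cluster data and the 1-nearest-neighbour classifier – is purely illustrative, not a real benchmark:

```python
import random

random.seed(0)

# Synthetic stand-in for a labeled data set: two noisy point clusters.
data = [((random.gauss(0, 1), random.gauss(0, 1)), "a") for _ in range(100)]
data += [((random.gauss(4, 1), random.gauss(4, 1)), "b") for _ in range(100)]
random.shuffle(data)

# The classic protocol: train on half, evaluate on the held-out half.
train, test = data[:100], data[100:]

def predict(x, train_set):
    """1-nearest-neighbour: return the label of the closest training point."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(train_set, key=lambda item: dist2(item[0], x))[1]

correct = sum(predict(x, train) == label for x, label in test)
print(f"held-out accuracy: {correct / len(test):.2f}")
```

The key property that made this hard to game is that the test half never influences training; the difficulty with LLMs is that no held-out set can cover more than a sliver of the space of possible outputs.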
Disclaimer #7
We all make mistakes: One thing I do find useful when talking about evaluating or benchmarking LLMs is the recognition that humans make mistakes, too. We shouldn’t be comparing machine output to some Platonic ideal of the perfect programmer – the perfect programmer doesn’t and can’t exist – there are always grey areas and tradeoffs everywhere you look, because software is built to operate in the real world, which is messy. But more importantly, because humans make mistakes and always have, we already have a set of processes and social constructs that we use to mitigate those mistakes, which can also apply to LLM-generated content. It is crazy to think we would take those guardrails off just because we have more automation in the loop. (More on that later).
My favorite bug

With that, let me take a detour and tell you about one of my favorite bugs that I caused. Back in 2007, I was working on matplotlib, and we had a problem: drawing scatter plots with lots of circles was just too slow. At the time, circles were drawn as polygons with a large number of edges, like 100. And filling a polygon with a lot of sides is rather slow.

I found a paper that showed how circles could be approximated using 4 bezier splines instead, and all of the graphics libraries underpinning matplotlib understand how to optimize bezier splines rather well, so it’s a whole lot faster, and the paper even got into detail about how accurate this approximation was, and with some back of the napkin math it was clear that even if you drew a circle the whole size of a screen or a page, the inaccuracy wouldn’t be visible at all.
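That back-of-the-napkin math is easy to reproduce. This sketch (not matplotlib's actual code) approximates a quarter of the unit circle with a single cubic Bézier using the standard control-point offset 4/3·(√2 − 1), then measures the worst radial deviation from a true circle:

```python
import math

def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t."""
    mt = 1 - t
    x = mt**3 * p0[0] + 3 * mt**2 * t * p1[0] + 3 * mt * t**2 * p2[0] + t**3 * p3[0]
    y = mt**3 * p0[1] + 3 * mt**2 * t * p1[1] + 3 * mt * t**2 * p2[1] + t**3 * p3[1]
    return x, y

# Standard control-point offset for a quarter-circle approximation.
K = 4 / 3 * (math.sqrt(2) - 1)  # ~0.5523

# Quarter of the unit circle, from (1, 0) to (0, 1).
p0, p1, p2, p3 = (1, 0), (1, K), (K, 1), (0, 1)

# Worst-case radial deviation of the spline from the true circle.
max_err = max(
    abs(math.hypot(*bezier_point(p0, p1, p2, p3, i / 1000)) - 1.0)
    for i in range(1001)
)
print(f"max radial error: {max_err:.6f} of the radius")
```

The error comes out to a few hundredths of a percent of the radius, which is indeed invisible when the circle fits on a screen or a page. The bug below arises precisely because that assumption breaks once you zoom far inside a circle that is enormously larger than the viewport.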

Enter the Mars Phoenix lander. The team managing the spacecraft at JPL was using matplotlib to plot its trajectory as it hurtled toward Mars at 12,000 km/h.

When you zoom in on the path, which is an ellipse much much larger than the circle representing Mars, the inaccuracy of the bezier curve was quite noticeable, and made it look as if the spacecraft would miss its landing entirely.

They weren’t using this for guidance (thank goodness!), but planned to use it on the big control room screens when they invited the press to watch the landing. “Oh, and, by the way, it’s already on its way, so please figure this out before it lands, we kind of have a hard deadline.”
Reverting the change wasn’t enough – the polygonal approximation also wouldn’t work at that scale (but it wasn’t as bad). I worked to figure out the solution – basically dynamically truncating the arc when larger than the viewport – the solution isn’t really the important part.
The moral of the story is – we all make mistakes and it’s really hard to anticipate all of the uses of software when you write it. The process – of not putting code into production until others have a chance to test it – saved its impact from being much worse. And the way it was fixed – in one place in matplotlib itself – meant that future NASA missions, and all other matplotlib users, could also benefit.
Back to the disclaimers.
Disclaimer #8

The most important disclaimer: Ethics matter: I want to acknowledge there are many ethical problems with LLMs, from climate-threatening energy usage, to amplification of bias, to labor market disruption, to intellectual property issues, and on and on. I will touch on a few specific ethical quandaries related closely to open source software specifically later on, but I can’t effectively cover all of the ethical issues in this talk. They are all important – anyone using LLMs should be aware of the pitfalls and be supporting those who are working on making it better. We can proceed with some things in parallel while we work on solving the problems, and proceed with different levels of caution, depending on our context. I don’t find “we should stop exploring these tools until we have all of the downsides figured out” to be a very practical position, but I’m also not going to say you are wrong if you choose to avoid the use of LLMs because any of these ethical problems are a dealbreaker for you. It’s important to me that the open source community remains open to everyone.
For my part, you may notice my slides are simple because I chose not to use generative art in my talks on ethical grounds – you are getting the true and full extent of my own artistic abilities here. I had some humans help with the content, but I didn’t use LLMs to help me write this.
The middle
Open source software contribution workflow

This is the usual workflow of adding a feature or fixing a bug to an open source project. It’s worth noting that even this is fairly new – GitHub has mainstreamed this workflow, but 15 years ago, everyone just threw their code in a pile and hoped for the best. But I think most serious open source these days has a workflow like this to mitigate risk and improve quality – with varying levels of rigidity based on context or project maturity.
You can actually use LLMs for any of these tasks, but how you approach them depends on the type of task.

For “learning” tasks, the LLM is acting like an augmented search engine, a summarization tool, or a research assistant. It can flag important things to learn or try and help to build a mental model of the current state of the code. There is low risk here – if it sends you down the wrong path, all you have wasted is time.

For “creative” tasks, you can use the LLM to generate the content, but ultimately, the human in the loop must take responsibility for the final result. “The LLM did this, it works, but I don’t understand why” is never going to pass muster – not just because it’s riskier, but because the end result needs to be understandable by other humans and LLM systems in the future. I might be reflecting my bias from working on long term projects here, but “building a quick short-term solution to throw away” doesn’t seem like we are advancing the art.
An important side note here is that LLMs seem to really pick up on documentation – “documenting code is the new SEO for LLMs”.

For “collaborating”, these are the points where the community comes together, so “keeping it real” is actually really important. I think it’s ok to use machine translation and grammar checking tools to help make communication more effective, but beyond that, you want to be human here. This is where things like personal trust, empathy and a sense of common purpose are built. If a maintainer starts to feel like they are talking to a machine, it undermines that. And I can’t really envision a world where LLMs are just chatting with other LLMs here.

It’s the “risk mitigation” steps where we need to retain the most caution. Even here, LLMs may be useful – for example, GitHub Copilot will review PRs for you – but it’s unlikely the LLM will have enough context both of the entire codebase or how it connects to the real world to meaningfully review an entire PR. I think today it’s more of an incremental improvement on linting – recognizing problematic patterns. So the LLM is basically “another signal in the mix” and not a “final arbiter of quality or taste”. Even with all of these combined filters of multiple humans and LLMs, my concern here is the mistakes and risks created by LLM-generated content may be subtly different from human-generated content, and the skill at identifying them will take some time to adapt to.

So, in summary, you can see that we can accommodate some risks HERE, because we are mitigating for it OVER HERE. If the use of LLMs made things worse, the mitigation may need to be different or harder in some ways (and we are going to need to adapt), but it doesn’t change fundamentally.

If the LLMs are improving throughput or “productivity”, the worry is that the volume of submissions will increase, and open source library maintainers, already overwhelmed, will burn out. A number of projects have already seen an influx of AI-generated slop pull requests. I think that for the most part “fully automated” or “malicious” issues and PRs will be filtered transparently by the platform – just like e-mail spam, it still exists but I don’t spend a lot of time thinking about it. But for hybrid things – where a human is using LLMs to complete work faster and not fully understanding the implications – that’s a real issue.
A more optimistic view of this is that the quality of PRs will improve before they even get to the maintainers, and maintainers will spend less time in back-and-forth getting them in shape. I, personally, have built a prototype to help first time bug reporters create better reports. It’s hard to get that without creating additional frustration. But I haven’t seen any evidence that we are at the point where LLMs are reducing the burden on maintainers, yet.
Wherever that ends up, one thing seems clear: From the maintainer’s side, the job will become even more about reading code, understanding the larger implications of changes, and connecting issues in the software to real human problems. My personal concern there is based on what we know about how humans learn – learning to be good at reading requires learning to be good at writing. For example, when we teach how to identify a good and well-reasoned essay, we don’t just have students read a bunch of essays, we teach them how to write an essay – a principle known as constructivism. It’s the same with code. So there is an educational / pedagogical challenge we have to help support the next generation of open source maintainers and help them become good readers when they may get less experience as writers.
On the contributor’s side, the advice I have is to stay real and human. Anything produced by LLM as a tool ultimately is still coming from you, so you need to learn enough to understand and stand behind it.

Don’t fall into the productivity trap: using LLMs is not about timing your tasks with a stop watch and watching seconds being shaved off. This is a Fordism (assembly line) view of software engineering, which doesn’t really apply to knowledge work. Instead, think of time scales of weeks not seconds. It’s not “it takes me X% less time to fix a bug”.
It’s things like – Are you forming stronger connections to the projects that you use? Are you finding yourself understanding them better? Do you find yourself tackling projects with confidence that seemed daunting before? Does building quick throwaway prototypes help you understand the correct “permanent” solution? You may never know if it’s the LLM that’s helping you, or just natural learning and the growing of expertise. (If the cost of the LLM subscription is not an issue) does it even matter?
Generative AI policies for open source software packages
I’m not the only one thinking about how LLMs fit into open source software development. One way to get a read on where the community is is by looking at the open source contribution policies. It’s early days, and therefore, unlike licenses and codes of conduct which have settled on a handful of standard models, LLM policies are mostly “one-offs”. When you look at some of them, it’s a sign that all open source isn’t created equal and they exist on a sort of “political spectrum”.
I'm just going to link to some policies here. I think they largely fit into the framework described above and highlight some of the concerns that OSS projects currently have.
Open source graph of trust

Now let’s look at things from the perspective of a consumer of open source. Typically, you pull in some dependencies for your project, and you end up with a whole tree of secondary dependencies that get automatically pulled in. This forms a network of trust or reputation. In this example, I’m writing the Phoenix Mars lander software, and I depend directly on matplotlib. I heard it was a good library, maybe I met some of the folks at a conference and they seemed like nice people, so I trust it to do the right thing. I don’t really know or care what’s beneath that, but maybe I trust the matplotlib developers to care about that. Again, if one of these dependencies tanked in quality, the usual signals from the open source trust network will probably kick in. (Entire companies exist to help with this dependency safety problem, of course, if you /really/ need to be certain.) So there are some self-healing and mitigation of risk properties here as well. When you ignore the possibility of bad faith actors, I think LLMs represent an incremental, not existential, risk.
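That trust network can be pictured as a small graph walk: trusting one direct dependency implicitly extends trust to everything beneath it. The package names and edges below are illustrative only, not matplotlib's actual dependency list:

```python
from collections import deque

# Toy dependency graph: each package maps to its direct dependencies.
# (Edges are made up for illustration.)
deps = {
    "lander-telemetry": ["matplotlib"],
    "matplotlib": ["numpy", "pillow", "kiwisolver"],
    "numpy": [],
    "pillow": [],
    "kiwisolver": [],
}

def transitive_deps(package, graph):
    """Walk the graph breadth-first to collect everything we implicitly trust."""
    seen, queue = set(), deque(graph.get(package, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(graph.get(dep, []))
    return seen

print(sorted(transitive_deps("lander-telemetry", deps)))
# → ['kiwisolver', 'matplotlib', 'numpy', 'pillow']
```

One direct edge in the graph pulls in the whole subtree beneath it, which is why reputation signals at each node matter so much.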
However, as the xz incident has shown – where a state actor impersonated a developer in order to take over an open source project and inject malicious code – using LLMs to convincingly impersonate developers at scale may soon be a real risk.
But let’s get back to my favorite bug. What if, back in 2007 the authors had access to an LLM, and they asked it to fix the bug? As designed today, that LLM tool is more likely to just create a workaround – in this case, having been told that matplotlib’s circle drawing code was buggy, it might attempt to write its own in the local project and just pass the result of that calculation off to matplotlib, rather than trying to report the bug or filing a pull request against matplotlib. These sort of workarounds are fine, of course – I don’t want to imply that we as busy software developers have to submit every bug we fix upstream. But the social system that makes open source work depends on the fact that a certain fraction of engineers do submit fixes upstream. If everyone started using LLM-based tools, and those tools never suggest contributing upstream, we’d very soon run aground. Not just because open source project quality wouldn’t move forward, but the solutions to the same problems may never make it back into training sets. And if you remember that the LLM was trained on open source projects to begin with, what it’s actually doing is syphoning off value from the open source commons to individual users of the LLMs, and ultimately, in the form of money, to the companies that sell access to LLMs.
Epilogue
We need tools that purpose-built for the social dynamics of open source software. The current wave of LLM tools are largely coming from large corporations and are built to support that model of development. We instead need tools that are built with the conventions of open source in mind. There are a few things holding back building the whole stack:
- Models are mind-bogglingly expensive to build. By some outside estimates, ChatGPT will cost $2.5 billion to train.
- They are built from questionable rights in the training sets. While some of the models sell themselves as “open”, most are not “open source” by the classic definition – that they could be rebuilt from first principles from fully open data. It’s therefore currently too hard to “bootstrap” the whole stack in an open source way, and open source has historically been reluctant to build processes that rely on tools we can’t build from first principles.
- The copyright claims on the output of the models may be problematic. Can we find a way to ensure that attribution is being correctly applied? That’s a hard problem because by the time the model is built it’s virtually impossible to track back to the source, but perhaps it is possible to do that as a post-processing step.
So, right now, I think it’s nigh impossible to build a model that would meet the traditional definition of open source. But we don’t need to wait to “get everything we want” – we can pick these apart and tackle them individually – it will be hard and expensive at first, but eventually economies may shift.

And while we tackle those problems, we can still treat the LLM as a black box and build tooling designed for open source processes all around it. I do believe the open source community has the opportunity to build something better that serves the commons, but we aren’t going to get there by standing on the sidelines.
Again, a lot of these things are already coming about. Let’s do more of that.
Existing processes for risk mitigation help, but will need to continue to evolve. The same systems that improve code quality with humans are still useful with LLMs, but they may change in nature, and they may require re-thinking how we approach validation of quality. To the extent that LLMs start training us, it will be in our ability to detect and remediate for their problematic content.
We have some real challenges to get there – some of them are technical, and some are pedagogical.
Collaboration and reputation matter now more than ever. Understanding how to work with other open source developers to meet the real world use cases remains the key skill, whether LLMs succeed or fail.
With the possibility of impersonation, or generation of “magical” solutions, personal responsibility, understanding, and reputation matter now more than ever.
I think that any additional skills you connect to software development matter more.
Conclusion: And I think that’s as good a segue as any to go forth to the conference and meet and chat in the hallway about the possibilities here.
Gracias por escuchar y buena suerte!
July 11, 2025
Python Engineering at Microsoft
Python in Visual Studio Code – July 2025 Release
We’re excited to announce the July 2025 release of the Python, Pylance and Jupyter extensions for Visual Studio Code!
This release includes the following announcements:
- Python Environments included as part of the Python extension
- Disabled PyREPL for Python 3.13
If you’re interested, you can check the full list of improvements in our changelogs for the Python, Jupyter and Pylance extensions.
Python Environments included as part of the Python extension
We’ve begun the roll-out of the Python Environments extension as an optional dependency with the Python extension. What this means is that you might now begin to see the Python Environments extension automatically installed alongside the Python extension, similar to the Python Debugger and Pylance extensions. You can find its features by clicking the Python icon that appears in the Activity Bar. This controlled roll-out allows us to gather early feedback and ensure reliability before general availability.
The Python Environments extension includes all the core capabilities we’ve introduced so far, including: one-click environment setup using Quick Create, automatic terminal activation (via the "python-envs.terminal.autoActivationType" setting), and all supported UI for environment and package management.
To use the Python Environments extension during the roll-out, make sure the extension is installed and add the following to your VS Code settings.json file:
"python.useEnvironmentsExtension": true
Disabled PyREPL for Python 3.13
We have disabled PyREPL for Python 3.13 and above to address indentation and cursor issues in the interactive terminal. For more details, see Disable PyREPL for >= 3.13.
Other Changes and Enhancements
We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python and Jupyter Notebooks in Visual Studio Code. Some notable changes include:
- Polished terminal activation support for Poetry versions greater than 2.0.0 in the Python Environments extension (vscode-python-environments#529).
- .venv folders generated by the Python Environments extension are now git-ignored by default (vscode-python-environments#552).
- Improved the environment deletion process through the Python Environments extension (vscode-python-environments#481 and vscode-python-environments#505).
- Quick Create environment creation now provides an option to set up multiple virtual environments which are uniquely named within the same workspace (vscode-python-environments#477).
- The Pylance extension now includes several experimental MCP tools, which offer access to Pylance’s documentation, import analysis, environment management, and more. These tools are still under active development and continue to be polished.
We would also like to extend special thanks to this month’s contributors:
- @jezdez Fixed conda listing in README.md in vscode-python-environments#80
- @robwoods-cam Added note for Python Pre-release requirement in vscode-python-environments#111
- @almarouk Remove tilde from conda path in settings in vscode-python-environments#122
- @flying-sheep Handle all shells de/activation in vscode-python-environments#137
Try out these new improvements by downloading the Python extension and the Jupyter extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.
The post Python in Visual Studio Code – July 2025 Release appeared first on Microsoft for Python Developers Blog.
PyBites
From SQL to SQLModel: A Cleaner Way to Work with Databases in Python
SQLModel is a library that lets you interact with databases through Python code with Python objects and type annotations instead of writing direct SQL queries.
Created by the author of the extremely popular FastAPI framework, it aims to make interacting with SQL DBs in Python easier and more elegant, with data validation and IDE support, without the need to learn SQL.
It’s an ORM (Object-Relational Mapper), meaning: it translates between classes/objects and SQL.
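To make “translates between classes/objects and SQL” concrete, here is a hand-rolled miniature of what an ORM automates, using only the standard library (the `User` dataclass here is just an illustration, not SQLModel code):

```python
import sqlite3
from dataclasses import dataclass

# A plain class standing in for a table row (illustrative only).
@dataclass
class User:
    id: int
    name: str

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO user (name) VALUES ('alice')")

# SQL row -> Python object: the core of what an ORM automates.
row = con.execute("SELECT id, name FROM user WHERE id = 1").fetchone()
user = User(*row)
print(user)  # User(id=1, name='alice')

# ...and back: object attributes -> SQL parameters.
user.name = "Alice"
con.execute("UPDATE user SET name = ? WHERE id = ?", (user.name, user.id))
new_name = con.execute("SELECT name FROM user WHERE id = 1").fetchone()[0]
print(new_name)  # Alice
```

An ORM like SQLModel generates both directions of this mapping for you, from a single class definition.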
In this article, I will cover why you would use SQLModel over plain SQL queries, what benefits it brings to the table and the basics of using it in Python projects.
I’ll assume that you’re comfortable with Python (functions, classes, attributes). I’m not assuming any prior knowledge of SQL though. The article should be approachable to anyone with basic Python experience.
To keep the article manageable, I’ve intentionally left out things like grouping operations in functions, error handling, code execution output and performance optimization. Otherwise, this would span multiple parts.
Why use SQLModel?
The main reason you would want to use SQLModel is to avoid writing SQL in your Python code. Mixing code from another language (especially SQL queries) with your Python code can get messy. It takes away from your code’s clarity and readability, adds a maintenance cost, and isn’t always secure.
Although there are other solutions that allow you to avoid writing SQL in Python, SQLModel comes with additional, very important features for free, most notably: data validation.
Here are the main features included in SQLModel:
- Type annotation by default: SQLModel is built on Pydantic. An SQLModel model works exactly the same way a Pydantic model works. You get Pydantic data validation, serialization and documentation.
- Built on the most popular DB library in Python: SQLAlchemy. When the simplicity and defaults of SQLModel aren’t enough for your use case, you can use SQLAlchemy directly.
- IDE support: code completion/suggestions and inline errors.
- Easier to test: writing tests that include SQLModel code, with expected results, is fairly easy compared to writing tests for raw SQL queries.
SQLModel’s creator says that a lot of research and effort went into making an SQLModel model both a Pydantic and an SQLAlchemy model.
Another reason might be that you don’t know SQL (or don’t want to learn it). That’s fine, although learning the very basics will greatly help even if you choose to use SQLModel.
How to use SQLModel?
To keep things simple, I will use a user info table as an example throughout the post.
Here is how the ‘user’ table in the DB might look:
id | name | email | date_joined | is_admin
---|---|---|---|---
The SQL-in-my-Python way:
Let’s start with the non-SQLModel way of interacting with DBs from Python.
When your Python app needs to save, retrieve, update, or delete data in an SQL DB, you write SQL queries directly in Python code like this:
First, we connect to the DB (creating it if it doesn’t exist):
import sqlite3
from datetime import datetime
con = sqlite3.connect("user_db.db")
Then, we get a hold of a cursor to be able to execute SQL statements:
cur = con.cursor()
Then, we create the ‘user’ table with the necessary fields:
cur.execute("""CREATE TABLE IF NOT EXISTS user (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
email TEXT NOT NULL,
date_joined TEXT,
is_admin INTEGER DEFAULT 0
);
""")
The string passed to .execute() is the SQL statement that we want to execute on the DB.
Finally, we can perform CRUD operations (create, read, update, delete). Suppose we have this user data:
# id omitted. sqlite will auto create it and autoincrement it.
name = "John Doe"
date_joined = datetime.now()
email = 'john@example.com'
is_admin = False
Using the same connection and cursor we’ve just created above, we can execute SQL queries like so:
- insert (create):
cur.execute(
"INSERT INTO user (name, email, date_joined, is_admin) VALUES (?, ?, ?, ?)",
(name, email, date_joined.isoformat(), int(is_admin)),  # store the datetime as ISO text in the TEXT column
)
con.commit()
- select (read):
user_id = 1
user1 = cur.execute("SELECT * FROM user WHERE id = ?", (user_id,)).fetchone()
print("User1: ", user1)
- update:
user_id = 1
new_name = "Jane Doe"
cur.execute("UPDATE user SET name=? WHERE id = ?", (new_name, user_id))
con.commit()
- delete:
user_id = 1
cur.execute("DELETE FROM user WHERE id = ?", (user_id,))
con.commit()
You can already see how this is not the most efficient way and not the cleanest code. It’s a mix of Python and SQL with long strings and placeholders for values. In addition, you need to know at least basic SQL to write these queries.
I’m sure some developers know and love SQL and prefer to use it directly in their Python code. However, in my opinion, there are some downsides to this approach:
- It’s not clean.
- It’s less readable.
- It’s prone to errors.
- It’s harder to maintain.
- It could open your code to SQL injection attacks, especially if you use string formatting instead of value placeholders.
- There’s no data validation or type annotation (at least not by default).
- There’s no IDE/editor support: no completion or suggestions, no inline error warnings. Errors in query strings might not be detected by your IDE.
- It makes writing tests a tedious task. You also have to write SQL queries (maybe complicated ones) in tests, especially for edge cases.
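The injection point mentioned above is easy to demonstrate with an in-memory database (the `evil` input below is a contrived example of what a malicious caller might send):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO user (name) VALUES ('alice')")

# A classic "always true" input.
evil = "nobody' OR '1'='1"

# Unsafe: string formatting splices the input into the SQL text,
# so the OR clause becomes part of the query and matches every row.
unsafe = con.execute(f"SELECT * FROM user WHERE name = '{evil}'").fetchall()
print(len(unsafe))  # 1 -- the whole table leaks

# Safe: the ? placeholder treats the input as a plain value.
safe = con.execute("SELECT * FROM user WHERE name = ?", (evil,)).fetchall()
print(len(safe))  # 0 -- no user is literally named that
```

This is why the placeholder style shown earlier matters even in the plain-SQL approach; an ORM makes it the default.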
Now let’s see what SQLModel has to offer to improve the situation.
The SQLModel way:
SQLModel offers a better, safer, and more elegant way to interact with SQL DBs from Python code. It allows us to work with DB records (rows) as regular Python objects.
Let’s see how we can use it instead.
1- Install SQLModel:
After creating and activating your virtual environment, use your favorite package manager to install SQLModel. Here, I’m using uv:
uv add sqlmodel
2- Create a model for your data:
Before using SQLModel to interact with the DB, we first need to create an SQLModel model. It works exactly like a Pydantic model.
Using our ‘user’ example above, here is how our model would look:
from sqlmodel import SQLModel, Field
from datetime import datetime
class User(SQLModel, table=True):
id: int | None = Field(default=None, primary_key=True)
name: str
email: str
date_joined: datetime = Field(default_factory=datetime.now)
is_admin: bool = False
Here, in the class User, we’re subclassing SQLModel from the sqlmodel library and telling it to create a corresponding SQL table in the database if it doesn’t exist, indicated by table=True.
Then we defined the fields of the class just like any other Python class with type annotations (it’s a Pydantic model).
We’re using the special Field function from SQLModel to set arguments for the id and date_joined fields. The date_joined field uses a default_factory to store the current timestamp.
The model itself represents a table in the database and each field represents a column in that table.
Now, we can create “user” instances from this SQLModel “User” model. Every instance we create will represent a row in the table.
3- Create DB and Table:
We’re now ready to create the DB, connect to it and create necessary tables.
For this task, we need to create an SQLAlchemy engine (remember, SQLModel uses SQLAlchemy under the hood).
The engine is an object that handles the communication with the DB.
We can create one using create_engine()
from SQLModel. First we need to add it to our imports:
from sqlmodel import SQLModel, Field, create_engine
Then, we add the engine creation code below the model class as order here matters:
...
# below the code defining User model
sqlite_file_name = "users_db.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
engine = create_engine(sqlite_url, echo=True)
SQLModel.metadata.create_all(engine)
Here, we’re using SQLite and setting the DB URL. You can use any DB that is supported by SQLAlchemy.
You would typically load this DB URL from the environment, see our article: How to handle environment variables in Python
Then we’re calling create_engine()
passing it the URL and telling it to log what it’s doing to the terminal where we run the Python code with echo=True
. This is very handy not only for debugging but also for learning.
Lastly, we call create_all()
passing the engine to tell SQLModel to create everything: the DB and the table(s).
The create_all()
takes an engine and creates a table for each model that inherits from SQLModel with table=True
configured. Our model meets these conditions, so its table gets created.
One important thing to remember is that calling create_all()
needs to happen after the code that defines the model class. Otherwise the table will not be created.
You can verify that the DB has indeed been created by simply checking for a new .db file with the same name as sqlite_file_name
. If it’s there, you’re good to continue.
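You can also verify it programmatically by querying SQLite’s built-in sqlite_master catalog. The helper below is my own, not part of SQLModel; pass it the same filename as sqlite_file_name:

```python
import sqlite3

def list_tables(db_path: str) -> list[str]:
    """Return the names of all tables in a SQLite database file."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
    finally:
        con.close()
    return [name for (name,) in rows]

# After create_all() has run:
# list_tables("users_db.db")  -> a list containing 'user'
```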
4- Create instances of the SQLModel to represent individual rows:
Now that we have our DB and table created, we can start creating instances of our User.
It’s straightforward. Just like instantiating any Python class, we do it like so:
user1 = User(name="ahmed", email="ahmed@example.com", is_admin=True)
NOTE: We don’t need to include the
id
field when creating or updating records in the DB (you can use it to find the record to update but not updateid
itself). It will be automatically created by SQLite and auto incremented for us. The same goes for thedate_joined
field as it uses adefault_factory
to store the current timestamp automatically when creating a new instance of User.
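The default_factory pattern isn’t specific to SQLModel; plain dataclasses work the same way, which you can try without any database (UserDraft is a throwaway stand-in, not the SQLModel User):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UserDraft:
    name: str
    # The factory runs at instance-creation time, so each object
    # gets its own timestamp rather than one shared class-level value.
    date_joined: datetime = field(default_factory=datetime.now)

a = UserDraft(name="ahmed")
b = UserDraft(name="john")
print(a.date_joined <= b.date_joined)  # True: a was created first
```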
So far, our user (user1) lives only in memory, we need to persist it to the DB so that we can later retrieve/update/delete it.
Here is where SQLModel really shines. Forget all those messy SQL queries and value placeholders. SQLModel allows us to use Python code to cleanly and elegantly interact with the DB.
5- Create a session and perform DB operations:
The engine we created earlier handles the communication with the DB for the whole program. On top of the engine, we need a session.
A session uses the engine to perform operations on the DB. We need a new session for each group of operations that belong together.
Let’s create a session to add a row in the DB table for the user we just created.
First, we need to add Session
to our imports:
from sqlmodel import SQLModel, Field, Session, create_engine
Next, we create the session and tell it to add the User instance to the DB. We’re using a with
block to cleanly close the session after each operation even if there was an exception in the block:
...
# below the code that creates the DB & table above:
with Session(engine) as session:
session.add(user1)
So far, the user instance is only added to the session (in memory) and still not sent to the DB.
This is where we see the benefit of using the session. It holds in memory the objects we need to save to the DB until we’re ready to commit, at which point, we call commit()
which will use the engine to save all changes to the DB.
This allows for batch operations, instead of sending changes individually to the DB which may be expensive.
Now we can commit:
session.commit()
You can use a DB browsing tool (like DB Browser for SQLite) to verify that a row has been created for the user instance.
And here is the complete creation code:
user1 = User(name="ahmed", email="ahmed@example.com", is_admin=True)
with Session(engine) as session:
session.add(user1)
session.commit()
To make things easier for us for the rest of the post and to have some data to work with in the DB, we’ll now create multiple users. Use the following code to create 3 more users and add them to the DB:
user2 = User(name="john", email="john@example.com", is_admin=False)
user3 = User(name="bob", email="bob@example.com", is_admin=True)
user4 = User(name="kate", email="kate@example.com", is_admin=False)
with Session(engine) as session:
session.add(user2)
session.add(user3)
session.add(user4)
session.commit()
How to do CRUD operations with SQLModel?
Now, that we know how to create a session to perform operations on the DB, it’s a matter of knowing how to do each CRUD operation using SQLModel.
Here is a simple table for each operation, SQL command and the corresponding SQLModel function or session
method:
CRUD op. | SQL cmnd. | SQLModel fn./mthd. |
---|---|---|
create | INSERT | .add() |
read | SELECT | select() |
update | UPDATE | select() & .add() |
delete | DELETE | delete() or .delete() |
We’ve already seen how to do the first one. We’ll explore the rest next.
Select (read/retrieve) multiple rows from DB:
“Where does SELECT
come from?” you may ask. In SQL, we use SELECT
to read rows from the DB. An SQL SELECT
statement looks like this:
SELECT * FROM user
This means: select all columns (i.e. id, name, email, etc.) from table ‘user’. Not specifying any filtering conditions means: return all rows (records, user instances) you find in the DB.
Similarly, in SQLModel we use its select()
function and pass it the name of the model (which represents the DB table) we want to read from.
We can chain filtering and ordering preference to select()
but we’re not going to do that just yet. Now, we want all rows from the table.
First we need to add select
to our imports:
from sqlmodel import SQLModel, Field, Session, create_engine, select
Then, to read all rows from the ‘user’ table in the DB, we use the following pattern:
with Session(engine) as session:
statement = select(User)
results = session.exec(statement)
users = results.all()
for user in users:
print(user)
NOTE: We’re still using the same engine from earlier but with a new session for each set of related DB operations.
Here, we’re storing the query statement in a variable then passing it to exec()
to execute it. This will return a results
object. We call its .all()
method to return all User objects.
We can make the previous code more Pythonic:
with Session(engine) as session:
users = session.exec(select(User)).all()
for user in users:
print(user)
However, for readability, especially if the statement is a long one (like the code block where we chain filtering, ordering and limiting), it may be better to store the statement in a variable first and then pass it to the exec()
function.
Filtering, ordering and limiting the results:
As mentioned earlier, we can optionally chain filtering, ordering preference and/or limiting to select()
using the methods .where()
, .order_by()
and .limit()
respectively. We can use all of them, or mix and match to suit our needs.
For example, we can select only the users that are admins:
with Session(engine) as session:
statement = select(User).where(User.is_admin)
users = session.exec(statement).all()
for user in users:
print(user)
One important thing to remember is that you want to pass a Python expression to .where()
like:
select(User).where(User.name=="ahmed")
not a keyword argument like:
select(User).where(name="ahmed")
With the latter, you won’t get IDE auto-completion/suggestions or inline errors. So, if you pass a non-existent attribute or misspell an existing one and get unexpected results, it will be hard to discover and debug.
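The reason a plain Python expression like User.name == "ahmed" works at all is operator overloading: the class attribute intercepts == and returns an expression object instead of a bool. Here is a toy version of the mechanism (purely illustrative, not SQLModel/SQLAlchemy internals):

```python
class Column:
    """Toy stand-in for a model attribute (not SQLModel internals)."""
    def __init__(self, name: str):
        self.name = name

    # Overriding __eq__ makes `col == value` build a description of
    # the comparison instead of evaluating it to True/False.
    def __eq__(self, value):
        return f"{self.name} = {value!r}"

name_col = Column("name")
expr = name_col == "ahmed"
print(expr)  # name = 'ahmed'
```

In the real libraries, the returned object is a SQL expression that select() later compiles into a WHERE clause, which is why your IDE can type-check the attribute.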
We can also order the results by further chaining .order_by()
passing it the field we want to order by, and calling either .asc()
for ascending or .desc()
for descending:
with Session(engine) as session:
statement = (
select(User)
.where(User.is_admin)
.order_by(User.date_joined.asc())
)
users = session.exec(statement).all()
for user in users:
print(user)
Here, we’re ordering users by the date they joined, earliest first.
And finally we can chain one more method to limit the results instead of returning the full list of rows which may be slow especially for large tables. We just pass the number of rows we want to .limit()
the result to:
with Session(engine) as session:
statement = (
select(User)
.where(User.is_admin)
.order_by(User.date_joined.asc())
.limit(2)
)
users = session.exec(statement).all()
for user in users:
print(user)
Select only one row from the DB:
So far, we have been selecting (reading) multiple rows from the DB. The select()
statement unmodified returns an object that is an iterable. We’ve seen how we can use its method .all()
to get a list of rows/records instead of the iterable object.
There might be times though where we may want only the first row. We can do that by calling .first()
instead of .all()
. This will return the first row if there was any:
...
user = session.exec(statement).first()
print(user)
If we’re absolutely sure that our query should return one (and only one) row, we can use the .one() method on the exec() result. For example, the id field is the primary key, so it must be unique across the table. If we look a user up by id, the query should return exactly one row; if no row (or more than one) matches, .one() raises an exception.
with Session(engine) as session:
statement = select(User).where(User.id==1)
user = session.exec(statement).one()
print(user)
This also helps in testing cases where we want to ensure that only one row is returned. If multiple rows are returned, an exception is raised and the test fails.
NOTE: Knowing how to use
select()
, especially combined with.one()
is very important as we will need them when updating and/or deleting to find the row(s) we want to update/delete.
Update rows
Updating rows using SQLModel is a three-step process:
- first, we select the row we want to update,
- then, we update the row,
- finally, we add the row back to the DB and commit.
Think of it like this: you pull the item out of the DB ‘box’, change some of its attributes and then put it back in the DB box.
(Optional): We can refresh the Python object/instance linked with that DB row so that it reflects the new changes.
It’s easier than it sounds, so let’s see how. Assume that we want to assign the user with id
#4 administrator privileges.
with Session(engine) as session:
statement = select(User).where(User.id == 4)
results = session.exec(statement)
user_to_update = results.one()
user_to_update.is_admin = True
session.add(user_to_update)
session.commit()
session.refresh(user_to_update)
print("Updated user: ", user_to_update)
The code is almost self-explanatory. We find the user using select()
filtering the rows with .where()
. We then call the one()
method of the results object to get only one user, and store that user’s instance in user_to_update
.
We then change the is_admin
attribute of the user instance to True
, add it back to the session and commit the change to the DB.
Finally we refresh the user instance and print it to verify that the new is_admin
value has indeed been updated and the user with id
#4 is now an admin.
NOTE: Refreshing after updating is optional because SQLModel (via SQLAlchemy) will automatically refresh the object when you access one of its attributes, for example
user.name
. This lazy refresh ensures you’re seeing the latest data from the database.However, if you don’t access any attributes after committing, the object might not refresh on its own. That’s why we explicitly call
session.refresh(user)
to make sure we’re working with the latest data even if we don’t immediately read from it.
That’s it. As you can probably see, it’s fairly easy to update a row with SQLModel.
Next, we’ll see how to perform the last CRUD operation: delete.
Delete specific rows:
Deleting rows from DB is a straightforward business. You select()
the row(s) you want to delete, delete them and commit the session.
Here is how we would delete the user with id
#3:
with Session(engine) as session:
statement = select(User).where(User.id == 3)
results = session.exec(statement)
user_to_delete = results.one()
session.delete(user_to_delete)
session.commit()
We ‘select’ the user to be deleted, the one with the id
#3, using select()
and filter for the id
with .where()
. We then call .one()
on the results to get the only row returned. After we’ve got the row, we tell the session to delete it and commit the deletion to the DB.
Delete all rows:
To delete all rows in the table, we use the delete()
function from sqlmodel
, not the .delete()
method of the session. After importing the function, we execute it as a statement passing it the name of the model (‘User’ in our example).
WARNING: This would delete all user rows from the ‘user’ table in the DB.
from sqlmodel import SQLModel, Field, Session, create_engine, select, delete
with Session(engine) as session:
statement = delete(User)
session.exec(statement)
session.commit()
That concludes our exploration of CRUD operations and, therefore, our discussion of SQLModel as a nice alternative to using SQL directly in Python code.
Conclusion
It’s been a journey! If you have made it this far, congrats on learning this cool library and adding a new tool to your toolbox.
I think SQLModel is one of the best SQL libraries for Python and is worth learning.
To continue learning, or if you need more technical details, I recommend the documentation. It’s well written, well presented and very approachable.
Thanks for your time. See you in another Pybites blog post.