Planet Python
Last update: July 25, 2025 04:44 PM UTC
July 25, 2025
Real Python
The Real Python Podcast – Episode #258: Supporting the Python Package Index
What goes into supporting more than 650,000 projects and nearly a million users of the Python Package Index? This week on the show, we speak with Maria Ashna about her first year as the inaugural PyPI Support Specialist.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Django Weblog
DjangoCon Africa 2025 Heads to Arusha 🇹🇿
We’re excited to share that DjangoCon Africa is returning this year — and this time we’re heading to Arusha, Tanzania from August 11–15, 2025! 🎉
Arusha city view with Mount Meru in the background, credits Halidtz - CC BY-SA 4.0
This second edition builds on the incredible success of the inaugural DjangoCon Africa held in Zanzibar in 2023. That event welcomed over 200 attendees from 22+ countries, with more than half of the participants being women — a powerful statement about the growing diversity and strength of the African tech ecosystem.
What to expect at DjangoCon Africa 2025
Five action-packed days of:
- 💬 Talks: Three full days of diverse talks spanning programming, technology, society, career development, business, education, and more — all with voices from across Africa and around the globe.
- 🖥️ Workshops: Hands-on training led by Django and Python experts — perfect for deepening your skills and learning new ones.
- 🤝 Sprints: Join code sprints and contribute to open source projects, including Django itself.
- 👩💻 Django Girls workshop: A special pre-conference workshop for women interested in web development — part of a global initiative that has introduced thousands of women to Django.
- 🔍 Discovery & connections: Meet developers, designers, educators, and innovators from across the continent. Share stories. Build partnerships. Celebrate African tech talent.
Co-hosting with UbuCon Africa 2025
This year’s DjangoCon Africa comes with a special twist: we’re proud to co-host the first-ever UbuCon Africa — a regional gathering of Ubuntu users and contributors.
From August 13–15, UbuCon Africa will bring together Linux and open source enthusiasts to celebrate people-powered tech, collaboration, and the Ubuntu spirit of “I am because we are.” Whether you're a die-hard Debian dev or just curious about Ubuntu — you’ll feel right at home.
🎟 Secure your spot & get involved
Whether you’re looking to attend, speak, sponsor, or volunteer, DjangoCon Africa has a place for you.
This is more than just a conference. It’s a celebration of community, learning, and open source built by and for people across Africa and beyond.
Join us in Arusha this August. Let’s shape the future of Django together.
July 24, 2025
Ned Batchelder
Coverage 7.10.0: patch
Years ago I greeted a friend returning from vacation and asked how it had been. She answered, “It was good, I got a lot done!” I understand that feeling. I just had a long vacation myself, and used the time to clean up some old issues and add some new features in coverage.py v7.10.
The major new feature is a configuration option, [run] patch. With it, you specify named patches that coverage can use to monkey-patch some behavior that gets in the way of coverage measurement.
The first is subprocess. Coverage works great when you start your program with coverage measurement, but has long had the problem of how to also measure the coverage of sub-processes that your program created. The existing solution had been a complicated two-step process of creating obscure .pth files and setting environment variables. Whole projects appeared on PyPI to handle this for you.
Now, patch = subprocess will do this for you automatically, and clean itself up when the program ends. It handles sub-processes created by the subprocess module, the os.system() function, and any of the execv or spawnv families of functions.
This alone has spurred one user to exclaim,
The latest release of Coverage feels like a Christmas present! The native support for Python subprocesses is so good!
Another patch is _exit. This patches os._exit() so that coverage saves its data before exiting. The os._exit() function is an immediate and abrupt termination of the program, skipping all kinds of registered clean-up code. This patch makes it possible to collect coverage data from programs that end this way.
The third patch is execv. The execv functions end the current program and replace it with a new program in the same process. The execv patch arranges for coverage to save its data before the current program is ended.
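Taken together, the three patches can be enabled in a coverage configuration file. This is a sketch based on the descriptions above, assuming the multi-valued list syntax coverage uses for similar settings; check the coverage.py docs for the authoritative form:

```ini
# .coveragerc -- enable all three monkey-patches (sketch, not verified)
[run]
patch =
    subprocess
    _exit
    execv
```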
Now that these patches are available, it seems silly that it’s taken so long. They (mostly) weren’t difficult. I guess it took looking at the old issues, realizing the friction they caused, and thinking up a new way to let users control the patching. Monkey-patching is a bit invasive, so I’ve never wanted to do it implicitly. The patch option gives the user an explicit way to request what they need without having to get into the dirty details themselves.
Another process-oriented feature was contributed by Arkady Gilinsky: with --save-signal=USR1 you can specify a user signal that coverage will attend to. When you send the signal to your running coverage process, it will save the collected data to disk. This gives a way to measure coverage in a long-running process without having to end the process.
There were some other fixes and features along the way, like better HTML coloring of multi-line statements, and more default exclusions (if TYPE_CHECKING: and ...).
It feels good to finally address some of these pain points. I also closed some stale issues and pull requests. There is more to do, always more to do, but this feels like a real step forward. Give coverage 7.10.0 a try and let me know how it works for you.
TestDriven.io
Deploying a Django App to Sevalla
This tutorial looks at how to deploy a Django application to Sevalla.
The Python Show
Writing “Creating TUI Applications with Textual and Python”
In this episode, Mike Driscoll talks about his latest book, Creating TUI Applications with Textual and Python.
Learn about how and why the book came about, some of the pain points in writing it, and how much fun Textual is. You will also learn about some of the current problems in self-publishing tech books.
You can purchase Creating TUI Applications with Textual and Python on the following websites:
Python Software Foundation
PSF Board Election Nominations Opening July 29th
This year’s PSF Board Election nomination period opens next week on Tuesday, July 29th, 2:00 pm UTC and closes on Tuesday, August 12th, 2:00 pm UTC. Who runs for the board? People who care about the Python community, who want to see it flourish and grow, and also have a few hours a month to attend regular meetings, serve on committees, participate in conversations, and promote the Python community. Check out the following resources to learn more about the PSF, as well as what being a part of the PSF Board entails:
- Life as Python Software Foundation Director video on YouTube
- FAQs About the PSF Board video on YouTube
- Our past few Annual Impact Reports:
Board Election Timeline
- Nominations open: Tuesday, July 29th, 2:00 pm UTC
- Nomination cut-off: Tuesday, August 12th, 2:00 pm UTC
- Announce candidates: Thursday, August 14th
- Voter affirmation cut-off: Tuesday, August 26th, 2:00 pm UTC
- Voting start date: Tuesday, September 2nd, 2:00 pm UTC
- Voting end date: Tuesday, September 16th, 2:00 pm UTC
Not sure what UTC is for you locally? Check this UTC time converter!
Nomination details
You can nominate yourself or someone else. We encourage you to reach out to people before you nominate them to ensure they are enthusiastic about the potential of joining the Board.
To submit a nomination for yourself or someone else, use the 2025 PSF Board Election Nomination Form on our website. The nomination form opens on Tuesday, July 29th, 2:00 pm UTC and closes on Tuesday, August 12th, 2:00 pm UTC.
To support potential candidates and nominators, the 2025 PSF Board Election team has created a nomination resource (embedded below). It includes tips, formatting instructions, and guidance on what to include in a nomination. The goal is to help nominees understand what to expect and ensure that all candidates are provided the same clear and consistent standards.
Voting Reminder!
Every PSF Voting Member (Supporting, Contributing, and Fellow) needs to affirm their membership to vote in this year’s election. You should have received an email from "psf@psfmember.org <Python Software Foundation>" with the subject "[Action Required] Affirm your PSF Membership voting intention for 2025 PSF Board Election" that contains information on how to affirm your voting status.
You can see your membership record and status on your PSF Member User Information page. If you are a voting-eligible member and do not already have a login, please create an account on psfmember.org first and then email psf-elections@python.org so we can link your membership to your account.
Techiediaries - Django
Python Roadmap with Free Courses/Certificates to High-Paying Jobs
This article serves as a focused investment of your time. We’ll walk you through five free, targeted certifications, each crafted to prepare you for a specific role that ranks among the highest-paying in the market, all hinging on your Python skills.
July 23, 2025
Real Python
Python's Requests Library (Guide)
The Requests library is the go-to package for making HTTP requests in Python. It abstracts the complexities of making requests behind an intuitive API. Though not part of Python’s standard library, it’s worth considering Requests to perform HTTP actions like GET, POST, and more.
By the end of this tutorial, you’ll understand that:
- Requests is not a built-in Python module—it’s a third-party library that you must install separately.
- You make a GET request in Python using requests.get() with the desired URL.
- To add headers to requests, pass a dictionary of headers to the headers parameter in your request.
- To send POST data, use the data parameter for form-encoded data or the json parameter for JSON data.
- response.text gives you a string representation of the response content, while response.content provides raw bytes.
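The parameters listed above can be explored without sending any traffic by preparing a request locally. The URL and payload below are placeholders, not endpoints from this tutorial:

```python
import requests

# Build a request locally to inspect what the headers/json parameters do;
# nothing is sent over the network.
request = requests.Request(
    "POST",
    "https://example.com/api",
    headers={"Accept": "application/json"},
    json={"name": "example"},
)
prepared = request.prepare()
print(prepared.headers["Content-Type"])  # application/json
print(prepared.body)  # the JSON-encoded payload, as bytes
```

Preparing a request this way shows that the json parameter both serializes the payload and sets the Content-Type header for you.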
This tutorial guides you through customizing requests with headers and data, handling responses, authentication, and optimizing performance using sessions and retries.
If you want to explore the code examples that you’ll see in this tutorial, then you can download them here:
Get Your Code: Click here to download the free sample code that shows you how to use Python’s Requests library.
Take the Quiz: Test your knowledge with our interactive “Python's Requests Library” quiz. You’ll receive a score upon completion to help you track your learning progress:
Get Started With Python’s Requests Library
Even though the Requests library is a common staple for many Python developers, it’s not included in Python’s standard library. That way, the library can continue to evolve more freely as a self-standing project.
Note: If you’re looking to make HTTP requests with Python’s standard library only, then Python’s urllib.request is a good choice for you.
Since Requests is a third-party library, you need to install it before using it in your code. As a good practice, you should install external packages into a virtual environment, but you may choose to install requests into your global environment if you plan to use it across multiple projects.
Whether you’re working in a virtual environment or not, you’ll need to install requests:
$ python -m pip install requests
Once pip finishes installing requests, you can use it in your application. Importing requests looks like this:
import requests
Now that you’re all set up, it’s time to begin your journey with Requests. Your first goal will be to make a GET request.
Make a GET Request
HTTP methods, such as GET and POST, specify the action you want to perform when making an HTTP request. In addition to GET and POST, there are several other common methods that you’ll use later in this tutorial.
One of the most commonly used HTTP methods is GET, which retrieves data from a specified resource. To make a GET request using Requests, you can invoke requests.get().
To try this out, you can make a GET request to GitHub’s REST API by calling get() with the following URL:
>>> import requests
>>> requests.get("https://api.github.com")
<Response [200]>
Congratulations! You’ve made your first request. Now you’ll dive a little deeper into the response of that request.
Inspect the Response
A Response is the object that contains the results of your request. Try making that same request again, but this time store the return value in a variable so you can get a closer look at its attributes and behaviors:
>>> import requests
>>> response = requests.get("https://api.github.com")
Read the full article at https://realpython.com/python-requests/ »
Daniel Roy Greenfeld
TIL: Single source version package builds with uv
- Remove version in pyproject.toml and replace it with dynamic = ["version"]
- Add [tool.setuptools.dynamic] and specify the location of the version using this directive: version = { attr = "mypackage.__version__" }
- In your project's root __init__.py, add a __version__ attribute.
Example:
# pyproject.toml
[project]
name = "mypackage"
dynamic = ["version"]
# version = "0.1.0" # Don't set version here
[tool.setuptools.dynamic]
version = { attr = "mypackage.__version__" } # Set version here
Don't forget to specify the version in Python:
# mypackage/__init__.py
__version__ = "0.1.0"
Thanks to Audrey Roy Greenfeld for pairing with me on getting this to work.
Seth Michael Larson
Nintendo Switch 2 physical game price differences
Last week I was able to purchase a Nintendo Switch 2. The console was due to arrive on Monday, so I also picked up a physical copy of Mario Kart World for $80 USD (compared to $70 USD for digital). This is the first time I can remember that Nintendo had a different price for an identical game, just based on the medium. At first glance this seems like a $10 USD difference, but there's a detail that gets obscured by comparing sticker price alone: who is paying storage costs.
Mario Kart World requires 23GB of storage. That's not a trivial amount of storage space! I suspect Nintendo's rollout of "Game-Key Cards" isn't simply because companies wanted users to have a better experience than opening an empty jewel case. The new hardware capabilities of the Switch 2 mean games will require more storage for HD textures, sounds, video, and models.
Fast and large storage is expensive, and the more a publisher has to spend on storage the fewer margins there are for their games. Pushing games to be digital means that you, the user, are paying the storage costs instead of the publisher (in addition to the downsides of digital-only media, like fewer ownership rights, less flexibility, DMCA, etc).
So how much of that $10 USD savings for buying digitally is diminished by having to pay for storage instead of Nintendo or other games publishers? Remember that the Nintendo Switch 2 requires the use of microSD Express (or "EX") cards, so let's look at "prices per GB" for that storage medium on the market today:
- SanDisk 128GB ($60 USD, 0.47 USD/GB)
- SanDisk 256GB ($72 USD, 0.28 USD/GB)
- SanDisk 512GB ($120 USD, 0.23 USD/GB)
- Lexar 1TB ($200 USD, 0.19 USD/GB)
So if we assume that microSD Express prices don't suddenly drop, around 23 cents per GB is a decent price. Many users will be paying more to avoid paying another $100 USD on top of the Nintendo Switch 2 price. If Mario Kart World uses 23 GB, that's $5.29 USD worth of microSD Express storage (23 × $0.23 = $5.29). So when storage costs are considered, the difference between physical and digital games is less than $5 USD.
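The arithmetic above is easy to re-check. This snippet uses the prices from the list in this post:

```python
# Price-per-GB for each card listed above, plus the storage cost of a 23 GB game
cards = {
    "SanDisk 128GB": (60, 128),
    "SanDisk 256GB": (72, 256),
    "SanDisk 512GB": (120, 512),
    "Lexar 1TB": (200, 1024),
}
for name, (usd, gb) in cards.items():
    print(f"{name}: {usd / gb:.2f} USD/GB")

game_gb = 23
print(f"Storage cost at 0.23 USD/GB: {game_gb * 0.23:.2f} USD")  # 5.29
```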
You might already also have digital games from your previous Nintendo Switch (you did migrate them over, right?) that will be taking up this more valuable storage space, too. How many more games do you expect to purchase over the Nintendo Switch 2's likely 8+ year lifetime? The Nintendo GameCube app for the Nintendo Switch 2 is expected to grow by 1.3GB per game added to the service. GameCube archivists will recognize that number as being the size of a mini-DVD, the medium used by GameCube games.
All of this adds up, and you might need to upgrade your storage medium down the line. If you do upgrade, the Switch 2's single microSD card slot means you'll end up double-paying if you don't buy enough storage up-front.
Buying physical copies of games whenever possible (not "Game-Key Cards") will alleviate all of these concerns, even if they're a few dollars more expensive outright. Reminder that this is all food for thought. Being the physical media lover that I am, I want everyone to have all the facts when deciding how to buy their games.
July 22, 2025
Python Morsels
Don't call dunder methods
It's best to avoid calling dunder methods. It's common to define dunder methods, but uncommon to call them directly.
Table of contents
What is a dunder method?
In Python, a method with two underscores around its name (__like_this__) is referred to as a "dunder method", which stands for "double-underscore method".
As noted on my Python terminology page, the phrase "dunder method" isn't an official term.
Officially, dunder methods are referred to as "special methods", though that term doesn't sound "special" enough so folks sometimes say "dunder method". You'll also sometimes hear "magic method", though "magic method" seems to be less popular in recent years (after all, these methods aren't magical).
Dunder methods essentially act as a sort of "hook", allowing us (Python programmers) to customize the behavior of Python's built-in operations.
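As a quick sketch of that hook mechanism, defining __len__ lets the built-in len() work on your class, and calling len() (rather than __len__() directly) is the idiomatic usage:

```python
class Basket:
    """A tiny example class: defining __len__ hooks into the built-in len()."""

    def __init__(self, items):
        self.items = items

    def __len__(self):
        return len(self.items)

basket = Basket(["apple", "pear"])
print(len(basket))  # 2 -- Python calls Basket.__len__ for us
```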
Define dunder methods on your classes
Which dunder methods you should …
Read the full article: https://www.pythonmorsels.com/avoid-dunder-methods/
PyCoder’s Weekly
Issue #691: Inheritance, Logging, marimo, and More (July 22, 2025)
#691 – JULY 22, 2025
View in Browser »
Inheritance Over Composition, Sometimes
In an older post, Adrian wrote some code using inheritance. He got questions from his readers as to why it wouldn’t be simpler to just use functions. This post re-implements the code with inheritance, composition, and plain old functions and then compares the approaches.
ADRIAN
Logging an Uncaught Exception
Uncaught exceptions will crash an application. If you don’t know how to log these, it can be difficult to troubleshoot such a crash.
ANDREW WEGNER
Prevent Postgres Slowdowns on Python Apps with this Check List
Avoid performance regressions in your Python app by staying on top of Postgres maintenance. This monthly check list outlines what to monitor, how to catch slow queries early, and ways to ensure indexes, autovacuum, and I/O are performing as expected →
PGANALYZE
Getting Started With marimo Notebooks
Discover how marimo notebooks simplify coding with reactive updates, UI elements, and sandboxing for safe, sharable notebooks.
REAL PYTHON course
Articles & Tutorials
How to Use atexit for Cleanup
Divakar recently came across Python’s atexit
module and became curious about practical use cases in real-world applications. To explore it, he created a simple client-server app that uses a clean-up function.
DIVAKAR PATIL • Shared by Divakar Patil
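As a sketch of the module's basic pattern (not the article's actual app), atexit.register() queues a function to run at normal interpreter shutdown:

```python
import atexit

def close_connections():
    # Runs automatically at normal interpreter exit
    # (but not on os._exit() or a hard crash)
    print("cleaning up")

# register() returns the function, so it can also be used as a decorator
atexit.register(close_connections)
```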
2048: Iterators and Iterables
Making a terminal-based version of the 2048 game, Ned waded into a classic iterator/iterable confusion. This article shows you how they’re different and how confusing them can cause you problems in your code.
NED BATCHELDER
Ditch the Vibes, Get the Context with Augment Code
You ship to production; vibes won’t cut it. Augment Code’s powerful AI coding agent meets Python developers exactly where they are (in PyCharm, VS Code or even Vim), delivering deep context into even the gnarliest codebases and learning how you work. Ditch the vibes and try Augment Code today →
AUGMENT CODE sponsor
A Cleaner Way to Work With Databases in Python
The SQLModel library offers a clean, Pythonic alternative to writing raw SQL by combining the power of SQLAlchemy with the validation and type safety of Pydantic.
AHMED LEMINE • Shared by Bob Belderbos
Python Scope and the LEGB Rule
Understanding Python’s variable scope and the LEGB rule helps you avoid name collisions and unexpected behavior. Learn to manage scope and write better code.
REAL PYTHON
How to Debug Common Python Errors
Learn how to debug Python errors using tracebacks, print(), breakpoints, and tests. Master the tools you need to fix bugs faster and write better code.
REAL PYTHON
Making a Simple HTTP Server With Asyncio Protocols
Learn how to build a fast, minimal HTTP server using asyncio.Protocol, complete with routing, parsing, and response handling from scratch.
JACOB PADILLA
An Intro to Asciimatics: Another Python TUI Package
Asciimatics is a Text-based User Interface library with an emphasis on animations. Learn how to bring some fun to your terminal.
MIKE DRISCOLL
Koan 1: The Empty Path
Use __bool__, __len__, and other tools to better understand truthiness, falsiness, and the meaning of emptiness in Python.
VIVIS DEV
Prohibiting inbox.ru Email Domain Registrations
“A recent spam campaign against PyPI has prompted an administrative action, preventing using the inbox.ru email domain.”
MIKE FIEDLER
Do You Really Know How “or” and “and” Work?
This article explores the Python expression 5 or 0, which may not evaluate to what you think it does.
STEPHEN GRUPPETTA
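A quick sketch of the behavior in question: or returns one of its operands, not a bool.

```python
# `or` evaluates left to right and returns the first truthy operand,
# or the last operand if none are truthy -- it does not return True/False.
print(5 or 0)    # 5
print(0 or 5)    # 5
print(0 or "")   # '' (last operand; still falsy)
print(5 and 0)   # 0 -- `and` returns the first falsy operand
```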
A Real-Time Dashboard With FastAPI & WebSockets
Learn how to evelop a real-time inventory tracking dashboard with FastAPI, Postgres, and WebSockets.
ABDULAZEEZ ABDULAZEEZ ADESHINA • Shared by Michael Herman
Projects & Code
Run Arbitrary Code in 3rd Party Libraries With dowhen
GITHUB.COM/GAOGAOTIANTIAN • Shared by Tian Gao
Events
Weekly Real Python Office Hours Q&A (Virtual)
July 23, 2025
REALPYTHON.COM
PyOhio 2025
July 26 to July 28, 2025
PYOHIO.ORG
PyDelhi User Group Meetup
July 26, 2025
MEETUP.COM
Python Sheffield
July 29, 2025
GOOGLE.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #691.
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Python Insider
Python 3.14 release candidate 1 is go!
It’s the first 3.14 release candidate!
https://www.python.org/downloads/release/python-3140rc1/
This is the first release candidate of Python 3.14
This release, 3.14.0rc1, is the penultimate release preview. Entering the release candidate phase, only reviewed code changes which are clear bug fixes are allowed between this release candidate and the final release. The second candidate (and the last planned release preview) is scheduled for Tuesday, 2025-08-26, while the official release of 3.14.0 is scheduled for Tuesday, 2025-10-07.
There will be no ABI changes from this point forward in the 3.14 series, and the goal is that there will be as few code changes as possible.
Call to action
We strongly encourage maintainers of third-party Python projects to prepare their projects for 3.14 during this phase, and where necessary publish Python 3.14 wheels on PyPI to be ready for the final release of 3.14.0, and to help other projects do their own testing. Any binary wheels built against Python 3.14.0rc1 will work with future versions of Python 3.14. As always, report any issues to the Python bug tracker.
Please keep in mind that this is a preview release and while it’s as close to the final release as we can get it, its use is not recommended for production environments.
Core developers: time to work on documentation now
- Are all your changes properly documented?
- Are they mentioned in What’s New?
- Did you notice other changes you know of to have insufficient documentation?
Major new features of the 3.14 series, compared to 3.13
Some of the major new features and changes in Python 3.14 are:
New features
- PEP 779: Free-threaded Python is officially supported
- PEP 649: The evaluation of type annotations is now deferred, improving the semantics of using annotations.
- PEP 750: Template string literals (t-strings) for custom string processing, using the familiar syntax of f-strings.
- PEP 734: Multiple interpreters in the stdlib.
- PEP 784: A new module compression.zstd providing support for the Zstandard compression algorithm.
- PEP 758: except and except* expressions may now omit the brackets.
expressions may now omit the brackets. - Syntax highlighting in PyREPL, and support for color in unittest, argparse, json and calendar CLIs.
- PEP 768: A zero-overhead external debugger interface for CPython.
- UUID versions 6-8 are now supported by the uuid module, and generation of versions 3-5 is up to 40% faster.
- PEP 765: Disallow return/break/continue that exit a finally block.
block. - PEP 741: An improved C API for configuring Python.
- A new type of interpreter. For certain newer compilers, this interpreter provides significantly better performance. Opt-in for now, requires building from source.
- Improved error messages.
- Builtin implementation of HMAC with formally verified code from the HACL* project.
- A new command-line interface to inspect running Python processes using asynchronous tasks.
- The pdb module now supports remote attaching to a running Python process.
(Hey, fellow core developer, if a feature you find important is missing from this list, let Hugo know.)
For more details on the changes to Python 3.14, see What’s new in Python 3.14. The next pre-release of Python 3.14 will be the final release candidate, 3.14.0rc2, scheduled for 2025-08-26.
Build changes
- PEP 761: Python 3.14 and onwards no longer provides PGP signatures for release artifacts. Instead, Sigstore is recommended for verifiers.
- Official macOS and Windows release binaries include an experimental JIT compiler.
Incompatible changes, removals and new deprecations
- Incompatible changes
- Python removals and deprecations
- C API removals and deprecations
- Overview of all pending deprecations
Python install manager
The installer we offer for Windows is being replaced by our new install manager, which can be installed from the Windows Store or from its download page. See our documentation for more information. The JSON file available for download below contains the list of all the installable packages available as part of this release, including file URLs and hashes, but is not required to install the latest release. The traditional installer will remain available throughout the 3.14 and 3.15 releases.
More resources
- Online documentation
- PEP 745, 3.14 Release Schedule
- Report bugs at github.com/python/cpython/issues
- Help fund Python and its community
And now for something completely different
Today, 22nd July, is Pi Approximation Day, because 22/7 is a common approximation of π and closer to π than 3.14.
22/7 is a Diophantine approximation, named after Diophantus of Alexandria (3rd century CE), which is a way of estimating a real number as a ratio of two integers. 22/7 has been known since antiquity; Archimedes (3rd century BCE) wrote the first known proof that 22/7 overestimates π, by comparing a 96-sided polygon to the circle it circumscribes.
Another approximation is 355/113. In Chinese mathematics, 22/7 and 355/113 are respectively known as Yuelü (约率; yuēlǜ; “approximate ratio”) and Milü (密率; mìlǜ; “close ratio”).
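Both approximations are easy to verify from Python itself:

```python
import math

# 22/7 (Yuelü) overestimates pi, but is still closer to it than 3.14 is
print(abs(22 / 7 - math.pi))     # ~1.26e-3
print(abs(3.14 - math.pi))       # ~1.59e-3

# 355/113 (Milü) is far more accurate
print(abs(355 / 113 - math.pi))  # ~2.67e-7
```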
Happy Pi Approximation Day!
Enjoy the new release
Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organisation contributions to the Python Software Foundation.
Regards from a Helsinki heatwave after an excellent EuroPython,
Your release team,
Hugo van Kemenade
Ned Deily
Steve Dower
Łukasz Langa
Real Python
Exploring Python Closures: Examples and Use Cases
In Python, a closure is typically a function defined inside another function. This inner function grabs the objects defined in its enclosing scope and associates them with the inner function object itself. The resulting combination is called a closure.
Closures are a common feature in functional programming languages. In Python, closures can be pretty useful because they allow you to create function-based decorators, which are powerful tools.
In this video course, you’ll:
- Learn what closures are and how they work in Python
- Get to know common use cases of closures
- Explore alternatives to closures
Test and Code
235: pytest-django - Adam Johnson
In this episode, special guest Adam Johnson joins the show and examines pytest-django, a popular plugin among Django developers. He highlights its advantages over the built-in unittest framework, including improved test management and debugging. Adam addresses transition challenges, evolving fixture practices, and offers tips for optimizing test performance. This episode is a concise guide for developers looking to enhance their testing strategies with pytest-django.
Links:
- pytest-django - a plugin for pytest that provides a set of useful tools for testing Django applications and projects.
Help support the show AND learn pytest:
- The Complete pytest course is now a bundle, with each part available separately.
- pytest Primary Power teaches the super powers of pytest that you need to learn to use pytest effectively.
- Using pytest with Projects has lots of "when you need it" sections like debugging failed tests, mocking, testing strategy, and CI
- Then pytest Booster Rockets can help with advanced parametrization and building plugins.
- Whether you need to get started with pytest today, or want to power up your pytest skills, PythonTest has a course for you.
death and gravity
When to use classes in Python? When you repeat similar sets of functions
Are you having trouble figuring out when to use classes or how to organize them?
Have you repeatedly searched for "when to use classes in Python", read all the articles and watched all the talks, and still don't know whether you should be using classes in any given situation?
Have you read discussions about it that for all you know may be right, but they're so academic you can't parse the jargon?
Have you read articles that all treat the "obvious" cases, leaving you with no clear answer when you try to apply them to your own code?
My experience is that, unfortunately, the best way to learn this is to look at lots of examples.
Most guidelines tend to either be too vague if you don't already know enough about the subject, or too specific and saying things you already know.
This is one of those things that once you get it seems obvious and intuitive, but it's not, and is quite difficult to explain properly.
So, instead of prescribing a general approach, let's look at:
- one specific case where you may want to use classes
- examples from real-world code
- some considerations you should keep in mind
- The heuristic
- Example: Retrievers
- Example: Flask's tagged JSON
- Formalizing this
- Counter-example: modules
- Try it out
The heuristic #
If you repeat similar sets of functions, consider grouping them in a class.
That's it.
In its most basic form, a class is when you group data with functions that operate on that data; sometimes, there is no data, but it can still be useful to group the functions into an abstract object that exists only to make things easier to use / understand.
Depending on whether you choose which class to use at runtime, this is sometimes called the strategy pattern.
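As a toy illustration of that idea (the names here are invented, not from the article): the caller picks one of several interchangeable objects at runtime and calls the same method on whichever one it got.

```python
# minimal strategy-pattern sketch: interchangeable objects grouping
# related behavior; which one runs is chosen at runtime by key
class Upper:
    def apply(self, s):
        return s.upper()

class Lower:
    def apply(self, s):
        return s.lower()

STRATEGIES = {'up': Upper(), 'down': Lower()}

def transform(mode, s):
    # the caller doesn't care which concrete class it gets
    return STRATEGIES[mode].apply(s)
```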
Note
As Wikipedia puts it, "A heuristic is a practical way to solve a problem. It is better than chance, but does not always work. A person develops a heuristic by using intelligence, experience, and common sense."
So, this is not the correct thing to do all the time, or even most of the time.
Instead, I hope that this and other heuristics can help build the right intuition for people on their way from "I know the class syntax, now what?" to "proper" object-oriented design.
Example: Retrievers #
My feed reader library retrieves and stores web feeds (Atom, RSS and so on).
Usually, feeds come from the internet, but you can also use local files. The parsers for various formats don't really care where a feed is coming from, so they always take an open file as input.
reader supports conditional requests – that is, only retrieve a feed if it changed. To do this, it stores the ETag HTTP header from a response, and passes it back as the If-None-Match header of the next request; if nothing changed, the server can respond with 304 Not Modified instead of sending back the full content.
Let's have a look at how the code to retrieve feeds evolved over time; this version omits a few details, but it will end up with a structure similar to that of the full version. In the beginning, there was a function – URL and old ETag in, file and new ETag out:
import requests

def retrieve(url, etag=None):
if any(url.startswith(p) for p in ('http://', 'https://')):
headers = {}
if etag:
headers['If-None-Match'] = etag
response = requests.get(url, headers=headers, stream=True)
response.raise_for_status()
if response.status_code == 304:
response.close()
return None, etag
etag = response.headers.get('ETag', etag)
response.raw.decode_content = True
return response.raw, etag
# fall back to file
path = extract_path(url)
return open(path, 'rb'), None
We use Requests to get HTTP URLs, and return the underlying file-like object.1
For local files, we support both bare paths and file URIs; for the latter, we do a bit of validation – file:feed and file://localhost/feed are OK, but file://invalid/feed and unknown:feed2 are not:
def extract_path(url):
url_parsed = urllib.parse.urlparse(url)
if url_parsed.scheme == 'file':
if url_parsed.netloc not in ('', 'localhost'):
raise ValueError("unknown authority for file URI")
return urllib.request.url2pathname(url_parsed.path)
if url_parsed.scheme:
raise ValueError("unknown scheme for file URI")
# no scheme, treat as a path
return url
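To check the behavior described above, here is extract_path() again with the examples exercised (repeated so the snippet is self-contained; POSIX path conventions assumed):

```python
import urllib.parse
import urllib.request

def extract_path(url):  # same function as above
    url_parsed = urllib.parse.urlparse(url)
    if url_parsed.scheme == 'file':
        if url_parsed.netloc not in ('', 'localhost'):
            raise ValueError("unknown authority for file URI")
        return urllib.request.url2pathname(url_parsed.path)
    if url_parsed.scheme:
        raise ValueError("unknown scheme for file URI")
    # no scheme, treat as a bare path
    return url

print(extract_path('file:feed'))              # feed
print(extract_path('file://localhost/feed'))  # /feed
```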
Problem: can't add new feed sources #
One of reader's goals is to be extensible. For example, it should be possible to add new feed sources like an FTP server (ftp://...) or Twitter without changing reader code; however, our current implementation makes it hard to do so.
We can fix this by extracting retrieval logic into separate functions, one per protocol:
def http_retriever(url, etag):
headers = {}
# ...
return response.raw, etag
def file_retriever(url, etag):
path = extract_path(url)
return open(path, 'rb'), None
...and then routing to the right one depending on the URL prefix:
# sorted by key length (longest first)
RETRIEVERS = {
'https://': http_retriever,
'http://': http_retriever,
# fall back to file
'': file_retriever,
}
def get_retriever(url):
for prefix, retriever in RETRIEVERS.items():
if url.lower().startswith(prefix.lower()):
return retriever
raise ValueError("no retriever for URL")
def retrieve(url, etag=None):
retriever = get_retriever(url)
return retriever(url, etag)
Now, plugins can register retrievers by adding them to RETRIEVERS (in practice, there's a method for that, so users don't need to care about it staying sorted).
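That registration method might look something like this (a hypothetical sketch starting from an empty registry; reader's actual API differs), re-sorting the dict so longer, more specific prefixes are tried first:

```python
RETRIEVERS = {}

def mount_retriever(prefix, retriever):
    # hypothetical helper: insert, then rebuild the dict so iteration
    # visits longer (more specific) prefixes before shorter ones
    RETRIEVERS[prefix] = retriever
    for key in sorted(RETRIEVERS, key=len, reverse=True):
        RETRIEVERS[key] = RETRIEVERS.pop(key)

# register in arbitrary order; lookup order stays longest-first
mount_retriever('', 'file-retriever')
mount_retriever('https://', 'http-retriever')
mount_retriever('http://', 'http-retriever')
```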
Problem: can't validate URLs until retrieving them #
To add a feed, you call add_feed() with the feed URL.
But what if you pass an invalid URL? The feed gets stored in the database, and you get an "unknown scheme for file URI" error on the next update. However, this can be confusing – a good API should signal errors near the action that triggered them. This means add_feed() needs to validate the URL without actually retrieving it.
For HTTP, Requests can do the validation for us; for files, we can call extract_path() and ignore the result. Of course, we should select the appropriate logic in the same way we select retrievers, otherwise we're back where we started.
Now, there's more than one way of doing this. We could keep a separate validator registry, but that may accidentally fall out of sync with the retriever one.
URL_VALIDATORS = {
'https://': http_url_validator,
'http://': http_url_validator,
'': file_url_validator,
}
Or, we could keep a (retriever, validator) pair in the retriever registry. This is better, but it's not all that readable (what if we need to add a third thing?); it also makes customizing behavior that affects both the retriever and the validator harder.
RETRIEVERS = {
'https://': (http_retriever, http_url_validator),
'http://': (http_retriever, http_url_validator),
'': (file_retriever, file_url_validator),
}
Better yet, we can use a class to make the grouping explicit:
class HTTPRetriever:
def retrieve(self, url, etag):
headers = {}
# ...
return response.raw, etag
def validate_url(self, url):
session = requests.Session()
session.get_adapter(url)
session.prepare_request(requests.Request('GET', url))
class FileRetriever:
def retrieve(self, url, etag):
path = extract_path(url)
return open(path, 'rb'), None
def validate_url(self, url):
extract_path(url)
We then instantiate them, and update retrieve() to call the methods:
http_retriever = HTTPRetriever()
file_retriever = FileRetriever()
def retrieve(url, etag=None):
retriever = get_retriever(url)
return retriever.retrieve(url, etag)
validate_url() works just the same:
def validate_url(url):
retriever = get_retriever(url)
retriever.validate_url(url)
And there you have it – if you repeat similar sets of functions, consider grouping them in a class.
Not just functions, attributes too #
Say you want to update feeds in parallel, using multiple threads.
Retrieving feeds is mostly waiting around for I/O, so it will benefit the most from it. Parsing, on the other hand, is pure Python, CPU bound code, so threads won't help due to the global interpreter lock.
However, because we're streaming the response body, I/O is not done when the retriever returns the file, but when the parser finishes reading it.3 We can move all the (network) I/O into retrieve() by reading the response into a temporary file and returning that instead.
We'll allow any retriever to opt into this behavior by using a class attribute:
class HTTPRetriever:
slow_to_read = True
class FileRetriever:
slow_to_read = False
If a retriever is slow to read, retrieve() does the swap:
import shutil
import tempfile

def retrieve(url, etag=None):
retriever = get_retriever(url)
file, etag = retriever.retrieve(url, etag)
if file and retriever.slow_to_read:
temp = tempfile.TemporaryFile()
shutil.copyfileobj(file, temp)
file.close()
temp.seek(0)
file = temp
return file, etag
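With all the network I/O now happening inside retrieve(), updating in parallel becomes a matter of mapping it over a thread pool (a sketch with a stand-in retrieve(); not reader's actual update code):

```python
from concurrent.futures import ThreadPoolExecutor

def retrieve(url, etag=None):
    # stand-in for the real retrieve(); pretend we fetched something
    return f"<contents of {url}>", etag

urls = ['https://example.com/a.xml', 'https://example.com/b.xml']

# threads help here because retrieve() is I/O bound
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(retrieve, urls))
```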
Example: Flask's tagged JSON #
The Flask web framework provides an extendable compact representation for non-standard JSON types called tagged JSON (code). The serializer class delegates most conversion work to methods of various JSONTag subclasses (one per supported type):
- check() checks if a Python value should be tagged by that tag
- tag() converts it to tagged JSON
- to_python() converts a JSON value back to Python (the serializer uses the key tag attribute to find the correct tag)
Interestingly, tag instances have an attribute pointing back to the serializer, likely to allow recursion – when (un)packing a possibly nested collection, you need to recursively (un)pack its values. Passing the serializer to each method would have also worked, but when your functions take the same arguments...
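The shape of that design, reduced to a toy (not Flask's actual classes): each tag holds a reference back to the serializer, so it can recurse into nested values.

```python
class TagTuple:
    key = ' t'  # Flask uses short keys like ' t' for tuples

    def __init__(self, serializer):
        self.serializer = serializer  # back-reference enables recursion

    def check(self, value):
        return isinstance(value, tuple)

    def tag(self, value):
        # recurse through the serializer for each nested value
        return {self.key: [self.serializer.tag(item) for item in value]}

class Serializer:
    def __init__(self):
        self.tags = [TagTuple(self)]

    def tag(self, value):
        for tag in self.tags:
            if tag.check(value):
                return tag.tag(value)
        return value  # plain JSON types pass through unchanged

serializer = Serializer()
tagged = serializer.tag((1, (2, 3)))  # {' t': [1, {' t': [2, 3]}]}
```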
Formalizing this #
OK, the retriever code works. But how should you communicate to others (readers, implementers, interpreters, type checkers) that an HTTPRetriever is the same kind of thing as a FileRetriever, and as anything else that can go in RETRIEVERS?
Duck typing #
Here's the definition of duck typing:
A programming style which does not look at an object's type to determine if it has the right interface; instead, the method or attribute is simply called or used ("If it looks like a duck and quacks like a duck, it must be a duck.") [...]
This is what we're doing now! If it retrieves like a retriever and validates URLs like a retriever, then it's a retriever.
You see this all the time in Python. For example, json.dump() takes a file-like object; now, the full text file interface has lots of methods and attributes, but dump() only cares about write(), and will accept any object implementing it:
>>> class MyFile:
... def write(self, s):
... print(f"writing: {s}")
...
>>> f = MyFile()
>>> json.dump({'one': 1}, f)
writing: {
writing: "one"
writing: :
writing: 1
writing: }
The main way to communicate this is through documentation:

Serialize obj [...] to fp (a .write()-supporting file-like object)
Inheritance #
Nevertheless, you may want to be more explicit about the relationships between types. The easiest option is to use a base class, and require retrievers to inherit from it.
class Retriever:
slow_to_read = False
def retrieve(self, url, etag):
raise NotImplementedError
def validate_url(self, url):
raise NotImplementedError
This allows you to check the type with isinstance(), provide default methods and attributes, and will help type checkers and autocompletion, at the expense of forcing a dependency on the base class.
>>> class MyRetriever(Retriever): pass
>>> retriever = MyRetriever()
>>> retriever.slow_to_read
False
>>> isinstance(retriever, Retriever)
True
What it won't do is check that subclasses actually define the methods:
>>> retriever.validate_url('myurl')
Traceback (most recent call last):
...
NotImplementedError
Abstract base classes #
This is where abstract base classes come in. The decorators in the abc module allow defining abstract methods that must be overridden:
from abc import ABC, abstractmethod, abstractproperty

class Retriever(ABC):
@abstractproperty
def slow_to_read(self):
return False
@abstractmethod
def retrieve(self, url, etag):
raise NotImplementedError
@abstractmethod
def validate_url(self, url):
raise NotImplementedError
This is checked at runtime (but only that methods and attributes are present, not their signatures or types):
>>> class MyRetriever(Retriever): pass
>>> MyRetriever()
Traceback (most recent call last):
...
TypeError: Can't instantiate abstract class MyRetriever with abstract methods retrieve, slow_to_read, validate_url
>>> class MyRetriever(Retriever):
... slow_to_read = False
... def retrieve(self, url, etag): ...
... def validate_url(self, url): ...
...
>>> MyRetriever()
<__main__.MyRetriever object at 0x1037aac50>
Tip
You can also use ABCs to register arbitrary types as "virtual subclasses"; this allows them to pass isinstance() checks without inheritance, but won't check for required methods:
>>> class MyRetriever: pass
>>> Retriever.register(MyRetriever)
<class '__main__.MyRetriever'>
>>> isinstance(MyRetriever(), Retriever)
True
Protocols #
Finally, we have protocols, aka structural subtyping, aka static duck typing. Introduced in PEP 544, they go in the opposite direction – what if, instead of declaring what the type of something is, we declare what methods it has to have to be of a specific type?
You define a protocol by inheriting typing.Protocol:
from typing import IO, Protocol

class Retriever(Protocol):
@property
def slow_to_read(self) -> bool:
...
def retrieve(self, url: str, etag: str | None) -> tuple[IO[bytes] | None, str | None]:
...
def validate_url(self, url: str) -> None:
...
...and then use it in type annotations:
def mount_retriever(prefix: str, retriever: Retriever) -> None:
raise NotImplementedError
Some other code (not necessarily yours, not necessarily aware the protocol even exists) defines an implementation:
class MyRetriever:
slow_to_read = False
def validate_url(self):
pass
...and then uses it with annotated code:
mount_retriever('my', MyRetriever())
A type checker like mypy will check if the provided instance conforms to the protocol – not only that methods exist, but that their signatures are correct too – all without the implementation having to declare anything.
$ mypy myproto.py
myproto.py:11: error: Argument 2 to "mount_retriever" has incompatible type "MyRetriever"; expected "Retriever" [arg-type]
myproto.py:11: note: "MyRetriever" is missing following "Retriever" protocol member:
myproto.py:11: note: retrieve
myproto.py:11: note: Following member(s) of "MyRetriever" have conflicts:
myproto.py:11: note: Expected:
myproto.py:11: note: def validate_url(self, url: str) -> None
myproto.py:11: note: Got:
myproto.py:11: note: def validate_url(self) -> Any
Found 1 error in 1 file (checked 1 source file)
Tip
If you decorate your protocol with runtime_checkable, you can use it in isinstance() checks, but like ABCs, it only checks methods are present.
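For instance (a minimal illustration, not from the article):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsWrite(Protocol):
    def write(self, s: str) -> None: ...

class MyFile:
    def write(self, s: str) -> None:
        pass

# passes without inheriting: only the method's presence is checked,
# not its signature
assert isinstance(MyFile(), SupportsWrite)
assert not isinstance(object(), SupportsWrite)
```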
Counter-example: modules #
If a class has no state and you don't need inheritance, you can use a module instead:
# module.py
slow_to_read = False
def retrieve(url, etag):
raise NotImplementedError
def validate_url(url):
raise NotImplementedError
From a duck typing perspective, this is a valid retriever, since it has all the expected methods and attributes. So much so that it's also compatible with protocols:
import module
mount_retriever('mod', module)
$ mypy module.py
Success: no issues found in 1 source file
I tried to keep the retriever example stateless, but real world classes rarely are (it may be immutable state, but it's state nonetheless). Also, you're limited to exactly one implementation per module, which is usually too much like Java for my taste.
Tip
For a somewhat forced, but illustrative example of a stateful concurrent.futures executor implemented like this, and a comparison with class-based alternatives, check out Inheritance over composition, sometimes.
Try it out #
If you're doing something and you think you need a class, do it and see how it looks. If you think it's better, keep it, otherwise, revert the change. You can always switch in either direction later.
If you got it right the first time, great! If not, by having to fix it you'll learn something, and next time you'll know better.
Also, don't beat yourself up.
Sure, there are nice libraries out there that use classes in just the right way, after spending lots of time to find the right abstraction. But abstraction is difficult and time consuming, and in everyday code good enough is just that – good enough – you don't need to go to the extreme.
This code has a potential bug: if we were using a persistent session instead of a transient one, the connection would never be released, since we're not closing the response after we're done with it. In the actual code, we're doing both, but the only way to do so reliably is to return a context manager; I omitted this because it doesn't add anything to our discussion about classes. [return]
We're handling unknown URI schemes here because bare paths don't have a scheme, so anything that didn't match a known scheme must be a bare path. Also, on Windows (not supported yet), the drive letter in a path like c:\feed.xml is indistinguishable from a scheme. [return]
Unless the response is small enough to fit in the TCP receive buffer. [return]
July 21, 2025
Real Python
What Does isinstance() Do in Python?
Python’s isinstance() function helps you determine if an object is an instance of a specified class or its superclass, aiding in writing cleaner and more robust code. You use it to confirm that function parameters are of the expected types, allowing you to handle type-related issues preemptively. This tutorial explores how isinstance() works, its use with subclasses, and how it differs from type().
By the end of this tutorial, you’ll understand that:
- isinstance() checks if an object is a member of a class or superclass.
- type() checks an object’s specific class, while isinstance() considers inheritance.
- isinstance() correctly identifies instances of subclasses.
- There’s an important difference between isinstance() and type().
Exploring isinstance() will deepen your understanding of the objects you work with and help you write more robust, error-free code.
To get the most out of this tutorial, it’s recommended that you have a basic understanding of object-oriented programming. More specifically, you should understand the concepts of classes, objects—also known as instances—and inheritance.
For this tutorial, you’ll mostly use the Python REPL and some Python files. You won’t need to install any libraries since everything you’ll need is part of core Python. All the code examples are provided in the downloadable materials, and you can access these by clicking the link below:
Get Your Code: Click here to download the free sample code that you’ll use to learn about isinstance() in Python.
Take the Quiz: Test your knowledge with our interactive “What Does isinstance() Do in Python?” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
What Does isinstance() Do in Python? Take this quiz to learn how Python’s isinstance() introspection function reveals object classes and why it might not always show what you expect.
It’s time to start this learning journey, where you’ll discover the nature of the objects you use in your code.
Why Would You Use the Python isinstance() Function?
The isinstance() function determines whether an object is an instance of a class. It also detects whether the object is an instance of a superclass. To use isinstance(), you pass it two arguments:
- The instance you want to analyze
- The class you want to compare the instance against
These arguments must only be passed by position, not by keyword.
If the object you pass as the first argument is an instance of the class you pass as the second argument, then isinstance() returns True. Otherwise, it returns False.
Note: You’ll commonly see the terms object and instance used interchangeably. This is perfectly correct, but remembering that an object is an instance of a class can help you see the relationship between the two more clearly.
When you first start learning Python, you’re told that objects are everywhere. Does this mean that every integer, string, list, or function you come across is an object? Yes, it does! In the code below, you’ll analyze some basic data types:
>>> shape = "sphere"
>>> number = 8
>>> isinstance(shape, str)
True
>>> isinstance(number, int)
True
>>> isinstance(number, float)
False
You create two variables, shape and number, which hold str and int objects, respectively. You then pass shape and str to the first call of isinstance() to prove this. The isinstance() function returns True, showing that "sphere" is indeed a string.

Next, you pass number and int to the second call to isinstance(), which also returns True. This tells you 8 is an integer. The third call returns False because 8 isn’t a floating-point number.

Knowing the type of data you’re passing to a function is essential to prevent problems caused by invalid types. While it’s better to avoid passing incorrect data in the first place, using isinstance() gives you a way to avert any undesirable consequences.
Take a look at the code below:
>>> def calculate_area(length, breadth):
... return length * breadth
>>> calculate_area(5, 3)
15
>>> calculate_area(5, "3")
'33333'
Your function takes two numeric values, multiplies them, and returns the answer. Your function works, but only if you pass it two numbers. If you pass it a number and a string, your code won’t crash, but it won’t do what you expect either.
The string gets replicated when you pass a string and an integer to the multiplication operator (*). In this case, the "3" gets replicated five times to form "33333", which probably isn’t the result you expected.
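One way to guard the function with isinstance() could look like this (a sketch; the full article may take a different approach):

```python
def calculate_area(length, breadth):
    # reject non-numeric arguments early instead of silently
    # producing a replicated string
    if not isinstance(length, (int, float)) or not isinstance(breadth, (int, float)):
        raise TypeError("length and breadth must be numbers")
    return length * breadth
```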
Things get worse when you pass in two strings:
Read the full article at https://realpython.com/what-does-isinstance-do-in-python/ »
Python Bytes
#441 It's Michaels All the Way Down
Topics covered in this episode:
- Distributed sqlite follow up: Turso (https://turso.tech) and Litestream (https://litestream.io)
- PEP 792 – Project status markers in the simple index (https://peps.python.org/pep-0792/)
- Run coverage on tests (https://hugovk.dev/blog/2025/run-coverage-on-tests/)
- docker2exe: Convert a Docker image to an executable (https://github.com/rzane/docker2exe)
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=U8K-NBsGCGc

About the show

Sponsored by Digital Ocean: pythonbytes.fm/digitalocean-gen-ai. Use code DO4BYTES and get $200 in free credit.

Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (pythonbytes.fm/friends-of-the-show); we'll never share it.

Michael #1: Distributed sqlite follow up: Turso and Litestream
- Michael Booth:
  - Turso marries the familiarity and simplicity of SQLite with modern, scalable, and distributed features.
  - Seems to me that Turso is to SQLite what MotherDuck is to DuckDB.
- Mike Fiedler:
  - Continue to use the SQLite you love and care about (even the one inside the Python runtime) and launch a daemon that watches the db for changes and replicates changes to an S3-type object store.
  - Deeper dive: Litestream: Revamped (https://fly.io/blog/litestream-revamped/)

Brian #2: PEP 792 – Project status markers in the simple index
- Currently 3 status markers for packages:
  - Trove Classifier status
  - Indices can be yanked
  - PyPI projects: admins can quarantine a project, owners can archive a project
- Proposal is to have something that can have only one state:
  - active
  - archived
  - quarantined
  - deprecated
- This has been approved, but not implemented yet.

Brian #3: Run coverage on tests
- Hugo van Kemenade
- And apparently, run Ruff with at least F811 turned on
- Helps with copy/paste/modify mistakes, but also subtler bugs like consumed generators being reused.

Michael #4: docker2exe: Convert a Docker image to an executable
- This tool can be used to convert a Docker image to an executable that you can send to your friends.
- Build with a simple command: $ docker2exe --name alpine --image alpine:3.9
- Requires docker on the client device
- Probably doesn't map volumes/ports/etc, though these could potentially be exposed in the Dockerfile.

Extras

Brian:
- Back catalog of Test & Code is now on YouTube under @TestAndCodePodcast
  - So far 106 of 234 episodes are up. The rest are going up according to daily limits.
  - Ordering is rather chaotic, according to upload time, not release ordering.
- There will be a new episode this week:
  - pytest-django with Adam Johnson

Joke: If programmers were doctors (https://x.com/PR0GRAMMERHUM0R/status/1939806175475765389)
Daniel Roy Greenfeld
uv run for running tests on versions of Python
The uv library is not just useful for dependency management: it also comes with a run subcommand that doesn't just run Python scripts, it allows specifying the Python version and the dependencies for that run. Between runs it caches everything, so it runs fast.
For example, if I have a FastAPI project I could run tests on it using this command:
uv run --with pytest --with httpx pytest
But what if I want to test a particular version of Python? Then I simply specify the version of Python to run the tests:
uv run --python=3.13 --with pytest --with httpx pytest
Here's where it gets fun. I can use a Makefile
(or a justfile) to test on multiple Python versions.
testall: ## Run all the tests for all the supported Python versions
uv run --python=3.10 --with pytest --with httpx pytest
uv run --python=3.11 --with pytest --with httpx pytest
uv run --python=3.12 --with pytest --with httpx pytest
uv run --python=3.13 --with pytest --with httpx pytest
If you want to use pyproject.toml dependency groups, switch from the --with flag to the --extra flag. For example, if your testing dependencies are in a test group:
[project.optional-dependencies]
test = [
# For the test client
"httpx>=0.28.1",
# Test runner
"pytest>=8.4.0",
]
You could then run tests across multiple versions of Python thus:
testall: ## Run all the tests for all the supported Python versions
uv run --python=3.10 --extra test pytest
uv run --python=3.11 --extra test pytest
uv run --python=3.12 --extra test pytest
uv run --python=3.13 --extra test pytest
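Those four near-identical lines can also be collapsed into a shell loop inside the target (a sketch; recipe lines must be tab-indented, and Make needs $$ to pass a literal $ through to the shell):

```make
testall: ## Run all the tests for all the supported Python versions
	for v in 3.10 3.11 3.12 3.13; do \
		uv run --python=$$v --extra test pytest || exit 1; \
	done
```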
And there you have it, a simple replacement for Nox or Tox. Of course those tools have lots more features that some users may care about. However, for my needs this works great and eliminates a dependency+configuration from a number of my projects.
Thanks to Audrey Roy Greenfeld for pairing with me on getting this to work.
July 20, 2025
Go Deh
All Truth in Truthtables!
(Best viewed on a larger than phone screen)
To crib from my RosettaCode tasks description and examples:
A truth table is a display of the inputs to, and the output of a Boolean equation organised as a table where each row gives one combination of input values and the corresponding value of the equation.
And as examples:
Boolean expression: A ^ B

A B : A ^ B
0 0 :   0
0 1 :   1
1 0 :   1
1 1 :   0

Boolean expression: S | ( T ^ U )

S T U : S | ( T ^ U )
0 0 0 :       0
0 0 1 :       1
0 1 0 :       1
0 1 1 :       0
1 0 0 :       1
1 0 1 :       1
1 1 0 :       1
1 1 1 :       1
Format
A truth table has a header row of columns showing first the names of inputs assigned to each column; a visual separator - e.g. ':'; then the column name for the output result.
The body of the table, under the inputs section, contains rows of all binary combinations of the inputs. It is usually arranged so that the rows of the input section form a binary count from zero to 2**input_count - 1.
The body of the table, under the result section, contains rows showing the binary output produced from the input configuration in the same row, to the left.
Format used
I am interested in the number of inputs rather than their names, so I will show a vector i with the most significant indices to the left (so the binary count in the input section's body looks right).
Similarly, I am interested in the bits in the result column rather than a name, so I will just call the result column r.
From one result to many
Here's the invocation, and truth tables produced for some simple boolean operators:
OR
i[1] i[0] : r
=================
0 0 : 0
0 1 : 1
1 0 : 1
1 1 : 1
XOR
i[1] i[0] : r
=================
0 0 : 0
0 1 : 1
1 0 : 1
1 1 : 0
AND
i[1] i[0] : r
=================
0 0 : 0
0 1 : 0
1 0 : 0
1 1 : 1
For those three inputs, we can extend the table to show result columns for OR, XOR and then AND, like this:
OR, XOR, then AND result *columns*
i[1] i[0] : r[0] r[1] r[2]
===========================
0 0 : 0 0 0
0 1 : 1 1 0
1 0 : 1 1 0
1 1 : 1 0 1
All Truth
Just how many results are possible?
Well, i = 2 inputs gives 2**i = 4 possible input boolean combinations; so a result column has 2**i = 4 bits.
The number of different result columns is therefore 2**(2**i) = 2**4 = 16
We can show all possible results by successive results being a binary count, but this time by column in the results section (with the LSB being closest to the header row). The pp_table function automatically generates all possible results if a second parameter of None is used.
All Truths of two inputs!
i[1] i[0] : r[0] r[1] r[2] r[3] r[4] r[5] r[6] r[7] r[8] r[9] r[10] r[11] r[12] r[13] r[14] r[15]
==============================================================================================================
0 0 : 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
0 1 : 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
1 0 : 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
1 1 : 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
We might say that it shows all possible truths for up-to-and-including two inputs. That is because the results include outputs not dependent on any or all of those two inputs. For example, r[0] and r[15] do not depend on any input, as they give constant outputs of 0 and 1, respectively; r[5] is simply ~i[0] and does not depend on i[1].
It's Big, Oh!
The results grow as 2**(2**i), sometimes called double exponential growth. It gets large, quickly!
i | 2**(2**i)
--|----------
0 | 2
1 | 4
2 | 16
3 | 256
4 | 65_536
5 | 4_294_967_296
The code
Contemplating using AI, and needing to get it to understand what I wanted (as well as the endless prompting to get it to do what I want, the way I wanted it), I decided on writing it all by myself. I knew I wanted it just so, and the coding would be nothing new to me, just nailing what I wanted to show.
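As a rough reconstruction of one piece of what such code has to do (my own sketch, not the author's pp_table), here is how all result columns of an i-input table can be enumerated:

```python
def all_result_columns(n_inputs):
    # each of the 2**n input rows contributes one result bit, so there
    # are 2**(2**n) possible result columns; column r[k] is the binary
    # expansion of k read down the rows (LSB nearest the header row)
    n_rows = 2 ** n_inputs
    return [[(k >> row) & 1 for row in range(n_rows)]
            for k in range(2 ** n_rows)]

cols = all_result_columns(2)  # 16 columns for two inputs
```

For two inputs this yields 16 columns; r[6] comes out as the XOR column 0,1,1,0 and r[14] as the OR column 0,1,1,1, matching the table above.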
END.
Armin Ronacher
Welcoming The Next Generation of Programmers
This post is addressed to the Python community, one I am glad to be a member of.
I’m a product of my community. A decade ago I wrote about how much I owed the Python community. Recently I found myself reminiscing again. This year at EuroPython I even gave a brief lightning talk recalling my time in the community — it made me tear up a little.
There were two reasons for this trip down memory lane. First, I had the opportunity to be part of the new Python documentary, which brought back a flood of memories (good and bad). Second, I’ve found myself accidentally pulled towards agentic coding and vibe coders1. Over the last month and a half I have spoken with so many people on AI and programming and realized that a growing number of them are people I might not, in the past, have described as “programmers.” Even on the way to the conference I had the pleasure to engage in a multi-hour discussion on the train with an air traffic controller who ventured into programming because of ChatGPT to make his life easier.
I’m not sure where I first heard it, but I like the idea that you are what you do. If you’re painting (even your very first painting) you are a painter. Consequently if you create a program, by hand or with the aid of an agent, you are a programmer. Many people become programmers essentially overnight by picking up one of these tools.
Heading to EuroPython this year I worried that the community that shaped me might not be receptive to AI and agentic programming. Some of that fear felt warranted: over the last year I saw a number of dismissive posts in my circles about using AI for programming. Yet I have also come to realize that acceptance of AI has shifted significantly. More importantly there is pretty wide support of the notion that newcomers will and should be writing AI-generated code.
That matters, because my view is that AI will not lead to fewer programmers. In fact, the opposite seems likely. AI will bring more people into programming than anything else we have done in the last decade.
For the Python community in particular, this is a moment to reflect. Python has demonstrated its inclusivity repeatedly — think of how many people have become successful software engineers through outreach programs (like PyLadies) and community support. I myself can credit much of my early career to learning from others on the Python IRC channels.
We need to pay close attention to vibe coding. And that's not because it might produce lower‑quality code, but because if we don’t intentionally welcome the next generation learning through these tools, they will miss out on important lessons many of us learned the hard way. It would be a mistake to treat them as outcasts or “not real” programmers. Remember that many of our first programs did not have functions and were a mess of GOTOs and things copy/pasted together.
Every day someone becomes a programmer because they figured out how to make ChatGPT build something. Lucky for us: in many of those cases the AI picks Python. We should treat this as an opportunity and anticipate an expansion in the kinds of people who might want to attend a Python conference. Yet many of these new programmers are not even aware that programming communities and conferences exist. It’s in the Python community’s interest to find ways to pull them in.
Consider this: I can name the person who brought me into Python. But if you were brought in via ChatGPT or a programming agent, there may be no human there — just the AI. That lack of human connection is, I think, the biggest downside. So we will need to compensate: to reach out, to mentor, to create on‑ramps. To instil the idea that you should be looking for a community, because the AI won’t do that. We need to turn a solitary interaction with an AI into a shared journey with a community, and to move them towards learning the important lessons about engineering. We do not want to have a generation of developers held captive by companies building vibe-coding tools with little incentive for their users to break from those shackles.
[1] I'm using "vibe coders" here for people who give in to having the machine program for them. I believe that many programmers will start this way before they transition to more traditional software engineering.
July 18, 2025
Mike Driscoll
Announcing Squall: A TUI SQLite Editor
Squall is a SQLite viewer and editor that runs in your terminal. Squall is written in Python and uses the Textual package. Squall allows you to view and edit SQLite databases using SQL. You can check out the code on GitHub.
Here is what Squall looks like using the Chinook database:
Currently, there is only one command-line option: `-f` or `--filename`, which allows you to pass a database path to Squall to load.
Example Usage:
squall -f path/to/database.sqlite
The instructions assume you have uv or pip installed. To install from PyPI with uv:

uv tool install squall_sql

Or to install the latest code from GitHub:

uv tool install git+https://github.com/driscollis/squall
If you want to upgrade to the latest version of Squall SQL, then you will want to run one of the following commands:

uv tool install git+https://github.com/driscollis/squall -U --force

pip install --upgrade squall-sql
If you have cloned the package and want to run Squall, one way to do so is to navigate to the cloned repository on your hard drive using your Terminal. Then run the following command while inside the `src` folder:
python -m squall.squall
The post Announcing Squall: A TUI SQLite Editor appeared first on Mouse Vs Python.
The Python Coding Stack
Do You Really Know How `or` And `and` Work in Python?
Let's start with an easy question. Play along, please. I know you know how to use the `or` keyword, just bear with me for a bit… Here it is: what does the expression `5 or 0` evaluate to?
Have you answered? If you haven't, please do, even if this is a simple question for you.
…Have you submitted your answer now?
I often ask this question when running live courses, and people are a bit hesitant to answer because it seems to be such a simple, even trivial, question. Most people eventually answer: `True`.
OK, let's dive further into how `or` works, and we'll also explore `and` in this article.

`or`
You may not have felt the need to cheat when answering the question above. But you could have just opened your Python REPL and typed in the expression. Let's try it:
(See Code Block #1 in the appendix for the REPL session.)
Wait. What?!
The output is not `True`. Why `5`? Let's try it again with different operands:
Hmm?!
Truthy and Falsy
Let's review the concept of truthiness in Python. Every Python object is either truthy or falsy. When you pass a truthy object to the built-in `bool()`, you get `True`. And, you guessed it, you'll get `False` when you pass a falsy object to `bool()`.
In situations where Python is expecting a `True` or `False`, such as after the `if` or `while` keywords, Python will use the object's truthiness value if the object isn't a Boolean (`True` or `False`).
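A quick way to see truthiness in action is to pass a few objects to `bool()`. (This is my own illustrative snippet, not code from the article.)

```python
# Every Python object is either truthy or falsy; bool() tells you which.
for obj in [0, 42, "", "hi", [], [0], None, {}]:
    print(f"bool({obj!r}) -> {bool(obj)}")
```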
Back to `or`
Let's get back to the expression `5 or 0`. The integer `5` is truthy. You can confirm this by running `bool(5)`, which returns `True`. But `0` is falsy. In fact, `0` is the only falsy integer. Every other integer is truthy. Therefore, `5 or 0` should behave like `True`. If you write `if 5 or 0:`, you'll expect Python to execute the block of code after the `if` statement. And it does.
But you've seen that `5 or 0` evaluates to `5`. And `5` is not `True`. But it's truthy. So, the statement `if 5 or 0:` becomes `if 5:`, and since `5` is truthy, this behaves as if it were `if True:`.
But why does `5 or 0` give you `5`?
`or` Only Needs One Truthy Value
The `or` keyword is looking at its two operands, the one before and the one after the `or` keyword. It only needs one of them to be true (by which I mean truthy) for the whole expression to be true (truthy).
So, what happens when you run the expression `5 or 0`? Python looks at the first operand, which is `5`. It's truthy, so the `or` expression simply gives back this value. It doesn't need to bother with the second operand because if the first operand is truthy, the value of the second operand is irrelevant. Recall that `or` only needs one operand to be truthy. It doesn't matter if only one or both operands are truthy.
So, what happens if the first operand is falsy?
The first of these expressions has one truthy and one falsy operand. But the first operand, `0`, is falsy. Therefore, the `or` expression must look at the second operand. It's truthy. The `or` expression gives back the second operand. Therefore, the output of the `or` expression is truthy. Great.
But the `or` expression doesn't return the second operand because the second operand is truthy. Instead, it returns the second operand because the first operand is falsy.
When the first operand in an `or` expression is falsy, the result of the `or` expression is determined solely by the second operand. If the second operand is truthy, then the `or` expression is truthy. But if the second operand is falsy, the whole `or` expression is falsy. Recall that the previous two sentences apply to the case when the first operand is falsy.
That's why the second example above, `0 or ""`, returns the empty string, which is the second operand. An empty string is falsy—try `bool("")` to confirm this. Any non-empty string is truthy.
So:

- `or` always evaluates to the first operand when the first operand is truthy
- `or` always evaluates to the second operand when the first operand is falsy
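Both rules can be verified directly in the REPL; here's a small self-check of my own:

```python
# Rule 1: truthy first operand -> `or` returns the first operand
assert (5 or 0) == 5
assert ("hello" or []) == "hello"

# Rule 2: falsy first operand -> `or` returns the second operand
assert (0 or 5) == 5
assert (0 or "") == ""

print("or rules confirmed")
```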
But there's more to this…
Lazy Evaluation • Short Circuiting
Let's get back to the expression `5 or 0`. The `or` looks at the first operand. It decides it's truthy, so its output is this first operand.
It never even looks at the second operand.
Do you want proof? Consider the following `or` expression:
What's bizarre about this code at first sight? The expression `int("hello")` is not valid since you can't convert the string `"hello"` to an integer. Let's confirm this:
But the `or` expression above, `5 or int("hello")`, didn't raise this error. Why?
Because Python never evaluated the second operand. Since the first operand, `5`, is truthy, Python decides to be lazy—it doesn't need to bother with the second operand. This is called short-circuit evaluation.
That's why `5 or int("hello")` doesn't raise the `ValueError` you might expect from the second operand.
However, if the first operand is falsy, then Python needs to evaluate the second operand:
In this case, you get the `ValueError` raised by the second operand.
Lazy is good (some will be pleased to read this). Python is being efficient when it evaluates expressions lazily. It saves time by avoiding the evaluation of expressions it doesn't need!
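You can watch the laziness happen with a helper function that announces when it runs. (The `noisy()` helper is a hypothetical name of my own, not from the article.)

```python
def noisy(value):
    """Return value unchanged, but report that it was evaluated."""
    print("evaluating", value)
    return value

# Truthy first operand: `or` never evaluates the second operand
print(noisy(5) or noisy(0))

# Falsy first operand: now both operands are evaluated
print(noisy(0) or noisy("fallback"))
```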
`and`
How about the `and` keyword? The reasoning you need to use to understand `and` is similar to the one you used above when reading about `or`. But the logic is reversed. Let's try this out:
The `and` keyword requires both operands to be truthy for the whole expression to be true (truthy). In the first example above, `5 and 0`, the first operand is truthy. Therefore, `and` needs to also check the second operand. In fact, if the first operand in an `and` expression is truthy, the second operand will determine the value of the whole expression.
When the first operand is truthy, `and` always returns the second operand. In the first example, `5 and 0`, the second operand is `0`, which is falsy. So, the whole `and` expression is falsy.
But in the second example, `5 and "hello"`, the second operand is `"hello"`, which is truthy since it's a non-empty string. Therefore, the whole expression is truthy.
What do you think happens to the second operand when the first operand in an `and` expression is falsy?
The first operand is falsy. It doesn't matter what the second operand is, since `and` needs both operands to be truthy to evaluate to a truthy value.
And when the first operand in an `and` expression is falsy, Python's lazy evaluation kicks in again. The second operand is never evaluated. You have a short-circuit evaluation:
Once again, you use the invalid expression `int("hello")` as the second operand. This expression would raise an error when Python evaluates it. But, as you can see, the expression `0 and int("hello")` never raises this error since it never evaluates the second operand.
Let's summarise how `and` works:

- `and` always evaluates to the first operand when the first operand is falsy
- `and` always evaluates to the second operand when the first operand is truthy
Compare this to the bullet point summary for the `or` expression earlier in this article.
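As with `or`, both rules are easy to verify with a couple of assertions (my sketch):

```python
# Rule 1: falsy first operand -> `and` returns the first operand
assert (0 and 5) == 0
assert ("" and [1]) == ""

# Rule 2: truthy first operand -> `and` returns the second operand
assert (5 and 0) == 0
assert (5 and "hello") == "hello"

print("and rules confirmed")
```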
Do you want to try video courses designed and delivered in the same style as these posts? You can get a free trial at The Python Coding Place and you also get access to a members-only forum.
More on Short-Circuiting
Here's code you may see that uses the `or` expression’s short-circuiting behaviour:
Now, you're assigning the value of the `or` expression to a variable name, `person`. So, what will `person` hold?
Let's try this out in two scenarios:
In the first example, you type your name when prompted. Or you can type my name, whatever you want! Therefore, the call to `input()` returns a non-empty string, which is truthy. The `or` expression evaluates to this first operand, which is the return value of the `input()` call. So, `person` is the string returned by `input()`.
However, in the second example, you simply hit enter when prompted to type in a name. You leave the name field blank. In this case, `input()` returns the empty string, `""`. And an empty string is falsy. Therefore, `or` evaluates to the second operand, which is the string `"Unknown"`. This string is assigned to `person`.
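The same default-value idiom works with any possibly-empty value, not just `input()`. Here's a testable variant using a function argument instead; `greet()` is a hypothetical helper of my own:

```python
def greet(name=""):
    # An empty string is falsy, so `or` falls back to "Unknown"
    person = name or "Unknown"
    return f"Hello, {person}!"

print(greet("Stephen"))  # Hello, Stephen!
print(greet(""))         # Hello, Unknown!
```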
Final Words
So, `or` and `and` don't always evaluate to a Boolean. They'll evaluate to one of their two operands, which can be any object—any data type. Since all objects in Python are either truthy or falsy, it doesn't matter that `or` and `and` don't return Booleans!
Now you know!
Do you want to join a forum to discuss Python further with other Pythonistas? Upgrade to a paid subscription here on The Python Coding Stack to get exclusive access to The Python Coding Place's members' forum. More Python. More discussions. More fun.
And you'll also be supporting this publication. I put plenty of time and effort into crafting each article. Your support will help me keep this content coming regularly and, importantly, will help keep it free for everyone.
Image by Paolo Trabattoni from Pixabay
Code in this article uses Python 3.13
The code images used in this article are created using Snappify. [Affiliate link]
You can also support this publication by making a one-off contribution of any amount you wish.
For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!
Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.
And you can find out more about me at stephengruppetta.com
Appendix: Code Blocks
Code Block #1
5 or 0
# 5
Code Block #2
"hello" or []
# 'hello'
Code Block #3
0 or 5
# 5
0 or ""
# ''
Code Block #4
5 or int("hello")
# 5
Code Block #5
int("hello")
# Traceback (most recent call last):
# File "<input>", line 1, in <module>
# ValueError: invalid literal for int() with base 10: 'hello'
Code Block #6
0 or int("hello")
# Traceback (most recent call last):
# File "<input>", line 1, in <module>
# ValueError: invalid literal for int() with base 10: 'hello'
Code Block #7
5 and 0
# 0
5 and "hello"
# 'hello'
Code Block #8
0 and 5
# 0
Code Block #9
0 and int("hello")
# 0
Code Block #10
person = input("Enter name: ") or "Unknown"
Code Block #11
person = input("Enter name: ") or "Unknown"
# Enter name: >? Stephen
person
# 'Stephen'
person = input("Enter name: ") or "Unknown"
# Enter name: >?
person
# 'Unknown'
Talk Python to Me
#514: Python Language Summit 2025
Every year the core developers of Python convene in person to focus on high-priority topics for CPython and beyond. This year they met at PyCon US 2025. Those meetings are closed-door to keep them focused and productive. But we're lucky that Seth Michael Larson was in attendance and wrote up each topic presented, along with the reactions and feedback to each. We'll be exploring this year's Language Summit with Seth. It's quite insightful as to where Python is going and the pressing matters.

Episode sponsors:
- Seer: AI Debugging, Code TALKPYTHON: https://talkpython.fm/seer
- Sentry AI Monitoring, Code TALKPYTHON: https://talkpython.fm/sentryagents
- Talk Python Courses: https://talkpython.fm/training

Links from the show:
- Seth on Mastodon: https://fosstodon.org/@sethmlarson
- Seth on Twitter: https://twitter.com/sethmlarson
- Seth on GitHub: https://github.com/sethmlarson
- Python Language Summit 2025: https://pyfound.blogspot.com/2025/06/python-language-summit-2025.html
- WheelNext: https://wheelnext.dev/
- Free-Threaded Wheels: https://hugovk.github.io/free-threaded-wheels/
- Free-Threaded Python Compatibility Tracking: https://py-free-threading.github.io/tracking/
- PEP 779: Criteria for supported status for free-threaded Python: https://discuss.python.org/t/pep-779-criteria-for-supported-status-for-free-threaded-python/84319/123
- PyPI Data: https://py-code.org/
- Senior Engineer tries Vibe Coding: https://www.youtube.com/watch?v=_2C2CNmK7dQ
- Watch this episode on YouTube: https://www.youtube.com/watch?v=t7Ov3ICo8Kc
- Episode #514 deep-dive: https://talkpython.fm/episodes/show/514/python-language-summit-2025#takeaways-anchor
- Episode transcripts: https://talkpython.fm/episodes/transcript/514/python-language-summit-2025
- Developer Rap Theme Song: Served in a Flask: https://talkpython.fm/flasksong

Stay in touch with us:
- Subscribe to Talk Python on YouTube: https://talkpython.fm/youtube
- Talk Python on Bluesky: https://bsky.app/profile/talkpython.fm
- Talk Python on Mastodon: https://fosstodon.org/web/@talkpython
- Michael on Bluesky: https://bsky.app/profile/mkennedy.codes
- Michael on Mastodon: https://fosstodon.org/web/@mkennedy
Matt Layman
Enhancing Chatbot State Management with LangGraph
Picture this: it’s late and I’m deep in a coding session, wrestling with a chatbot that’s starting to feel more like a living thing than a few lines of Python. Today’s mission? Supercharge the chatbot’s ability to remember and verify user details like names and birthdays using LangGraph. Let’s unpack the journey, from shell commands to Git commits, and see how this bot got a memory upgrade. For clarity, this is my adventure running through the LangGraph docs.