Planet Python
Last update: January 14, 2025 09:42 PM UTC
January 14, 2025
Mike Driscoll
Textual – Switching Screens in Your Terminal
The `Screen` is a container for your widgets. These screens occupy the dimensions of your terminal by default. While you can have many different screens in a single application, only one screen may be active at a time.
When you create your `App` class, Textual will create a screen object implicitly. Yes, Textual requires you to have at least one screen or your application won't work. If you do not create a new screen or switch to a different one, the default screen is where your widgets will get mounted or composed to.
Screens are a great way to organize your application. Many applications have settings pages, help pages, and more. These are just a few examples of how you can use screens.
Now that you know what a screen is, you’re ready to learn how to create new ones!
Creating Screens
When you create an application, you create a `Screen` implicitly. But how do you create your own `Screen`? Fortunately, Textual has made that easy. All you need to do is import the `Screen` class from `textual.screen` and extend it as needed.
You can style screens the same way you do other widgets, except for the dimensions as screens are always the same size as your terminal window.
To see how this all works, you will create an application with two screens:
- Your main screen
- Your second screen, which will be green
You will be able to switch between the screens using a button. Each screen has its own button and its own event or message handler.
Open up your favorite Python IDE and create a new file called `two_screens.py` with the following contents:
# two_screens.py
from textual import on
from textual.app import App, ComposeResult
from textual.screen import Screen
from textual.widgets import Button


class GreenScreen(Screen):
    def compose(self) -> ComposeResult:
        self.styles.background = "green"
        yield Button("Main Screen", id="main")

    @on(Button.Pressed, "#main")
    def on_main(self) -> None:
        self.dismiss()


class MainApp(App):
    def compose(self) -> ComposeResult:
        yield Button("Switch", id="switch")

    @on(Button.Pressed, "#switch")
    def on_switch(self) -> None:
        self.push_screen(GreenScreen())


if __name__ == "__main__":
    app = MainApp()
    app.run()
You use Textual's handy `on` decorator to match against the button's `id`. That keeps the message from bubbling around to other event handlers, which is what could happen if you had used `on_button_pressed()`, for example.
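For contrast, here is a sketch (mine, not from the tutorial) of the `on_button_pressed()` route, where you have to dispatch on the button's `id` yourself:

def on_button_pressed(self, event: Button.Pressed) -> None:
    # Every Button.Pressed message in the app lands here,
    # so we check which button was actually pressed.
    if event.button.id == "switch":
        self.push_screen(GreenScreen())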
When you run your application, you will see something like this:
Try clicking the buttons and switching between the screens.
Of course, you don't need to use buttons at all if you don't want to. You could use keyboard shortcuts instead. Why not give that a try?
Go back to your Python IDE and create a new file called `two_screens_keys_only.py` with this code in it:
# two_screens_keys_only.py
from textual.app import App, ComposeResult
from textual.screen import Screen
from textual.widgets import Label


class GreenScreen(Screen):
    BINDINGS = [("escape", "app.pop_screen", "Dismiss the screen")]

    def compose(self) -> ComposeResult:
        self.styles.background = "green"
        yield Label("Second Screen")


class MainApp(App):
    SCREENS = {"green": GreenScreen}
    BINDINGS = [("n", "push_screen('green')", "Green Screen")]

    def compose(self) -> ComposeResult:
        yield Label("Main screen")


if __name__ == "__main__":
    app = MainApp()
    app.run()
Using keyboard shortcuts makes your code a little less verbose. However, since you aren't using a `Footer` widget, the shortcuts are not shown on-screen to the user. When you are on the main screen, you must press the letter "n" on your keyboard to switch to the `GreenScreen`. Then when you want to switch back, you press "Esc", the escape key.
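If you do want the shortcuts displayed, a minimal sketch (mine, not part of the tutorial) is to yield Textual's built-in `Footer` widget, which renders the current bindings at the bottom of the screen:

from textual.app import App, ComposeResult
from textual.widgets import Footer, Label


class FooterDemo(App):
    BINDINGS = [("q", "quit", "Quit")]

    def compose(self) -> ComposeResult:
        yield Label("Main screen")
        yield Footer()  # shows the key bindings defined in BINDINGS


if __name__ == "__main__":
    FooterDemo().run()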
Here's what the screen looks like on the `GreenScreen`:
Now try using the keys mentioned to swap between the two screens. Feel free to change the keyboard bindings to keys of your own choosing.
Wrapping Up
Textual can do much more with screens than what is covered in this brief tutorial. However, you can use this information as a great starting point for learning how to add one or more additional screens to your GUI in your terminal.
Play around with these examples and then run over to the Textual documentation to learn about some of the other widgets you can add to bring your application to life.
Want to Learn More?
If you’d like to learn more about Textual, check out my book: Creating TUI Applications with Textual and Python, which you can find on the following websites:
The post Textual – Switching Screens in Your Terminal appeared first on Mouse Vs Python.
Peter Bengtsson
How I run standalone Python in 2025
`uv run --python 3.12 --with requests python $@` to quickly start a Python interpreter with the `requests` package installed without creating a whole project.
PyCoder’s Weekly
Issue #664: Django vs FastAPI, Interacting With Python, Data Cleaning, and More (Jan. 14, 2025)
#664 – JANUARY 14, 2025
View in Browser »
Django vs. FastAPI, an Honest Comparison
David has worked with Django for a long time, but recently has done some deeper coding with FastAPI. As a result, he’s able to provide a good contrast between the libraries and why/when you might choose one over the other.
DAVID DAHAN
Ways to Start Interacting With Python
In this video course, you’ll explore the various ways of interacting with Python. You’ll learn about the REPL for quick testing and running scripts, as well as how to work with different IDEs, and Python’s IDLE.
REAL PYTHON course
Optimize Postgres Performance and Reduce Costs with Crunchy Bridge
Discover why YNAB (You Need A Budget) switched to fully managed Postgres on Crunchy Bridge. With a 30% increase in performance and a 10% reduction in costs, YNAB leverages Crunchy Bridge’s seamless scaling, high availability, and expert support to optimize their database management →
CRUNCHY DATA sponsor
Data Cleaning in Data Science
"Real-world data needs cleaning before it can give us useful insights. Learn how you can perform data cleaning in data science on your dataset."
HELEN SCOTT
PyConf Hyderabad Feb 22-23
PYCONFHYD.ORG • Shared by Poruri Sai Rahul
Discussions
Python Jobs
Backend Software Engineer (Anywhere)
Articles & Tutorials
Building New Structures for Learning Python
What are the new ways we can teach and share our knowledge about Python? How can we improve the structure of our current offerings and build new educational resources for our audience of Python learners? This week on the show, Real Python core team members Stephen Gruppetta and Martin Breuss join us to discuss enhancements to the site and new ways to learn Python.
REAL PYTHON podcast
Automated Accessibility Audits for Python Web Apps
This article covers how to automatically audit your web apps for accessibility standards. The associated second part covers how to do snapshot testing for the same.
PAMELAFOX.ORG
Integrate Auth0 With Just a Few Lines of Code
Whether your end users are consumers, businesses, or both, Auth0 provides the foundational requirements out of the box allowing you to customize your solution with APIs, 30+ SDKs, and Quickstarts. Try Auth0 free today with up to 25K active users - no credit card needed to sign up →
AUTH0 sponsor
Software Bill of Materials Packaging Proposal
A new Python packaging proposal, PEP 770, introduces SBOM support to tackle the “phantom dependency” problem, making it easier to track non-Python components that security tools often miss.
SOCKET.DEV • Shared by Sarah Gooding
Unpacking `kwargs` With Custom Objects
You may have unpacked a dictionary using the `**kwargs` mechanism in Python, but did you know you can write this capability into your own classes? This quick TIL article covers how to write a `__getitem__()` method.
RODRIGO GIRÃO SERRÃO
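As a minimal sketch of the idea from the TIL above (my code, not Rodrigo's): `**` unpacking asks your object for `keys()` and then calls `__getitem__()` for each one:

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def keys(self):
        # ** unpacking first asks the object for its keys...
        return ["x", "y"]

    def __getitem__(self, key):
        # ...then fetches each value by key.
        return getattr(self, key)

def show(**kwargs):
    print(kwargs)

show(**Point(1, 2))  # {'x': 1, 'y': 2}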
Musings on Tracing in PyPy
What started as an answer to a question on Twitter has turned into a very deep dive on tracing JITs, how they compare to method-based JITs, and how all that works in the alternative Python interpreter PyPy.
CF BOLZ-TEREICK
From Default Line Charts to Journal-Quality Infographics
“Everyone who has used Matplotlib knows how ugly the default charts look like.” In this series of posts, Vladimir shares some tricks to make your visualizations stand out and reflect your individual style.
VLADIMIR ZHYVOV
Stupid `pipx` Tricks
This post talks about `pipx`, a wrapper around `pip` that allows you to use Python packages like applications, and covers the strengths and weaknesses of `pipx` and just what you can do with it.
KARL KNECHTEL
Towards PyPy3.11: An Update
The alternative Python interpreter PyPy is working towards a Python 3.11 compatible release. This post talks about how that is going and the challenges along the way.
PYPY.ORG
Learn SQL With Python
This tutorial teaches the fundamentals of SQL by using Python to build applications that interact with a relational PostgreSQL database.
PATRICK KENNEDY • Shared by Patrick Kennedy
PEP 769: Add a Default Keyword Argument to attrgetter and itemgetter
This proposal aims to enhance the operator module by adding a default keyword argument to the attrgetter and itemgetter functions.
PYTHON.ORG
Posit Connect Cloud: Share the Work you Make With Streamlit, FastAPI, Shiny, & Other FOSS Frameworks
Posit Connect Cloud lets you publish, host, and manage Streamlit, Dash & other apps, dashboards, APIs, and more. A centralized platform for sharing Python-based data science, it streamlines deployment and boosts collaboration—amplifying your impact.
POSIT sponsor
Unit Testing vs. Integration Testing
Discover the key differences between unit testing vs integration testing and learn how to automate both with Python.
FEDERICO TROTTA
Why Is `hash(-1) == hash(-2)` in Python?
Somewhat surprisingly, `hash(-1) == hash(-2)` in CPython. This post examines how and discovers why this is the case.
OMAIR MAJID
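A quick check in any CPython REPL confirms it (the short version: at the C level, a hash value of -1 signals an error, so CPython substitutes -2):

>>> hash(-1)
-2
>>> hash(-2)
-2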
Projects & Code
Migrate a Project From Poetry/Pipenv to uv
GITHUB.COM/MKNIEWALLNER • Shared by Mathieu Kniewallner
Events
Weekly Real Python Office Hours Q&A (Virtual)
January 15, 2025
REALPYTHON.COM
PyData Bristol Meetup
January 16, 2025
MEETUP.COM
PyLadies Amsterdam
January 16, 2025
MEETUP.COM
PyLadies Dublin
January 16, 2025
PYLADIES.COM
Chattanooga Python User Group
January 17 to January 18, 2025
MEETUP.COM
PyCon+Web 2025
January 24 to January 26, 2025
PYCONWEB.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #664.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Python Morsels
Python's range() function
The `range` function can be used for counting upward, counting downward, or performing an operation a number of times.
Table of contents
Counting upwards in Python
How can you count from 1 to 10 in Python?
You could make a list of all those numbers and then loop over it:
>>> numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> for n in numbers:
... print(n)
...
1
2
3
4
5
6
7
8
9
10
But that could get pretty tedious. Imagine if we were working with 100 numbers... or 1,000 numbers!
Instead, we could use one of Python's built-in functions: the `range` function.

The `range` function accepts a `start` integer and a `stop` integer:
>>> for n in range(1, 11):
... print(n)
...
1
2
3
4
5
6
7
8
9
10
The `range` function counts upward starting from that `start` number, and it stops just before that `stop` number.

So we're stopping at 10 here instead of going all the way to 11.
You can also call `range` with just one argument:
>>> for n in range(5):
... print(n)
...
0
1
2
3
4
When `range` is given one argument, it starts at 0, and it stops just before that argument.

So `range` can accept one argument (the `stop` value) where it starts at 0, and it stops just before that number.

And `range` can also accept two arguments: a `start` value and a `stop` value.

But `range` also accepts a third argument!

Using `range` with a `step` value

The `range` function can accept …
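The excerpt stops here; as a quick preview (my example, not from the excerpt), the third argument is the step, the amount to count by:

>>> for n in range(0, 10, 2):
...     print(n)
...
0
2
4
6
8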
Read the full article: https://www.pythonmorsels.com/range/
Daniel Roy Greenfeld
TIL: Using inspect and timeit together
Two libraries in Python's standard library that are useful for keeping load testing code all in one module.
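As a minimal sketch of how the two can combine (my guess at the idea, not Daniel's exact code): `inspect.getsource()` can hand a function's own source to `timeit` as setup code, keeping the timed code and the code under test in one module:

import inspect
import timeit

def add(a, b):
    return a + b

# Pass the function's own source as timeit's setup, so the timed
# statement can call it directly.
setup = inspect.getsource(add)
print(timeit.timeit("add(1, 2)", setup=setup, number=100_000))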
Real Python
Building Dictionary Comprehensions in Python
Dictionary comprehensions are a concise and quick way to create, transform, and filter dictionaries in Python. They can significantly enhance your code's conciseness and readability compared to using regular `for` loops to process your dictionaries.
Understanding dictionary comprehensions is crucial for you as a Python developer because they’re a Pythonic tool for dictionary manipulation and can be a valuable addition to your programming toolkit.
In this video course, you’ll learn how to:
- Create dictionaries using dictionary comprehensions
- Transform existing dictionaries with comprehensions
- Filter key-value pairs from dictionaries using conditionals
- Decide when to use dictionary comprehensions
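Here's a quick taste of the first three bullets (my examples, not from the course):

prices = {"apple": 1.10, "bread": 2.50, "milk": 0.95}

# Create: map each product name to its length
lengths = {name: len(name) for name in prices}

# Transform: convert dollars to cents
cents = {name: round(price * 100) for name, price in prices.items()}

# Filter: keep only items under two dollars
cheap = {name: price for name, price in prices.items() if price < 2}

print(lengths)  # {'apple': 5, 'bread': 5, 'milk': 4}
print(cents)    # {'apple': 110, 'bread': 250, 'milk': 95}
print(cheap)    # {'apple': 1.1, 'milk': 0.95}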
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Django Weblog
Django security releases issued: 5.1.5, 5.0.11, and 4.2.18
In accordance with our security release policy, the Django team is issuing releases for Django 5.1.5, Django 5.0.11, and Django 4.2.18. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.
CVE-2024-56374: Potential denial-of-service vulnerability in IPv6 validation
Lack of upper bound limit enforcement in strings passed when performing IPv6 validation could lead to a potential denial-of-service attack. The undocumented and private functions clean_ipv6_address and is_valid_ipv6_address were vulnerable, as was the django.forms.GenericIPAddressField form field, which has now been updated to define a max_length of 39 characters.
The django.db.models.GenericIPAddressField model field was not affected.
Thanks to Saravana Kumar for the report.
This issue has severity "moderate" according to the Django security policy.
Affected supported versions
- Django main
- Django 5.1
- Django 5.0
- Django 4.2
Resolution
Patches to resolve the issue have been applied to Django's main, 5.1, 5.0, and 4.2 branches. The patches may be obtained from the following changesets.
CVE-2024-56374: Potential denial-of-service vulnerability in IPv6 validation
- On the main branch
- On the 5.1 branch
- On the 5.0 branch
- On the 4.2 branch
The following releases have been issued
- Django 5.1.5 (download Django 5.1.5 | 5.1.5 checksums)
- Django 5.0.11 (download Django 5.0.11 | 5.0.11 checksums)
- Django 4.2.18 (download Django 4.2.18 | 4.2.18 checksums)
The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E
General notes regarding security reporting
As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum, nor via the django-developers list. Please see our security policies for further information.
Python Insider
Python 3.14.0 alpha 4 is out
Hello, three dot fourteen dot zero alpha four!
https://www.python.org/downloads/release/python-3140a4/
This is an early developer preview of Python 3.14
Major new features of the 3.14 series, compared to 3.13
Python 3.14 is still in development. This release, 3.14.0a4, is the fourth of seven planned alpha releases.
Alpha releases are intended to make it easier to test the current state of new features and bug fixes and to test the release process.
During the alpha phase, features may be added up until the start of the beta phase (2025-05-06) and, if necessary, may be modified or deleted up until the release candidate phase (2025-07-22). Please keep in mind that this is a preview release and its use is not recommended for production environments.
Many new features for Python 3.14 are still being planned and written. Among the major new features and changes so far:
- PEP 649: deferred evaluation of annotations
- PEP 741: Python configuration C API
- PEP 761: Python 3.14 and onwards no longer provides PGP signatures for release artifacts. Instead, Sigstore is recommended for verifiers.
- Improved error messages
- Many removals of deprecated classes, functions, methods and parameters in various standard library modules.
- New deprecations, many of which are scheduled for removal from Python 3.16
- C API removals and deprecations
- (Hey, fellow core developer, if a feature you find important is missing from this list, let Hugo know.)
The next pre-release of Python 3.14 will be 3.14.0a5, currently scheduled for 2025-02-11.
More resources
- Online documentation
- PEP 745, 3.14 Release Schedule
- Report bugs at https://github.com/python/cpython/issues
- Help fund Python and its community
And now for something completely different
In Python, you can use Greek letters as constants. For example:
from math import pi as π

def circumference(radius: float) -> float:
    return 2 * π * radius

print(circumference(6378.137))  # 40075.016685578485
Enjoy the new release
Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organisation contributions to the Python Software Foundation.
Regards from a slushy, slippery Helsinki,
Your release team,
Hugo van Kemenade @hugovk
Ned Deily @nad
Steve Dower @steve.dower
Łukasz Langa @ambv
Eli Bendersky
Reverse mode Automatic Differentiation
Automatic Differentiation (AD) is an important algorithm for calculating the derivatives of arbitrary functions that can be expressed by a computer program. One of my favorite CS papers is "Automatic differentiation in machine learning: a survey" by Baydin, Perlmutter, Radul and Siskind (ADIMLAS from here on). While this post attempts to be useful on its own, it serves best as a followup to the ADIMLAS paper - so I strongly encourage you to read that first.
The main idea of AD is to treat a computation as a nested sequence of function compositions, and then calculate the derivative of the outputs w.r.t. the inputs using repeated applications of the chain rule. There are two methods of AD:
- Forward mode: where derivatives are computed starting at the inputs
- Reverse mode: where derivatives are computed starting at the outputs
Reverse mode AD is a generalization of the backpropagation technique used in training neural networks. While backpropagation starts from a single scalar output, reverse mode AD works for any number of function outputs. In this post I'm going to be describing how reverse mode AD works in detail.
Reading the ADIMLAS paper is strongly recommended but not required. There is, however, one mandatory prerequisite for this post: a good understanding of the chain rule of calculus, including its multivariate formulation. Please read my earlier post on the subject first if you're not familiar with it.
Linear chain graphs
Let's start with a simple example where the computation is a linear chain of primitive operations: the Sigmoid function.
This is a basic Python implementation:
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))
To apply the chain rule, we'll break down the calculation of S(x) into a sequence of function compositions, as follows:
\[\begin{align*} f(x)&=-x\\ g(f)&=e^f\\ w(g)&=1+g\\ v(w)&=\frac{1}{w} \end{align*}\]

Take a moment to convince yourself that S(x) is equivalent to the composition v\circ(w\circ(g\circ f))(x).
The same decomposition of sigmoid into primitives in Python would look as follows:
def sigmoid(x):
f = -x
g = math.exp(f)
w = 1 + g
v = 1 / w
return v
Yet another representation is this computational graph:
Each box (graph node) represents a primitive operation, with the name assigned to it shown in the green rectangle on the right of each box. Arrows (graph edges) represent the flow of values between operations.
Our goal is to find the derivative of S w.r.t. x at some point x_0, denoted as S'(x_0). The process starts by running the computational graph forward with our value of x_0. As an example, we'll use x_0=0.5:
Since all the functions in this graph have a single input and a single output, it's sufficient to use the single-variable formulation of the chain rule.
\[(g \circ f)'(x_0)={g}'(f(x_0)){f}'(x_0)\]

To avoid confusion, let's switch notation so we can explicitly see which derivatives are involved. For f(x) and g(f) as before, we can write the derivatives like this:

\[f'(x)=\frac{df}{dx}\quad g'(f)=\frac{dg}{df}\]

Each of these is a function we can evaluate at some point; for example, we denote the evaluation of f'(x) at x_0 as \frac{df}{dx}(x_0). So we can rewrite the chain rule like this:

\[\frac{d(g \circ f)}{dx}(x_0)=\frac{dg}{df}(f(x_0))\frac{df}{dx}(x_0)\]

Reverse mode AD means applying the chain rule to our computation graph, starting with the last operation and ending at the first. Remember that our final goal is to calculate:

\[\frac{dS}{dx}(x_0)\]

Where S is a composition of multiple functions. The first composition we unravel is the last node in the graph, where v is calculated from w. This is the chain rule for it:

\[\frac{dS}{dw}=\frac{d(S \circ v)}{dw}(x_0)=\frac{dS}{dv}(v(x_0))\frac{dv}{dw}(x_0)\]

The formula for S is S(v)=v, so its derivative is 1. The formula for v is v(w)=\frac{1}{w}, so its derivative is -\frac{1}{w^2}. Substituting the value of w computed in the forward pass, we get:

\[\frac{dS}{dw}(x_0)=1\cdot\frac{-1}{w^2}\bigg\rvert_{w=1.61}=-0.39\]

Continuing backwards from v to w:

\[\frac{dS}{dg}(x_0)=\frac{dS}{dw}(x_0)\frac{dw}{dg}(x_0)\]

We've already calculated \frac{dS}{dw}(x_0) in the previous step. Since w=1+g, we know that w'(g)=1, so:

\[\frac{dS}{dg}(x_0)=-0.39\cdot1=-0.39\]

Continuing similarly down the chain, until we get to the input x:

\[\begin{align*} \frac{dS}{df}(x_0)&=\frac{dS}{dg}(x_0)\frac{dg}{df}(x_0)=-0.39\cdot e^f\bigg\rvert_{f=-0.5}=-0.24\\ \frac{dS}{dx}(x_0)&=\frac{dS}{df}(x_0)\frac{df}{dx}(x_0)=-0.24\cdot -1=0.24 \end{align*}\]

We're done; the value of the derivative of the sigmoid function at x=0.5 is 0.24; this can be easily verified with a calculator using the analytical derivative of this function.
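For instance, in Python (my check, using the standard identity S'(x)=S(x)(1-S(x)) for the sigmoid):

import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

s = sigmoid(0.5)
print(s * (1 - s))  # ~0.235, i.e. 0.24 to two decimal places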
As you can see, this procedure is rather mechanical and it's not surprising that it can be automated. Before we get to automation, however, let's review the more common scenario where the computational graph is a DAG rather than a linear chain.
General DAGs
The sigmoid sample we worked through above has a very simple, linear computational graph. Each node has a single predecessor and a single successor; moreover, the function itself has a single input and a single output. Therefore, the single-variable chain rule is sufficient here.
In the more general case, we'll encounter functions that have multiple inputs, may also have multiple outputs [1], and the internal nodes are connected in non-linear patterns. To compute their derivatives, we have to use the multivariate chain rule.
As a reminder, in the most general case we're dealing with a function that has n inputs, denoted a=a_1,a_2\cdots a_n, and m outputs, denoted f_1,f_2\cdots f_m. In other words, the function is mapping f:\mathbb{R}^n\to\mathbb{R}^m.

The partial derivative of output i w.r.t. input j at some point a is:

\[\frac{\partial f_i}{\partial a_j}(a)\]

Assuming f is differentiable at a, then the complete derivative of f w.r.t. its inputs can be represented by the Jacobian matrix:

\[Df= \begin{bmatrix} \frac{\partial f_1}{\partial a_1} & \cdots & \frac{\partial f_1}{\partial a_n}\\ \vdots & \ddots & \vdots\\ \frac{\partial f_m}{\partial a_1} & \cdots & \frac{\partial f_m}{\partial a_n} \end{bmatrix}\]

The multivariate chain rule then states that if we compose f\circ g (and assuming all the dimensions are correct), the derivative is:

\[D(f\circ g)(a)=Df(g(a))\cdot Dg(a)\]

This is the matrix multiplication of Df(g(a)) and Dg(a).
Linear nodes
As a warmup, let's start with a linear node that has a single input and a single output:
In all these examples, we assume the full graph output is S, and its derivative by the node's outputs is \frac{\partial S}{\partial f}. We're then interested in finding \frac{\partial S}{\partial x}. Since f:\mathbb{R}\to\mathbb{R}, the Jacobian is just a scalar:

\[Df=\frac{\partial f}{\partial x}\]

And the chain rule is:

\[D(S\circ f)=DS(f)\cdot Df=\frac{\partial S}{\partial f}\frac{\partial f}{\partial x}\]

No surprises so far - this is just the single variable chain rule!
Fan-in
Let's move on to the next scenario, where f has two inputs:
Once again, we already have the derivative \frac{\partial S}{\partial f} available, and we're interested in finding the derivative of S w.r.t. the inputs.
In this case, f:\mathbb{R}^2\to\mathbb{R}, so the Jacobian is a 1x2 matrix:
\[Df=\left [ \frac{\partial f}{\partial x_1} \quad \frac{\partial f}{\partial x_2} \right ]\]

And the chain rule here means multiplying a 1x1 matrix by a 1x2 matrix:

\[D(S\circ f)=DS(f)\cdot Df= \left [ \frac{\partial S}{\partial f} \right ] \left [ \frac{\partial f}{\partial x_1} \quad \frac{\partial f}{\partial x_2} \right ] = \left [ \frac{\partial S}{\partial f} \frac{\partial f}{\partial x_1} \quad \frac{\partial S}{\partial f} \frac{\partial f}{\partial x_2} \right ]\]

Therefore, we see that the output derivative propagates to each input separately:

\[\begin{align*} \frac{\partial S}{\partial x_1}&=\frac{\partial S}{\partial f} \frac{\partial f}{\partial x_1}\\ \frac{\partial S}{\partial x_2}&=\frac{\partial S}{\partial f} \frac{\partial f}{\partial x_2} \end{align*}\]

Fan-out
In the most general case, f may have multiple inputs but its output may also be used by more than one other node. As a concrete example, here's a node with three inputs and an output that's used in two places:
While we denote each output edge from f with a different name, f has a single output! This point is a bit subtle and important to dwell on: yes, f has a single output, so in the forward calculation both f_1 and f_2 will have the same value. However, we have to treat them differently for the derivative calculation, because it's very possible that \frac{\partial S}{\partial f_1} and \frac{\partial S}{\partial f_2} are different!
In other words, we're reusing the machinery of multi-output functions here. If f had multiple outputs (e.g. a vector function), everything would work exactly the same.
In this case, since we treat f as f:\mathbb{R}^3\to\mathbb{R}^2, its Jacobian is a 2x3 matrix:
\[Df= \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \frac{\partial f_1}{\partial x_3} \\ \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \frac{\partial f_2}{\partial x_3} \\ \end{bmatrix}\]

The Jacobian DS(f) is a 1x2 matrix:

\[DS(f)=\left [ \frac{\partial S}{\partial f_1} \quad \frac{\partial S}{\partial f_2} \right ]\]

Applying the chain rule:

\[\begin{align*} D(S\circ f)=DS(f)\cdot Df&= \left [ \frac{\partial S}{\partial f_1} \quad \frac{\partial S}{\partial f_2} \right ] \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \frac{\partial f_1}{\partial x_3} \\ \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \frac{\partial f_2}{\partial x_3} \\ \end{bmatrix}\\ &= \left [ \frac{\partial S}{\partial f_1}\frac{\partial f_1}{\partial x_1}+\frac{\partial S}{\partial f_2}\frac{\partial f_2}{\partial x_1}\qquad \frac{\partial S}{\partial f_1}\frac{\partial f_1}{\partial x_2}+\frac{\partial S}{\partial f_2}\frac{\partial f_2}{\partial x_2}\qquad \frac{\partial S}{\partial f_1}\frac{\partial f_1}{\partial x_3}+\frac{\partial S}{\partial f_2}\frac{\partial f_2}{\partial x_3} \right ] \end{align*}\]

Therefore, we have:

\[\begin{align*} \frac{\partial S}{\partial x_1}&=\frac{\partial S}{\partial f_1}\frac{\partial f_1}{\partial x_1}+\frac{\partial S}{\partial f_2}\frac{\partial f_2}{\partial x_1}\\ \frac{\partial S}{\partial x_2}&=\frac{\partial S}{\partial f_1}\frac{\partial f_1}{\partial x_2}+\frac{\partial S}{\partial f_2}\frac{\partial f_2}{\partial x_2}\\ \frac{\partial S}{\partial x_3}&=\frac{\partial S}{\partial f_1}\frac{\partial f_1}{\partial x_3}+\frac{\partial S}{\partial f_2}\frac{\partial f_2}{\partial x_3} \end{align*}\]

The key point here - which we haven't encountered before - is that the derivatives through f add up for each of its outputs (or for each copy of its output). Qualitatively, it means that the sensitivity of f's input to the output is the sum of its sensitivities across each output separately. This makes logical sense, and mathematically it's just the consequence of the dot product inherent in matrix multiplication.
Now that we understand how reverse mode AD works for the more general case of DAG nodes, let's work through a complete example.
General DAGs - full example
Consider this function (a sample used in the ADIMLAS paper):
\[f(x_1, x_2)=\ln(x_1)+x_1 x_2-\sin(x_2)\]

It has two inputs and a single output; once we decompose it to primitive operations, we can represent it with the following computational graph [2]:
As before, we begin by running the computation forward for the values of x_1,x_2 at which we're interested in finding the derivative. Let's take x_1=2 and x_2=5:
Recall that our goal is to calculate \frac{\partial f}{\partial x_1} and \frac{\partial f}{\partial x_2}. Initially we know that \frac{\partial f}{\partial v_5}=1 [3].
Starting with the v_5 node, let's use the fan-in formulas developed earlier:
\[\begin{align*} \frac{\partial f}{\partial v_4}&=\frac{\partial f}{\partial v_5} \frac{\partial v_5}{\partial v_4}=1\cdot 1=1\\ \frac{\partial f}{\partial v_3}&=\frac{\partial f}{\partial v_5} \frac{\partial v_5}{\partial v_3}=1\cdot -1=-1 \end{align*}\]

Next, let's tackle v_4. It also has a fan-in configuration, so we'll use similar formulas, plugging in the value of \frac{\partial f}{\partial v_4} we've just calculated:

\[\begin{align*} \frac{\partial f}{\partial v_1}&=\frac{\partial f}{\partial v_4} \frac{\partial v_4}{\partial v_1}=1\cdot 1=1\\ \frac{\partial f}{\partial v_2}&=\frac{\partial f}{\partial v_4} \frac{\partial v_4}{\partial v_2}=1\cdot 1=1 \end{align*}\]

On to v_1. It's a simple linear node, so:

\[\frac{\partial f}{\partial x_1}^{(1)}=\frac{\partial f}{\partial v_1} \frac{\partial v_1}{\partial x_1}=1\cdot \frac{1}{x_1}=0.5\]

Note the (1) superscript though! Since x_1 is a fan-out node, it will have more than one contribution to its derivative; we've just computed the one from v_1. Next, let's compute the one from v_2. That's another fan-in node:

\[\begin{align*} \frac{\partial f}{\partial x_1}^{(2)}&=\frac{\partial f}{\partial v_2} \frac{\partial v_2}{\partial x_1}=1\cdot x_2=5\\ \frac{\partial f}{\partial x_2}^{(1)}&=\frac{\partial f}{\partial v_2} \frac{\partial v_2}{\partial x_2}=1\cdot x_1=2 \end{align*}\]

We've calculated the other contribution to the x_1 derivative, and the first out of two contributions for the x_2 derivative. Next, let's handle v_3:

\[\frac{\partial f}{\partial x_2}^{(2)}=\frac{\partial f}{\partial v_3} \frac{\partial v_3}{\partial x_2}=-1\cdot \cos(x_2)=-0.28\]

Finally, we're ready to add up the derivative contributions for the input arguments. x_1 is a "fan-out" node, with two outputs. Recall from the section above that we just sum their contributions:

\[\frac{\partial f}{\partial x_1}=\frac{\partial f}{\partial x_1}^{(1)}+\frac{\partial f}{\partial x_1}^{(2)}=0.5+5=5.5\]

And:

\[\frac{\partial f}{\partial x_2}=\frac{\partial f}{\partial x_2}^{(1)}+\frac{\partial f}{\partial x_2}^{(2)}=2-0.28=1.72\]

And we're done! Once again, it's easy to verify - using a calculator and the analytical derivatives of f(x_1,x_2) - that these are the right derivatives at the given points.
Backpropagation in ML, reverse mode AD and VJPs
A quick note on reverse mode AD vs forward mode (please read the ADIMLAS paper for much more details):
Reverse mode AD is the approach commonly used for machine learning and neural networks, because these tend to have a scalar loss (or error) output that we want to minimize. In reverse mode, we have to run AD once per output, while in forward mode we'd have to run it once per input. Therefore, when the input size is much larger than the output size (as is the case in NNs), reverse mode is preferable.
There's another advantage, and it relates to the term vector-jacobian product (VJP) that you will definitely run into once you start digging deeper in this domain.
The VJP is basically a fancy way of saying "using the chain rule in reverse mode AD". Recall that in the most general case, the multivariate chain rule is:

\[D(f\circ g)(a)=Df(g(a))\cdot Dg(a)\]

However, in the case of reverse mode AD, we typically have a single output from the full graph, so DS(f) is a row vector. The chain rule then means multiplying this row vector by a matrix representing the node's jacobian. This is the vector-jacobian product, and its output is another row vector. Scroll back to the Fan-out sample to see an example of this.
This may not seem very profound so far, but it carries an important meaning in terms of computational efficiency. For each node in the graph, we don't have to store its complete jacobian; all we need is a function that takes a row vector and produces the VJP. This is important because jacobians can be very large and very sparse [4]. In practice, this means that when AD libraries define the derivative of a computation node, they don't ask you to register a complete jacobian for each operation, but rather a VJP.
This also provides an additional way to think about the relative efficiency of reverse mode AD for ML applications; since a graph typically has many inputs (all the weights), and a single output (scalar loss), accumulating from the end going backwards means the intermediate products are VJPs that are row vectors; accumulating from the front would mean multiplying full jacobians together, and the intermediate results would be matrices [5].
A simple Python implementation of reverse mode AD
Enough equations, let's see some code! The whole point of AD is that it's automatic, meaning that it's simple to implement in a program. What follows is the simplest implementation I could think of; it requires one to build expressions out of a special type, which can then calculate gradients automatically.
Let's start with some usage samples; here's the Sigmoid calculation presented earlier:
xx = Var(0.5)
sigmoid = 1 / (1 + exp(-xx))
print(f"xx = {xx.v:.2}, sigmoid = {sigmoid.v:.2}")
sigmoid.grad(1.0)
print(f"dsigmoid/dxx = {xx.gv:.2}")
We begin by building the Sigmoid expression using Var values (more on this later). We can then run the grad method on a Var, with an output gradient of 1.0 and see that the gradient for xx is 0.24, as calculated before.
Here's the expression we used for the DAG section:
x1 = Var(2.0)
x2 = Var(5.0)
f = log(x1) + x1 * x2 - sin(x2)
print(f"x1 = {x1.v:.2}, x2 = {x2.v:.2}, f = {f.v:.2}")
f.grad(1.0)
print(f"df/dx1 = {x1.gv:.2}, df/dx2 = {x2.gv:.2}")
Once again, we build up the expression, then call grad on the final value. It will populate the gv attributes of input Vars with the derivatives calculated w.r.t. these inputs.
Let's see how Var works. The high-level overview is:
- A Var represents a node in the computational graph we've been discussing in this post.
- Using operator overloading and custom math functions (like the exp, sin and log seen in the samples above), when an expression is constructed out of Var values, we also build the computational graph in the background. Each Var has links to its predecessors in the graph (the other Vars that feed into it).
- When the grad method is called, it runs reverse mode AD through the computational graph, using the chain rule.
Here's the Var class:
class Var:
    def __init__(self, v):
        self.v = v
        self.predecessors = []
        self.gv = 0.0
v is the value (forward calculation) of this Var. predecessors is the list of predecessors, each of this type:
from dataclasses import dataclass

@dataclass
class Predecessor:
    multiplier: float
    var: "Var"
Consider the v5 node in the DAG sample, for example. It represents the calculation v4-v3. The Var representing v5 will have a list of two predecessors, one for v4 and one for v3. Each of these will have a "multiplier" associated with it:
- For v3, Predecessor.var points to the Var representing v3 and Predecessor.multiplier is -1, since this is the derivative of v5 w.r.t. v3
- Similarly, for v4, Predecessor.var points to the Var representing v4 and Predecessor.multiplier is 1.
Let's see some overloaded operators of Var [6]:
def __add__(self, other):
    other = ensure_var(other)
    out = Var(self.v + other.v)
    out.predecessors.append(Predecessor(1.0, self))
    out.predecessors.append(Predecessor(1.0, other))
    return out

# ...

def __mul__(self, other):
    other = ensure_var(other)
    out = Var(self.v * other.v)
    out.predecessors.append(Predecessor(other.v, self))
    out.predecessors.append(Predecessor(self.v, other))
    return out
And some of the custom math functions:
def log(x):
    """log(x) - natural logarithm of x"""
    x = ensure_var(x)
    out = Var(math.log(x.v))
    out.predecessors.append(Predecessor(1.0 / x.v, x))
    return out

def sin(x):
    """sin(x)"""
    x = ensure_var(x)
    out = Var(math.sin(x.v))
    out.predecessors.append(Predecessor(math.cos(x.v), x))
    return out
Note how the multipliers for each node are exactly the derivatives of its output w.r.t. the corresponding input. Notice also that in some cases we use the forward-calculated value of a Var's inputs to calculate this derivative (e.g. in the case of sin(x), the derivative is cos(x), so we need the actual value of x).
Finally, this is the grad method:
def grad(self, gv):
    self.gv += gv
    for p in self.predecessors:
        p.var.grad(p.multiplier * gv)
Some notes about this method:
- It has to be invoked on a Var node that represents the entire computation.
- Since this function walks the graph backwards (from the outputs to the inputs), this is the direction our graph edges are pointing (we keep track of the predecessors of each node, not the successors).
- Since we typically want the derivative of some output "loss" w.r.t. each Var, the computation will usually start with grad(1.0), because the output of the entire computation is the loss.
- For each node, grad adds the incoming gradient to its own, and propagates the incoming gradient to each of its predecessors, using the relevant multiplier.
- The addition self.gv += gv is key to managing nodes with fan-out. Recall our discussion from the DAG section - according to the multivariate chain rule, fan-out nodes' derivatives add up for each of their outputs.
- This implementation of grad is very simplistic and inefficient because it will process the same Var multiple times in complex graphs. A more efficient implementation would sort the graph topologically first and then would only have to visit each Var once; see the sketch after this list.
- Since the gradient of each Var adds up, one shouldn't be reusing Vars between different computations. Once grad was run, the Var should not be used for other grad calculations.
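As a sketch of that more efficient variant (my code, not from the post; it builds on the Var class above), we can collect the Vars in topological order and then make a single backward pass:

def grad_topo(output, gv=1.0):
    # Post-order DFS: every predecessor is appended before the
    # Vars that consume it.
    order, seen = [], set()

    def visit(var):
        if id(var) in seen:
            return
        seen.add(id(var))
        for p in var.predecessors:
            visit(p.var)
        order.append(var)

    visit(output)

    # Walk from the output back to the inputs, visiting each Var once;
    # a Var's gradient is complete before it is propagated onward.
    output.gv += gv
    for var in reversed(order):
        for p in var.predecessors:
            p.var.gv += p.multiplier * var.gv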
The full code for this sample is available here.
Conclusion
The goal of this post is to serve as a supplement for the ADIMLAS paper; once again, if the topic of AD is interesting to you, I strongly encourage you to read the paper! I hope this post added something on top - please let me know if you have any questions.
Industrial strength implementations of AD, like autograd and JAX, have much better ergonomics and performance than the toy implementation shown above. That said, the underlying principles are similar - reverse mode AD on computational graphs.
I'll discuss an implementation of a more sophisticated AD system in a followup post.
[1] We're only looking at single-output graphs in this post, since these are typically sufficient in machine learning (the output is some scalar "loss" or "error" that we're trying to minimize). That said, for functions with multiple outputs the process is very similar - we just have to run the reverse mode AD process for each output variable separately.
[2] Note that the notation here is a bit different from the one used for the sigmoid function. This notation is adopted from the ADIMLAS paper, which uses v_i for all temporary values within the graph. I'm keeping the notations different to emphasize they have absolutely no bearing on the math and the AD algorithm. They're just a naming convention.
[3] For consistency, I'll be using the partial derivative notation throughout this example, even for nodes that have a single input and output.
[4] For an example of gigantic, sparse jacobians see my older post on backpropagation through a fully connected layer.
[5] There are a lot of additional nuances here to explain; I strongly recommend this excellent lecture by Matthew Johnson (of JAX and autograd fame) for a deeper overview.
[6] These use the utility function ensure_var; all it does is wrap its argument in a Var if it's not already a Var. This is needed to wrap constants in the expression, to ensure that the computational graph includes everything.
Python Software Foundation
Powering Python together in 2025, thanks to our community!
We are so very grateful for each of you who donated or became new members during our end-of-year fundraiser and membership drive. We raised $30,000 through the PyCharm promotion offered by JetBrains - WOW! Including individual donations, Supporting Memberships, donations to our Fiscal Sponsorees, and JetBrains' generous partnership, we raised around $99,000 for the PSF's mission supporting Python and its community.
Your generous support means we can dive into 2025 ready to invest in our key goals for the year. Some of our goals include:
- Embrace the opportunities and tackle the challenges that come with scale
- Foster long-term sustainable growth - for Python, the PSF, and the community
- Improve workflows through iterative improvement in collaboration with the community
Each bit of investment from the Python community—money, time, energy, ideas, and enthusiasm—helps us to reach these goals!
We want to specifically call out to our new members: welcome aboard, thank you for joining us, and we are so appreciative of you! We’re looking forward to having your voice take part in the PSF’s future. If you aren’t a member of the PSF yet, check out our Membership page, which includes details about our sliding scale memberships. We are happy to welcome new members any time of year!
As always, we want to thank those in the community who took the time to share our posts on social media and their local or project based networks. We’re excited about what 2025 has in store for Python and the PSF, and as always, we’d love to hear your ideas and feedback. Looking for how to keep in touch with us? You can find all the ways in our "Where to find the PSF?" blog post.
We wish you a perfectly Pythonic year ahead!
- The PSF Team
P.S. Want to continue to help us make an impact? Check out our "Do you know the PSF's next sponsor?" blog post and share it with your employer!
Python⇒Speed
Catching memory leaks with your test suite
Resource leaks are an unpleasant type of bug. Little by little your program uses more memory, or more file descriptors, or some other limited resource. Everything seems fine—until you run out, and now your program is dead.

In many cases you can catch these sorts of bugs in advance, by tweaking your test suite. Or, after you've discovered such a bug, you can use your test suite to identify what is causing it. In this article we'll cover:
- An example of a memory leak.
- When your test suite may be a good way to identify the causes of leaks.
- How to catch leaks using `pytest`.
- Other types of leaks.
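The article walks through the details; as a flavor of the approach, here is a minimal sketch (mine, not from the article, with an arbitrary threshold) of a pytest-style test that uses tracemalloc to flag unexpected memory growth:

import tracemalloc

def test_no_leak():
    tracemalloc.start()
    baseline, _ = tracemalloc.get_traced_memory()
    for _ in range(1_000):
        buffer = b"x" * 1_000  # the code under test would go here
        del buffer
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    # If memory grew well past the baseline, something is holding on
    # to objects between iterations.
    assert current - baseline < 100_000  # bytes; threshold is arbitrary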
January 13, 2025
Real Python
Build a Personal Diary With Django and Python
Creating a Django diary allows you to build a personal, secure web app on your computer without using external cloud services. This tutorial guides you through setting up a Django project, where you can create, read, update, and delete entries. You’ll explore key concepts such as models, class-based views, and templating, giving you a blueprint for future Django projects.
By the end of this tutorial, you’ll understand that:
- Building a diary is a great beginner project because it involves fundamental web app concepts like CRUD operations and authentication.
- Class-based views provide a structured way to handle common web app patterns with less code.
- You can leverage the Django admin site for authentication by reusing its login mechanism to secure your diary entries.
This tutorial will guide you step-by-step to your final diary. If you’re just starting out with Django and want to finish your first real project, then this tutorial is for you!
To get the complete source code for the Django project and its steps, click the link below:
Get Source Code: Click here to get the source code you’ll use to build a personal diary web app with Django and Python in this tutorial.
Demo Video
On the main page of your diary, you’ll have a list of entries. You can scroll through them and create new ones with a click of a button. The styling is provided in this tutorial, so you can focus on the Django part of the code. Here’s a quick demo video of how it will look in action:
By the end of the tutorial, you’ll be able to flawlessly navigate your diary to create, read, update, and delete entries on demand.
Project Overview
The tutorial is divided into multiple steps. That way, you can take breaks and continue at your own pace. In each step, you’ll tackle a specific area of your diary project:
- Setting up your Django diary project
- Creating entries on the back end
- Displaying entries on the front end
- Adding styling
- Managing entries on the front end
- Improving your user experience
- Implementing authentication
By following along, you’ll explore the basics of web apps and how to add common features of a Django project. After finishing the tutorial, you’ll have created your own personal diary app and will have a Django project blueprint to build upon.
Prerequisites
You don’t need any previous knowledge of Django to complete this project. If you want to learn more about the topics you encounter in this tutorial, you’ll find links to resources along the way.
However, you should be comfortable using the command line and have a basic knowledge of Python and classes. Although it helps to know about virtual environments and pip, you'll learn how to set everything up as you work through the tutorial.
Step 1: Setting Up Your Django Diary
Start the project by creating your project directory and setting up a virtual environment. This setup will keep your code isolated from any other projects on your machine. You can name your project folder and the virtual environment any way you want. In this tutorial, the project folder is named my-diary, and the virtual environment is named .venv/.
Select your operating system below, then use your platform-specific command to create the project folder and navigate into the newly-created folder:
First, you create a new folder named my-diary/ and then you navigate into the folder.

In my-diary/, set up a virtual environment:
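The platform-specific commands don't survive in this excerpt; on most systems they amount to something like the following (my reconstruction; on Windows, substitute the py launcher and backslashes as needed):

$ mkdir my-diary
$ cd my-diary
$ python -m venv .venv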
Read the full article at https://realpython.com/django-diary-project-python/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python Bytes
#416 A Ghostly Episode
Topics covered in this episode:
- Terminals & Shells
- Winloop: An Alternative library for uvloop compatibility with windows
- Ruff & uv
- uv-secure
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=gZUpvyRb_ok

About the show

Sponsored by us! Support our work through:
- Our courses at Talk Python Training
- The Complete pytest Course
- Patreon Supporters

Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Brian #1: Terminals & Shells
- Ghostty is out: https://ghostty.org
  - Started by Mitchell Hashimoto, one of the co-founders of HashiCorp
  - "Ghostty is a terminal emulator that differentiates itself by being fast, feature-rich, and native. While there are many excellent terminal emulators available, they all force you to choose between speed, features, or native UIs. Ghostty provides all three."
  - Currently for macOS & Linux (Windows planned)
  - Version 1.0.1 released Dec 31, announced in Oct
  - Features: cross-platform; windows, tabs, and splits; themes; ligatures; ...
  - Shell integration: some Ghostty features require integrating with your shell. Ghostty can automatically inject shell integration for bash, zsh, fish, and elvish.
- Fish is moving to Rust: https://fishshell.com/blog/fish-4b/
  - "fish is a smart and user-friendly command line shell with clever features that just work, without needing an advanced degree in bash scriptology."
  - "fish 4.0 is a big upgrade. It's got lots of new features to make using the command line easier and more enjoyable, such as more natural key binding and expanded history search. And under the hood, we've rebuilt the foundation in Rust."

Michael #2: Winloop: An Alternative library for uvloop compatibility with windows: https://github.com/Vizonex/Winloop
- via Owen Lamont
- An alternative library for uvloop compatibility with Windows.
- It always felt disappointing that libuv is available for Windows but Windows was never compatible with uvloop.

Brian #3: Ruff & uv
- Ruff 0.9.0 has a new 2025 style guide
  - f-string formatting improvements:
    - Now formats expressions interpolated inside f-string curly braces
    - Quotes normalized according to project config
    - Unnecessary escapes removed
    - Examines interpolated expressions to see if splitting the string over multiple lines is OK
  - Other changes too, but it's the f-string improvements I'm excited about.
- Python 3.14.0a3 is out, and available with uv:
  - uv python install 3.14 --preview

Michael #4: uv-secure: https://pypi.org/project/uv-secure/
- by Owen Lamont (yes, again :)
- This tool will scan PyPI dependencies listed in your uv.lock files (or uv-generated requirements.txt files) and check for known vulnerabilities listed against those packages and versions in the PyPI JSON API.
- "I don't intend uv-secure to ever create virtual environments or do dependency resolution - the plan is to leave that all to uv since it does that so well and just target lock files and fully pinned and dependency-resolved requirements.txt files."
- Works "out of the box" with a requirements.txt from uv pip compile.

Extras

Brian:
- Test & Code Season 2: pytest plugins
  - Season 1 was something like 223 episodes over 9.5 years
  - Started the summer of 2015
- Send in pytest plugin suggestions to Brian on BlueSky or Mastodon, or via the contact form at pythontest.com

Michael:
- Episode Deep Dive feature at Talk Python
  - Feedback on social media:
    - "Those deep dives look really handy. <looks at another one> Yes, those ARE really handy! Thanks for doing that."
    - "wow, yes please! This is awesome."
    - "Wow, this is amazing. ... It helps when going back to check something (without having to re-listen)."
- https://pycon.pyug.at/en/
- Heavy metal status codes
- Beautiful Soup feedback CFA via Sumana Harihareswara

Joke: That's a stupid cup
Zato Blog
Understanding API rate-limiting techniques
Enabling rate-limiting in Zato means that access to Zato APIs can be throttled per endpoint, user or service - including options to make limits apply to specific IP addresses only - and if limits are exceeded within a selected period of time, the invocation will fail. Let's check how to use it all.
API rate limiting works on several levels and the configuration is always checked in the order below, which follows from the narrowest, most specific parts of the system (endpoints), through users which may apply to multiple endpoints, up to services which in turn may be used by both multiple endpoints and users.
- First, per-endpoint limits
- Then, per-user limits
- Finally, per-service limits
When a request arrives through an endpoint, that endpoint's rate limiting configuration is checked. If the limit is already reached for the IP address or network of the calling application, the request is rejected.
Next, if there is any user associated with the endpoint, that account's rate limits are checked in the same manner and, similarly, if they are reached, the request is rejected.
Finally, if the endpoint's underlying service is configured to do so, it also checks if its invocation limits are not exceeded, rejecting the message accordingly if they are.
Note that the three levels are distinct yet they overlap in what they allow one to achieve.
For instance, it is possible to have the same user credentials be used in multiple endpoints and express ideas such as "Allow this and that user to invoke my APIs 1,000 requests/day but limit each endpoint to at most 5 requests/minute no matter which user".
Moreover, because limits can be set on services, it is possible to make it even more flexible, e.g. "Let this service be invoked at most 10,000 requests/hour, no matter which user, with particular users being able to invoke it at most 500 requests/minute, no matter which service, topped off with separate limits per REST vs. SOAP vs. JSON-RPC endpoint, depending on which application invokes the endpoints". That lets one conveniently express advanced scenarios that often occur in practical situations.
Also, observe that API rate limiting applies to REST, SOAP and JSON-RPC endpoints only; it is not used with other endpoint types, such as AMQP, IBM MQ, SAP, the task scheduler or other technologies. However, per-service limits work no matter which endpoint the service is invoked through, so they will also cover endpoints such as WebSockets, ZeroMQ or any other.
Lastly, limits pertain to incoming requests only - outgoing ones, from Zato to external resources, are not covered.
Per-IP restrictions
The architecture is made even more versatile thanks to the fact that for each object - endpoint, user or service - different limits can be configured depending on the caller's IP address.
This adds yet another dimension and allows you to express ideas commonly witnessed in API-based projects, such as:
- External applications, depending on their IP addresses, can have their own limits
- Internal users, e.g. employees of the company using a VPN, may have higher limits if their addresses are in the 172.x.x.x range
- For performance testing purposes, access to Zato from a few selected hosts may have no limits at all
IP-based limits work hand in hand with, and are an integral part of, the mechanism - they do not rule out per-endpoint, per-user or per-service limits. In fact, for each such object, multiple IP-based limits can be set independently, allowing for the highest degree of flexibility.
Exact or approximate
Rate limits come in two types:
- Exact
- Approximate
Exact rate limits are just that - exact. They ensure that a limit is not exceeded at all, not even by a single request.
Approximate limits may let a very small number of requests exceed the limit, with the benefit that approximate limits are faster to check than exact ones.
When to use which type depends on a particular project:
- In some projects, it does not really matter if callers have a limit of 1,000 requests/minute or 1,005 requests/minute because the difference is too tiny to make a business impact. Approximate limits work best in this case.
- In other projects, there may be requirements that the limit never be exceeded no matter the circumstances. Use exact limits here.
Python code and web-admin
Alright, let's check how to define the limits in the Zato Dashboard. We will use the sample service below:
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class Sample(Service):
    name = 'api.sample'

    def handle(self):
        # Return a simple string in the response
        self.response.payload = 'Hello there!\n'
Now, in web-admin, we will configure the limits - separately for the service, a new user and a new REST API channel (endpoint).
Points of interest:
- Configuration for each type of object is independent - within the same invocation some limits may be exact, some may be approximate
- There can be multiple configuration entries for each object
- A unit of time is "m", "h" or "d", depending on whether the limit is per minute, hour or day, respectively
- All limits within the same configuration are checked in the order of their definition which is why the most generic ones should be listed first
Testing it out
Now, all that is left is to invoke the service from curl.
As long as limits are not reached, a regular business response is returned.
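The original screenshot isn't reproduced here, but assuming the same address and credentials as in the 429 example below, a successful call would look something like this:
$ curl http://my.user:password@localhost:11223/api/sample
Hello there!
$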
But if a limit is reached, the caller receives an error message with the 429 HTTP status.
$ curl -v http://my.user:password@localhost:11223/api/sample
* Trying 127.0.0.1...
...
< HTTP/1.1 429 Too Many Requests
< Server: Zato
< X-Zato-CID: b8053d68612d626d338b02
...
{"zato_env":{"result":"ZATO_ERROR","cid":"b8053d68612d626d338b02eb",
"details":"Error 429 Too Many Requests"}}
$
Note that the caller never knows what the limit was - that information is saved in Zato server logs along with other details so that API authors can correlate what callers get with the very rate limiting definition that prevented them from accessing the service.
zato.common.rate_limiting.common.RateLimitReached: Max. rate limit of 100/m reached;
from:`10.74.199.53`, network:`*`; last_from:`127.0.0.1`;
last_request_time_utc:`2025-01-12T15:30:41.943794`;
last_cid:`5f4f1ef65490a23e5c37eda1`; (cid:b8053d68612d626d338b02)
And this is it - we have created a new API rate limiting definition in Zato and tested it out successfully!
More resources
➤ Python API integration tutorial
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
➤ Open-source iPaaS in Python
January 12, 2025
Real Python
Python's assert: Debug and Test Your Code Like a Pro
Python’s assert statement allows you to write sanity checks in your code. These checks are known as assertions, and you can use them to test if certain assumptions remain true while you’re developing your code. If any of your assertions turn false, it indicates a bug by raising an AssertionError.
Assertions are a convenient tool for documenting, debugging, and testing code during development. Once you’ve debugged and tested your code with the help of assertions, then you can turn them off to optimize the code for production. You disable assertions by running Python in optimized mode with the -O or -OO options, or by setting the PYTHONOPTIMIZE environment variable.
By the end of this tutorial, you’ll understand that:
- assert in Python is a statement for setting sanity checks in your code.
- An assert statement checks a condition and raises an AssertionError if the condition is false.
- You should use asserts for debugging and testing, not for data validation or error handling.
- raise and assert are different because raise manually triggers an exception, while assert checks a condition and raises an exception automatically if the condition fails.
To get the most out of this tutorial, you should have previous knowledge of expressions and operators, functions, conditional statements, and exceptions. Having a basic understanding of documenting, debugging, and testing Python code is also a plus.
Delve into the tutorial to learn how to effectively use assertions for documenting, debugging, and testing your Python code, along with understanding their limitations and best practices for production environments.
Free Download: Get a sample chapter from Python Tricks: The Book that shows you Python’s best practices with simple examples you can apply instantly to write more beautiful + Pythonic code.
Getting to Know Assertions in Python
Python implements a feature called assertions that’s pretty useful during the development of your applications and projects. You’ll find this feature in several other languages too, such as C and Java, and it comes in handy for documenting, debugging, and testing your code.
If you’re looking for a tool to strengthen your debugging and testing process, then assertions are for you. In this section, you’ll learn the basics of assertions, including what they are, what they’re good for, and when you shouldn’t use them in your code.
What Are Assertions?
In Python, assertions are statements that you can use to set sanity checks during the development process. Assertions allow you to test the correctness of your code by checking if some specific conditions remain true, which can come in handy while you’re debugging code.
The assertion condition should always be true unless you have a bug in your program. If the condition turns out to be false, then the assertion raises an exception and terminates the execution of your program.
With assertions, you can set checks to make sure that invariants within your code stay invariant. By doing so, you can check assumptions like preconditions and postconditions. For example, you can test conditions along the lines of This argument is not None or This return value is a string. These kinds of checks can help you catch errors as soon as possible when you’re developing a program.
What Are Assertions Good For?
Assertions are mainly for debugging. They’ll help you ensure that you don’t introduce new bugs while adding features and fixing other bugs in your code. However, they can have other interesting use cases within your development process. These use cases include documenting and testing your code.
The primary role of assertions is to trigger the alarms when a bug appears in a program. In this context, assertions mean Make sure that this condition remains true. Otherwise, throw an error.
In practice, you can use assertions to check preconditions and postconditions in your programs at development time. For example, programmers often place assertions at the beginning of functions to check if the input is valid (preconditions). Programmers also place assertions before functions’ return values to check if the output is valid (postconditions).
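For example, here's a small invented function that asserts a precondition on its input and a postcondition on its result:
def mean(numbers):
    # Precondition: the input must not be empty
    assert len(numbers) > 0, "numbers must not be empty"
    result = sum(numbers) / len(numbers)
    # Postcondition: the mean lies between the smallest and largest value
    assert min(numbers) <= result <= max(numbers)
    return result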
Assertions make it clear that you want to check if a given condition is and remains true. In Python, they can also include an optional message to unambiguously describe the error or problem at hand. That’s why they’re also an efficient tool for documenting code. In this context, their main advantage is their ability to take concrete action instead of being passive, as comments and docstrings are.
Finally, assertions are also ideal for writing test cases in your code. You can write concise and to-the-point test cases because assertions provide a quick way to check if a given condition is met or not, which defines if the test passes or not.
You’ll learn more about these common use cases of assertions later in this tutorial. Now you’ll learn the basics of when you shouldn’t use assertions.
When Not to Use Assertions?
In general, you shouldn’t use assertions for data processing or data validation, because you can disable assertions in your production code, which ends up removing all your assertion-based processing and validation code. Using assertions for data processing and validation is a common pitfall, as you’ll learn in Understanding Common Pitfalls of assert later in this tutorial.
Additionally, assertions aren’t an error-handling tool. The ultimate purpose of assertions isn’t to handle errors in production but to notify you during development so that you can fix them. In this regard, you shouldn’t write code that catches assertion errors using a regular try … except statement.
Understanding Python’s assert Statements
Read the full article at https://realpython.com/python-assert-statement/ »
Variables in Python: Usage and Best Practices
In Python, variables are symbolic names that refer to objects or values stored in your computer’s memory. They allow you to assign descriptive names to data, making it easier to manipulate and reuse values throughout your code. You create a Python variable by assigning a value using the syntax variable_name = value.
By the end of this tutorial, you’ll understand that:
- Variables in Python are symbolic names pointing to objects or values in memory.
- You define variables by assigning them a value using the assignment operator.
- Python variables are dynamically typed, allowing type changes through reassignment.
- Python variable names can include letters, digits, and underscores but can’t start with a digit. You should use snake case for multi-word names to improve readability.
- Variables exist in different scopes (global, local, non-local, or built-in), which affects how you can access them.
- You can have an unlimited number of variables in Python, limited only by computer memory.
To get the most out of this tutorial, you should be familiar with Python’s basic data types and have a general understanding of programming concepts like loops and functions.
Don’t worry if you don’t have all this knowledge yet and you’re just getting started. You won’t need this knowledge to benefit from working through the early sections of this tutorial.
Get Your Code: Click here to download the free sample code that shows you how to use variables in Python.
Take the Quiz: Test your knowledge with our interactive “Variables in Python: Usage and Best Practices” quiz. You’ll receive a score upon completion to help you track your learning progress.
Getting to Know Variables in Python
In Python, variables are names associated with concrete objects or values stored in your computer’s memory. By associating a variable with a value, you can refer to the value using a descriptive name and reuse it as many times as needed in your code.
Variables behave as if they were the value they refer to. To use variables in your code, you first need to learn how to create them, which is pretty straightforward in Python.
Creating Variables With Assignments
The primary way to create a variable in Python is to assign it a value using the assignment operator and the following syntax:
variable_name = value
In this syntax, you have the variable’s name on the left, then the assignment operator (=), followed by the value you want to assign to the variable at hand. The value in this construct can be any Python object, including strings, numbers, lists, dictionaries, or even custom objects.
Note: To learn more about assignments, check out Python’s Assignment Operator: Write Robust Assignments.
Here are a few examples of variables:
>>> word = "Python"
>>> number = 42
>>> coefficient = 2.87
>>> fruits = ["apple", "mango", "grape"]
>>> ordinals = {1: "first", 2: "second", 3: "third"}
>>> class SomeCustomClass: pass
>>> instance = SomeCustomClass()
In this code, you’ve defined several variables by assigning values to names. The first five examples include variables that refer to different built-in types. The last example shows that variables can also refer to custom objects like an instance of your SomeCustomClass class.
Setting and Changing a Variable’s Data Type
Apart from a variable’s value, it’s also important to consider the data type of the value. When you think about a variable’s type, you’re considering whether the variable refers to a string, integer, floating-point number, list, tuple, dictionary, custom object, or another data type.
Python is a dynamically typed language, which means that variable types are determined and checked at runtime rather than during compilation. Because of this, you don’t need to specify a variable’s type when you’re creating the variable. Python will infer a variable’s type from the assigned object.
Note: In Python, variables themselves don’t have data types. Instead, the objects that variables reference have types.
For example, consider the following variables:
>>> name = "Jane Doe"
>>> age = 19
>>> subjects = ["Math", "English", "Physics", "Chemistry"]
>>> type(name)
<class 'str'>
>>> type(age)
<class 'int'>
>>> type(subjects)
<class 'list'>
In this example, name refers to the "Jane Doe" value, so the type of name is str. Similarly, age refers to the integer number 19, so its type is int. Finally, subjects refers to a list, so its type is list. Note that you don’t have to explicitly tell Python which type each variable is. Python determines and sets the type by checking the type of the assigned value.
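Because Python is dynamically typed, rebinding a name to an object of a different type also changes what type() reports. A quick illustrative snippet (the variable name is invented for this example):
>>> value = 42
>>> type(value)
<class 'int'>
>>> value = "forty-two"
>>> type(value)
<class 'str'>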
Read the full article at https://realpython.com/python-variables/ »
How to Get a List of All Files in a Directory With Python
To get all the files in a directory with Python, you can leverage the pathlib module. This tutorial covers how to use methods like .iterdir(), .glob(), and .rglob() to list directory contents.
For a direct list of files and folders, you use .iterdir(). The .glob() and .rglob() methods support glob patterns for filtering files with specific extensions or names. For advanced filtering, you can combine these methods with comprehensions or filter functions.
By the end of this tutorial, you’ll understand that you can:
- List all files of a directory in Python using pathlib.Path().iterdir()
- Find all files with a particular extension with pathlib.Path().glob("*.extension")
- Use pathlib.Path().rglob("*") to recursively find all files in a directory and its subdirectories
You’ll explore the most general-purpose techniques in the pathlib module for listing items in a directory, but you’ll also learn a bit about some alternative tools.
Source Code: Click here to download the free source code, directories, and bonus materials that showcase different ways to list files and folders in a directory with Python.
Before pathlib came out in Python 3.4, if you wanted to work with file paths, then you’d use the os module. While this was very efficient in terms of performance, you had to handle all the paths as strings.
Handling paths as strings may seem okay at first, but once you start bringing multiple operating systems into the mix, things get more tricky. You also end up with a bunch of code related to string manipulation, which can get very abstracted from what a file path is. Things can get cryptic pretty quickly.
Note: Check out the downloadable materials for some tests that you can run on your machine. The tests will compare the time it takes to return a list of all the items in a directory using methods from the pathlib module, the os module, and even the Python 3.12 version of pathlib. That version includes the well-known walk() function, which you won’t cover in this tutorial.
That’s not to say that working with paths as strings isn’t feasible—after all, developers managed fine without pathlib for many years! The pathlib module just takes care of a lot of the tricky stuff and lets you focus on the main logic of your code.
It all begins with creating a Path object, which will be different depending on your operating system (OS). On Windows, you’ll get a WindowsPath object, while Linux and macOS will return PosixPath:
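The original REPL output isn't included here, but a minimal sketch of the idea looks like this (the home directory path is made up for illustration):
>>> import pathlib
>>> pathlib.Path.home()
WindowsPath('C:/Users/philipp')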
With these OS-aware objects, you can take advantage of the many methods and properties available, such as ones to get a list of files and folders.
Note: If you’re interested in learning more about pathlib and its features, then check out Python’s pathlib Module: Taming the File System and the pathlib documentation.
Now, it’s time to dive into listing folder contents. Be aware that there are several ways to do this, and picking the right one will depend on your specific use case.
Getting a List of All Files and Folders in a Directory in Python
Before getting started on listing, you’ll want a set of files that matches what you’ll encounter in this tutorial. In the supplementary materials, you’ll find a folder called Desktop. If you plan to follow along, download this folder, navigate to the parent folder, and start your Python REPL there.
You could also use your own desktop. Just start the Python REPL in the parent directory of your desktop, and the examples should work, but you’ll have your own files in the output instead.
Note: You’ll mainly see WindowsPath objects as outputs in this tutorial. If you’re following along on Linux or macOS, then you’ll see PosixPath instead. That’ll be the only difference. The code you write is the same on all platforms.
If you only need to list the contents of a given directory, and you don’t need to get the contents of each subdirectory too, then you can use the Path object’s .iterdir() method. If your aim is to move through directories and subdirectories recursively, then you can jump ahead to the section on recursive listing.
The .iterdir() method, when called on a Path object, returns a generator that yields Path objects representing child items. If you wrap the generator in a list() constructor, then you can see your list of files and folders:
>>> import pathlib
>>> desktop = pathlib.Path("Desktop")
>>> # .iterdir() produces a generator
>>> desktop.iterdir()
<generator object Path.iterdir at 0x000001A8A5110740>
>>> # Which you can wrap in a list() constructor to materialize
>>> list(desktop.iterdir())
[WindowsPath('Desktop/Notes'),
WindowsPath('Desktop/realpython'),
WindowsPath('Desktop/scripts'),
WindowsPath('Desktop/todo.txt')]
Passing the generator produced by .iterdir() to the list() constructor provides you with a list of Path objects representing all the items in the Desktop directory.
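The same pattern works for filtered listings with .glob(). For example, building on the Desktop folder shown above, a pattern matching only text files would yield just todo.txt:
>>> list(desktop.glob("*.txt"))
[WindowsPath('Desktop/todo.txt')]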
Read the full article at https://realpython.com/get-all-files-in-directory-python/ »
How to Write Beautiful Python Code With PEP 8
PEP 8, sometimes spelled PEP8 or PEP-8, is the official style guide for Python code. PEP 8 gives guidelines on naming conventions, code layout, and other best practices. Guido van Rossum, Barry Warsaw, and Alyssa Coghlan wrote it in 2001 with a focus on enhancing code readability and consistency. By adhering to PEP 8, you ensure that your Python code is readable and maintainable, which helps with collaboration and professional development.
By the end of this tutorial, you’ll understand that:
- PEP 8 is a guide for writing clean, readable, and consistent Python code.
- PEP 8 is still relevant in modern Python development.
- Following PEP 8 is recommended for all Python developers.
- Python uses snake case for variable names—lowercase words separated by underscores.
- Python function names should also use snake case.
- Class names in Python use camel case, with each word starting with a capital letter.
PEP stands for Python Enhancement Proposal, and there are many PEPs. These documents primarily describe new features proposed for the Python language, but some PEPs also focus on design and style and aim to serve as a resource for the community. PEP 8 is one of these style-focused PEPs.
In this tutorial, you’ll cover the key guidelines laid out in PEP 8 and explore beginner to intermediate programming topics. You can learn about more advanced topics by reading the full PEP 8 documentation.
Get Your Code: Click here to download the free sample code that shows you how to write PEP 8 compliant code.
Take the Quiz: Test your knowledge with our interactive “How to Write Beautiful Python Code With PEP 8” quiz. You’ll receive a score upon completion to help you track your learning progress.
Why We Need PEP 8
“Readability counts.”
— The Zen of Python
PEP 8 exists to improve the readability of Python code. But why is readability so important? Why is writing readable code one of the guiding principles of the Python language, according to the Zen of Python?
Note: You may encounter the term Pythonic when the Python community refers to code that follows the idiomatic writing style specific to Python. Pythonic code adheres to Python’s design principles and philosophy and emphasizes readability, simplicity, and clarity.
As Guido van Rossum said, “Code is read much more often than it’s written.” You may spend a few minutes, or a whole day, writing a piece of code to process user authentication. Once you’ve written it, you’re never going to write it again.
But you’ll definitely have to read it again. That piece of code might remain part of a project you’re working on. Every time you go back to that file, you’ll have to remember what that code does and why you wrote it, so readability matters.
It can be difficult to remember what a piece of code does a few days, or weeks, after you wrote it.
If you follow PEP 8, you can be sure that you’ve named your variables well. You’ll know that you’ve added enough whitespace so it’s easier to follow logical steps in your code. You’ll also have commented your code well. All of this will mean your code is more readable and easier to come back to. If you’re a beginner, following the rules of PEP 8 can make learning Python a much more pleasant task.
Note: Following PEP 8 is particularly important if you’re looking for a development job. Writing clear, readable code shows professionalism. It’ll tell an employer that you understand how to structure your code well.
If you have more experience writing Python code, then you may need to collaborate with others. Writing readable code here is crucial. Other people, who may have never met you or seen your coding style before, will have to read and understand your code. Having guidelines that you follow and recognize will make it easier for others to read your code.
Naming Conventions
“Explicit is better than implicit.”
— The Zen of Python
When you write Python code, you have to name a lot of things: variables, functions, classes, packages, and so on. Choosing sensible names will save you time and energy later. You’ll be able to figure out, from the name, what a certain variable, function, or class represents. You’ll also avoid using potentially confusing names that might result in errors that are difficult to debug.
One suggestion is to never use l, O, or I as single-letter names, as these can be mistaken for 1 and 0, depending on what typeface a programmer uses.
For example, consider the code below, where you assign the value 2 to the single letter O:
O = 2 # ❌ Not recommended
Doing this may look like you’re trying to reassign 2 to zero. While making such a reassignment isn’t possible in Python and will cause a syntax error, using an ambiguous variable name such as O can make your code more confusing and harder to read and reason about.
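To make the naming advice concrete, here's a small sketch (all names invented for this example) that follows PEP 8's conventions: snake case for variables and functions, and capitalized words for classes:
# Variables and functions: lowercase words separated by underscores
max_retries = 3

def compute_total_price(unit_price, quantity):
    return unit_price * quantity

# Classes: each word starts with a capital letter, no underscores
class ShoppingCart:
    pass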
Naming Styles
Read the full article at https://realpython.com/python-pep8/ »
Primer on Jinja Templating
Jinja is a powerful template engine commonly used in Python web applications to create dynamic web pages. Jinja also supports standalone usage, enabling you to create text files with programmatically filled content, making it versatile beyond web frameworks like Flask and Django.
In this tutorial, you’ll learn how to install Jinja, create and render Jinja templates, and use Jinja’s features such as conditional statements and loops. You’ll also explore how to use filters and macros to enhance your templates’ functionality, and discover how to nest templates and integrate Jinja seamlessly into a Flask web application.
By the end of this tutorial, you’ll understand that:
- Jinja is used to create dynamic web templates and generate text files with programmatic content.
- A templating engine processes templates with dynamic content, rendering them as static pages.
- You use Jinja templates in HTML by embedding dynamic placeholders and rendering them with Python.
- You create an if-else statement in Jinja using {% if condition %} ... {% else %} ... {% endif %}.
- You create a for loop in Jinja using {% for item in list %} ... {% endfor %}.
- Jinja is primarily used in Python but can be integrated with other languages and frameworks.
You’ll start by using Jinja on its own to cover the basics of Jinja templating. Later you’ll build a basic Flask web project with two pages and a navigation bar to leverage the full potential of Jinja.
Throughout the tutorial, you’ll build an example app that showcases some of Jinja’s wide range of features. To see what it’ll do, skip ahead to the final section.
You can also find the full source code of the web project by clicking on the link below:
Source Code: Click here to download the source code that you’ll use to explore Jinja’s capabilities.
Take the Quiz: Test your knowledge with our interactive “Primer on Jinja Templating” quiz. You’ll receive a score upon completion to help you track your learning progress.
Get Started With Jinja
Jinja is not only a city in the Eastern Region of Uganda and a Japanese temple, but also a template engine. You commonly use template engines for web templates that receive dynamic content from the back end and render it as a static page in the front end.
But you can use Jinja without a web framework running in the background. That’s exactly what you’ll do in this section. Specifically, you’ll install Jinja and build your first templates.
Install Jinja
Before exploring any new package, it’s a good idea to create and activate a virtual environment. That way, you’re installing any project dependencies in your project’s virtual environment instead of system-wide.
Select your operating system below and use your platform-specific command to set up a virtual environment:
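The interactive platform selector isn't reproduced here. On Linux and macOS, the commands typically look like this (on Windows, you'd run venv\Scripts\activate instead):
$ python -m venv venv
$ source venv/bin/activate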
With the above commands, you create and activate a virtual environment named venv by using Python’s built-in venv module.
The parentheses (()) surrounding venv in front of the prompt indicate that you’ve successfully activated the virtual environment.
After you’ve created and activated your virtual environment, it’s time to install Jinja with pip:
(venv) $ python -m pip install Jinja2
Don’t forget the 2 at the end of the package name. Otherwise, you’ll install an old version that isn’t compatible with Python 3. It’s worth noting that although the current major version is actually greater than 2, the package that you’ll install is nevertheless called Jinja2.
You can verify that you’ve installed a modern version of Jinja by running pip list:
(venv) $ python -m pip list
Package Version
---------- -------
Jinja2 3.x
...
To make things even more confusing, after installing Jinja with an uppercase J, you have to import it with a lowercase j in Python.
Try it out by opening the interactive Python interpreter and running the following commands:
>>> import Jinja2
Traceback (most recent call last):
...
ModuleNotFoundError: No module named 'Jinja2'
>>> import jinja2
>>> # No error
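With the lowercase import working, a minimal render looks like this (the template string and variable are invented for illustration):
>>> from jinja2 import Template
>>> template = Template("Hello, {{ name }}!")
>>> template.render(name="World")
'Hello, World!'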
Read the full article at https://realpython.com/primer-on-jinja-templating/ »
January 11, 2025
Real Python
Python's urllib.request for HTTP Requests
If you’re looking to make HTTP requests in Python using the built-in urllib.request module, then this tutorial is for you. urllib.request lets you perform HTTP operations without having to add external dependencies.
This tutorial covers how to execute GET and POST requests, handle HTTP responses, and even manage character encodings. You’ll also learn how to handle common errors and differentiate between urllib.request and the requests library.
By the end of this tutorial, you’ll understand that:
- urllib is part of Python’s standard library.
- urllib is used to make HTTP requests.
- You can open a URL with urllib by importing urlopen and calling it with the target URL.
- To send a POST request using urllib, you pass data to urlopen() or a Request object (see the sketch after this list).
- The requests package offers a higher-level interface with intuitive syntax.
- urllib3 is different from the built-in urllib module.
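As promised in the list above, here's a hedged sketch of a POST with urllib.request. The endpoint is the {JSON} Placeholder mock API that also appears later in this tutorial, and the payload is invented for illustration:
from urllib.request import Request, urlopen
import json

url = "https://jsonplaceholder.typicode.com/todos"
data = json.dumps({"title": "demo", "completed": False}).encode("utf-8")
request = Request(url, data=data, headers={"Content-Type": "application/json"})

with urlopen(request) as response:
    print(response.status)  # The mock API responds with 201 Created

Passing data makes urlopen() issue a POST instead of a GET.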
In this tutorial, you’ll learn how to make basic HTTP requests, how to deal with character encodings of HTTP messages, and how to solve some common errors when using urllib.request. Finally, you’ll explore why both urllib and the requests library exist and when to use one or the other.
If you’ve heard of HTTP requests, including GET and POST, then you’re probably ready for this tutorial. Also, you should’ve already used Python to read and write to files, ideally with a context manager, at least once.
Learn More: Click here to join 290,000+ Python developers on the Real Python Newsletter and get new Python tutorials and news that will make you a more effective Pythonista.
Basic HTTP GET Requests With urllib.request
Before diving into the deep end of what an HTTP request is and how it works, you’re going to get your feet wet by making a basic GET request to a sample URL. You’ll also make a GET request to a mock REST API for some JSON data. In case you’re wondering about POST requests, you’ll be covering them later in the tutorial, once you have some more knowledge of urllib.request.
Beware: Depending on your exact setup, you may find that some of these examples don’t work. If so, skip ahead to the section on common urllib.request errors for troubleshooting. If you’re running into a problem that’s not covered there, be sure to comment below with a precise and reproducible example.
To get started, you’ll make a request to www.example.com, and the server will return an HTTP message. Ensure that you’re using Python 3 or above, and then use the urlopen() function from urllib.request:
>>> from urllib.request import urlopen
>>> with urlopen("https://www.example.com") as response:
... body = response.read()
...
>>> body[:15]
b'<!doctype html>'
In this example, you import urlopen() from urllib.request. Using the with context manager, you make a request and receive a response with urlopen(). Then you read the body of the response and close the response object. With that, you display the first fifteen positions of the body, noting that it looks like an HTML document.
There you are! You’ve successfully made a request, and you received a response. By inspecting the content, you can tell that it’s likely an HTML document. Note that the printed output of the body is preceded by b. This indicates a bytes literal, which you may need to decode. Later in the tutorial, you’ll learn how to turn bytes into a string, write them to a file, or parse them into a dictionary.
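For instance, a quick sketch of that decoding step, reusing the body variable from the example above:
>>> body[:15].decode("utf-8")
'<!doctype html>'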
The process is only slightly different if you want to make calls to REST APIs to get JSON data. In the following example, you’ll make a request to {JSON} Placeholder for some fake to-do data:
>>> from urllib.request import urlopen
>>> import json
>>> url = "https://jsonplaceholder.typicode.com/todos/1"
>>> with urlopen(url) as response:
... body = response.read()
...
>>> todo_item = json.loads(body)
>>> todo_item
{'userId': 1, 'id': 1, 'title': 'delectus aut autem', 'completed': False}
In this example, you’re doing pretty much the same as in the previous example. But in this one, you import urllib.request and json, using the json.loads() function with body to decode and parse the returned JSON bytes into a Python dictionary. Voila!
If you’re lucky enough to be using error-free endpoints, such as the ones in these examples, then maybe the above is all that you need from urllib.request. Then again, you may find that it’s not enough.
Now, before doing some urllib.request troubleshooting, you’ll first gain an understanding of the underlying structure of HTTP messages and learn how urllib.request handles them. This understanding will provide a solid foundation for troubleshooting many different kinds of issues.
The Nuts and Bolts of HTTP Messages
To understand some of the issues that you may encounter when using urllib.request, you’ll need to examine how a response is represented by urllib.request. To do that, you’ll benefit from a high-level overview of what an HTTP message is, which is what you’ll get in this section.
Before the high-level overview, a quick note on reference sources. If you want to get into the technical weeds, the Internet Engineering Task Force (IETF) has an extensive set of Request for Comments (RFC) documents. These documents end up becoming the actual specifications for things like HTTP messages. RFC 7230, part 1: Message Syntax and Routing, for example, is all about the HTTP message.
If you’re looking for some reference material that’s a bit easier to digest than RFCs, then the Mozilla Developer Network (MDN) has a great range of reference articles. For example, their article on HTTP messages, while still technical, is a lot more digestible.
Now that you know about these essential sources of reference information, in the next section you’ll get a beginner-friendly overview of HTTP messages.
Understanding What an HTTP Message Is
Read the full article at https://realpython.com/urllib-request/ »
Python's pathlib Module: Taming the File System
Python’s pathlib module helps streamline your work with file and directory paths. Instead of relying on traditional string-based path handling, you can use the Path object, which provides a cross-platform way to read, write, move, and delete files.
pathlib also brings together functionality previously spread across other libraries like os, glob, and shutil, making file operations more straightforward. Plus, it includes built-in methods for reading and writing text or binary files, ensuring a clean and Pythonic approach to handling file tasks.
By the end of this tutorial, you’ll understand that:
- pathlib provides an object-oriented interface for managing file and directory paths in Python.
- You can instantiate Path objects using class methods like .cwd(), .home(), or by passing strings to Path.
- pathlib allows you to read, write, move, and delete files efficiently using methods.
- To get a list of file paths in a directory, you can use .iterdir(), .glob(), or .rglob().
- You can use pathlib to check if a path corresponds to a file by calling the .is_file() method on a Path object.
You’ll also explore a bunch of code examples in this tutorial, which you can use for your everyday file operations. For example, you’ll dive into counting files, finding the most recently modified file in a directory, and creating unique filenames.
It’s great that pathlib offers so many methods and properties, but they can be hard to remember on the fly. That’s where a cheat sheet can come in handy. To get yours, click the link below:
Free Download: Click here to claim your pathlib cheat sheet so you can tame the file system with Python.
The Problem With Representing Paths as Strings
With Python’s pathlib, you can save yourself some headaches. Its flexible Path class paves the way for intuitive semantics. Before you have a closer look at the class, take a moment to see how Python developers had to deal with paths before pathlib was around.
Traditionally, Python has represented file paths using regular text strings. However, since paths are more than plain strings, important functionality was spread all around the standard library, including in libraries like os, glob, and shutil.
As an example, the following code block moves files into a subfolder:
import glob
import os
import shutil

for file_name in glob.glob("*.txt"):
    new_path = os.path.join("archive", file_name)
    shutil.move(file_name, new_path)
You need three import statements in order to move all the text files to an archive directory.
Python’s pathlib provides a Path class that works the same way on different operating systems. Instead of importing different modules such as glob, os, and shutil, you can perform the same tasks by using pathlib alone:
from pathlib import Path

for file_path in Path.cwd().glob("*.txt"):
    new_path = Path("archive") / file_path.name
    file_path.replace(new_path)
Just as in the first example, this code finds all the text files in the current directory and moves them to an archive/ subdirectory. However, with pathlib, you accomplish these tasks with fewer import statements and more straightforward syntax, which you’ll explore in depth in the upcoming sections.
Path Instantiation With Python’s pathlib
One motivation behind pathlib is to represent the file system with dedicated objects instead of strings. Fittingly, the official documentation of pathlib is called pathlib — Object-oriented filesystem paths.
The object-oriented approach is already quite visible when you contrast the pathlib syntax with the old os.path way of doing things. It gets even more obvious when you note that the heart of pathlib is the Path class:
If you’ve never used this module before or just aren’t sure which class is right for your task, Path is most likely what you need. (Source)
In fact, Path is so frequently used that you usually import it directly:
>>> from pathlib import Path
>>> Path
<class 'pathlib.Path'>
Because you’ll mainly be working with the Path class of pathlib, this way of importing Path saves you a few keystrokes in your code. This way, you can work with Path directly, rather than importing pathlib as a module and referring to pathlib.Path.
There are a few different ways of instantiating a Path object. In this section, you’ll explore how to create paths by using class methods, passing in strings, or joining path components.
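As a quick preview, here's a sketch of those three approaches (paths invented for illustration; on Linux or macOS you'd see PosixPath instead):
>>> from pathlib import Path
>>> Path.cwd()  # Class method
WindowsPath('C:/Users/philipp/Desktop')
>>> Path("archive/demo.txt")  # From a string
WindowsPath('archive/demo.txt')
>>> Path("archive") / "demo.txt"  # Joining components with the slash operator
WindowsPath('archive/demo.txt')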
Using Path Methods
Read the full article at https://realpython.com/python-pathlib/ »
Operators and Expressions in Python
Python operators enable you to perform computations by combining objects and operators into expressions. Understanding Python operators is essential for manipulating data effectively.
This tutorial covers arithmetic, comparison, Boolean, identity, membership, bitwise, concatenation, and repetition operators, along with augmented assignment operators. You’ll also learn how to build expressions using these operators and explore operator precedence to understand the order of operations in complex expressions.
By the end of this tutorial, you’ll understand that:
- Arithmetic operators perform mathematical calculations on numeric values.
- Comparison operators evaluate relationships between values, returning Boolean results.
- Boolean operators create compound logical expressions.
- Identity operators determine if two operands refer to the same object.
- Membership operators check for the presence of a value in a container.
- Bitwise operators manipulate data at the binary level.
- Concatenation and repetition operators manipulate sequence data types.
- Augmented assignment operators simplify expressions involving the same variable.
This tutorial provides a comprehensive guide to Python operators, empowering you to create efficient and effective expressions in your code. To get the most out of this tutorial, you should have a basic understanding of Python programming concepts, such as variables, assignments, and built-in data types.
Free Bonus: Click here to download your comprehensive cheat sheet covering the various operators in Python.
Take the Quiz: Test your knowledge with our interactive “Python Operators and Expressions” quiz. You’ll receive a score upon completion to help you track your learning progress.
Getting Started With Operators and Expressions
In programming, an operator is usually a symbol or combination of symbols that allows you to perform a specific operation. This operation can act on one or more operands. If the operation involves a single operand, then the operator is unary. If the operator involves two operands, then the operator is binary.
For example, in Python, you can use the minus sign (-) as a unary operator to declare a negative number. You can also use it to subtract two numbers:
>>> -273.15
-273.15
>>> 5 - 2
3
In this code snippet, the minus sign (-) in the first example is a unary operator, and the number 273.15 is the operand. In the second example, the same symbol is a binary operator, and the numbers 5 and 2 are its left and right operands.
Programming languages typically have operators built in as part of their syntax. In many languages, including Python, you can also create your own operator or modify the behavior of existing ones, which is a powerful and advanced feature to have.
In practice, operators provide a quick shortcut for you to manipulate data, perform mathematical calculations, compare values, run Boolean tests, assign values to variables, and more. In Python, an operator may be a symbol, a combination of symbols, or a keyword, depending on the type of operator that you’re dealing with.
For example, you’ve already seen the subtraction operator, which is represented with a single minus sign (-). The equality operator is a double equal sign (==). So, it’s a combination of symbols:
>>> 42 == 42
True
In this example, you use the Python equality operator (==) to compare two numbers. As a result, you get True, which is one of Python’s Boolean values.
Speaking of Boolean values, the Boolean or logical operators in Python are keywords rather than signs, as you’ll learn in the section about Boolean operators and expressions. So, instead of the odd signs like ||, &&, and ! that many other programming languages use, Python uses or, and, and not.
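A minimal REPL illustration of those keyword operators (the values are chosen arbitrarily):
>>> True and False
False
>>> True or False
True
>>> not True
False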
Using keywords instead of odd signs is a really cool design decision that’s consistent with the fact that Python loves and encourages code’s readability.
You’ll find several categories or groups of operators in Python. Here’s a quick list of those categories:
- Assignment operators
- Arithmetic operators
- Comparison operators
- Boolean or logical operators
- Identity operators
- Membership operators
- Concatenation and repetition operators
- Bitwise operators
All these types of operators take care of specific types of computations and data-processing tasks. You’ll learn more about these categories throughout this tutorial. However, before jumping into more practical discussions, you need to know that the most elementary goal of an operator is to be part of an expression. Operators by themselves don’t do much:
>>> -
File "<input>", line 1
-
^
SyntaxError: incomplete input
>>> ==
File "<input>", line 1
==
^^
SyntaxError: incomplete input
>>> or
File "<input>", line 1
or
^^
SyntaxError: incomplete input
As you can see in this code snippet, if you use an operator without the required operands, then you’ll get a syntax error. So, operators must be part of expressions, which you can build using Python objects as operands.
So, what is an expression anyway? Python has simple and compound statements. A simple statement is a construct that occupies a single logical line, like an assignment statement. A compound statement is a construct that occupies multiple logical lines, such as a for loop or a conditional statement. An expression is a simple statement that produces and returns a value.
Read the full article at https://realpython.com/python-operators-expressions/ »
Inheritance and Composition: A Python OOP Guide
In Python, understanding inheritance and composition is crucial for effective object-oriented programming. Inheritance allows you to model an is a relationship, where a derived class extends the functionality of a base class. Composition, on the other hand, models a has a relationship, where a class contains objects of other classes to build complex structures. Both techniques promote code reuse, but they approach it differently.
You achieve composition in Python by creating classes that contain objects of other classes, allowing you to reuse code through these contained objects. This approach provides flexibility and adaptability, as changes in component classes minimally impact the composite class.
Inheritance in Python is achieved by defining a class that derives from a base class, inheriting its interface and implementation. You can use multiple inheritance to derive a class from more than one base class, but it requires careful handling of method resolution order (MRO).
By the end of this tutorial, you’ll understand that:
- Composition and inheritance in Python model relationships between classes, enabling code reuse in different ways.
- Composition is achieved by creating classes that contain objects of other classes, allowing for flexible designs.
- Inheritance models an is a relationship, allowing derived classes to extend base class functionality.
- Inheritance in Python is achieved by defining classes that derive from base classes, inheriting their interface and implementation.
Exploring the differences between inheritance and composition helps you choose the right approach for designing robust, maintainable Python applications. Understanding how and when to apply each concept is key to leveraging the full power of Python’s object-oriented programming capabilities.
Get Your Code: Click here to get the free sample code that shows you how to use inheritance and composition in Python.
Take the Quiz: Test your knowledge with our interactive “Inheritance and Composition: A Python OOP Guide” quiz. You’ll receive a score upon completion to help you track your learning progress.
What Are Inheritance and Composition?
Inheritance and composition are two major concepts in object-oriented programming that model the relationship between two classes. They drive the design of an application and determine how the application should evolve as new features are added or requirements change.
Both of them enable code reuse, but they do it in different ways.
What’s Inheritance?
Inheritance models what’s called an is a relationship. This means that when you have a Derived class that inherits from a Base class, you’ve created a relationship where Derived is a specialized version of Base.
Inheritance is represented using the Unified Modeling Language, or UML, in the following way:
This model represents classes as boxes with the class name on top. It represents the inheritance relationship with an arrow from the derived class pointing to the base class. The word extends is usually added to the arrow.
Note: In an inheritance relationship:
- Classes that inherit from another are called derived classes, subclasses, or subtypes.
- Classes from which other classes are derived are called base classes or super classes.
- A derived class is said to derive, inherit, or extend a base class.
Say you have the base class Animal, and you derive from it to create a Horse class. The inheritance relationship states that Horse is an Animal. This means that Horse inherits the interface and implementation of Animal, and you can use Horse objects to replace Animal objects in the application.
This is known as the Liskov substitution principle. The principle states that if S is a subtype of T, then replacing objects of type T with objects of type S doesn’t change the program’s behavior.
You’ll see in this tutorial why you should always follow the Liskov substitution principle when creating your class hierarchies, and you’ll learn about the problems that you’ll run into if you don’t.
What’s Composition?
Composition is a concept that models a has a relationship. It enables creating complex types by combining objects of other types. This means that a class Composite can contain an object of another class Component. This relationship means that a Composite has a Component.
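And a matching sketch for composition, again with invented names:
class Component:
    def do_work(self):
        print("working")

class Composite:  # A Composite "has a" Component
    def __init__(self):
        self.component = Component()

    def run(self):
        # Delegate to the contained object
        self.component.do_work()

Composite().run()  # Prints: working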
UML represents composition as follows:
The model represents composition through a line that starts with a diamond at the composite class and points to the component class. The composite side can express the cardinality of the relationship. The cardinality indicates the number or the valid range of Component instances that the Composite class will contain.
Read the full article at https://realpython.com/inheritance-composition-python/ »
January 10, 2025
Glyph Lefkowitz
The “Active Enum” Pattern
Have you ever written some Python code that looks like this?
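(The original code listing did not survive syndication. Based on the description below, it presumably looked roughly like this reconstruction - the member names and behaviors are invented for illustration:)
from enum import Enum, auto

class SomeNumber(Enum):
    one = auto()
    two = auto()
    three = auto()

def behavior(number: SomeNumber) -> int:
    # The associated behavior lives far away from the enum itself
    match number:
        case SomeNumber.one:
            print("one!")
            return 1
        case SomeNumber.two:
            print("two!")
            return 2
        case SomeNumber.three:
            print("three!")
            return 3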
That is to say, have you written code that:
- defined an enum with several members
- associated custom behavior, or custom values, with each member of that enum,
- needed one or more match/case statements (or, if you’ve been programming in Python for more than a few weeks, probably a big if/elif/elif/else tree) to do that association?
In this post, I’d like to submit that this is an antipattern; let’s call it the “passive enum” antipattern.
For those of you having a generally positive experience organizing your discrete values with enums, it may seem odd to call this an “antipattern”, so let me first make something clear: the path to a passive enum is going in the correct direction.
Typically - particularly in legacy code that predates Python 3.4 - one begins with a value that is a bare int constant, or maybe a str with some associated values sitting beside it in a few global dicts.
Starting from there, collecting all of your values into an enum at all is a great first step. Having an explicit listing of all valid values and verifying against them is great.
But, it is a mistake to stop there. There are problems with passive enums, too:
- The behavior can be defined somewhere far away from the data, making it difficult to:
  - maintain an inventory of everywhere it’s used,
  - update all the consumers of the data when the list of enum values changes, and
  - learn about the different usages as a consumer of the API
- Logic may be defined procedurally (via if/elif or match) or declaratively (via e.g. a dict whose keys are your enum and whose values are the required associated value).
  - If it’s defined procedurally, it can be difficult to build tools to interrogate it, because you need to parse the AST of your Python program. So it can be difficult to build interactive tools that look at the associated data without just calling the relevant functions.
  - If it’s defined declaratively, it can be difficult for existing tools that do know how to interrogate ASTs (mypy, flake8, Pyright, ruff, et al.) to make meaningful assertions about it. Does your linter know how to check that a dict whose keys should be every value of your enum is complete?
To refactor this, I would propose a further step towards organizing one’s enum-oriented code: the active enum.
An active enum is one which contains all the logic associated with the first-party provider of the enum itself.
You may recognize this as a more generalized restatement of the object-oriented lens on the principle of “separation of concerns”. The responsibilities of a class ought to be implemented as methods on that class, so that you can send messages to that class via method calls, and it’s up to the class internally to implement things. Enums are no different.
More specifically, you might notice it as a riff on the Active Nothing pattern described in this excellent talk by Sandi Metz, and, yeah, it’s the same thing.
The first refactoring that we can make is, thus, to mechanically move the method from an external function living anywhere to a method on SomeNumber. At least like this, we present an API to consumers externally that shows that SomeNumber has a behavior method that can be invoked.
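(This listing was also lost in syndication; here is a reconstruction of the step just described, moving the same hypothetical function onto the enum:)
from enum import Enum, auto

class SomeNumber(Enum):
    one = auto()
    two = auto()
    three = auto()

    def behavior(self) -> int:
        # Same logic as before, but now discoverable as a method
        match self:
            case SomeNumber.one:
                print("one!")
                return 1
            case SomeNumber.two:
                print("two!")
                return 2
            case SomeNumber.three:
                print("three!")
                return 3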
However, this still leaves us with a match statement that repeats all the values that we just defined, with no particular guarantee of completeness. To continue the refactoring, what we can do is change the value of the enum itself into a simple dataclass to structurally, by definition, contain all the fields we need:
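Sketched out - frozen=True and the exact Callable signature are choices I am making for this example, not requirements of the pattern - it looks like this:

from dataclasses import dataclass
from enum import Enum
from typing import Callable


@dataclass(frozen=True)
class NumberValue:
    result: int
    effect: Callable[[], None]


class SomeNumber(Enum):
    # Each member must supply both fields, or constructing it fails.
    one = NumberValue(result=1, effect=lambda: print("one!"))
    two = NumberValue(result=2, effect=lambda: print("two!"))
    three = NumberValue(result=3, effect=lambda: print("three!"))

    def behavior(self) -> int:
        # No match statement: everything we need is already on the value.
        self.value.effect()
        return self.value.result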
Here, we give SomeNumber members a value of NumberValue, a dataclass that requires a result: int and an effect: Callable to be constructed. Mypy will properly notice that if x is a SomeNumber, then x.value will have the type NumberValue, and we will get proper type checking on its result (a static value) and effect (some associated behaviors)1.
Note that the implementation of the behavior method - still conveniently discoverable for callers, and with its signature unchanged - is now vastly simpler.
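For example, with the sketch above:

x = SomeNumber.two
assert x.behavior() == 2  # prints "two!" first, then returns the stored result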
But what about...
Lookups?
You may be noticing that I have hand-waved over something important to many enum users, which is to say, by-value lookup. enum.auto will have generated int values for one, two, and three already, and by transforming those into NumberValue instances, I can no longer do SomeNumber(1).
For the simple, string-enum case - the one where you might write class MyEnum: value = "value" so that you can do lookups via MyEnum("value") - there’s a simple solution: use square brackets instead of round ones. In this case, with no matching strings in sight, SomeNumber["one"] still works.
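Spelled out, with MyEnum written as a proper Enum subclass:

from enum import Enum


class MyEnum(Enum):
    value = "value"


MyEnum("value")    # by-value lookup; works while the values are plain strings
MyEnum["value"]    # by-name lookup; keeps working whatever the values become
SomeNumber["one"]  # so this remains fine with NumberValue-valued members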
But, if we want to do integer lookups with our dataclass version here, there’s a simple one-liner that will get them back for you; and, moreover, will let you do lookups on whatever attribute you want.
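For instance, building an index by result - the name by_result is mine, and you can build the same kind of dict over any other attribute:

by_result = {each.value.result: each for each in SomeNumber}

assert by_result[1] is SomeNumber.one  # integer lookups are back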
enum.Flag?
You can do this with Flag more or less unchanged, but in the same way that you can’t expect all your list[T] behaviors to be defined on T, the lack of a 1-to-1 correspondence between Flag instances and their values makes it more complex and out of scope for this pattern specifically.
3rd-party usage?
Sometimes an enum is defined in library L and used in application A, where L provides the data and A provides the behavior. If this is the case, then some amount of version shear is unavoidable: the data and the behavior have different vendors, so other mechanisms of abstraction are required to keep them in sync. Object-oriented modeling methods are for consolidating the responsibility for maintenance within a single vendor’s scope; once you’re not responsible for the entire model, you can’t do the modeling over all of it, and that is perfectly normal and to be expected.
The goal of the Active Enum pattern is to avoid creating the additional complexity of that shear when it does not serve a purpose, not to ban it entirely.
A Case Study
I was inspired to make this post by a recent refactoring I did from a more obscure and magical2 version of this pattern into the version that I am presenting here, but if I am going to call passive enums an “antipattern” I feel like it behooves me to point at an example outside of my own solo work.
So, for a more realistic example, let’s consider a package that all Python developers will recognize from their day-to-day work, python-hearthstone, the Python library for parsing the data files associated with Blizzard’s popular computerized collectible card game Hearthstone.
As I’m sure you already know, there are a lot of enums in this library, but for one small case study, let’s look at a few of the methods in hearthstone.enums.GameType.
GameType has already taken “step 1” in the direction of an active enum, as I described above: as_bnet is an instance method on GameType itself, making it at least easy to see by looking at the class definition what operations it supports. However, in the implementation of that method (among many others) we can see the worst of both worlds.
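I won’t reproduce the method verbatim here; a condensed paraphrase of its shape - with the lookup table almost entirely elided, and any member names not quoted elsewhere in this post being my approximations rather than the library’s exact code - goes something like this:

def as_bnet(self, format: FormatType = FormatType.FT_STANDARD) -> BnetGameType:
    if self == GameType.GT_RANKED:
        # Procedural: the answer depends on a second argument...
        if format == FormatType.FT_STANDARD:
            return BnetGameType.BGT_RANKED_STANDARD
        if format == FormatType.FT_WILD:
            return BnetGameType.BGT_RANKED_WILD
        raise ValueError("unsupported format")
    # ...declarative: every other member falls through to a lookup table.
    return {
        GameType.GT_BATTLEGROUNDS_DUO_FRIENDLY:
            BnetGameType.BGT_BATTLEGROUNDS_DUO_FRIENDLY,
        # (dozens of further entries)
    }[self]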
We have procedural code mixed with a data lookup table; raise ValueError mixed together with value returns. Overall, it looks like it might be hard to maintain this going forward, or to see what’s going on without a comprehensive understanding of the game being modeled. Of course for most Python programmers that understanding can be assumed, but, still.
If GameType were refactored in the manner above3, you’d be able to look at the member definition for GT_RANKED and see a mapping of FormatType to BnetGameType, or at GT_BATTLEGROUNDS_DUO_FRIENDLY to see an unconditional value of BGT_BATTLEGROUNDS_DUO_FRIENDLY. Given that this enum has 40 elements, with several renamed or removed, it seems reasonable to expect that more will be added and removed as the game is developed.
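To make that concrete, here is a purely hypothetical sketch: GameTypeValue and its field are invented for this post (and the BGT_RANKED_* member names are, again, my approximations); nothing like this exists in python-hearthstone today:

from dataclasses import dataclass
from enum import Enum
from typing import Mapping, Union

from hearthstone.enums import BnetGameType, FormatType


@dataclass(frozen=True)
class GameTypeValue:
    # Either one unconditional BnetGameType, or a mapping keyed by FormatType.
    as_bnet: Union[BnetGameType, Mapping[FormatType, BnetGameType]]


class GameType(Enum):
    GT_RANKED = GameTypeValue(
        as_bnet={
            FormatType.FT_STANDARD: BnetGameType.BGT_RANKED_STANDARD,
            FormatType.FT_WILD: BnetGameType.BGT_RANKED_WILD,
        }
    )
    GT_BATTLEGROUNDS_DUO_FRIENDLY = GameTypeValue(
        as_bnet=BnetGameType.BGT_BATTLEGROUNDS_DUO_FRIENDLY
    )
    # ...and so on for the rest of the members.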
Conclusion
If you have large enums that change over time, consider placing the responsibility for the behavior of the values alongside the values directly, and any logic for processing the values as methods of the enum. This will allow you to quickly validate that you have full coverage of any data that is required among all the different members of the enum, and it will allow API clients a convenient surface to discover the capabilities associated with that enum.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!
1. You can get even fancier than this, defining a typing.Protocol as your enum’s value, but it’s best to keep things simple and use a very simple dataclass container if you can. ↩
2. derogatory ↩
3. I did not submit such a refactoring as a PR before writing this post because I don’t have full context for this library and I do not want to harass the maintainers or burden them with extra changes just to make a rhetorical point. If you do want to try that yourself, please file a bug first and clearly explain how you think it would benefit their project’s maintainability, and make sure that such a PR would be welcome. ↩
Test and Code
pytest plugins - a full season
This episode kicks off a season of pytest plugins.
In this episode:
- Introduction to pytest plugins
- The pytest.org pytest plugin list
- Finding pytest related packages on PyPI
- The Top pytest plugins list on pythontest.com
- Exploring popular plugins
- Learning from plugin examples
Links:
- Top pytest plugins list
- pytest.org plugin list
- Top PyPI Packages
- And links to plugins mentioned in the show can be found at pythontest.com/top-pytest-plugins
Learn pytest
- pytest is the number one test framework for Python.
- Learn the basics super fast with Hello, pytest!
- Then later you can become a pytest expert with The Complete pytest Course
- Both courses are at courses.pythontest.com