Planet Python
Last update: November 25, 2025 09:43 PM UTC
November 25, 2025
PyCoder’s Weekly
Issue #710: FastAPI, Floodfill, 20 Years of Django, and More (Nov. 25, 2025)
#710 – NOVEMBER 25, 2025
View in Browser »
Serve a Website With FastAPI Using HTML and Jinja2
Use FastAPI to render Jinja2 templates and serve dynamic sites with HTML, CSS, and JavaScript, then add a color picker that copies hex codes.
REAL PYTHON
Quiz: Serve a Website With FastAPI Using HTML and Jinja2
Review how to build dynamic websites with FastAPI and Jinja2, and serve HTML, CSS, and JS with HTMLResponse and StaticFiles.
REAL PYTHON
Floodfill Algorithm in Python
The floodfill algorithm is used to fill a color in a bounded area. Learn how it works and how to implement it in Python.
RODRIGO GIRÃO SERRÃO
New guide: The Engineering Leader AI Imperative
Augment Code’s new guide features real frameworks to lead your engineering team to systematic transformation: 30% faster PR velocity, 40% reduction in merge times, and 10x task speed-ups across teams. Learn from CTOs at Drata, Webflow, and Tilt who’ve scaled AI across 100+ developer teams →
AUGMENT CODE sponsor
Twenty Years of Django Releases
On November 16th, Django celebrated its 20th anniversary. This quick post highlights a few stats along the way.
DJANGO SOFTWARE FOUNDATION
Python Jobs
Python Video Course Instructor (Anywhere)
Python Tutorial Writer (Anywhere)
Articles & Tutorials
The Uselessness of “Fast” and “Slow” in Programming
“One of the unique aspects of software is how it spans such a large number of orders of magnitude.” The huge difference makes the terms “fast” and “slow” arbitrary. Read on to discover how this affects our thinking as programmers and what mistakes it can cause.
JEREMY BOWERS
New Login Verification for TOTP-based Logins
Previously, when logging into PyPI with a Time-based One-Time Password (TOTP) authenticator, a successful response was sufficient. Now, if you log in from a new device, PyPI will send a verification email. Read all about how this protects PyPI users.
DUSTIN INGRAM
A Better Way to Watch Your Python Apps—Now with AI in the Loop
Scout’s local MCP server lets your AI assistant query real Python telemetry. Call endpoints like get_app_error_groups or get_app_endpoint_traces to surface top errors, latency, and backtraces—no dashboards, no tab-switching, all from chat →
SCOUT APM sponsor
Manim: Create Mathematical Animations
Learn how to use Manim, the animation engine behind 3Blue1Brown, to create clear and compelling visual explanations with Python. This walkthrough shows how you can turn equations and concepts into smooth animations for data science storytelling.
CODECUT.AI • Shared by Khuyen Tran
The Varying Strictness of TypedDict
Brett came across an unexpected typing error when using Pyrefly on his code. He verified it with Pyright, and found the same problem. This post describes the issue and why ty let it pass.
BRETT CANNON
Exploring Class Attributes That Aren’t Really Class Attributes
Syntax used for data classes and typing.NamedTuple confused Stephen when first learning it. Learn why, and how he cleared up his understanding.
STEPHEN GRUPPETTA
Unnecessary Parentheses in Python
Python’s ability to use parentheses for grouping can confuse new Python users into over-using parentheses in places where they aren’t needed.
TREY HUNNER
Build an MCP Client to Test Servers From Your Terminal
Follow this Python project to build an MCP client that discovers MCP server capabilities and feeds an AI-powered chat with tool calls.
REAL PYTHON
Quiz: Build an MCP Client to Test Servers From Your Terminal
Learn how to create a Python MCP client, start an AI-powered chat session, and run it from the command line. Check your understanding.
REAL PYTHON
Break Out of Loops With Python’s break Keyword
Learn how Python’s break lets you exit for and while loops early, with practical demos from simple games to everyday data tasks.
REAL PYTHON course
Cursor vs. Claude for Django Development
This article looks at how Cursor and Claude compare when developing a Django application.
ŠPELA GIACOMELLI
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
November 26, 2025
REALPYTHON.COM
PyDelhi User Group Meetup
November 29, 2025
MEETUP.COM
Melbourne Python Users Group, Australia
December 1, 2025
J.MP
PyBodensee Monthly Meetup
December 1, 2025
PYBODENSEE.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #710.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
November 25, 2025 07:30 PM UTC
Real Python
Getting Started With Claude Code
Learn how to set up and start using Claude Code to boost your Python workflow. Learn how it differs from Claude Chat and how to use it effectively in your development setup.
You’ll learn how to:
- Install and configure Claude Code
- Run it safely inside project directories
- Work with CLAUDE.md for task context
- Use Git integration for smoother coding workflows
- Apply Claude Code to automate real programming tasks
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
November 25, 2025 02:00 PM UTC
Python Software Foundation
PSF Code of Conduct Working Group Shares First Transparency Report
The PSF’s Code of Conduct Working Group is a group of volunteers whose purpose is to foster a diverse and inclusive Python community by enforcing the PSF Code of Conduct, along with providing guidance and recommendations to the Python community on codes of conduct, in support of the PSF’s mission to support and facilitate the growth of a diverse and international community of Python programmers.
The working group has recently committed to publishing annual transparency reports and we are pleased to share the first report with you today, for the 2024 calendar year. The initial transparency report took some time to produce, but we've improved our record-keeping practices to make future reports easier to prepare.
The Working Group spent time formalizing our record keeping this year, and going forward we plan to publish our transparency reports in the first quarter of each year. Each year’s report will be added to the same place in the PSF's Code of Conduct documentation so that community members can easily access them. If you have thoughts or feedback on how to make these reports more useful, we welcome you to send us an email at conduct-wg@python.org.
November 25, 2025 01:51 PM UTC
HoloViz
Rich parameters & reactive programming with Param: 2.3 release
November 25, 2025 12:00 AM UTC
Brian Okken
PythonTest Black Friday Deals 2025
Save up to 50% on courses and the pytest book.
The Complete pytest Course bundle
Use BLACKFRIDAY to save 20% off through November.
This code is intended to match lots of 10-20% off deals around
Use SAVE50 to save 50% off through November
This code is intended to match the PragProg deal below
Why leave both in place?
No reason, really. I just thought it’d be fun to have a “choose your own discount” thing.
November 25, 2025 12:00 AM UTC
Seth Michael Larson
Mobile browsers see telephone numbers everywhere
Just like Excel seeing everything as a date, mobile browsers automatically interpret many numbers as telephone numbers. When detected, mobile browsers replace the text in the HTML with a clickable <a href="tel:..."> value that when selected will call the number denoted. This can be helpful sometimes, but frustrating other times as random numbers in your HTML suddenly become useless hyperlinks.
Below I've included numbers that may be turned into phone numbers so you can see for yourself why this may be a problem and how many cases there are. Numbers that are detected as a phone number by your browser are highlighted blue by this CSS selector:
a[href^=tel] {
background-color: #00ccff;
}
None of the values below are denoted as telephone number links in the source HTML, they are all automatically created by the browser. Also, if you're not using a mobile browser then the below numbers won't be highlighted. Try opening this page on a mobile phone.
- 2
- 22
- 222
- 2222
- 22222
- 222222
- 2222222
- 22222222
- 222222222
- 2222222222
- 22222222222
- 111111111111
- 222222222222
- 555555555555
- 1111111111111
- 2222222222222 (???)
- 5555555555555
- 11111111111111
- 22222222222222
- 55555555555555
- 111111111111111
- 222222222222222
- 555555555555555
- 2-2
- 2-2-2
- 22-2-2
- 22-22-2
- 22-22-22
- 22-22-222
- 22-222-222
- 222-222-222
- 222-222-2222
- 222-2222-2222
- 2222-2222-2222
- 2222-2222-22222
- 2222-22222-22222
- 22222-22222-22222
- 2 222-222-2222
- +1 222-222-2222
- +2 222-222-2222 (There is no +2 country code...)
- +28 222-222-2222 (Unassigned codes aren't used)
- +1222-222-2222
- +2222-222-2222
- (+1)222-222-2222
- (+2)222-222-2222
- (1)222-222-2222
- (2)222-222-2222
- (1222-222-2222
- (1 222-222-2222
- 1)222-222-2222
- 222–222–2222 (en-dashes)
- 222—222—2222 (em-dashes)
- [1]222-222-2222
- <1>222-222-2222
Are there any other combinations that get detected as telephone numbers that I missed? Send me a pull request or email.
How to prevent automatic telephone number detection?
So how can you prevent browsers from parsing telephone numbers automatically?
Add this HTML to your <head> section:
<meta name="format-detection" content="telephone=no">
This will disable automatic telephone detection, and then you can be explicit about
clickable telephone numbers by using the tel: URL scheme like so:
<a href="tel:+222-222-222-2222">(+222)222-222-2222</a>
Thanks for keeping RSS alive! ♥
November 25, 2025 12:00 AM UTC
November 24, 2025
Rodrigo Girão Serrão
Generalising itertools.pairwise
In this tutorial you will learn to use and generalise itertools.pairwise: what it does, how to use it, and how to implement a generalised version for when itertools.pairwise isn't enough.
itertools.pairwise
itertools.pairwise is an iterable from the standard module itertools that lets you access overlapping pairs of consecutive elements of the input iterable.
That's quite a mouthful, so let me translate:
You give pairwise an iterable, like "ABCD", and pairwise gives you pairs back, like ("A", "B"), ("B", "C"), and ("C", "D").
In loops, it is common to unpack the pairs directly to perform some operation on both values.
The example below uses pairwise to determine how the balance of a bank account changed based on the balance history:
from itertools import pairwise  # Python 3.10+

balance_history = [700, 1000, 800, 750]
for before, after in pairwise(balance_history):
    change = after - before
    print(f"Balance changed by {change:+}.")
Balance changed by +300.
Balance changed by -200.
Balance changed by -50.
How to implement pairwise
If you had to implement pairwise, you might think of something like the code below:
def my_pairwise(iterable):
    for prev_, next_ in zip(iterable, iterable[1:]):
        yield (prev_, next_)
Which directly translates to
def my_pairwise(iterable):
    yield from zip(iterable, iterable[1:])
But there is a problem with this implementation, and that is the slicing operation.
pairwise is supposed to work with any iterable and not all iterables are sliceable.
For example, files are iterables but are not sliceable.
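As a quick illustration of the problem (my own example, not from the article), the slicing-based version blows up as soon as you hand it something that only supports iteration, such as a generator:

def my_pairwise(iterable):
    yield from zip(iterable, iterable[1:])  # relies on slicing

squares = (n * n for n in range(5))  # a generator: iterable, but not sliceable
try:
    list(my_pairwise(squares))
except TypeError as exc:
    print(exc)  # 'generator' object is not subscriptable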
There are a couple of different ways to fix this but my favourite uses collections.deque with its parameter maxlen:
from collections import deque
from itertools import islice
def my_pairwise(data):
    data = iter(data)
    window = deque(islice(data, 1), maxlen=2)
    for value in data:
        window.append(value)
        yield tuple(window)
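A quick check (again my own example, using the my_pairwise just defined above) that the deque-based version now handles any iterable, sliceable or not:

squares = (n * n for n in range(5))  # 0, 1, 4, 9, 16 -- not sliceable
print(list(my_pairwise(squares)))
# [(0, 1), (1, 4), (4, 9), (9, 16)]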
Generalising itertools.pairwise
pairwise will always produce pairs of consecutive elements, but sometimes you might want tuples of different sizes.
For example, you might want something like “triplewise”, to get triples of consecutive elements, but pairwise can't be used for that.
So, how do you implement that generalisation?
In the upcoming subsections I will present different ways of implementing the function nwise(iterable, n) that accepts an iterable and a positive integer n and produces overlapping tuples of n elements taken from the given iterable.
Some example applications:
nwise("ABCD", 2) -> ("A", "B"), ("B", "C"), ("C", "D")
nwise("ABCD", 3) -> ("A", "B", "C"), ("B", "C", "D")
nwise("ABCD", 4) -> ("A", "B", "C", "D")
Using deque
The implementation of pairwise that I showed above can be adapted for nwise:
from collections import deque
from itertools import islice
def nwise(iterable, n):
    iterable = iter(iterable)
    window = deque(islice(iterable, n - 1), maxlen=n)
    for value in iterable:
        window.append(value)
        yield tuple(window)
Note that you have to change maxlen=2 to maxlen=n, but also islice(iterable, 1) to islice(iterable, n - 1).
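A short sanity check (my own, not from the article) against the expected outputs listed earlier:

print(list(nwise("ABCD", 3)))
# [('A', 'B', 'C'), ('B', 'C', 'D')]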
Using tee
Another fundamentally different way of implementing nwise is by using itertools.tee to split the input...
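The excerpt cuts off here, so the article's own tee-based implementation isn't shown. As a rough sketch of one common way such an approach can look (my own, not necessarily the author's):

from itertools import tee

def nwise_tee(iterable, n):
    # Create n independent iterators over the same input, advance the
    # i-th one by i positions, then zip them to get overlapping n-tuples.
    iterators = tee(iterable, n)
    for i, iterator in enumerate(iterators):
        for _ in range(i):
            next(iterator, None)
    return zip(*iterators)

print(list(nwise_tee("ABCD", 3)))
# [('A', 'B', 'C'), ('B', 'C', 'D')]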
November 24, 2025 06:36 PM UTC
EuroPython Society
New EuroPython Society Fellow in 2025
A warm welcome to Martin Borus as the second elected EuroPython Society Fellow in 2025.
EuroPython Society Fellows
EuroPython Society Fellows have contributed significantly towards our mission, the EuroPython conference and the Society as an organisation. They are eligible for a lifetime free attendance of the EuroPython conference and will be listed on our EuroPython Society Fellow Grant page in recognition of their work.
Martin has been part of the EuroPython conference volunteers since 2017.
Some have “met” him the first time in a response on an issue sent to the helpdesk or during the organisation meetings of the Ops team.
Others interacted with him as a volunteer at reception, out in the halls, a tutor in a Humble Data tutorial, a session chair, or a room manager in a tutorial or talk.
While pretending to be just an “On-site volunteer” or a member of the “Operations team”, over time he has not only taken over the organisation of the registration desk, but also developed and orchestrated the training of session chairs.
He also expanded the programme with an informal session for first-time conference attendees, which has been well received by attendees.
The EuroPython Society Board would like to congratulate and thank all Fellows for their tireless work towards our mission! If you want to send in your nomination, check out our Fellowship page and get in touch!
Many thanks,
EuroPython Society
https://www.europython-society.org/
November 24, 2025 05:58 PM UTC
Trey Hunner
Python Black Friday & Cyber Monday sales (2025)
It’s time for some discounted Python-related skill-building. This is my eighth annual compilation of Python learning-related Black Friday & Cyber Monday deals. If you find a Python-related deal in the next week that isn’t on this list, please contact me.
Python-related sales
It’s not even Black Friday yet, but most of these Python-related Black Friday sales are already live:
- Python Morsels: I’m offering lifetime access for the second time ever (more details below)
- Data School: a new subscription to access all of Kevin’s 7 courses plus all upcoming courses
- Talk Python: AI Python bundle, the Everything Bundle, and Michael’s Talk Python in Production
- Reuven Lerner: get 20% off your first year of the LernerPython+data tier (code BF2025)
- Brian Okken: get 50% off his pytest books and courses (code SAVE50)
- Rodrigo: get 50% off all his books including his all books bundle (code BF202550)
- Mike Driscoll: get 50% off all his Python books and courses (code BLACKISBACK)
- The Python Coding Place: get 50% off the all-course bundle of 12 courses
- Sundeep Agarwal: ~55% off Sundeep’s all books bundle with code FestiveOffer
- Nodeledge: get 20% off your first payment (code BF2025)
- O'Reilly Media: 40% off the first year with code CYBERWEEK25 ($299 instead of $499)
- Manning is offering 50% off from Nov 25 to Dec 1
- Wizard Zines: 50% off all of Julia Evans’ great zines on various tech topics that I personally find both fun and useful (not Python-related, but one of my favorite annual sales)… this is a one-day Black Friday exclusive sale
I will be keeping an eye on other potential sales and updating this post as I find them. If you’ve seen a sale that I haven’t, please contact me or comment below.
Django-related sales
Adam Johnson has also published a list of Django-related deals for Black Friday (which he’s been doing for a few years now).
More on my sale: Python Morsels Lifetime Access
Python Morsels is a hands-on, exercise-driven Python skill-building platform designed to help developers write cleaner, more idiomatic Python through real-world practice. This growing library of exercises, videos, and courses is aimed primarily at intermediate and professional developers. If you haven’t used Python Morsels, you can read about it in my sale announcement post.
This Black Friday / Cyber Monday, I’m offering lifetime access: all current and future content in a single payment. If you’d like to build confidence in your everyday Python skills, consider this sale. It’s about 50% cheaper than paying annually for five years.
Get lifetime access to Python Morsels
November 24, 2025 04:00 PM UTC
Nicola Iarocci
Flask started as an April Fool's joke
The story that the Python micro web framework Flask started as an April Fool’s joke is well known in Python circles, but it was nice to see it told by Armin Ronacher himself1.
I’m fond of Flask. It was a breath of fresh air when it came out, and most of my Python open-source work is based on it.
1. The video is produced by the people who also authored the remarkable Python: The Documentary. ↩︎
November 24, 2025 02:29 PM UTC
Real Python
How to Properly Indent Python Code
Knowing how to properly indent Python code is a key skill for becoming an accomplished Python developer. Beginning Python programmers know that indentation is required, but learning to indent code so it’s readable, syntactically correct, and easy to maintain is a skill that takes practice.
By the end of this tutorial, you’ll know:
- How to properly indent Python code in a variety of editors
- How your choice of editor can impact your Python code
- How to indent code using spaces in simple text editors
- How to use code formatters to properly indent Python code automatically
- Why indentation is required when writing Python code
With this knowledge, you’ll be able to write Python code confidently in any environment.
Get Your Code: Click here to download the free sample code you’ll use to learn how to properly indent Python code.
How to Indent Code in Python
Indenting your code means adding spaces to the beginning of a line, which shifts the start of the line to the right, as shown below:
number = 7
if number > 0:
print("It's a positive number")
In this example, the first two lines aren’t indented, while the third line is indented.
How you indent your Python code depends on your coding environment. Most editors and integrated development environments (IDEs) can indent Python code correctly with little to no input from the user. You’ll see examples of this in the sections that follow.
Python-Aware Editors
In most cases, you’ll be working in a Python-aware environment. This might be a full Python IDE such as PyCharm, a code editor like Visual Studio Code, the Python REPL, IPython, IDLE, or even a Jupyter notebook. All these environments understand Python syntax and indent your code properly as you type.
Note: PEP 8 is the style guide for Python that was first introduced in 2001. Among other recommendations, it specifies that code indentation should be four spaces per indentation level. All environments discussed here follow that standard.
Here’s a small example to show this automatic indentation. You’ll use the following code to see how each environment automatically indents as you type:
lucky_number.py
1 lucky_number = 7
2 for number in range(10):
3     if number == lucky_number:
4         print("Found the lucky number!")
5     else:
6         print(f"{number} is not my lucky number.")
7
8 print("Done.")
This example shows how indenting happens automatically and how to de-indent a line of code. De-indenting—also called dedenting—means removing spaces at the beginning of a line, which moves the start of the line to the left. The code on lines 5 and 8 needs to be de-indented relative to the previous lines to close the preceding code blocks. For a detailed explanation of the indentation in this code, expand the collapsible section below.
The code above shows different levels of indentation, combining a for loop with an if statement:
- Line 1 initializes the variable lucky_number to the integer value of 7.
- Line 2 starts a for loop using number as an iterator over a range of values from 0 to 9.
- Line 3 checks if number is equal to lucky_number. This line is indented to show it’s part of the for loop’s body.
- Line 4 prints a message when the condition number == lucky_number is True. This line is indented to show it’s part of the body of the if statement.
- Line 5 provides a way to act when the condition on line 3 is False. This line is de-indented to show it’s part of the if statement on line 3.
- Line 6 prints a message when the condition number == lucky_number is False. This line is indented to show it’s part of the body of the else clause.
- Line 8 prints a final message. It’s de-indented to show it’s not part of the for loop, but executes after the for loop is done.
If any part of this code is unfamiliar to you, you can learn more by exploring these resources:
- For more on for loops, read Python for Loops: The Pythonic Way.
- For more on range(), read Python range(): Represent Numerical Ranges.
- For more on if and else clauses, read Conditional Statements in Python.
- For more on the print() function, read Your Guide to the Python print() Function.
- For more on f-string formatting, read Python’s F-String for String Interpolation and Formatting.
Each line of the code above exists at a specific indentation level. All consecutive statements at the same indentation level are considered to be part of the same group or code block. The table below shows each line of code from the example, its indentation level, and what action is needed to achieve that level:
| Code | Indentation Level | Action |
|---|---|---|
| lucky_number = 7 | 0 | - |
| for number in range(10): | 0 | - |
| if number == lucky_number: | 1 | Indent |
| print("Found the lucky number!") | 2 | Indent |
| else: | 1 | De-indent |
| print(f"{number} is not my lucky number.") | 2 | Indent |
| print("Done.") | 0 | De-indent |
First, here’s what it looks like to enter this code in PyCharm, a full-featured Python IDE:
Notice that as you hit Enter on line 2, PyCharm immediately indents line 3 for you. The same thing happens on line 4. However, to de-indent line 5 and line 8, you have to either press Backspace or Shift+Tab to move the cursor back to the proper position for the next line.
Read the full article at https://realpython.com/how-to-indent-in-python/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
November 24, 2025 02:00 PM UTC
Python Software Foundation
Python is for Everyone: Grab PyCharm Pro for 30% off—plus a special bonus!
So far this year’s PSF fundraising campaign has been a truly inspiring demonstration of the Python community's generosity, strength, and solidarity. We set a special 🥧 themed goal of $314,159.26 (it’s the year of Python 3.14!), and with your support, we are already at 93% of that goal—WOW!! Thank you to every single person who has already donated: you have our deep gratitude, and we are committed to making every dollar count.
🚨 New target alert: If we hit our goal of $100Kπ, we are going to release a nice video AND we are going to set a new goal, as well as an additional super stretch goal. Can you chip in to get us there? We’re confident that with your contributions and support we can reach those new heights. Because Python is for everyone, thanks to you!
Today, we’re excited to share another way for you to participate AND get awesome benefits from JetBrains! We have the opportunity to once again partner with JetBrains to deliver a special promotion: 30% off PyCharm Pro with ALL proceeds going to the PSF. New this year: Folks who take advantage of this offer will also receive a free tier of AI Assistant in PyCharm! Read on to learn more about the PyCharm promotion, how to grab it while it lasts, and other ways you can contribute to the PSF’s 2025 end-of-year fundraiser. Huge thanks to JetBrains for stepping up to provide this awesome deal 🐍⚡️
LIMITED TIME! Grab PyCharm Pro at 30% off with a free tier of AI Assistant:
- Grab a discounted Python IDE: PyCharm! JetBrains is once again supporting the PSF by providing a 30% discount on PyCharm Pro and ALL proceeds will go to the PSF! Your subscription will include a free tier of AI Assistant in PyCharm. You can take advantage of this discount by clicking the button on the JetBrains promotion page, and the discount will be automatically applied when you check out. The promotion will only be available through December 12th, so go grab the deal today!
>>> Get PyCharm Pro! <<<
There are two ways to join our fundraiser through donate.python.org:
- Donate directly to the PSF! Your donation is a direct way to support and power the future of the Python programming language and community you love. Every donation makes a difference, and we work hard to make a little go a long way.
- Become a PSF Supporting Member! When you sign up as a Supporting Member of the PSF, you become a part of the PSF, are eligible to vote in PSF elections, and help us sustain our mission with your annual support. You can sign up as a Supporting Member at the usual annual rate ($99 USD), or you can take advantage of our sliding scale option (starting at $25 USD)!
>>> Donate or Become a Member Today! <<<
If you already donated, you’re already a PSF member, AND you already grabbed PyCharm at 30% off (look at you, you exemplary supporter!🏆) you can:
- Share the fundraiser with your regional and project-based communities: Share this blog post in your Python-related Discords, Slacks, social media accounts- wherever your Python community is! Keep an eye on our social media accounts and repost to share the latest stories and news for the campaign.
- Share your Python story with a call to action: We invite you to share your personal Python, PyCon, or PSF story. What impact has it made in your life, in your community, in your career? Share your story in a blog post or on your social media platform of choice and add a link to donate.python.org.
- Ask your employer to sponsor: If your company is using Python to build its products and services, check to see if they already sponsor the PSF on our Sponsors page. If not, reach out to your organization's internal decision-makers and impress on them just how important it is for us to power the future of Python together, and send them our sponsor prospectus.
Your donations and support:
- Keep Python thriving
- Support CPython and PyPI progress
- Increase security across the Python ecosystem
- Bring the global Python community together
- Make our community more diverse and robust every year
November 24, 2025 10:44 AM UTC
Python Bytes
#459 Inverted dependency trees
<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://discuss.python.org/t/pep-814-add-frozendict-built-in-type/104854?featured_on=pythonbytes">PEP 814 – Add frozendict built-in type</a></strong></li> <li><strong>From <a href="https://squidfunk.github.io/mkdocs-material/?featured_on=pythonbytes">Material for MkDocs</a> to <a href="https://zensical.org?featured_on=pythonbytes">Zensical</a></strong></li> <li><strong><a href="https://github.com/tach-org/tach?featured_on=pythonbytes">Tach</a></strong></li> <li><strong>Some Python Speedups in 3.15 and 3.16</strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><p><strong>About the show</strong></p> <p>Sponsored by us! Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a></li> </ul> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> (bsky)</li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually <strong>Monday</strong> at 10am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</p> <p><strong>Michael #0</strong>: <a href="https://talkpython.fm/black-friday?featured_on=pythonbytes">Black Friday is on at Talk Python</a></p> <ul> <li>What’s on offer: <ol> <li>An <a href="https://training.talkpython.fm/courses/bundle/black-friday-ai-2025?featured_on=pythonbytes">AI course mini bundle</a> (22% off)</li> <li>20% off our entire library via the <a href="https://training.talkpython.fm/courses/bundle/everything?featured_on=pythonbytes">Everything Bundle</a> (<a href="https://training.talkpython.fm/bundles-arent-subscriptions?featured_on=pythonbytes">what's that? 
;)</a> )</li> <li>The new <a href="https://mikeckennedy.gumroad.com/l/talk-python-in-production-book/black-friday-2025?featured_on=pythonbytes">Talk Python in Production book</a> (25% off)</li> </ol></li> </ul> <p>Brian: This is peer pressure in action</p> <ul> <li>20% off <a href="https://courses.pythontest.com/the-complete-pytest-course-bundle?featured_on=pythonbytes">The Complete pytest Course bundle</a> (use code BLACKFRIDAY) through November <ul> <li>or use save50 for 50% off, your choice.</li> </ul></li> <li><a href="https://pragprog.com/titles/bopytest2/python-testing-with-pytest-second-edition/?featured_on=pythonbytes">Python Testing with pytest, 2nd edition</a>, eBook (50% off with code save50) also through November <ul> <li>I would have picked 20%, but it’s a PragProg wide thing</li> </ul></li> </ul> <p><strong>Michael #1: <a href="https://discuss.python.org/t/pep-814-add-frozendict-built-in-type/104854?featured_on=pythonbytes">PEP 814 – Add frozendict built-in type</a></strong></p> <ul> <li>by Victor Stinner & Donghee Na</li> <li>A new public immutable type <code>frozendict</code> is added to the <code>builtins</code> module.</li> <li>We expect <code>frozendict</code> to be safe by design, as it prevents any unintended modifications. This addition benefits not only CPython’s standard library, but also third-party maintainers who can take advantage of a reliable, immutable dictionary type.</li> <li>To add to <a href="https://blobs.pythonbytes.fm/existing-frozen-typesstructures-in-python.html?cache_id=be7da5">existing frozen types in Python</a>.</li> </ul> <p><strong>Brian #2: From <a href="https://squidfunk.github.io/mkdocs-material/?featured_on=pythonbytes">Material for MkDocs</a> to <a href="https://zensical.org?featured_on=pythonbytes">Zensical</a></strong></p> <ul> <li>Suggested by John Hagen</li> <li>A lot of people, me included, use Material for MkDocs as our MkDocs theme for both personal and professional projects, and in-house docs.</li> <li>This plugin for MkDocs is <a href="https://github.com/squidfunk/mkdocs-material/issues/8523?featured_on=pythonbytes">now in maintenance mode</a></li> <li>The development team is switching to working on Zensical, a static site generator to overcome some technical limitations with MkDocs. 
There’s a series of posts about the transition and reasoning <ol> <li><a href="https://squidfunk.github.io/mkdocs-material/blog/2024/08/19/how-were-transforming-material-for-mkdocs/?featured_on=pythonbytes">Transforming Material for MkDocs</a></li> <li><a href="https://squidfunk.github.io/mkdocs-material/blog/2025/11/05/zensical/?featured_on=pythonbytes">Zensical – A modern static site generator built by the creators of Material for MkDocs</a></li> <li><a href="https://squidfunk.github.io/mkdocs-material/blog/2025/11/11/insiders-now-free-for-everyone/?featured_on=pythonbytes">Material for MkDocs Insiders – Now free for everyone</a></li> <li><a href="https://squidfunk.github.io/mkdocs-material/blog/2025/11/18/goodbye-github-discussions/?featured_on=pythonbytes">Goodbye, GitHub Discussions</a></li> </ol></li> <li>Material for MkDocs <ul> <li>still around, but in maintenance mode</li> <li>all insider features now available to everyone</li> </ul></li> <li>Zensical is / will be <ul> <li>compatible with Material for Mkdocs, can natively read mkdocs.yml, to assist with the transition</li> <li>Open Source, MIT license</li> <li>funded by an offering for professional users: Zensical Spark</li> </ul></li> </ul> <p><strong>Michael #3: <a href="https://github.com/tach-org/tach?featured_on=pythonbytes">Tach</a></strong></p> <ul> <li>Keep the streak: pip deps with uv + tach</li> <li>From Gerben Decker</li> <li>We needed some more control over linting our dependency structure, both internal and external.</li> <li>We use <code>tach</code> (which you covered before IIRC), but also some home built linting rules for our specific structure. These are extremely easy to build using an underused feature of <code>ruff</code>: "uv run ruff analyze graph --python python_exe_path .".</li> <li><a href="https://app.gauge.sh/show?uid=fee5a5ca-7f89-4f0a-bcf7-bca1f7c6cc8a&featured_on=pythonbytes">Example from an app</a> I’m working on (shhhhh not yet announced!)</li> </ul> <p><strong>Brian #4: Some Python Speedups in 3.15 and 3.16</strong></p> <ul> <li><a href="https://fidget-spinner.github.io/posts/faster-jit-plan.html?featured_on=pythonbytes"><strong>A Plan for 5-10%* Faster Free-Threaded JIT by Python 3.16</strong></a> <ul> <li>5% faster by 3.15 and 10% faster by 3.16</li> </ul></li> <li><a href="https://emmatyping.dev/decompression-is-up-to-30-faster-in-cpython-315.html?featured_on=pythonbytes">Decompression is up to 30% faster in CPython 3.15</a></li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li><a href="https://github.com/okken/lean-tdd-book?featured_on=pythonbytes">LeanTDD book issue tracker</a></li> </ul> <p>Michael:</p> <ul> <li>No. 4 for dependencies: <a href="https://www.linkedin.com/posts/bbelderbos_i-needed-to-see-which-packages-were-pulling-share-7398344644629016576-C-Q8?featured_on=pythonbytes">Inverted dep trees</a> from Bob Belderbos</li> </ul> <p><strong>Joke: <a href="https://x.com/creativedrewy/status/1990891118569869327?featured_on=pythonbytes">git pull inception</a></strong></p>
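On the PEP 814 item above: until a built-in frozendict lands, the closest standard-library tool today is types.MappingProxyType, which gives a read-only view of an existing dict (though not a hashable one, so it isn't a full frozendict replacement). A small illustration of the difference immutability makes (my example, not from the episode):

import types

settings = types.MappingProxyType({"retries": 3, "timeout": 30})
print(settings["retries"])  # 3

try:
    settings["retries"] = 5  # any mutation attempt fails
except TypeError as exc:
    print(exc)  # 'mappingproxy' object does not support item assignment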
November 24, 2025 08:00 AM UTC
Zato Blog
Automation in Python
Automation in Python
The latest article covers how to automate systems in Python, how the Zato Python integration platform differs from a network automation tool, and how to start using it, along with a couple of examples of integrations with Office 365 and Jira.
➤ Read it here: Systems Automation in Python.
More resources
➤ Python API integration tutorials
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
➤ Open-source iPaaS in Python
November 24, 2025 03:00 AM UTC
November 22, 2025
Daniel Roy Greenfeld
TIL: Default code block languages for mkdocs
When using Mkdocs with Material, you can set default languages for code blocks in your mkdocs.yml configuration file. This is particularly useful for inline code examples that may not have explicit language tags.
markdown_extensions:
  - pymdownx.highlight:
      default_lang: python
You can see what this looks like in practice with Air's API reference for forms here: feldroy.github.io/air/api/forms/. With this configuration, any code block without a specified language defaults to Python syntax highlighting, making documentation clearer and more consistent.
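For example (my own snippet, not from the post), a fenced code block left without a language tag, containing something like the following, would now be highlighted as Python by default:

def greet(name):
    return f"Hello, {name}!"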
November 22, 2025 12:08 PM UTC
Brett Cannon
Should I rewrite the Python Launcher for Unix in Python?
I want to be upfront that this blog post is for me to write down some thoughts that I have on the idea of rewriting the Python Launcher for Unix from Rust to pure Python. This blog post is not meant to explicitly be educational or enlightening for others, but I figured if I was going to write this down I might as well just toss it online in case someone happens to find it interesting. Anyway, with that caveat out of the way...
I started working on the Python Launcher for Unix in May 2018. At the time I used it as my Rust starter project and I figured distributing it would be easiest as a single binary since if I wrote it in Python how do you bootstrap yourself in launching Python with Python? But in the intervening 7.5 years, a few things have happened:
- BeeWare&aposs Briefcase has grown in sophistication
- Python-build-standalone has helped show that Python can be rebuilt in a relocatable way for easy distribution
- I have started the work to get relocatable, prebuilt binaries hosted on python.org
- I have plans for the Python Launcher for Unix (a normal person might say "big" plans, but I think people now realize my "normal" isn't normal 😅)
- I became a dad (that will make more sense as to why that matters later in this post)
All of this has come together for me to realize now is the time to reevaluate whether I want to stick with Rust or pivot to using pure Python.
Performance
The first question I need to answer for myself is whether performance is good enough to switch. My hypothesis is that the Python Launcher for Unix is mostly I/O-bound (specifically around file system access), and so using Python wouldn't be a hindrance. To test this, I re-implemented enough of the Python Launcher for Unix in pure Python to make py --version work:
- $VIRTUAL_ENV environment variable support
- Detection of .venv in the current or parent directories
- Searching $PATH for the newest version of Python
It only took 72 lines, so it was a quick hack. I compared the Rust version to the Python version on my machine running Fedora 43 by running hyperfine "py --version". If I give Rust an optimistic number by picking its average lower-bound and Python a handicap of picking its average upper-bound, we get:
- 3 ms for Rust (333 Hz)
- 33 ms for Python (30 Hz)
So 11x slower for Python. But when the absolute performance is fast enough to let you run the Python Launcher for Unix over 30 times a second, does it actually matter? And you're not about to run the Python Launcher for Unix in some tight loop or even in production (as it's a developer tool), so I don't think that worst-case performance number (on my machine) makes performance a concern in making my decision.
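To make the I/O-bound point concrete, here is a rough sketch of the kind of lookup such a launcher performs (my own illustration, not Brett's 72-line prototype, and simplified from the real resolution rules): check $VIRTUAL_ENV, look for a .venv in the current or parent directories, then scan $PATH for the newest pythonX.Y executable.

import os
import re
from pathlib import Path

def find_python():
    # 1. An activated virtual environment wins outright.
    if venv := os.environ.get("VIRTUAL_ENV"):
        return Path(venv) / "bin" / "python"
    # 2. A .venv directory in the current directory or any parent.
    for directory in (Path.cwd(), *Path.cwd().parents):
        candidate = directory / ".venv" / "bin" / "python"
        if candidate.exists():
            return candidate
    # 3. Otherwise, the newest pythonX.Y executable found on $PATH.
    pattern = re.compile(r"python(\d+)\.(\d+)")
    best = None
    for path_dir in os.environ.get("PATH", "").split(os.pathsep):
        directory = Path(path_dir)
        if not directory.is_dir():
            continue
        for entry in directory.glob("python*.*"):
            match = pattern.fullmatch(entry.name)
            if match and os.access(entry, os.X_OK):
                version = (int(match[1]), int(match[2]))
                if best is None or version > best[0]:
                    best = (version, entry)
    return best[1] if best else None

print(find_python())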
Distribution
Right now, you can get the Python Launcher for Unix via:
- crates.io
- GitHub Releases as tarballs of a single binary, manpage, license file, readme, and Fish shell completions
- Various package managers (e.g. Homebrew, Fedora, and Nix)
If I rewrote the Python Launcher for Unix in Python, could I get equivalent distribution channels? Substituting crates.io for PyPI makes that one easy. The various package managers also know how to package Python applications already, so they would take care of the bootstrapping problem of getting Python onto your machine to run the Python Launcher for Unix.
So that leaves what I distribute myself via GitHub Releases. After lamenting on Mastodon that I wished there was an easy, turn-key solution to getting pure Python code and bundling it with a prebuilt Python binary, the conversation made me realize that Briefcase should actually get me what I'm after.
Add in the fact that I'm working towards prebuilt binaries for python.org and it wouldn't even necessarily be an impediment if the Python Launcher for Unix were ever to be distributed via python.org as well. I could imagine some shell script to download Python and then use it to run a Python script to get the Python Launcher for Unix installed on one's machine (if relative paths for shebangs were relative to the script being executed then I could see just shipping an internal copy of Python with the Python Launcher for Unix, but a quick search online suggests such relative paths are relative to the working directory). So I don't see using Python as being a detriment to distribution.
Maximizing the impact of my time
I am a dad to a toddler. That means my spare time is negligible and restricted to nap time (which is shrinking), or to the evening (during which I can't code past 21:00, else I have really wonky dreams or I simply can't fall asleep due to my brain not shutting off). Now I know I should eventually get some spare time back, but that's currently measured in years according to other parents, and so this time restriction on working on this fun project is not about to improve in the near to mid-future.
This has led me, as of late, to look at how best to use my spare time. I could continue to grow my Rust experience while solving problems, or I could lean into my Python experience and solve more problems in the same amount of time. This somewhat matters if I decide that increasing the functionality of the Python Launcher for Unix is more fun for me than getting more Rust experience at this current point of my life.
And if I think the feature set is the most important, then doing it in Python has a greater chance of getting external contribution from the Python Launcher for Unix's user base. Compare that to now, where there have been 11 human contributors over the project's entire lifetime.
Conclusion?
So have I talked myself into rewriting the Python Launcher for Unix into Python?
November 22, 2025 12:18 AM UTC
Bruno Ponne / Coding The Past
Data Science Quiz For Humanities
Test your skills with this interactive data science quiz covering statistics, Python, R, and data analysis.
November 22, 2025 12:00 AM UTC
Stéphane Wirtel
Claude Code: How an AI Assistant Saved Me Days of Development
TL;DR
After a week of intensive use of Claude Code1 during PyCon Ireland and on my personal projects, I am completely blown away by the productivity gains. The tool allowed me to automatically migrate the Python Ireland site from Django 5.0 to 5.2 and Wagtail 6.2 to 7.2, to build a scanned-book conversion tool in 5 minutes, and to generate complete documentation in a few minutes. Unlike Cursor or Windsurf, Claude Code integrates everywhere (PyCharm, VS Code, Zed, Neovim, the command line), which makes it a real game changer for professional developers.
November 22, 2025 12:00 AM UTC
Armin Ronacher
LLM APIs are a Synchronization Problem
The more I work with large language models through provider-exposed APIs, the more I feel like we have built ourselves into quite an unfortunate API surface area. It might not actually be the right abstraction for what’s happening under the hood. The way I like to think about this problem now is that it’s actually a distributed state synchronization problem.
At its core, a large language model takes text, tokenizes it into numbers, and feeds those tokens through a stack of matrix multiplications and attention layers on the GPU. Using a large set of fixed weights, it produces activations and predicts the next token. If it weren’t for temperature (randomization), you could think of it having the potential of being a much more deterministic system, at least in principle.
As far as the core model is concerned, there’s no magical distinction between “user text” and “assistant text”—everything is just tokens. The only difference comes from special tokens and formatting that encode roles (system, user, assistant, tool), injected into the stream via the prompt template. You can look at the system prompt templates on Ollama for the different models to get an idea.
The Basic Agent State
Let’s ignore for a second which APIs already exist and just think about what usually happens in an agentic system. If I were to have my LLM run locally on the same machine, there is still state to be maintained, but that state is very local to me. You’d maintain the conversation history as tokens in RAM, and the model would keep a derived “working state” on the GPU — mainly the attention key/value cache built from those tokens. The weights themselves stay fixed; what changes per step are the activations and the KV cache.
One further clarification: when I talk about state I don’t just mean the visible token history because the model also carries an internal working state that isn’t captured by simply re-sending tokens. In other words: you can replay the tokens and regain the text content, but you won’t restore the exact derived state the model had built.
From a mental-model perspective, caching means “remember the computation you already did for a given prefix so you don’t have to redo it.” Internally, that usually means storing the attention KV cache for those prefix tokens on the server and letting you reuse it, not literally handing you raw GPU state.
There are probably some subtleties to this that I’m missing, but I think this is a pretty good model to think about it.
The Completion API
The moment you’re working with completion-style APIs such as OpenAI’s or Anthropic’s, abstractions are put in place that make things a little different from this very simple system. The first difference is that you’re not actually sending raw tokens around. The way the GPU looks at the conversation history and the way you look at it are on fundamentally different levels of abstraction. While you could count and manipulate tokens on one side of the equation, extra tokens are being injected into the stream that you can’t see. Some of those tokens come from converting the JSON message representation into the underlying input tokens fed into the machine. But you also have things like tool definitions, which are injected into the conversation in proprietary ways. Then there’s out-of-band information such as cache points.
And beyond that, there are tokens you will never see. For instance, with reasoning models you often don’t see any real reasoning tokens, because some LLM providers try to hide as much as possible so that you can’t retrain your own models with their reasoning state. On the other hand, they might give you some other informational text so that you have something to show to the user. Model providers also love to hide search results and how those results were injected into the token stream. Instead, you only get an encrypted blob back that you need to send back to continue the conversation. All of a sudden, you need to take some information on your side and funnel it back to the server so that state can be reconciled on either end.
In completion-style APIs, each new turn requires resending the entire prompt history. The size of each individual request grows linearly with the number of turns, but the cumulative amount of data sent over a long conversation grows quadratically because each linear-sized history is retransmitted at every step. This is one of the reasons long chat sessions feel increasingly expensive. On the server, the model’s attention cost over that sequence also grows quadratically in sequence length, which is why caching starts to matter.
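As a schematic sketch of that resend pattern (my own illustration; complete() here is a hypothetical stand-in, not any particular provider's SDK call), each turn retransmits the full visible history, so the cumulative number of messages sent over k turns grows quadratically with k:

def chat_loop(complete, user_turns):
    # `complete` stands in for a completion-style API call that takes the
    # entire visible message list and returns the assistant's reply.
    history = []
    messages_sent = 0
    for user_text in user_turns:
        history.append({"role": "user", "content": user_text})
        messages_sent += len(history)   # the whole prefix is retransmitted
        reply = complete(history)       # server re-processes the full history
        history.append({"role": "assistant", "content": reply})
    return messages_sent

# With a dummy backend, 10 turns already retransmit 100 messages in total.
print(chat_loop(lambda history: "ok", [f"turn {i}" for i in range(10)]))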
The Responses API
One of the ways OpenAI tried to address this problem was to introduce the Responses API, which maintains the conversational history on the server (at least in the version with the saved state flag). But now you’re in a bizarre situation where you’re fully dealing with state synchronization: there’s hidden state on the server and state on your side, but the API gives you very limited synchronization capabilities. To this point, it remains unclear to me how long you can actually continue that conversation. It’s also unclear what happens if there is state divergence or corruption. I’ve seen the Responses API get stuck in ways where I couldn’t recover it. It’s also unclear what happens if there’s a network partition, or if one side got the state update but the other didn’t. The Responses API with saved state is quite a bit harder to use, at least as it’s currently exposed.
Obviously, for OpenAI it’s great because it allows them to hide more behind-the-scenes state that would otherwise have to be funneled through with every conversation message.
State Sync API
Regardless of whether you’re using a completion-style API or the Responses API, the provider always has to inject additional context behind the scenes—prompt templates, role markers, system/tool definitions, sometimes even provider-side tool outputs—that never appears in your visible message list. Different providers handle this hidden context in different ways, and there’s no common standard for how it’s represented or synchronized. The underlying reality is much simpler than the message-based abstractions make it look: if you run an open-weights model yourself, you can drive it directly with token sequences and design APIs that are far cleaner than the JSON-message interfaces we’ve standardized around. The complexity gets even worse when you go through intermediaries like OpenRouter or SDKs like the Vercel AI SDK, which try to mask provider-specific differences but can’t fully unify the hidden state each provider maintains. In practice, the hardest part of unifying LLM APIs isn’t the user-visible messages—it’s that each provider manages its own partially hidden state in incompatible ways.
It really comes down to how you pass this hidden state around in one form or another. I understand that from a model provider’s perspective, it’s nice to be able to hide things from the user. But synchronizing hidden state is tricky, and none of these APIs have been built with that mindset, as far as I can tell. Maybe it’s time to start thinking about what a state synchronization API would look like, rather than a message-based API.
The more I work with these agents, the more I feel like I don’t actually need a unified message API. The core idea of it being message-based in its current form is itself an abstraction that might not survive the passage of time.
Learn From Local First?
There’s a whole ecosystem that has dealt with this kind of mess before: the local-first movement. Those folks spent a decade figuring out how to synchronize distributed state across clients and servers that don’t trust each other, drop offline, fork, merge, and heal. Peer-to-peer sync, and conflict-free replicated storage engines all exist because “shared state but with gaps and divergence” is a hard problem that nobody could solve with naive message passing. Their architectures explicitly separate canonical state, derived state, and transport mechanics — exactly the kind of separation missing from most LLM APIs today.
Some of those ideas map surprisingly well to models: KV caches resemble derived state that could be checkpointed and resumed; prompt history is effectively an append-only log that could be synced incrementally instead of resent wholesale; provider-side invisible context behaves like a replicated document with hidden fields.
At the same time though, if the remote state gets wiped because the remote site doesn’t want to hold it for that long, we would want to be in a situation where we can replay it entirely from scratch — which for instance the Responses API today does not allow.
Future Unified APIs
There’s been plenty of talk about unifying message-based APIs, especially in the wake of MCP (Model Context Protocol). But if we ever standardize anything, it should start from how these models actually behave, not from the surface conventions we’ve inherited. A good standard would acknowledge hidden state, synchronization boundaries, replay semantics, and failure modes — because those are real issues. There is always the risk that we rush to formalize the current abstractions and lock in their weaknesses and faults. I don’t know what the right abstraction looks like, but I’m increasingly doubtful that the status-quo solutions are the right fit.
November 22, 2025 12:00 AM UTC
November 21, 2025
Trey Hunner
Python Morsels Lifetime Access Sale
If you code in Python regularly, you’re already learning new things every day.
You hit a wall, or something breaks. Then you search around, spend some hours on Stack Overflow, and eventually, you figure it out.
But this kind of learning is unstructured. It’s reactive, instead of intentional.
You fix the problem at hand, but the underlying gaps in your knowledge remain unaddressed.
A more structured way to improve your Python skills
Python Morsels gives you a structured, hands-on way to improve your Python skills through weekly practice:
- Notice significant progress with as little as 30 minutes a week
- Learn to naturally think more Pythonically
- Explore new approaches to problem-solving
- Challenge yourself to get outside your comfort zone regularly
Python Morsels is a subscription service because I’m adding new learning resources almost every week.
But through December 1st, you can get lifetime access for a one-time payment.
How it works
When you sign up for Python Morsels, you’ll choose your current Python skill level, from novice to advanced.
Based on your skill level, each Monday I’ll send you a personalized routine with:
- a short screencast to watch (or read)
- a multi-part exercise to move you outside your comfort zone
- a mini exercise that you can accomplish in just 10 minutes
- links to dive deeper into subsequent screencasts and exercises
Think of Python Morsels as a gym for your Python skills: you come in for quick training sessions, put in the reps, and make a little more progress each time.
All these resources are accessible to you forever, and with lifetime access you’ll never pay another subscription fee.
What Python Morsels includes
Python Morsels has grown a lot over the past 8 years. Currently, Python Morsels has:
- 235+ Video Lessons (or screencasts, as I call them)
- 262+ Hands-On Exercises (with solution walkthroughs)
- 500+ Optional Bonuses (to challenge every skill level)
- 303+ Articles Organized Topic-Wise (if you prefer reading)
I’ll be sending you personalized recommendations every week, but you can use these resources however they fit your routine: as learning guides, hands-on practice sessions, quick cheatsheets, long-term reference material, or quick Python workouts.
In addition to this, Python Morsels also gives you access to:
- Python Jumpstart - a structured course for beginners
- Additional Deep-Dive Courses - structured tracks to master a concept
- Exercise Paths - topic-based exercises to strengthen a specific skill
Because Python Morsels runs as an active subscription service, I’m always adding new screencasts, new exercises, and updated material on a weekly or monthly cycle. I also keep everything up-to-date with each new Python release, incorporating newly added features and retiring end-of-life’d Python versions.
Lock in lifetime access
Python Morsels usually costs $240/year but you can get lifetime access through December 1st for a one-time payment. I’ve only offered lifetime access once before in 8 years.
If you’ve been on the fence about subscribing to Python Morsels or want to invest in building a daily learning habit, this is a good time to do it.
Get lifetime access to Python Morsels
If you have questions about the sale, please comment below or email me.
November 21, 2025 10:42 PM UTC
Tryton News
Security Release for issue #14366
Cédric Krier has found that trytond does not enforce access rights for data export (since version 6.0).
Impact
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Confidentiality: High
- Integrity: None
- Availability: None
Workaround
There is no workaround.
Resolution
All affected users should upgrade trytond to the latest version.
Affected versions per series:
trytond:
- 7.6: <= 7.6.10
- 7.4: <= 7.4.20
- 7.0: <= 7.0.39
- 6.0: <= 6.0.69
Non-affected versions per series:
trytond:
- 7.6: >= 7.6.11
- 7.4: >= 7.4.21
- 7.0: >= 7.0.40
- 6.0: >= 6.0.70
Reference
Concerns?
Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/ with the confidential checkbox checked.
November 21, 2025 03:00 PM UTC
Security Release for issue #14363
Abdulfatah Abdillahi has found that sao does not escape completion values. The completion content is generally the record name, which may be edited in many ways depending on the model, so it may include JavaScript that is executed in the same context as sao, giving access to sensitive data such as the session.
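This is a stored cross-site scripting issue: user-editable text reaches the page without escaping. As a hedged, generic illustration in Python (sao itself is JavaScript, and this is not its actual code), the standard fix is to escape any user-controlled value before interpolating it into markup; render_completion_item below is a hypothetical helper:

    import html

    def render_completion_item(record_name: str) -> str:
        # The record name is user-editable and therefore untrusted: it
        # may contain markup such as <script> tags. Escaping it before
        # interpolation keeps the browser from executing it as code.
        return "<li>{}</li>".format(html.escape(record_name))

    # A record name containing a script tag is rendered inert:
    print(render_completion_item("<script>steal(session)</script>"))
    # -> <li>&lt;script&gt;steal(session)&lt;/script&gt;</li>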
Impact
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Confidentiality: High
- Integrity: High
- Availability: None
Workaround
There is no general workaround.
Resolution
All affected users should upgrade sao to the latest version.
Affected versions per series:
sao:
- 7.6: <= 7.6.10
- 7.4: <= 7.4.20
- 7.0: <= 7.0.39
- 6.0: <= 6.0.68
Non-affected versions per series:
sao:
- 7.6: >= 7.6.11
- 7.4: >= 7.4.21
- 7.0: >= 7.0.40
- 6.0: >= 6.0.69
Reference
Concerns?
Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/ with the confidential checkbox checked.
November 21, 2025 03:00 PM UTC
Security Release for issue #14364
Mahdi Afshar has found that trytond does not enforce access rights for the route of the HTML editor (since version 6.0).
Impact
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Confidentiality: High
- Integrity: Low
- Availability: None
Workaround
A possible workaround is to block access to the HTML editor.
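Since trytond serves its routes over WSGI, one hedged way to apply that workaround without patching trytond is to deny the editor route in a small wrapper around the application (or, equivalently, at a reverse proxy). EDITOR_PREFIX and trytond_wsgi_app below are placeholders, not trytond's real route or object name:

    # Hedged sketch: refuse requests to a given path prefix at the WSGI
    # layer. EDITOR_PREFIX is a placeholder, not trytond's real route.
    EDITOR_PREFIX = "/html-editor"

    def block_path(app, prefix=EDITOR_PREFIX):
        def middleware(environ, start_response):
            if environ.get("PATH_INFO", "").startswith(prefix):
                start_response("403 Forbidden",
                               [("Content-Type", "text/plain")])
                return [b"Forbidden"]
            return app(environ, start_response)
        return middleware

    # application = block_path(trytond_wsgi_app)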
Resolution
All affected users should upgrade trytond to the latest version.
Affected versions per series:
trytond:
- 7.6: <= 7.6.10
- 7.4: <= 7.4.20
- 7.0: <= 7.0.39
- 6.0: <= 6.0.69
Non-affected versions per series:
trytond:
- 7.6: >= 7.6.11
- 7.4: >= 7.4.21
- 7.0: >= 7.0.40
- 6.0: >= 6.0.70
Reference
Concerns?
Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/ with the confidential checkbox checked.
November 21, 2025 03:00 PM UTC
Security Release for issue #14354
Mahdi Afshar and Abdulfatah Abdillahi have found that trytond sends the traceback to clients for unexpected errors. This traceback may leak information about the server setup.
Impact
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Confidentiality: Low
- Integrity: None
- Availability: None
Workaround
A possible workaround is to configure an error handler that removes the traceback from the response.
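As a hedged sketch of that idea (not trytond's actual error-handler configuration), a generic WSGI wrapper can catch unhandled exceptions, keep the full traceback in the server log, and return a bare 500 so no traceback text reaches the client; trytond_wsgi_app is a placeholder name:

    import logging

    logger = logging.getLogger(__name__)

    def hide_tracebacks(app):
        def middleware(environ, start_response):
            try:
                return app(environ, start_response)
            except Exception:
                # Log the full traceback server-side only.
                logger.exception("Unhandled error")
                start_response("500 Internal Server Error",
                               [("Content-Type", "text/plain")])
                return [b"Internal Server Error"]
        return middleware

    # application = hide_tracebacks(trytond_wsgi_app)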
Resolution
All affected users should upgrade trytond to the latest version.
Affected versions per series:
trytond:
- 7.6: <= 7.6.10
- 7.4: <= 7.4.20
- 7.0: <= 7.0.39
- 6.0: <= 6.0.69
Non-affected versions per series:
trytond:
- 7.6: >= 7.6.11
- 7.4: >= 7.4.21
- 7.0: >= 7.0.40
- 6.0: >= 6.0.70
Reference
Concerns?
Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/ with the confidential checkbox checked.
November 21, 2025 03:00 PM UTC
Django Weblog
DSF member of the month - Akio Ogasahara
For November 2025, we welcome Akio Ogasahara as our DSF member of the month! ⭐
Akio is a technical writer and systems engineer. He has contributed to the Japanese translation of the Django documentation for many years and has been a DSF member since June 2025. You can learn more about Akio by visiting his X account and his GitHub profile.
Let’s spend some time getting to know Akio better!
Can you tell us a little about yourself (hobbies, education, etc.)?
I was born in 1986 in Rochester, Minnesota, to Japanese parents, and I’ve lived in Japan since I was one. I’ve been fascinated by machines for as long as I can remember. I hold a master’s degree in mechanical engineering. I’ve worked as a technical writer and a software PM, and I’m currently in QA at a Japanese manufacturer.
I'm curious, where does your nickname “libratech” come from?
I often used “Libra” as a handle because the symbol of Libra—a balanced scale—reflects a value I care deeply about: fairness in judgment. I combined that with “tech,” from “tech writer,” to create “libratech.”
How did you start using Django?
Over ten years ago, I joined a hands-on workshop using a Raspberry Pi to visualize sensor data, and we built the dashboard with Django. That was my first real experience.
What other frameworks do you know, and is there anything you would like to have in Django if you had magical powers?
I’ve used Flask and FastAPI. If I could wish for anything, I’d love “one-click” deployment that turns a Django project into an ultra-lightweight app running on Cloudflare Workers.
What projects are you working on now?
As a QA engineer, I’m building Pandas pipelines for quality-data cleansing and creating BI dashboards.
What are you learning about these days?
I’m studying for two Japanese certifications: the Database Specialist exam and the Quality Control Examination (QC Kentei).
Which Django libraries are your favorite (core or 3rd party)?
Django admin, without question. In real operations, websites aren’t run only by programmers—most teams eventually need CRM-like capabilities. Django admin maps beautifully to that practical reality.
What are the top three things in Django that you like?
- Django admin
- Strong security
- DRY by design
You have contributed a lot to the Japanese documentation. What made you start translating Django into Japanese in the first place?
I went through several joint surgeries and suddenly had a lot of time. I’d always wanted to contribute to open source, but I knew my coding skills weren’t my strongest asset. I did, however, have years of experience writing manuals—so translation felt like a meaningful way to help.
Do you have any advice for people who might be hesitant to contribute to the translation of the Django documentation?
Translation has fewer strict rules than code contributions, and you can start simply by creating a Transifex account. If a passage feels unclear, improve it! And if you have questions, the Django-ja translation team is happy to help on our Discord.
I know you have some interest in AI as a technical writer. Do you have any ideas on how Django could evolve with AI?
Today’s AI is excellent at working with existing code—spotting N+1 queries or refactoring SQL without changing behavior. But code written entirely by AI often has weak security. That’s why solid unit tests and Django’s strong security guardrails will remain essential: they let us harness AI’s creativity safely.
Django is celebrating its 20th anniversary. Do you have a nice story to share?
The surgeries were tough, but they led me to documentation translation, which reconnected me with both English and Django. I’m grateful for that path.
What are your hobbies or what do you do when you’re not working?
Outside of computers, I enjoy playing drums in a band and watching musicals and stage plays! 🎵
Is there anything else you’d like to say?
If you ever visit Japan, of course sushi and ramen are great—but don’t miss the sweets and ice creams you can find at local supermarkets and convenience stores! They’re inexpensive, come in countless varieties, and I’m sure you’ll discover a new favorite!🍦
Thank you for doing the interview, Akio!
