
Planet Python

Last update: October 08, 2025 01:44 AM UTC

October 07, 2025


PyCoder’s Weekly

Issue #703: PEP 8, Error Messages in Python 3.14, splitlines(), and More (Oct. 7, 2025)

#703 – OCTOBER 7, 2025
View in Browser »

The PyCoder’s Weekly Logo


Python Violates PEP 8

PEP 8 outlines the preferred coding style for Python. It often gets wielded as a cudgel in online conversations. This post talks about what PEP 8 says and where it often gets ignored.
AL SWEIGART

Python 3.14 Preview: Better Syntax Error Messages

Python 3.14 includes ten improvements to error messages, which help you catch common coding mistakes and point you in the right direction.
REAL PYTHON

Quiz: Python 3.14 Preview: Better Syntax Error Messages

REAL PYTHON

Free Course: Build a Durable AI Agent with Temporal and Python


Curious about how to build an AI agent that actually works in production? This free hands-on course shows you how with Python and Temporal. Learn to orchestrate workflows, recover from failures, and deliver a durable chatbot agent that books trips and generates invoices. Explore Tutorial →
TEMPORAL sponsor

Why splitlines() Instead of split("\n")?

To split text into lines in Python you should use the splitlines() method, not the split() method, and this post shows you why.
TREY HUNNER

PEP 810: Explicit Lazy Imports (Added)

PYTHON.ORG

PEP 809: Stable ABI for the Future (Added)

PYTHON.ORG

PEP 807: Index Support for Trusted Publishing (Added)

PYTHON.ORG

PyOhio 2025 Videos Online

YOUTUBE.COM

Django Security Releases Issued: 5.2.7, 5.1.13, and 4.2.25

DJANGO SOFTWARE FOUNDATION

Python Jobs

Senior Python Developer (Houston, TX, USA)

Technoidentity

More Python Jobs >>>

Articles & Tutorials

Advice on Beginning to Learn Python

What’s changed about learning Python over the last few years? What new techniques and updated advice should beginners have as they start their journey? This week on the show, Stephen Gruppetta and Martin Breuss return to discuss beginning to learn Python.
REAL PYTHON podcast

Winning a Bet About six

In 2020, Seth Larson and Andrey Petrov made a bet about whether six, the Python 2 compatibility shim, would still be in the top 20 PyPI downloads. Seth won, but probably only because of a single library still using it.
SETH LARSON

Show Off Your Python Chops: Win the 2025 Table & Plotnine Contests

Showcase your Python data skills! Submit your best Plotnine charts and table summaries to the 2025 Contests. Win swag, boost your portfolio, and get recognized by the community. Deadline: Oct 17, 2025. Submit now!
POSIT sponsor

Durable Python Execution With Temporal

Talk Python interviews Mason Egger to discuss Temporal, a durable execution platform that enables developers to build scalable applications without sacrificing productivity or reliability.
KENNEDY & EGGER podcast

Astral’s ty: A New Blazing-Fast Type Checker for Python

Learn to use ty, an ultra-fast Python type checker written in Rust. Get setup instructions, run type checks, and fine-tune custom rules in personal projects.
REAL PYTHON

Quiz: Astral’s ty Type Checker for Python

REAL PYTHON

What Is “Good Taste” in Software Engineering?

This opinion piece talks about the difference between skill and taste when writing software. What “clean code” means to one may not be the same as to others.
SEAN GOEDECKE

Modern Python Linting With Ruff

Ruff is a blazing-fast, modern Python linter with a simple interface that can replace Pylint, isort, and Black—and it’s rapidly becoming popular.
REAL PYTHON course

Quiz: Modern Python Linting With Ruff

REAL PYTHON

Introducing tdom: HTML Templating With t‑strings

Python 3.14 introduces t-strings, and this article shows you tdom, a new HTML DOM toolkit that takes advantage of them to produce safer output.
DAVE PECK

Full Text Search With Django and SQLite

A walkthrough of how to build full-text search to power the search functionality of a blog using Django and SQLite.
TIMO ZIMMERMANN

Projects & Code

subprocesslib: Like pathlib for the subprocess Module

PYPI.ORG • Shared by Antoine Cezar

Python Implementation of the Cap’n Web Protocol

GITHUB.COM/ABILIAN • Shared by Stefane Fermigier

air: New Python Web Framework

GITHUB.COM/FELDROY

fastapi-radar: Debugging Dashboard for FastAPI Apps

GITHUB.COM/DOGANARIF

Try Sphinx Docs Instantly in Your Browser

DOCUMATT.COM • Shared by Libor Jelínek

Events

Weekly Real Python Office Hours Q&A (Virtual)

October 8, 2025
REALPYTHON.COM

PyCon Africa 2025

October 8 to October 13, 2025
PYCON.ORG

Wagtail Space 2025

October 8 to October 11, 2025
ZOOM.US

PyCon Hong Kong 2025

October 11 to October 13, 2025
PYCON.HK

PyCon NL 2025

October 16 to October 17, 2025
PYCON-NL.ORG

PyCon Thailand 2025

October 17 to October 19, 2025
PYCON.ORG

PyCon Finland 2025

October 17 to October 18, 2025
PLONECONF.ORG

PyConES 2025

October 17 to October 20, 2025
PYCON.ORG


Happy Pythoning!
This was PyCoder’s Weekly Issue #703.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

October 07, 2025 07:30 PM UTC


Python Morsels

Python 3.14's best new features

Python 3.14 includes syntax highlighting, improved error messages, enhanced support for concurrency and parallelism, t-strings, and more!

Table of contents

  1. Very important but not my favorites
  2. Python 3.14: now in color!
  3. My tiny contribution
  4. Beginner-friendly error messages
  5. Tab completion for import statements
  6. Standard library improvements
  7. Cleaner multi-exception catching
  8. Concurrency improvements
  9. External debugger interface
  10. T-strings (template strings)
  11. Try out Python 3.14 yourself

Very important but not my favorites

I'm not going to talk about the experimental free-threading mode, the just-in-time compiler, or other performance improvements. I'm going to focus on features that you can use right after you upgrade.

Python 3.14: now in color!

One of the most immediately …

Read the full article: https://www.pythonmorsels.com/python314/

October 07, 2025 04:08 PM UTC


Real Python

What's New in Python 3.14

Python 3.14 was published on October 7, 2025. While many of its biggest changes happen under the hood, there are practical improvements you’ll notice right away. This version sharpens the language’s tools, boosts ergonomics, and opens doors to new capabilities without forcing you to rewrite everything.

In this video course, you’ll explore features like:


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

October 07, 2025 02:00 PM UTC


Seth Michael Larson

Is the "Nintendo Classics" collection a good value?

Nintendo Classics is a collection of hundreds of retro video games from Nintendo (and Sega) consoles from the NES to the GameCube. Nintendo Classics is included with the Nintendo Switch Online (NSO) subscription, which starts at $20/year (~$1.66/month) for individual users.

Looking at the prices of retro games these days, this seems like an incredible value for players who want to play these games. This post shares a dataset I've curated about Nintendo Classics games and maps their value to the actual physical prices of the same games, with some interesting queries.

For example, here's a graph showing the total value (in $USD) of Nintendo Classics over time:

The dataset was generated from the tables provided on Wikipedia (CC-BY-SA). It doesn't contain pricing information, only links to the corresponding Pricecharting pages. This page shares only approximate aggregate price information, not prices of individual games. It will be updated automatically as Nintendo announces more games coming to Nintendo Classics. This page was last updated October 7th, 2025.

How many games and value per platform?

There are 8 unique platforms on Nintendo Classics, each with its own collection of games. The table below includes the value of both added and announced-but-not-added games. You can see that the total value of the games in Nintendo Classics would be many thousands of dollars if genuine physical copies were purchased instead. Here's a graph showing the total value of each platform changing over time:

And here's the data for all published and announced games as a table:

Platform Games Total Value Value per Game
NES 91 $1980 $21
SNES 83 $3600 $43
Game Boy (GB/GBC) 41 $1615 $39
Nintendo 64 (N64) 42 $1130 $26
Sega Genesis 51 $2910 $57
Game Boy Advance (GBA) 30 $930 $31
GameCube 9 $640 $71
Virtual Boy 14 $2580 $184
All Platforms 361 $15385 $42

View SQL query

SELECT platform, COUNT(*), SUM(price), SUM(price)/COUNT(*)
FROM games
GROUP BY platform;

How much value is in each Nintendo Classics tier?

There are multiple "tiers" of Nintendo Classics, each with a different up-front price (for the console itself) and an ongoing price for the Nintendo Switch Online (NSO) subscription.

Certain collections require specific hardware, such as the Virtual Boy collection requiring either the recreation ($100) or cardboard ($30) Virtual Boy headset, and the GameCube collection requiring a Switch 2 ($450). All other collections work just fine with a Switch Lite ($100). All platforms beyond NES, SNES, Game Boy, and Game Boy Color require NSO + Expansion Pack.

Platforms Requires Price Games Games Value
NES, SNES, GB, GBC Switch Lite & NSO * $100 + $20/Yr 215 $7195
+N64, Genesis, GBA Switch Lite & NSO+EP $100 + $50/Yr 338 $12165
+Virtual Boy Switch Lite, NSO+EP, & VB $130 + $50/Yr 352 $14745
+GameCube Switch 2 & NSO+EP $450 + $50/Yr 361 $15385

* I wanted to highlight that Nintendo Switch Online (NSO) without Expansion Pack has the option to actually pay $3 monthly rather than $20 yearly. This doesn't make sense if you're paying for a whole year anyway, but if you want to just play a game in the NES, SNES, GB, or GBC collections you can pay $3 for a month of NSO and play games for very cheap.

How often are games added to Nintendo Classics?

Nintendo Classics tends to add a few games per platform every year. Usually, when a platform is first announced, a whole slew of games is added during the announcement, with a slow drip-feed of additions coming later.

Here's the per-year breakdown of how many games were added to each platform:

Platform 2018 2019 2020 2021 2022 2023 2024 2025
NES 30 30 8 2 5 4 12
SNES 25 18 13 9 1 9 8
N64 10 13 8 8 3
Genesis 20 17 8 3 3
Game Boy 19 16 6
GBA 13 12 5
GameCube 9
Virtual Boy
All Platforms 30 55 26 55 43 53 60 34

View SQL query

SELECT platform, STRFTIME('%Y', added_date) AS year, COUNT(*)
FROM games
GROUP BY platform, year
ORDER BY platform, year DESC;

What are the rarest or most valuable games in Nintendo Classics?

There are a bunch of valuable and rare games available in Nintendo Classics. Here are the top-50 most expensive games that are available in the collection:

Platform | Game | Added Date
Virtual Boy | Jack Bros. | TBA
Virtual Boy | Virtual Bowling | TBA
Genesis | Crusader of Centy | 2023-06-27
Genesis | Pulseman | 2023-04-18
Virtual Boy | Space Invaders Virtual Collection | TBA
Genesis | Alien Soldier | 2022-03-16
SNES | EarthBound | 2022-02-09
Genesis | MUSHA | 2021-10-25
SNES | Harvest Moon | 2022-03-30
SNES | Wild Guns | 2020-05-20
Virtual Boy | Innsmouth no Yakata | TBA
Genesis | Mega Man: The Wily Wars | 2022-06-30
GB/GBC | Mega Man V | 2024-06-07
SNES | Sutte Hakkun | 2025-01-24
NES | Fire 'n Ice | 2021-02-17
SNES | Kirby's Dream Land 3 | 2019-09-05
NES | Donkey Kong Jr. Math | 2024-07-04
GB/GBC | Survival Kids | 2025-05-23
SNES | Demon's Crest | 2019-09-05
GameCube | Chibi-Robo! | 2025-08-21
GameCube | Pokémon XD: Gale of Darkness | TBA
GB/GBC | Castlevania Legends | 2023-10-31
NES | S.C.A.T.: Special Cybernetic Attack Team | 2020-09-23
SNES | Star Fox 2 | 2019-12-12
SNES | Kirby's Star Stacker | 2022-07-21
GBA | F-Zero Climax | 2024-10-11
GameCube | Pokémon Colosseum | TBA
GB/GBC | Mega Man IV | 2024-06-07
SNES | Uncharted Waters: New Horizons | 2025-03-28
Virtual Boy | Virtual Boy Wario Land | TBA
NES | Shadow of the Ninja | 2020-02-19
SNES | Super Metroid | 2019-09-05
GBA | Mr. Driller 2 | 2025-09-25
SNES | Joe & Mac 2: Lost in the Tropics | 2019-09-05
SNES | Breath of Fire II | 2019-12-12
SNES | Umihara Kawase | 2022-05-26
Genesis | Gunstar Heroes | 2021-10-25
Genesis | Ristar | 2021-10-25
Virtual Boy | Virtual Fishing | TBA
NES | Vice: Project Doom | 2019-08-21
N64 | Sin and Punishment | 2021-10-25
N64 | Pokémon Stadium 2 | 2023-08-08
Genesis | Castlevania: Bloodlines | 2021-10-25
Genesis | Phantasy Star IV | 2021-10-25
SNES | The Peace Keepers | 2020-09-23
GB/GBC | Kirby Tilt 'n' Tumble | 2023-06-05
N64 | The Legend of Zelda: Majora's Mask | 2022-02-25
Virtual Boy | Mario Clash | TBA
SNES | Super Valis IV | 2020-12-18
SNES | Wrecking Crew '98 | 2024-04-12

View SQL query

SELECT platform, name, price FROM games
ORDER BY price DESC LIMIT 50;

Who publishes their games to Nintendo Classics?

Nintendo Classics has more publishers than just Nintendo and Sega. Looking at which third-party publishers are publishing their games to Nintendo Classics can give you a hint at what future games might make their way to the collection:

Publisher Games Value
Capcom 17 $1055
Xbox Game Studios 13 $245
Koei Tecmo 13 $465
City Connection 11 $240
Konami 10 $505
Bandai Namco Entertainment 9 $190
Sunsoft 7 $155
Natsume Inc. 7 $855
G-Mode 7 $190
Arc System Works 6 $110

View SQL query

SELECT publisher, COUNT(*) AS num_games, SUM(price)
FROM games WHERE publisher NOT IN ('Nintendo', 'Sega')
GROUP BY publisher
ORDER BY num_games DESC LIMIT 20;

What games have been removed from Nintendo Classics?

There's only been one game that's been removed from Nintendo Classics so far. There likely will be more in the future:

Platform | Game | Added Date | Removed Date
SNES | Super Soccer | 2019-09-05 | 2025-03-25

View SQL query:

SELECT platform, name, added_date, removed_date
FROM games WHERE removed_date IS NOT NULL;

This site uses the MIT licensed ChartJS for the line chart visualization.



Thanks for keeping RSS alive! ♥

October 07, 2025 12:00 AM UTC

October 06, 2025


Ari Lamstein

Visualizing 25 Years of Border Patrol Data with Python

I recently had the chance to speak with a statistician at the Department of Homeland Security (DHS) about my Streamlit app that visualizes trends in US Immigration Enforcement data (link). Our conversation helped clarify a question I’d raised in an earlier post—one that emerged from a surprising pattern in the data.

A Surprising Pattern

The first graph in my post showed how the number of detainees in ICE custody has changed over time, broken down by the arresting agency: ICE (Immigration and Customs Enforcement) or CBP (Customs and Border Protection). The agency-level split revealed an unexpected trend.

As I noted in the post:

Equally interesting is the agency-level data: since Trump took office ICE detentions are sharply up, but CBP detentions are down. I am not sure why CBP detentions are down.

A Potential Answer

This person suggested that CBP arrests might reflect not just enforcement capacity, but the number of people attempting to cross the border illegally—a figure that could fluctuate based on how welcoming an administration appears to be toward immigration.

This was a new lens for me. I hadn’t considered that attempted border crossings might rise or fall with shifts in presidential tone or policy. Given that one of Trump’s central campaign promises in 2024 was to crack down on illegal immigration (link), it felt like a hypothesis worth exploring.

The Data: USBP Encounters

While we can’t directly measure how many people attempt to cross the border illegally, DHS publishes a dataset that records each time the US Border Patrol (USBP) encounters a “removable alien”—a term DHS uses for individuals subject to removal under immigration law. This dataset can serve as a rough proxy for attempted illegal crossings.

The data is available on this page and is published as an Excel workbook titled “CBP Encounters – USBP – November 2024.” It covers October 1999 through November 2024, spanning five presidential administrations. While it doesn’t include data from the current administration (which began in January 2025), it does offer a historical view of enforcement trends.

The workbook contains 16 sheets; this analysis focuses on the “Monthly Region” tab. In this sheet, “Region” refers to the part of the border where the encounter occurred: Coastal Border, Northern Land Border, or Southwest Land Border.

The Analysis

To support this analysis, I created a new Python module called encounters. It’s available in my existing immigration_enforcement repo, along with the dataset and example workbooks. I’ve tagged the version of the code used in this post as usbp_encounters_post, so people will always be able to run the examples below—even if the repo evolves. You’re welcome to clone it and use it as a foundation for your own analysis.

One important detail: this dataset records dates using fiscal years, which run from October 1 to September 30. For example, October of FY2020 corresponds to October 2019 on the calendar. To simplify analysis, the function encounters.get_monthly_region_df reads in the “Monthly Region” sheet and automatically converts all fiscal year dates to calendar dates, along the lines of the sketch below.
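
A rough sketch of that conversion (not the repo's exact code, and assuming the sheet has been read into a pandas DataFrame with a parsed datetime column called date):

import pandas as pd

def fiscal_to_calendar(dates: pd.Series) -> pd.Series:
    """Convert DHS fiscal-year dates to calendar dates.

    Fiscal years start on October 1, so October-December of FY N
    actually fall in calendar year N - 1.
    """
    shifted = dates - pd.DateOffset(years=1)
    return dates.where(dates.dt.month < 10, shifted)

# Hypothetical usage on the parsed "date" column:
# df["date"] = fiscal_to_calendar(pd.to_datetime(df["date"]))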

To preview the data, we can load the “Monthly Region” sheet using the encounters module like this:

 
import encounters

df = encounters.get_monthly_region_df() 
df.head() 

This returns:

date region quantity
0 1999-10-01 Coastal Border 740
1 1999-10-01 Northern Land Border 1250
2 1999-10-01 Southwest Land Border 87820
3 1999-11-01 Coastal Border 500
4 1999-11-01 Northern Land Border 960

To visualize the data, we can use Plotly to create a time series of encounters by region:

import plotly.express as px

px.line(
    df,
    x="date",
    y="quantity",
    color="region",
    title="USBP Border Encounters Over Time",
    color_discrete_sequence=px.colors.qualitative.T10,
)

From this graph, a few patterns stand out:

A Better Graph

Since the overwhelming majority of encounters occur at the Southwest Land Border, it makes sense to focus the visualization there. To explore how encounter trends align with presidential transitions, we can annotate the graph to show when administrations changed. The function encounters.get_monthly_encounters_graph handles this:

encounters.get_monthly_encounters_graph(annotate_administrations=True)

This annotated graph appears to support what the DHS statistician suggested: encounter numbers sometimes shift dramatically between administrations. The change is especially pronounced for the Trump and Biden administrations:

Potential Policy Link

While I’m not an expert on immigration policy, Wikipedia offers summaries of the immigration policies under both the Trump and Biden administrations.

It describes Trump’s policies as aiming to reduce both legal and illegal immigration—through travel bans, lower refugee admissions, and stricter enforcement measures. And the page on Biden’s immigration policy begins:

“The immigration policy Joe Biden initially focused on reversing many of the immigration policies of the previous Trump administration.”

The contrast between these two approaches is stark, and it’s at least plausible that the low number of encounters at the start of Trump’s first term, and the spike in encounters at the start of Biden’s term, reflect responses to these shifts.

Future Work

This post is just a first step in analyzing Border Patrol Encounter data. Looking ahead, here are a few directions I’m excited to explore:

While comments on my blog are disabled, I welcome hearing from readers. You can contact me here.

 

October 06, 2025 03:30 PM UTC


Real Python

It's Almost Time for Python 3.14 and Other Python News

Python 3.14 nears release with new features in sight, and Django 6.0 alpha hints at what’s next for the web framework. Several PEPs have landed, including improvements to type annotations and support for the free-threaded Python effort.

Plus, the Python Software Foundation announced new board members, while Real Python dropped a bundle of fresh tutorials and updates. Read on to learn what’s new in the world of Python this month!

Join Now: Click here to join the Real Python Newsletter and you'll never miss another Python tutorial, course, or news update.

Python 3.14 Reaches Release Candidate 3

Python 3.14.0rc3 was announced in September, bringing the next major version of Python one step closer to final release. This release candidate includes critical bug fixes, final tweaks to new features, and overall stability improvements.

Python 3.14 is expected to introduce new syntax options, enhanced standard-library modules, and performance boosts driven by internal C API changes. For the complete list of changes in Python 3.14, consult the official What’s new in Python 3.14 documentation.

The release also builds upon ongoing work toward making CPython free-threaded, an effort that will eventually allow better use of multicore CPUs. Developers are encouraged to test their projects with the RC to help identify regressions or issues before the official release.

The final release, 3.14.0, is scheduled for October 7. Check out Real Python’s series about the new features you can look forward to in Python 3.14.

Django 6.0 Alpha Released

Django 6.0 alpha 1 is out! This first public preview gives early access to the upcoming features in Django’s next major version. Although not production-ready, the alpha includes significant internal updates and deprecations, setting the stage for future capabilities.

Some of the early changes include enhanced async support, continued cleanup of old APIs, and the groundwork for upcoming improvements in database backend integration. Now is a great time for Django developers to test their apps and provide feedback before Django 6.0 is finalized.

Django 5.2.6, 5.1.12, and 4.2.24 were released separately with important security fixes. If you maintain Django applications, then these updates are strongly recommended.

Read the full article at https://realpython.com/python-news-october-2025/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

October 06, 2025 02:00 PM UTC


Brian Okken

pytest-check 2.6.0 release

There’s a new release of pytest-check. Version 2.6.0.

This is a cool contribution from the community.

The problem

In July, bluenote10 reported that check.raises() doesn’t behave like pytest.raises() in that the AssertionError returned from check.raises() doesn’t have a queryable value.

Example of pytest.raises():

with pytest.raises(Exception) as e:
    do_something()
assert str(e.value) == "<expected error message>"

We’d like check.raises() to act similarly:

with check.raises(Exception) as e:
    do_something()
assert str(e.value) == "<expected error message>"

But that didn't work prior to 2.6.0. The issue was that the object returned from check.raises() didn't have a .value attribute.

October 06, 2025 01:30 PM UTC


Talk Python to Me

#522: Data Sci Tips and Tricks from CodeCut.ai

Today we're turning tiny tips into big wins. Khuyen Tran, creator of CodeCut.ai, has shipped hundreds of bite-size Python and data science snippets across four years. We dig into open-source tools you can use right now, cleaner workflows, and why notebooks and scripts don't have to be enemies. If you want faster insights with fewer yak-shaves, this one's packed with takeaways you can apply before lunch. Let's get into it.

Episode sponsors

Sentry Error Monitoring, Code TALKPYTHON: https://talkpython.fm/sentry
Agntcy: https://talkpython.fm/agntcy
Talk Python Courses: https://talkpython.fm/training

Links from the show

Khuyen Tran (LinkedIn): https://www.linkedin.com/in/khuyen-tran-1ab926151/
Khuyen Tran (GitHub): https://github.com/khuyentran1401/
CodeCut: https://codecut.ai/
Production-ready Data Science Book (discount code TalkPython): https://codecut.ai/production-ready-data-science/
Why UV Might Be All You Need: https://codecut.ai/why-uv-might-all-you-need/
How to Structure a Data Science Project for Readability and Transparency: https://codecut.ai/how-to-structure-a-data-science-project-for-readability-and-transparency-2/
Stop Hard-coding: Use Configuration Files Instead: https://codecut.ai/stop-hard-coding-in-a-data-science-project-use-configuration-files-instead/
Simplify Your Python Logging with Loguru: https://codecut.ai/simplify-your-python-logging-with-loguru/
Git for Data Scientists: Learn Git Through Practical Examples: https://codecut.ai/git-for-data-scientists-learn-git-through-practical-examples/
Marimo (A Modern Notebook for Reproducible Data Science): https://codecut.ai/marimo-a-modern-notebook-for-reproducible-data-science/
Text Similarity & Fuzzy Matching Guide: https://codecut.ai/text-similarity-fuzzy-matching-guide/
Loguru (Python logging made simple): https://github.com/Delgan/loguru
Hydra: https://hydra.cc/
Marimo: https://marimo.io/
Quarto: https://quarto.org/
Show Your Work! Book: https://austinkleon.com/show-your-work/

Watch this episode on YouTube: https://www.youtube.com/watch?v=lypo8Ul4NhU
Episode #522 deep-dive: https://talkpython.fm/episodes/show/522/data-sci-tips-and-tricks-from-codecut.ai#takeaways-anchor
Episode transcripts: https://talkpython.fm/episodes/transcript/522/data-sci-tips-and-tricks-from-codecut.ai

Theme Song: Developer Rap
🥁 Served in a Flask 🎸: https://talkpython.fm/flasksong

Don't be a stranger:
YouTube: https://talkpython.fm/youtube
Bluesky: https://bsky.app/profile/talkpython.fm
Mastodon: @talkpython@fosstodon.org
X.com: https://x.com/talkpython

Michael on Bluesky: https://bsky.app/profile/mkennedy.codes
Michael on Mastodon: @mkennedy@fosstodon.org
Michael on X.com: https://x.com/mkennedy

October 06, 2025 08:00 AM UTC


Rodrigo Girão Serrão

Functions: a complete reference | Pydon't 🐍

This article serves as a complete reference for all the non-trivial things you should know about Python functions.

Functions are the basic building block of any Python program you write, and yet, many developers don't leverage their full potential. You will fix that by reading this article.

Knowing how to use the keyword def is just the first step towards knowing how to define and use functions in Python. As such, this Pydon't covers everything else there is to learn:

Bookmark this reference for later or download the “Pydon'ts – write elegant Python code” ebook for free. The ebook contains this chapter and many others, including hundreds of tips to help you write better Python code. Download the ebook “Pydon'ts – write elegant Python code” here.

What goes into a function and what doesn't

Do not overcrowd your functions with logic for four or five different things. A function should do a single thing and do it well, and its name should clearly tell you what the function does.

If you are unsure about whether some piece of code should be a single function or multiple functions, it's best to err on the side of too many functions. That is because a function is a modular piece of code, and the smaller your functions are, the easier it is to compose them together to create more complex behaviours.

Consider the function process_order defined below, an exaggerated example that breaks these best practices to make the point clearer. While it is not incredibly long, it does too many things:

def process_order(order):
    # Validate the order:
    for item, quantity, price in order:
        if quantity <= 0:
            raise ValueError(f"Cannot buy 0 or less of {item}.")
        if price <= 0:
            raise ValueError(f"Price must be positive.")

    # Write the receipt:
    total = 0
    with open("receipt.txt", "w") as f:
        for item, quantity, price in order:
            # This week, yoghurts and batteries are on sale.
            if "yoghurt" in item:
                price *= 0.8
            elif "batteries" in item:
                price *= 0.5
            # Write this line of the receipt:
            partial = price * quantity
            f.write(f"{item:>15} --- {quantity:>3}...

October 06, 2025 07:00 AM UTC

October 05, 2025


Paolo Melchiorre

Django: one ORM to rule all databases 💍

Comparing the Django ORM support across official database backends, so you don’t have to learn it the hard way.

October 05, 2025 10:00 PM UTC


Christian Ledermann

Python Code Quality Tools Beyond Linting

The landscape of Python software quality tooling is currently defined by two contrasting forces: high-velocity convergence and deep specialization. The recent, rapid adoption of Ruff has solved the long-standing community problem of coordinating dozens of separate linters and formatters, establishing a unified, high-performance axis for standard code quality.

A second category of tools continues to operate in necessary, but isolated, silos. Tools dedicated to architectural enforcement and deep structural metrics, such as:

These projects address fundamental challenges of code maintainability, evolvability, and architectural debt that extend beyond the scope of fast, stylistic linting. The success of Ruff now presents the opportunity to foster a cross-tool discussion focused not just on syntax, but on structure.

Specialized quality tools are vital for long-term maintainability and risk assessment. Tools like import-linter and tach mitigate technical risk by enforcing architectural rules, preventing systemic decay, and reducing change costs. Complexity and cohesion metrics from tools such as complexipy, lcom, and cohesion quantitatively flag overly complex or highly coupled components, acting as early warning systems for technical debt.

By analysing the combined outputs, risk assessment shifts to predictive modelling: integrating data from the individual tools (e.g., import-linter violations, complexipy scores) creates a multi-dimensional risk score. Overlaying these results, such as identifying modules that are both low in cohesion and involved in tach-flagged dependency cycles, generates a "heat map" of technical debt. This unified approach, empirically validated against historical project data like bug frequency and commit rates, can yield a predictive risk assessment. It identifies modules that are not just theoretically complex but empirically confirmed sources of instability, transforming abstract quality metrics into concrete, prioritized refactoring tasks for the riskiest parts of the codebase.
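
As an illustration of what that kind of aggregation could look like, here is a minimal sketch that combines per-module outputs from the individual tools into a single risk score. The field names, weights, and example numbers are all made up for the sake of the example; a real scoring model would be calibrated against historical data such as bug frequency and commit churn.

from dataclasses import dataclass

@dataclass
class ModuleMetrics:
    name: str
    complexity: float          # e.g. a complexipy score
    lcom: float                # lack-of-cohesion metric, normalised to 0..1
    layer_violations: int      # e.g. import-linter contract breaches
    in_dependency_cycle: bool  # e.g. flagged by tach

def risk_score(m: ModuleMetrics) -> float:
    # Arbitrary illustrative weights.
    score = 0.4 * min(m.complexity / 50, 1.0)
    score += 0.3 * m.lcom
    score += 0.2 * min(m.layer_violations / 5, 1.0)
    score += 0.1 * (1.0 if m.in_dependency_cycle else 0.0)
    return score

modules = [
    ModuleMetrics("billing.core", complexity=62, lcom=0.8,
                  layer_violations=3, in_dependency_cycle=True),
    ModuleMetrics("billing.utils", complexity=12, lcom=0.2,
                  layer_violations=0, in_dependency_cycle=False),
]

# Highest-risk modules first: the "heat map" as a sorted list.
for m in sorted(modules, key=risk_score, reverse=True):
    print(f"{m.name:<15} risk={risk_score(m):.2f}")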

Reasons to Connect

Bring the maintainers and core users of these diverse tools into a shared discussion.

Increasing Tool Visibility and Sustainability: Specialized tools often rely on small, dedicated contributor pools and suffer from knowledge isolation, confining technical debate to their specific GitHub repository. A broader discussion provides these projects with critical outreach, exposure to a wider user base, and a stronger pipeline of new contributors, ensuring their long-term sustainability.

Let's start the conversation on how to 'measure' maintainable, and architecturally sound Python code.
And keep Goodhart's law in mind: "When a measure becomes a target, it ceases to be a good measure" ;-)

October 05, 2025 02:07 PM UTC


Daniel Roy Greenfeld

Using pyinstrument to profile Air apps

Air is built on FastAPI, so we can follow pyinstrument's FastAPI instructions with a few modifications. However, because profilers reveal a LOT of internal data, in our example we deliberately gate profiling behind an environment variable.

You will need both air and pyinstrument to get this working:

# preferred
uv add "air[standard]" pyinstrument
# old school
pip install "air[standard]" pyinstrument

And here's how to use pyinstrument to find bottlenecks:

import asyncio
from os import getenv
import air
from pyinstrument import Profiler

app = air.Air()

# Use an environment variable to control if we are profiling
# This is a value that should never be set in production
if getenv("PROFILING"):
    @app.middleware("http")
    async def profile_request(request: air.Request, call_next):
        profiling = request.query_params.get("profile", False)
        if profiling:
            profiler = Profiler()
            profiler.start()
            await call_next(request)
            profiler.stop()
            return air.responses.HTMLResponse(profiler.output_html())
        else:
            return await call_next(request)


@app.page
async def index(pause: float = 0):
    if pause:
        await asyncio.sleep(pause)

    title = f"Pausing for {pause} seconds"
    return air.layouts.mvpcss(
        air.Title(title),
        air.H1(title),
        # Provide three options for testing the profiler
        air.P('Using asyncio.sleep to simulate bottlenecks'),
        air.Ol(
            air.Li(
                air.A(
                    f"Pause for 0.1 seconds",
                    href="/?profile=1&pause=0.1",
                    target="_blank",
                )
            ),
            air.Li(
                air.A(
                    f"Pause for 0.3 seconds",
                    href="/?profile=1&pause=0.3",
                    target="_blank",
                )
            ),
            air.Li(
                air.A(
                    f"Pause for 1.0 seconds",
                    href="/?profile=1&pause=1.0",
                    target="_blank",
                )
            ),
        ),
    )

Running the test app:

Rather than export the environment variable globally, for this kind of thing I like to prefix the CLI command with PROFILING=1 so the variable is set for just this run of the project. By doing so we trigger pyinstrument:

PROFILING=1 fastapi dev main.py

Once you have it running, check it out here: http://localhost:8000

Screenshots

October 05, 2025 08:00 AM UTC


Graham Dumpleton

Lazy imports using wrapt

PEP 810 (explicit lazy imports) was recently published for Python. The idea of this PEP is to add explicit syntax for lazily importing modules in Python.

lazy import json

Lazily importing modules in Python is not a new idea, and there have been a number of packages available to achieve a similar result; they just lacked an explicit language syntax to hide what is actually going on under the covers.

When I saw this PEP, I realised that a new feature I added to wrapt for the upcoming 2.0.0 release can be used to implement lazy imports with little effort.

For those who only know wrapt as a package for implementing Python decorators, it should be said that the ability to implement decorators the way it does was merely one outcome of the true reason wrapt exists.

The actual reason wrapt was created was to be able to perform monkey patching of Python code.

One key aspect of monkey patching Python code is the ability to wrap target objects with a wrapper that acts as a transparent object proxy for the original object. By extending the object proxy type, one can then intercept access to the target object and perform specific actions.

For the purpose of this discussion we can ignore the step of creating a custom object proxy type and look at just how the base object proxy works.

import wrapt

import graphlib

print(type(graphlib))

print(id(graphlib.TopologicalSorter), graphlib.TopologicalSorter)

xgraphlib = wrapt.ObjectProxy(graphlib)

print(type(xgraphlib))

print(id(graphlib.TopologicalSorter), xgraphlib.TopologicalSorter)

In this example we import a module called graphlib from the Python standard library. We then access from that module the class graphlib.TopologicalSorter and print it out.

When we wrap the same module with an object proxy, the aim is that anything you could do with the original module could also be done via the object proxy. The output from the above is thus:

<class 'module'>
35852012560 <class 'graphlib.TopologicalSorter'>
<class 'ObjectProxy'>
35852012560 <class 'graphlib.TopologicalSorter'>

verifying that in both cases the TopologicalSorter is in fact the same object, even though for the proxy the apparent type is different.

The new feature added in wrapt version 2.0.0 is a lazy object proxy. That is, instead of passing the target object to be wrapped when the proxy is created, you pass a function. This function is called to create or otherwise obtain the target object the first time the proxy object is accessed.

Using this feature we can easily implement lazy module importing.

import sys
import wrapt

def lazy_import(name):
    return wrapt.LazyObjectProxy(lambda: __import__(name, fromlist=[""]))

graphlib = lazy_import("graphlib")

print("sys.modules['graphlib'] =", sys.modules.get("graphlib", None))

print(type(graphlib))

print(graphlib.TopologicalSorter)

print("sys.modules['graphlib'] =", sys.modules.get("graphlib", None))

Running this the output is:

sys.modules['graphlib'] = None
<class 'LazyObjectProxy'>
<class 'graphlib.TopologicalSorter'>
sys.modules['graphlib'] = <module 'graphlib' from '.../lib/python3.13/graphlib.py'>

One key thing to note here is that when the lazy import is set up, no changes have been made to sys.modules. It is only later, when the module is truly imported, that you see an entry in sys.modules for that module name.

Some lazy module importers work by injecting into sys.modules a fake module object for the target module. This has to be done right up front when the application is started. Because the fake entry exists, a later import of that module thinks the module has already been imported, so what gets added to the scope where the import is used is the fake module, and the actual module is not imported at that point.

What then happens is that when code attempts to use something from the module, an overridden __getattr__ special dunder method on the fake module object gets triggered, which on the first use causes the actual module to then be imported.
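
A bare-bones sketch of that pattern might look like the following. The register_lazy helper and the class name are made up for illustration; real implementations handle submodule attributes, packages, and thread safety far more carefully.

import importlib
import sys
import types

class _FakeLazyModule(types.ModuleType):
    def __getattr__(self, attr):
        # First real attribute access: drop the fake entry so importlib
        # performs the actual import, then delegate to the real module.
        sys.modules.pop(self.__name__, None)
        real = importlib.import_module(self.__name__)
        self.__dict__.update(real.__dict__)  # future lookups hit directly
        return getattr(real, attr)

def register_lazy(name):
    # Global change: the fake module is visible to the whole application.
    sys.modules[name] = _FakeLazyModule(name)

register_lazy("graphlib")
import graphlib                    # binds the fake module, no real import yet
print(graphlib.TopologicalSorter)  # first use triggers the real import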

That sys.modules is modified and a fake module added is one of the criticisms one sees about such lazy module importers. That is, the change they make is global to the whole application, which could have implications in cases where side effects of importing a module are expected to be immediate.

With the way the wrapt example works above, no global change is required to sys.modules, and instead impacts are only local to the scope where the lazy import was made.

Reducing the impact to just the scope where the lazy import is used is actually one of the goals of the PEP. The example using wrapt shows that it can be done, but it means you can't use an import statement, which is what the PEP aims to still allow, albeit with a new lazy keyword on the import. Either way, the code where you want a lazy import needs to be different.

The other thing the PEP should avoid is the module reference in the importing scope being any sort of fake module object. Initially the module reference would effectively be a placeholder, but as soon as it is used, the actual module would be imported and the placeholder replaced.

For the wrapt example the module reference would always be a proxy object, although technically, with a bit of stack diving trickery, you could also replace the module reference with the actual module as a side effect of the first use. This sort of trick is left as an exercise for the reader.

October 05, 2025 12:00 AM UTC

October 04, 2025


Paolo Melchiorre

My DjangoCon US 2025

A summary of my experience at DjangoCon US 2025 told through the posts I published on Mastodon during the conference.

October 04, 2025 10:00 PM UTC


Rodrigo Girão Serrão

TIL #134 – = alignment in string formatting

Today I learned how to use the equals sign to align numbers when doing string formatting in Python.

There are three main alignment options in Python's string formatting:

Character Meaning
< align left
> align right
^ centre

However, numbers have a fourth option =. On the surface, it looks like it doesn't do anything:

x = 73

print(f"@{x:10}@")   # @        73@
print(f"@{x:=10}@")  # @        73@

But that's because = influences the alignment of the sign. If I make x negative, we already see something:

x = -73

print(f"@{x:10}@")   # @       -73@
print(f"@{x:=10}@")  # @-       73@

So, the equals sign = aligns a number to the right but aligns its sign to the left. That may look weird, but I guess that's useful if you want to pad a number with 0s:

x = -73

print(f"@{x:010}@")  # @-000000073@

In fact, there is a shortcut for this type of alignment, which is to just put a zero immediately to the left of the width when aligning a number:

x = -73

print(f"@{x:010}@")  # @-000000073@

The zero immediately to the left changes the default alignment of numbers to be = instead of >.
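
The = alignment also accepts any other fill character placed before it, which makes the sign-to-the-left behaviour easier to see. A quick illustrative comparison:

x = -73

print(f"@{x:*=10}@")  # @-*******73@
print(f"@{x:*>10}@")  # @*******-73@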

October 04, 2025 02:01 PM UTC

October 03, 2025


Luke Plant

Breaking “provably correct” Leftpad

I don't know much about formal methods, so a while back I read Hillel Wayne's post Let's prove Leftpad with interest. However:

So I thought I’d take a peek and do some testing on these Leftpad implementations that are all “provably correct”.

Contents

Methodology

I'll pick a few, simple, perfectly ordinary inputs at random, and work out what I think the output should be. This is a pretty trivial problem so I'm expecting that all the implementations will match my output. [narrator: He is expecting no such thing]

I’m also expecting that, even if for some reason I’ve made a mistake, all the implementations will at least match each other. [narrator: More lies] They’ve all been proved correct, right?

Here are my inputs and expected outputs. I’m going to pad to a length of 10, and use - as padding so it can be seen and counted more easily than spaces.

Item Input Length Expected padding Expected Output
1 𝄞 1 9 ---------𝄞
2 Å 1 9 ---------Å
3 𓀾 1 9 ---------𓀾
4 אֳֽ֑ 1 9 ---------אֳֽ֑
5 résumé 6 4 ----résumé
6 résumé 6 4 ----résumé

[“ordinary”, “random” - I think my work here is done…]

I’ve used a monospace font so that the right hand side of the outputs all line up as you’d expect.

Entry 6 is not a mistake, by the way, it just does “e acute” in a different way to entry 5. Nothing to see here, move along…

Implementations

Not all of the implementations were that easy to run. In fact, many of them didn’t take any kind of “string” as input, but vectors or lists or such things, and it wasn’t obvious to me how to pass strings in. So I discounted them.

For the ones I could run, I attempted to do so by embedding the test inputs directly in the program, if possible.

Liquid Haskell

Embedding the characters directly in Haskell source code kept getting me “lexical error in string/character literal”, so I wrote a small driver program that read from a file.

Java

The leftpad function provided didn’t take a string, but a char[]. Thankfully, it’s easy to convert from String objects, using the .toCharArray() function. So I did that.

Lean4

There is a handy online playground, and the implementation had a helpful #eval block that I could modify to get output. You can play with it here.

Rust

The code here had loads of extra lines regarding specs etc. which I stripped so I could easily run it, which worked fine.

More tricky was that the code didn’t take a string, but some Vec<Thingy>. As I know nothing about Rust, I got ChatGPT to tell me how to convert from a string to that. It gave me two options, I picked the one that looked simpler and less <<angry>>. I didn’t deliberately pick the one which made Rust look even worse than all the others, out of peevish resentment for every time someone has rewritten some Python code (my go-to language) in Rust and made it a million times faster – that’s a ridiculous suggestion.

Some competition!

To make things interesting, let’s compare these provably correct implementations with one vibe-coded by ChatGPT, in some random language, like, say, um, Swift. It gave me this code:

import Foundation

func leftPad(_ string: String, length: Int, pad: Character = " ") -> String {
    let paddingCount = max(0, length - string.count)
    return String(repeating: pad, count: paddingCount) + string
}

You can play with it online here.

Results

Here are the results, green for correct and red for … less correct.

Input Reference Java Haskell Lean Rust Swift
𝄞 ---------𝄞 --------𝄞 ---------𝄞 ---------𝄞 ------𝄞 ---------𝄞
Å ---------Å --------Å --------Å --------Å -------Å ---------Å
𓀾 ---------𓀾 --------𓀾 ---------𓀾 ---------𓀾 ------𓀾 ---------𓀾
אֳֽ֑ ---------אֳֽ֑ ------אֳֽ֑ ------אֳֽ֑ ------אֳֽ֑ --אֳֽ֑ ---------אֳֽ֑
résumé ----résumé ----résumé ----résumé ----résumé --résumé ----résumé
résumé ----résumé --résumé --résumé --résumé résumé ----résumé

And pivoted the other way around so you can compare individual inputs more easily:

Language 𝄞 Å 𓀾 אֳֽ֑ résumé résumé
Reference ---------𝄞 ---------Å ---------𓀾 ---------אֳֽ֑ ----résumé ----résumé
Java --------𝄞 --------Å --------𓀾 ------אֳֽ֑ ----résumé --résumé
Haskell ---------𝄞 --------Å ---------𓀾 ------אֳֽ֑ ----résumé --résumé
Lean ---------𝄞 --------Å ---------𓀾 ------אֳֽ֑ ----résumé --résumé
Rust ------𝄞 -------Å ------𓀾 --אֳֽ֑ --résumé résumé
Swift ---------𝄞 ---------Å ---------𓀾 ---------אֳֽ֑ ----résumé ----résumé

Comments

Rust, as expected, gets nul points. What can I say?

Vibe-coding with Swift: 💯

Other than that, we can see:

The score so far:

Explanation

OK, I’ve had my fun now :-)

(The original “Let’s Prove Leftpad” project was done “because it is funny”, and this post is in the same spirit. I want to be especially clear that I’m not actually a fan of vibe-coding).

What’s actually going on here? There are two main issues, both tied up with the concept of “the length of a string”.

(If you already know enough about Unicode, or don’t care about the details, you can skip to the “What went wrong?” section to continue discussion regarding formal verification).

First:

What is a character?

Strings are composed of “characters”, but what are they?

Most modern computer languages, and all the ones I included above, use Unicode as the basis for answering this. Unicode, at its heart, is a list of "code points" (although it is a bit more than this). A code point, however, is not exactly a character.

Many of the code points you use most often, like Latin Capital Letter A U+0041, are exactly what you think of as a character. But many are not. These include:

So, Unicode has another concept, called the “extended grapheme cluster”, or “user-perceived character”, which more closely maps to what you might think of as a “character”. That’s the concept I’m implicitly using for my claim of what leftpad should output.
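
In Python terms the gap is easy to demonstrate: len() counts code points, while counting user-perceived characters needs something like the third-party regex module. A quick illustrative check (not code from the post):

import regex  # third-party: pip install regex

s = "re\u0301sume\u0301"             # "résumé" with decomposed accents
print(len(s))                        # 8 code points
print(len(regex.findall(r"\X", s)))  # 6 extended grapheme clusters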

Secondly, there is the question of:

How does a programming language handle strings?

Different languages have different fundamental answers to the question of “what is a string?”, different internal representations of them (how the data is actually stored), and different ways of exposing strings to the programmer.

Some languages, especially performance oriented ones, provide little to zero insulation from the internal representation, while others provide a fairly strong abstraction. Some languages, like Haskell, provide multiple string types (which can be used with string literals in your code with the OverloadedStrings extension).

At this point, as well as “code points”, we’ve got to consider “encodings”. If you have a code point, even a “simple” one like U+0041 (A), you have said nothing about how you are going to store or transmit that data. An encoding is a system for doing that. Two relevant ones here:

In terms of languages today, with some simplification we can say the following:

Putting them together

These differences, between them, explain the differences in output above. In more detail:

What went wrong?

For me, the biggest issue is not the “code points” vs “characters” debate, which is responsible for most of the variation shown, but the issue that resulted in the difference in the Java output i.e. UTF-16. All of the others (if I hadn’t stitched up Rust) would have resulted in the same output at least.

Apparently, nothing in the process of doing the formal verification forced the implementations to converge, and I think it is pretty fair to conclude that at least one of the implementations must be faulty, given that they produce different output.

So what went wrong?

Lies, damned lies and natural language

English (or any natural language) is at the heart of the problem here. We should start with the phrase “provably correct”. It has a technical definition, but I’m not convinced those English words help us. The post accompanying the LiquidHaskell entry for Leftpad puts it this way:

My eyes roll whenever I read the phrase “proved X (a function, a program) correct”.

There is no such thing as “correct”.

There are only “specifications” or “properties”, and proofs that ensures that your code matches those specifications or properties.

For these reasons, I think I’d prefer talking about “formally verified” functions – it at least prompts you to ask “what does that mean”, and maybe suggests that you should be thinking about what, specifically, has been verified.

The next bit of English to trip us up is “the length of a string”. It’s extremely easy to imagine this is an easy concept, but in the presence of Unicode it really isn’t.

Hillel’s original informal requirements don’t actually use that phrase, instead they use the pseudo-code max(n, len(str)). Looking at the implementations, it appears people have subconsciously interpreted this as “the length of the string”, and then assumed that the functions like length or size that their language provides do “the right thing”.

We could conclude this is in fact a problem with informal requirements – it was at the level of interpreting those requirements that this went wrong. Therefore, we need more formal specifications and verification, not less. But I don’t think this gets round the fact that we’ve got to translate at some point, and at that point you’ve got trouble.

What is correct?

The issue I haven’t addressed so far is whether my reference output and the Swift implementation are actually “correct”. The reality is that you can make arguments for different things.

Implicitly, I’m arguing that left pad should be used for visual alignment in a fixed width context, and the implementation that does the best at that is the best one. I think that is a pretty reasonable case. But I’m sure you could make a case for other output – there isn’t actually anything that says what left pad should be used for. It’s possible that there are use cases where “the language’s underlying concept of the length of a string, whatever that may be” is the most important thing.

In addition, I was hiding the fact that “fixed width” is yet another lie:

I was originally going to use a flag character like 🏴󠁧󠁢󠁥󠁮󠁧󠁿 as one of my inputs, which is a single "extended grapheme cluster" that uses no less than 7 code points. It also results in 14 UTF-16 units in Java. The problem was that this character, like most emojis and many other characters from wide scripts like Chinese, takes up a double width even with a monospace font.

To maintain the subterfuge of “look how these all line up neatly in the correct output”, I was forced to use other examples. In other words, the example use case I was relying on to prove that these leftpad implementations were broken, is itself a broken concept. But I would still maintain that my reference output is closer to what you would expect leftpad to do.

A big point is this: even if we argue that a given implementation is "correct" (in that it does what its specifications say it does), that doesn't mean you are using it correctly. Are you using it for its intended purpose and context? That seems like a really hard question to answer even for leftpad, and many other real world functions are similar.

So, I'm not sure what my final conclusion is, other than "programming is hard, let's go shopping", or rather "let's eat chocolate" (alternative suggested by my wife; that's my plan for the evening then).

Confessions and corrections

Response

Hillel was kind enough to look at this post, and had this response to add:

In general, formally verified code can "go wrong" in two ways: the proven properties don't match what we need, or the proof depends on assumptions that are not true in practice. This is a good example of the former. An example of the latter would be the assumption that the output string is storable in memory. None of the formally verified functions will correctly render leftpad("-", 1e300, "foo"). This is why we always need to be careful when talking about "proving correctness". In formal methods, "correct" always means "conforms to a certain specification under certain assumptions", which is very different from the colloquial use of "correct" (does what you want and doesn't have bugs).

He also pointed out that the padding/alignment functionality available in standard libraries, like Python's Format Specification Mini-Language and JavaScript's padStart, has similar issues.
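
Python's own padding indeed counts code points, so the decomposed résumé from the tables above gets two fewer dashes than its precomposed twin. A quick check you can run yourself:

composed = "r\u00e9sum\u00e9"      # é as a single code point
decomposed = "re\u0301sume\u0301"  # é as e + combining acute

print(f"{composed:->10}")    # ----résumé  (4 dashes of padding)
print(f"{decomposed:->10}")  # --résumé    (only 2 dashes of padding)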

October 03, 2025 08:00 PM UTC


Mariatta

Disabling Signup in Django allauth

Django allauth

Django allauth is a popular third party package that provides a lot of functionality for handling user authentication, with support for social authentication, email verification, multi-factor authentication, and more.

It is a powerful library that greatly expands the built-in Django authentication system. It comes with its own basic forms and models for user registration, login, logout, and password management.

I like using it because often I just want to get a new Django project up and running quickly without having to write all the authentication-related views, forms, and templates myself. I'm using django-allauth in PyLadiesCon Portal, and in my personal project Secret Codes.
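
For reference, the usual approach to disabling signup with django-allauth is to override is_open_for_signup on a custom account adapter and point the ACCOUNT_ADAPTER setting at it. This is a minimal sketch; the module path is hypothetical and the full post may take a different route.

# accounts/adapters.py
from allauth.account.adapter import DefaultAccountAdapter

class NoSignupAccountAdapter(DefaultAccountAdapter):
    def is_open_for_signup(self, request):
        # Returning False blocks the regular signup flow.
        return False

# settings.py
ACCOUNT_ADAPTER = "accounts.adapters.NoSignupAccountAdapter"

If social login is enabled, the social account adapter exposes a matching is_open_for_signup hook that you would likely want to override as well.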

October 03, 2025 07:50 PM UTC


Real Python

The Real Python Podcast – Episode #268: Advice on Beginning to Learn Python

What's changed about learning Python over the last few years? What new techniques and updated advice should beginners have as they start their journey? This week on the show, Stephen Gruppetta and Martin Breuss return to discuss beginning to learn Python.



October 03, 2025 12:00 PM UTC

October 01, 2025


Real Python

Python 3.14 Preview: Better Syntax Error Messages

Python 3.14 brings a fresh batch of improvements to error messages that’ll make debugging feel less like detective work and more like having a helpful colleague point out exactly what went wrong. These refinements build on the clearer tracebacks introduced in recent releases and focus on the mistakes Python programmers make most often.

By the end of this tutorial, you’ll understand that:

  • Python 3.14’s improved error messages help you debug code more efficiently.
  • There are ten error message enhancements in 3.14 that cover common mistakes, from keyword typos to misusing async with.
  • These improvements can help you catch common coding mistakes faster.
  • Python’s error messages have evolved from version 3.10 through 3.14.
  • Better error messages accelerate your learning and development process.

There are many other improvements and new features coming in Python 3.14 beyond the error message work covered in this tutorial.

To try any of the examples in this tutorial, you need to use Python 3.14. The tutorials How to Install Python on Your System: A Guide and Managing Multiple Python Versions With pyenv walk you through several options for adding a new version of Python to your system.

Get Your Code: Click here to download the free sample code that you’ll use to learn about the error message improvements in Python 3.14.

Take the Quiz: Test your knowledge with our interactive “Python 3.14 Preview: Better Syntax Error Messages” quiz. You’ll receive a score upon completion to help you track your learning progress.



Better Error Messages in Python 3.14

When Python 3.9 introduced a new parsing expression grammar (PEG) parser for the language, it opened the door to better error messages in Python 3.10. Python 3.11 followed with even better error messages, and that same effort continued in Python 3.12.

Python 3.13 refined these messages further with improved formatting and clearer explanations, making multiline errors more readable and adding context to complex error situations. These improvements build upon PEP 657, which introduced fine-grained error locations in tracebacks in Python 3.11.

Now, Python 3.14 takes another step forward, alongside other significant changes like PEP 779, which makes the free-threaded build officially supported, and PEP 765, which disallows using return, break, or continue to exit a finally block. What makes the error message enhancements in Python 3.14 special is their focus on common mistakes.

Each improved error message follows a consistent pattern:

  1. It identifies the mistake.
  2. It explains what’s wrong in plain English.
  3. It suggests a likely fix when possible.

The error message improvements in Python 3.14 cover SyntaxError, ValueError, and TypeError messages. For an overview of the ten improvements you’ll explore in this tutorial, expand the collapsible section below:

  • Keyword Typos (SyntaxError)
    Previous: invalid syntax
    Improved: invalid syntax. Did you mean 'for'?

  • elif After else (SyntaxError)
    Previous: invalid syntax
    Improved: 'elif' block follows an 'else' block

  • Conditional Expressions (SyntaxError)
    Previous: invalid syntax
    Improved: expected expression after 'else', but statement is given

  • String Closure (SyntaxError)
    Previous: invalid syntax
    Improved: invalid syntax. Is this intended to be part of the string?

  • String Prefixes (SyntaxError)
    Previous: invalid syntax
    Improved: 'b' and 'f' prefixes are incompatible

  • Unpacking Errors (ValueError)
    Previous: too many values to unpack (expected 2)
    Improved: too many values to unpack (expected 2, got 3)

  • as Targets (SyntaxError)
    Previous: invalid syntax
    Improved: cannot use list as import target

  • Unhashable Types (TypeError)
    Previous: unhashable type: 'list'
    Improved: cannot use 'list' as a dict key (unhashable type: 'list')

  • math Domain Errors (ValueError)
    Previous: math domain error
    Improved: expected a nonnegative input, got -1.0

  • async with Errors (TypeError)
    Previous: 'TaskGroup' object does not support the context manager protocol
    Improved: object does not support the context manager protocol...Did you mean to use 'async with'?

The examples in this tutorial show both the error messages from Python 3.13 and the improved messages in Python 3.14, so you can see the differences even if you haven’t installed 3.14 yet.

For a general overview of Python’s exception system, see Python’s Built-in Exceptions: A Walkthrough With Examples, or to learn about raising exceptions, check out Python’s raise: Effectively Raising Exceptions in Your Code.

Clearer Keyword Typo Suggestions

A typo is usually a tiny mistake, sometimes just one extra letter, but it’s enough to break your code completely. Typos that involve Python keywords are among the most common syntax errors in Python code.

In Python 3.13 and earlier, a typo in a keyword produces a generic syntax error that offers no guidance about what might be wrong:

Python 3.13
>>> forr i in range(5):
  File "<python-input-0>", line 1
    forr i in range(5):
         ^
SyntaxError: invalid syntax

The error points to the problem area with a helpful caret symbol (^), which at least tells you where Python found the error. However, it doesn’t suggest what you might have meant. The message “invalid syntax” is technically correct but not particularly helpful in practice.

You have to figure out on your own that forr should actually be for. It might be obvious once you spot it, but finding that single wrong letter can take a surprisingly long time when you’re focused on logic rather than spelling.

Python 3.14 recognizes when you type something close to a Python keyword and offers a helpful suggestion that immediately points you to the fix:
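The feed excerpt cuts off the example here. Based on the improved message listed above, the Python 3.14 session should look roughly like this (a reconstruction, so details such as the caret placement may differ from the real output):

Python 3.14
>>> forr i in range(5):
  File "<python-input-0>", line 1
    forr i in range(5):
    ^^^^
SyntaxError: invalid syntax. Did you mean 'for'?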

Read the full article at https://realpython.com/python314-error-messages/ »



October 01, 2025 02:00 PM UTC


Django Weblog

Django security releases issued: 5.2.7, 5.1.13, and 4.2.25

In accordance with our security release policy, the Django team is issuing releases for Django 5.2.7, Django 5.1.13, and Django 4.2.25. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2025-59681: Potential SQL injection in QuerySet.annotate(), alias(), aggregate(), and extra() on MySQL and MariaDB

QuerySet.annotate(), QuerySet.alias(), QuerySet.aggregate(), and QuerySet.extra() methods were subject to SQL injection in column aliases, using a suitably crafted dictionary, with dictionary expansion, as the **kwargs passed to these methods on MySQL and MariaDB.

Thanks to sw0rd1ight for the report.

This issue has severity "high" according to the Django security policy.
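For illustration only (this sketch is not from the advisory, and Product and request are hypothetical placeholders): the risky pattern behind this class of issue is letting untrusted input supply the keyword names passed to these methods, because those names become column aliases in the generated SQL.

from django.db.models import Count

# Unsafe on affected versions (MySQL/MariaDB): the dictionary *keys* become
# column aliases, so attacker-controlled keys can influence the generated SQL.
untrusted_alias = request.GET["alias"]
Product.objects.annotate(**{untrusted_alias: Count("id")})

# Safer: only use alias names you define yourself in code.
Product.objects.annotate(num_products=Count("id"))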

CVE-2025-59682: Potential partial directory-traversal via archive.extract()

The django.utils.archive.extract() function, used by startapp --template and startproject --template, allowed partial directory-traversal via an archive with file paths sharing a common prefix with the target directory.

Thanks to stackered for the report.

This issue has severity "low" according to the Django security policy.
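To make the “common prefix” wording concrete, here is a generic Python sketch of the underlying pitfall (not Django’s actual code): a naive startswith() check accepts a path that merely shares a prefix with the target directory.

import os

target = "/tmp/project"
member = "../project-secrets/settings.py"   # path as it might appear in an archive
resolved = os.path.normpath(os.path.join(target, member))

print(resolved)                      # /tmp/project-secrets/settings.py
print(resolved.startswith(target))   # True, even though the file lands outside the target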

Affected supported versions

  • Django main
  • Django 6.0 (currently at alpha status)
  • Django 5.2
  • Django 5.1
  • Django 4.2

Resolution

Patches to resolve these issues have been applied to Django's main, 6.0 (currently at alpha status), 5.2, 5.1, and 4.2 branches. The patches may be obtained from the following changesets.

CVE-2025-59681: Potential SQL injection in QuerySet.annotate(), alias(), aggregate(), and extra() on MySQL and MariaDB

CVE-2025-59682: Potential partial directory-traversal via archive.extract()

The following releases have been issued: Django 5.2.7, Django 5.1.13, and Django 4.2.25.

The PGP key ID used for this release is Jacob Walls: 131403F4D16D8DC7

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.

October 01, 2025 01:55 PM UTC


Real Python

Quiz: Python 3.14 Preview: Better Syntax Error Messages

This quiz helps you get familiar with the upgraded error messages in Python 3.14. You’ll review new keyword typo suggestions, improved math errors, string prefix feedback, and more.

Put your understanding to the test and discover how Python’s improved error messages can help you debug code faster.



October 01, 2025 12:00 PM UTC

Quiz: Python MCP Server: Connect LLMs to Your Data

This quiz helps you review the core ideas behind the Model Context Protocol (MCP). You will practice how MCP connects large language models with external systems, how to install it, and what role prompts, resources, and tools play.

You’ll also revisit best practices for defining tools, explore client-server setups like Cursor, and check your understanding of transports and testing. For a full walkthrough, see Python MCP Server: Connect LLMs to Your Data.



October 01, 2025 12:00 PM UTC


Zero to Mastery

[September 2025] Python Monthly Newsletter 🐍

70th issue of Andrei's Python Monthly: Customizing Your Python REPL, AI Coding Trap, and much more. Read the full newsletter to get up-to-date with everything you need to know from last month.

October 01, 2025 10:00 AM UTC


Tryton News

Newsletter October 2025

During the last month we focused on fixing bugs, improving behaviour, and addressing performance issues, building on the changes from our last release. We also added some new features, which we would like to introduce to you in this newsletter.

For an in-depth overview of the Tryton issues, please take a look at our issue tracker or see the issues and merge requests filtered by label.

Changes for the User

Sales, Purchases and Projects

We now use the guest party for Shopify orders without a known customer; the order can be updated with the proper party in the admin panel at a later time.

We now also support the orders/edited and orders/cancelled webhooks from Shopify.

New Releases

We released bug fixes for the currently maintained long term support series 7.0 and 6.0, and for the penultimate series 7.4.

Security

Please update your systems to take care of a security related bug we found last month.
Luis Falcon has found that trytond may log sensitive data like passwords when the logging level is set to INFO.

Impact

  • CVSS v3.0 Base Score: 4.2
  • Attack Vector: Network
  • Attack Complexity: Low
  • Privileges Required: High
  • User Interaction: None
  • Scope: Unchanged
  • Confidentiality: High
  • Integrity: None
  • Availability: None

Workaround

Increasing the logging level above INFO prevents logging of the sensitive data.

Resolution

All affected users should upgrade trytond to the latest version.

Affected vers…

Authors: @dave @pokoli @udono

1 post - 1 participant

Read full topic

October 01, 2025 06:00 AM UTC


Seth Michael Larson

Winning a bet about “six”, the Python 2 compatibility shim

Exactly five years ago today Andrey Petrov and I made a bet about whether “six”, the compatibility shim for Python 2 and 3 APIs, would still be in the top 20 daily downloads on PyPI. I said it would; Andrey took the other side.

Well, today I can say that I've won the bet. When the bet was placed, six was #2 in terms of daily downloads and today six is #14. Funnily enough, six was still exactly #14 back in 2023:

“six is top 14 after 3 years, 2 years left, sounds like [Andrey] is winning”
-- Quentin Pradet (2023-07-09)

Completely unrelated to this bet, Hynek mentioned six still being in the top 20 downloaded packages during his PyCon UK keynote.

six itself isn't a library that many use on its own, as at least 96% of six downloads come from Python 3 versions. Instead, it is installed because other libraries depend on it. Here are the top packages that still depend on six:

Package            Downloads/Day   Last Uploaded
python-dateutil    22M             2024-03-01
yandexcloud        6M              2025-09-22
azure-core         4M              2025-09-11
jedi               2M              2024-11-11
kubernetes         2M              2025-06-09
rfc3339-validator  2M              2021-05-12
google-pasta       1M              2020-03-13
confluent-kafka    1M              2025-08-18
oauth2client       1M              2018-09-07
ecdsa              1M              2025-03-13

These packages were found by querying my own dataset about PyPI:

SELECT packages.name, packages.downloads
FROM packages JOIN deps ON packages.name = deps.package_name
WHERE deps.dep_name = 'six'
GROUP BY packages.name
ORDER BY packages.downloads DESC
LIMIT 10;

Notice how a single popular library, python-dateutil, keeping six as a dependency was enough to carry me to victory. Without python-dateutil I likely would have lost this bet. I also wanted to note the "last uploaded" dates, as some of the libraries aren't uploaded frequently, potentially explaining why they still depend on six.
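If you are curious which packages in your own environment still declare six as a dependency, a rough local check with only the standard library (a sketch; the requirement-name parsing is deliberately simple) looks like this:

import importlib.metadata as metadata
import re

for dist in metadata.distributions():
    for req in dist.requires or []:
        # Requirement strings look like "six>=1.9" or "six ; python_version < '3'".
        name = re.split(r"[ ;<>=!~\[]", req, maxsplit=1)[0]
        if name.lower() == "six":
            print(f"{dist.metadata['Name']} requires {req}")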

“surely in 10 years, six won't be a thing. right? RIGHT?”
-- Andrey Petrov (2020-10-01)

We'll see! ;) Thanks to Benjamin Peterson for creating and maintaining six.



Thanks for keeping RSS alive! ♥

October 01, 2025 12:00 AM UTC