Planet Python
Last update: November 19, 2025 10:42 AM UTC
November 19, 2025
Django Weblog
Going build-free with native JavaScript modules
For the last decade and more, we've been bundling CSS and JavaScript files. These build tools allowed us to utilize new browser capabilities in CSS and JS while still supporting older browsers. They also helped with client-side network performance, minimizing the content to be as small as possible and combining files into one large bundle to reduce network handshakes. We've gone through a lot of build tool iterations in the process: from Grunt (2012) to Gulp (2013) to Webpack (2014) to Parcel (2017) to esbuild (2020) and Vite (2020).
And with modern browser technologies there is less need for these build tools.
- Modern CSS natively supports many of the features that build tools were created for: CSS nesting to organize code, variables, and @supports for feature detection.
- JavaScript ES6 / ES2015 was a big step forward, and the language has been progressing steadily ever since. It now has native module support with the import / export keywords.
- Meanwhile, with HTTP/2 performance improvements, parallel requests can be made over the same connection, removing the constraints of the HTTP/1.x protocol.
These build processes are complex, particularly for beginners to Django. The tools and associated best practices move quickly. There is a lot to learn and you need to understand how to utilize them with your Django project. You can build a workflow that stores the build results in your static folder, but there is no core Django support for a build pipeline, so this largely requires selecting from a number of third party packages and integrating them into your project.
The benefit this complexity adds is no longer as clear cut, especially for beginners. There are still advantages to build tools, but you can create professional results without having to use or learn any build processes.
Build-free JavaScript tutorial
To demonstrate modern capabilities, let's expand Django’s polls tutorial with some newer JavaScript. We’ll use modern JS modules and we won’t require a build system.
To give us a reason to need JS, let's add a new requirement to the polls: allow our users to add their own suggestions, instead of only being able to vote on the existing options. We update our form to have a new option below the existing choices:
or add your own <input type="text" name="choice_text" maxlength="200" />
Now our users can add their own options to polls if the existing ones don't fit. We can update the voting view to handle this new option: if no existing choice is selected but the new choice_text input is filled in, we add the new option; we show an error if neither is supplied, and another error if both are supplied.
def vote(request, question_id):
    question = get_object_or_404(Question, pk=question_id)
    # Reject submissions that both vote and add a new option.
    if request.POST.get('choice') and request.POST.get('choice_text'):
        return render(request, 'polls/detail.html', {
            'question': question,
            'error_message': "You can't vote and provide a new option.",
        })
    try:
        selected_choice = question.choice_set.get(pk=request.POST['choice'])
    except (KeyError, Choice.DoesNotExist):
        # No existing choice selected: create the new option if one was given.
        if request.POST.get('choice_text'):
            selected_choice = Choice.objects.create(
                question=question,
                choice_text=request.POST['choice_text'],
            )
        else:
            return render(request, 'polls/detail.html', {
                'question': question,
                'error_message': "You didn't select a choice or provide a new one.",
            })
    selected_choice.votes += 1
    selected_choice.save()
    return HttpResponseRedirect(reverse('polls:results', args=(question.id,)))
Now that our logic is a bit more complex it would be nicer if we had some JavaScript to do this. We can build a script that handles some of the form validation for us.
// Returns true when neither an existing choice nor new option text is provided.
function noChoices(choices, choice_text) {
  return (
    !Array.from(choices).some((radio) => radio.checked) &&
    (!choice_text[0] || choice_text[0].value.trim() === "")
  );
}

// Returns true when both an existing choice and new option text are provided.
function allChoices(choices, choice_text) {
  return (
    Array.from(choices).some((radio) => radio.checked) &&
    choice_text[0] &&
    choice_text[0].value.trim() !== ""
  );
}

export default function initFormValidation() {
  document.getElementById("polls").addEventListener("submit", function (e) {
    const choices = this.querySelectorAll('input[name="choice"]');
    const choice_text = this.querySelectorAll('input[name="choice_text"]');
    if (noChoices(choices, choice_text)) {
      e.preventDefault();
      alert("You didn't select a choice or provide a new one.");
    }
    if (allChoices(choices, choice_text)) {
      e.preventDefault();
      alert("You can't select a choice and also provide a new option.");
    }
  });
}
Note how we use export default in the above code. This means form_validation.js is a JavaScript module. When we create our main.js file, we can import it with the import statement:
import initFormValidation from "./form_validation.js";
initFormValidation();
Lastly, we add the script to the bottom of our detail.html file, using Django's usual static template tag. Note the type="module" attribute: it's needed to tell the browser we will be using import/export statements.
<script type="module" src="{% static 'polls/js/main.js' %}"></script>
That's it! We got the modularity benefits of modern JavaScript without needing any build process. The browser handles the module loading for us. And thanks to HTTP/2's parallel requests, this can scale to many modules without a performance hit.
In production
To deploy, all we need is Django's support for collecting static files into one place and for adding hashes to filenames. In production it is a good idea to use the ManifestStaticFilesStorage storage backend. It stores the files it handles by appending the MD5 hash of each file's content to the filename. This allows you to set far-future cache expiries, which is good for performance, while still guaranteeing that new versions of a file will make it to users' browsers.
This backend is also able to update the reference to form_validation.js in the import statement, with its new versioned file name.
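Enabling it is just a settings change. Here's a minimal sketch, assuming Django 4.2+ and its STORAGES setting (the "default" entry is shown only for completeness):

# settings.py
STORAGES = {
    "default": {
        "BACKEND": "django.core.files.storage.FileSystemStorage",
    },
    "staticfiles": {
        "BACKEND": "django.contrib.staticfiles.storage.ManifestStaticFilesStorage",
    },
}

After running collectstatic, {% static 'polls/js/main.js' %} then resolves to the hashed name, something like polls/js/main.0f3a2b1c4d5e.js.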
Future work
ManifestStaticFilesStorage works, but a lot of its implementation details get in the way. It could be easier to use as a developer.
- The support for import/export with hashed files is not very robust.
- Comments in CSS with references to files can lead to errors during collectstatic.
- Circular dependencies in CSS/JS can not be processed.
- Errors during collectstatic when files are missing are not always clear.
- Differences between the implementations of StaticFilesStorage and ManifestStaticFilesStorage can lead to errors in production that don't show up in development (like #26329, about leading slashes).
- Configuring common options means subclassing the storage when we could use the existing OPTIONS dict.
- Collecting static files could be faster if it used parallelization (pull request: #19935 Used a threadpool to parallelise collectstatic)
We discussed those possible improvements at the Django on the Med 🏖️ sprints and I’m hopeful we can make progress.
I built django-manifeststaticfiles-enhanced to attempt to fix all these. The core work is to switch to a lexer for CSS and JS, based on Ned Batchelder’s JsLex that was used in Django previously. It was expanded to cover modern JS and CSS by working with Claude Code to do the grunt work of covering the syntax.
It also switches to using a topological sort to find dependencies, whereas before we used a more brute-force approach of repeated processing until we saw no more changes, which led to more work, particularly on storages that used the network. It also meant we couldn't handle circular dependencies.
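The general idea can be sketched with the standard library's graphlib; this is only an illustration of ordering a made-up dependency map, not the package's actual implementation:

from graphlib import CycleError, TopologicalSorter

# Map each static file to the files it references.
dependencies = {
    "polls/js/main.js": {"polls/js/form_validation.js"},
    "polls/js/form_validation.js": set(),
    "polls/css/styles.css": {"polls/css/base.css"},
    "polls/css/base.css": set(),
}

try:
    # Dependencies come first, so each file is hashed before the files that refer to it.
    order = list(TopologicalSorter(dependencies).static_order())
except CycleError:
    # A circular reference needs dedicated handling (e.g. hashing the cycle as a group).
    order = []

print(order)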
To validate it works, I ran a performance benchmark on 50+ projects: it ran without issues and with similar (often improved) performance. On average, it's about 30% faster.
While those improvements would be welcome, go ahead and try build-free JavaScript and CSS in your Django projects today! Modern browsers make it possible to create great frontend experiences without the complexity.
November 19, 2025 08:13 AM UTC
November 18, 2025
The Python Coding Stack
I Don’t Like Magic • Exploring The Class Attributes That Aren’t Really Class Attributes • [Club]
I don’t like magic. I don’t mean the magic of the Harry Potter kind—that one I’d like if only I could have it. It’s the “magic” that happens behind the scenes when a programming language like Python does things out of sight. You’ll often find things you have to “just learn” along the Python learning journey. “That’s the way things are,” you’re told.
That’s the kind of magic I don’t like. I want to know how things work. So let me take you back to when I first learnt about named tuples—the NamedTuple in the typing module, not the other one—and data classes. They share a similar syntax, and it’s this shared syntax that confused me at first. I found these topics harder to understand because of this.
Their syntax is different from other stuff I had learnt up to that point. And I could not reconcile it with the stuff I knew. That bothered me. It also made me doubt the stuff I already knew. Here’s what I mean. Let’s look at a standard class first:
class Person:
    classification = "Human"

    def __init__(self, name, age, profession):
        self.name = name
        self.age = age
        self.profession = profession

You define a class attribute, .classification, inside the class block, but outside any of the special methods. All instances will share this class attribute. Then you define the .__init__() special method and create three instance attributes: .name, .age, and .profession. Each instance will have its own versions of these instance attributes. If you're not familiar with class attributes and instance attributes, you can read my seven-part series on object-oriented programming: A Magical Tour Through Object-Oriented Programming in Python • Hogwarts School of Codecraft and Algorithmancy
Now, let’s assume you don’t actually need the class attribute and that this class will only store data. It won’t have any additional methods. You decide to use a data class instead:
from dataclasses import dataclass
@dataclass
class Person:
    name: str
    age: int
    profession: str

Or you prefer to use a named tuple, and you reach out for typing.NamedTuple:
from typing import NamedTuple
class Person(NamedTuple):
    name: str
    age: int
    profession: str

The syntax is similar. I'll tell you why I used to find this confusing soon.
Whichever option you choose, you can create an instance using Person("Matthew", 30, "Python Programmer"). And each instance you create will have its own instance attributes .name, .age, and .profession.
But wait a minute! The data class and the named tuple use syntax that’s similar to creating class attributes. You define these just inside the class block and not in an .__init__() method. How come they create instance attributes? “That’s just how they work” is not good enough for me.
These aren’t class attributes. Not yet. There’s no value associated with these identifiers. Therefore, they can’t be class attributes, even though you write them where you’d normally add class attributes in a standard class. However, they can be class attributes if you include a default value:
@dataclass
class Person:
    name: str
    age: int
    profession: str = "Python Programmer"

The .profession attribute now has a string assigned to it. In a data class, this represents the default value. But if this weren't a data class, you'd look at .profession and recognise it as a class attribute. But in a data class, it's not a class attribute, it's an instance attribute, as are .name and .age, which look like…what do they look like, really? They're just type hints. Yes, type hints without any object assigned. Python type hints allow you to do this:
>>> first_name: str

This line is valid in Python. It does not create the variable name. You can confirm this:
>>> first_name
Traceback (most recent call last):
  File "<input>", line 1, in <module>
NameError: name 'first_name' is not defined

Although you cannot just write first_name if the identifier doesn't exist, you can use first_name: str. This creates an annotation which serves as the type hint. Third-party tools now know that when you create the variable first_name and assign it a value, it ought to be a string.
So, let’s go back to the latest version of the Person data class with the default value for one of the attributes:
@dataclass
class Person:
    name: str
    age: int
    profession: str = "Python Programmer"

But let's ignore the @dataclass decorator for now. Indeed, let's remove this decorator:
class Person:
    name: str
    age: int
    profession: str = "Python Programmer"

You define a class with one class attribute, .profession, and three type hints:
name: str
age: int
profession: str
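We can poke at this plain class to see what we actually created (a quick check in the REPL, with the results shown as comments):

print(Person.__annotations__)
# {'name': <class 'str'>, 'age': <class 'int'>, 'profession': <class 'str'>}
print(Person.profession)        # Python Programmer  (a real class attribute)
print(hasattr(Person, "name"))  # False  (only an annotation, no attribute exists)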
How can we convert this information into instance attributes when creating an instance of the class? I won’t try to reverse engineer NamedTuple or data classes here. Instead, I’ll explore my own path to get a sense of what might be happening in those tools.
Let’s start hacking away…
November 18, 2025 10:01 PM UTC
PyCoder’s Weekly
Issue #709: deepcopy(), JIT, REPL Tricks, and More (Nov. 18, 2025)
#709 – NOVEMBER 18, 2025
View in Browser »
Why Python’s deepcopy Can Be So Slow
“Python’s copy.deepcopy() creates a fully independent clone of an object, traversing every nested element of the object graph.” That can be expensive. Learn what it is doing and how you can sometimes avoid the cost.
SAURABH MISRA
A Plan for 5-10%* Faster Free-Threaded JIT by Python 3.16
Just In Time compilation is under active development in the CPython interpreter. This blog post outlines the targets for the next two Python releases.
KEN JIN
Fast Container Builds: 202 - Check out the Deep Dive
This blog explores the causes and consequences of slow container builds, with a focus on understanding how BuildKit’s capabilities support faster container builds. →
DEPOT sponsor
The Python Standard REPL: Try Out Code and Ideas Quickly
The Python REPL gives you instant feedback as you code. Learn to use this powerful tool to type, run, debug, edit, and explore Python interactively.
REAL PYTHON
Python Jobs
Python Video Course Instructor (Anywhere)
Python Tutorial Writer (Anywhere)
Articles & Tutorials
Preparing Data Science Projects for Production
How do you prepare your Python data science projects for production? What are the essential tools and techniques to make your code reproducible, organized, and testable? This week on the show, Khuyen Tran from CodeCut discusses her new book, “Production Ready Data Science.”
REAL PYTHON podcast
Becoming a Core Developer
Throughout your open source journey, you have no doubt been interacting with the core development team of the projects to which you have been contributing. Have you ever wondered how people become core developers of a project?
STEFANIE MOLIN
Modern, Self-Hosted Authentication
Keep your users, your data and your stack with PropelAuth BYO. Easily add Enterprise authentication features like Enterprise SSO, SCIM and session management. Keep your sales team happy and give your CISO peace of mind →
PROPELAUTH sponsor
38 Things Python Developers Should Learn in 2025
Talk Python interviews Peter Wang and Calvin Hendrix-Parker and they discuss loads of things in the Python ecosystem that are worth learning, including free-threaded CPython, MCP, DuckDB, Arrow, and much more.
TALK PYTHON podcast
Trusted Publishing for GitLab Self-Managed and Organizations
The Trusted Publishing system for PyPI is seeing rapid adoption. This post talks about its growth along with the next steps: adding GitLab and handling organizations.
MIKE FIELDER
Decompression Is Up to 30% Faster in CPython 3.15
Zstandard compression got added in Python 3.14, but work is on-going. Python 3.15 is showing performance improvements in both zstd and other compression modules.
EMMA SMITH
__slots__ for Optimizing Classes
Most Python objects store their attributes in __dict__, which is a dictionary. Modules and classes always use __dict__, but not everything does.
TREY HUNNER
Convert Documents Into LLM-Ready Markdown
Get started with Python MarkItDown to turn PDFs, Office files, images, and URLs into clean, LLM-ready Markdown in seconds.
REAL PYTHON
Quiz: Convert Documents Into LLM-Ready Markdown
Practice MarkItDown basics. Convert PDFs, Word documents, Excel documents, and HTML documents to Markdown. Try the quiz.
REAL PYTHON
Convert Scikit-learn Pipelines into SQL Queries with Orbital
Orbital is a new library that converts Scikit-learn pipelines into SQL queries, enabling machine learning model inference directly within SQL databases.
POSIT sponsor
Python Operators and Expressions
Operators let you combine objects to create expressions that perform computations – the core of how Python works.
REAL PYTHON course
A Generator, Duck Typing, and a Branchless Conditional Walk Into a Bar
What’s your favorite line of code? Rodrigo expounds about generators, duck typing, and branchless conditionals.
RODRIGO GIRÃO SERRÃO
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
November 19, 2025
REALPYTHON.COM
DELSU Tech Invasion 3.0
November 19 to November 21, 2025
HAMPLUSTECH.COM
PyData Bristol Meetup
November 20, 2025
MEETUP.COM
PyLadies Dublin
November 20, 2025
PYLADIES.COM
Python Sul 2025
November 21 to November 24, 2025
PYTHON.ORG.BR
Happy Pythoning!
This was PyCoder’s Weekly Issue #709.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
November 18, 2025 07:30 PM UTC
Real Python
Break Out of Loops With Python's break Keyword
In Python, the break statement lets you exit a loop prematurely, transferring control to the code that follows the loop. This tutorial guides you through using break in both for and while loops. You’ll also briefly explore the continue keyword, which complements break by skipping the current loop iteration.
By the end of this video course, you’ll understand that:
- A break in Python is a keyword that lets you exit a loop immediately, stopping further iterations.
- Using break outside of loops doesn't make sense because it's specifically designed to exit loops early.
- The break keyword doesn't exit all loops, only the innermost loop that contains it (see the short example below).
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
November 18, 2025 02:00 PM UTC
Mike Driscoll
Black Friday Python Deals Came Early
Black Friday deals came early this year. You can get 50% off of any of my Python books or courses until the end of November. You can use this coupon code at checkout: BLACKISBACK
The following links already have the discount applied:
Python eBooks
- Python 101
- Python 201: Intermediate Python
- The Python Quiz Book
- Automating Excel with Python
- Python Logging
- Pillow: Image Processing with Python
- Creating GUI Applications with wxPython
- JupyterLab 101
- Creating TUI Applications with Textual and Python
Python Courses
- Python 101 Video Series
- Automating Excel with Python Video series + eBook
- Python Logging Video Course
The post Black Friday Python Deals Came Early appeared first on Mouse Vs Python.
November 18, 2025 01:41 PM UTC
PyCon
Join us in “Trailblazing Python Security” at PyCon US 2026
PyCon US 2026 is coming to Long Beach, California! PyCon US is the premier conference for the Python programming language in North America. Python experts and enthusiasts from around the globe will gather in Long Beach to discuss and learn about the latest developments to the Python programming language and massive ecosystem of Python projects.
Brand new this year are two themed talk tracks: “Trailblazing Python Security” and “Python and the Future of AI”. We want to hear talks from you! The PyCon US Call for Proposals (CFP) for PyCon US 2026 is now open through December 19th, 2025. Don't wait to submit your talks; the earlier you submit, the better.
If your company or organization would like to show support for a more secure Python ecosystem by sponsoring the “Trailblazing Python Security” talk track check out the PyCon US 2026 Sponsor Prospectus or reach out via email to sponsors@python.org. We’ve made three sponsor slots available for the track: one lead sponsor and two co-sponsors, so act fast!
We're also looking for mentors! If you're an experienced speaker and want to help someone with their proposal, the PyCon US Proposal Mentorship Program is for you! We typically get twice as many mentees seeking support as we do volunteer mentors. Sign up to mentor via this form by November 21, 2025.
If you're interested in Python and security: why should you attend PyCon US 2026?
Many Pythonistas use the Open Source software available on the Python Package Index (PyPI). PyCon US is the flagship conference hosted by the Python Software Foundation, the stewards of the Python Package Index. Many brand-new security features are announced and demoed live at PyCon US, such as “PyPI Digital Attestations”, “PyPI Organizations”, and “Trusted Publishers”.
You’ll be among the first Pythonistas to hear about these new features and chat with the developers and maintainers of PyPI.
Open Space about handling vulnerabilities in Python projects
PyCon US always has many opportunities to learn about the latest in Python security. Last year, PyCon US 2025 hosted a “Python Security Mini-Summit” Open Space with speakers discussing the Cyber Resilience Act (CRA), CVE and Open Source, and supply-chain security within the Scientific Python community. Expect even more security content this year!
The conference talk schedule includes many talks about using memory-safe systems programming languages like Rust with Python, authentication with popular Web frameworks, how to handle security vulnerabilities as an Open Source project, and how the “Phantom Dependency” problem affects the Python package ecosystem.
We hope you’ll consider joining us in Long Beach, CA for PyCon US 2026. See you there! 🔥🏕️
November 18, 2025 11:17 AM UTC
Seth Michael Larson
BrotliCFFI has two new maintainers
Quick post announcing that the Python package brotlicffi has two new maintainers: Nathan Goldbaum and Christian Clauss. Thank you both for stepping up to help me with this package.
Both these folks (along with a few others) have shown up and gotten straight to work adding support for Python 3.14, free-threaded Python, and the latest release of Brotli. I've given myself the task of changing the PyPI publishing workflow to allow a group of contributors to make new releases, so I can mostly get out of their way!
I’m grateful that the project lives under the python-hyper organization which is structured in such a way that allows onboarding new contributors quickly after they’ve shown interest in contributing to a project meaningfully.
Thanks for keeping RSS alive! ♥
November 18, 2025 12:00 AM UTC
November 17, 2025
Rodrigo Girão Serrão
Floodfill algorithm in Python
Learn how to implement and use the floodfill algorithm in Python.
What is the floodfill algorithm?
Click the image below to randomly colour the region you click.
Go ahead, try it!
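The demo on this page is built with PyScript and a canvas, but the heart of it is a simple breadth-first fill. Here is a minimal sketch of that idea (my own illustration on a grid of 0s and 1s like the demo's bitmap, not the article's exact code):

from collections import deque

def flood_fill(grid, x, y, new_value):
    """Fill the connected region of equal values that contains (x, y)."""
    old_value = grid[y][x]
    if old_value == new_value:
        return grid
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if not (0 <= cy < len(grid) and 0 <= cx < len(grid[cy])):
            continue  # outside the grid
        if grid[cy][cx] != old_value:
            continue  # a border cell, or already filled
        grid[cy][cx] = new_value
        queue.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])
    return grid

grid = [
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 0],
]
print(flood_fill(grid, 0, 0, 7))
# [[7, 7, 1, 0], [7, 7, 1, 0], [1, 1, 1, 0]]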
(On the original page an interactive PyScript demo runs here, flood-filling the region you click with a random colour.)
November 17, 2025 03:49 PM UTC
Real Python
How to Serve a Website With FastAPI Using HTML and Jinja2
By the end of this guide, you’ll be able to serve dynamic websites from FastAPI endpoints using Jinja2 templates powered by CSS and JavaScript. By leveraging FastAPI’s HTMLResponse, StaticFiles, and Jinja2Templates classes, you’ll use FastAPI like a traditional Python web framework.
You’ll start by returning basic HTML from your endpoints, then add Jinja2 templating for dynamic content, and finally create a complete website with external CSS and JavaScript files to copy hex color codes:
To follow along, you should be comfortable with Python functions and have a basic understanding of HTML and CSS. Experience with FastAPI is helpful but not required.
Get Your Code: Click here to download the free sample code that shows you how to serve a website with FastAPI using HTML and Jinja2.
Take the Quiz: Test your knowledge with our interactive “How to Serve a Website With FastAPI Using HTML and Jinja2” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Serve a Website With FastAPI Using HTML and Jinja2: Review how to build dynamic websites with FastAPI and Jinja2, and serve HTML, CSS, and JS with HTMLResponse and StaticFiles.
Prerequisites
Before you start building your HTML-serving FastAPI application, you’ll need to set up your development environment with the required packages. You’ll install FastAPI along with its standard dependencies, including the ASGI server you need to run your application.
Select your operating system below and install FastAPI with all the standard dependencies inside a virtual environment:
These commands create and activate a virtual environment, then install FastAPI along with Uvicorn as the ASGI server, and additional dependencies that enhance FastAPI’s functionality. The standard option ensures you have everything you need for this tutorial, including Jinja2 for templating.
Step 1: Return Basic HTML Over an API Endpoint
When you take a close look at a FastAPI example application, you commonly encounter functions returning dictionaries, which the framework transparently serializes into JSON responses.
However, FastAPI’s flexibility allows you to serve various custom responses besides that—for example, HTMLResponse to return content as a text/html type, which your browser interprets as a web page.
To explore returning HTML with FastAPI, create a new file called main.py and build your first HTML-returning endpoint:
main.py
from fastapi import FastAPI
from fastapi.responses import HTMLResponse
app = FastAPI()
@app.get("/", response_class=HTMLResponse)
def home():
    html_content = """
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Home</title>
    </head>
    <body>
        <h1>Welcome to FastAPI!</h1>
    </body>
    </html>
    """
    return html_content
The HTMLResponse class tells FastAPI to return your content with the text/html content type instead of the default application/json response. This ensures that browsers interpret your response as HTML rather than plain text.
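Later steps in the tutorial build on this with static assets and templates. In rough outline, that wiring looks like the sketch below, where the directory names, route, and context values are placeholders rather than the tutorial's exact code:

from fastapi import FastAPI, Request
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates

app = FastAPI()
app.mount("/static", StaticFiles(directory="static"), name="static")
templates = Jinja2Templates(directory="templates")

@app.get("/colors")
def colors(request: Request):
    # Renders templates/colors.html with the given template variables.
    return templates.TemplateResponse(
        request=request, name="colors.html", context={"colors": ["#ff6347", "#4682b4"]}
    )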
Before you can visit your home page, you need to start your FastAPI development server to see the HTML response in action:
Read the full article at https://realpython.com/fastapi-jinja2-template/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
November 17, 2025 02:00 PM UTC
Quiz: How to Serve a Website With FastAPI Using HTML and Jinja2
In this quiz, you’ll test your understanding of building dynamic websites with FastAPI and Jinja2 Templates.
By working through this quiz, you’ll revisit how to return HTML with HTMLResponse, serve assets with StaticFiles, render Jinja2 templates with context, and include CSS and JavaScript for interactivity like copying hex color codes.
If you are new to FastAPI, review Get Started With FastAPI. You can also brush up on Python functions and HTML and CSS.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
November 17, 2025 12:00 PM UTC
Python Bytes
#458 I will install Linux on your computer
Topics covered in this episode:
- Possibility of a new website for Django
- aiosqlitepool
- deptry
- browsr
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=s2HlckfeBCs

About the show

Sponsored by us! Support our work through:
- Our courses at Talk Python Training
- The Complete pytest Course
- Patreon Supporters

Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it.

Brian #1: Possibility of a new website for Django
- Current Django site: djangoproject.com
- Adam Hill's in-progress redesign idea: django-homepage.adamghill.com
- Commentary in the "Want to work on a homepage site redesign?" discussion

Michael #2: aiosqlitepool (https://github.com/slaily/aiosqlitepool)
- 🛡️ A resilient, high-performance asynchronous connection pool layer for SQLite, designed for efficient and scalable database operations.
- About 2x better than regular SQLite.
- Pairs with aiosqlite
- aiosqlitepool in three points:
  - Eliminates connection overhead: it avoids repeated database connection setup (syscalls, memory allocation) and teardown (syscalls, deallocation) by reusing long-lived connections.
  - Faster queries via "hot" cache: long-lived connections keep SQLite's in-memory page cache "hot". This serves frequently requested data directly from memory, speeding up repetitive queries and reducing I/O operations.
  - Maximizes concurrent throughput: allows your application to process significantly more database queries per second under heavy load.

Brian #3: deptry (https://deptry.com)
- "deptry is a command line tool to check for issues with dependencies in a Python project, such as unused or missing dependencies. It supports projects using Poetry, pip, PDM, uv, and more generally any project supporting PEP 621 specification."
- "Dependency issues are detected by scanning for imported modules within all Python files in a directory and its subdirectories, and comparing those to the dependencies listed in the project's requirements."
- Note: if you use project.optional-dependencies,

  [project.optional-dependencies]
  plot = ["matplotlib"]
  test = ["pytest"]

  you have to set a config setting to get it to work right:

  [tool.deptry]
  pep621_dev_dependency_groups = ["test", "docs"]

Michael #4: browsr (https://github.com/juftin/browsr)
- browsr 🗂️ is a pleasant file explorer in your terminal. It's a command line TUI (text-based user interface) application that empowers you to browse the contents of local and remote filesystems with your keyboard or mouse.
- You can quickly navigate through directories and peek at files whether they're hosted locally, in GitHub, over SSH, in AWS S3, Google Cloud Storage, or Azure Blob Storage.
- View code files with syntax highlighting, format JSON files, render images, convert data files to navigable datatables, and more.

Extras

Brian:
- Understanding the MICRO
- TDD chapter coming out later today or maybe tomorrow, but it's close.

Michael:
- Peacock is excellent

Joke: I will find you
November 17, 2025 08:00 AM UTC
November 16, 2025
Ned Batchelder
Why your mock breaks later
In Why your mock doesn’t work I explained this rule of mocking:
Mock where the object is used, not where it’s defined.
That blog post explained why that rule was important: often a mock doesn’t work at all if you do it wrong. But in some cases, the mock will work even if you don’t follow this rule, and then it can break much later. Why?
Let’s say you have code like this:
# user.py

import json
from pathlib import Path

def get_user_settings():
    with open(Path("~/settings.json").expanduser()) as f:
        return json.load(f)

def add_two_settings():
    settings = get_user_settings()
    return settings["opt1"] + settings["opt2"]
You write a simple test:
def test_add_two_settings():
    # NOTE: need to create ~/settings.json for this to work:
    # {"opt1": 10, "opt2": 7}
    assert add_two_settings() == 17
As the comment in the test points out, the test will only pass if you create the correct settings.json file in your home directory. This is bad: you don’t want to require finicky environments for your tests to pass.
The thing we want to avoid is opening a real file, so it’s a natural impulse
to mock out open():
# test_user.py

from io import StringIO
from unittest.mock import patch

@patch("builtins.open")
def test_add_two_settings(mock_open):
    mock_open.return_value = StringIO('{"opt1": 10, "opt2": 7}')
    assert add_two_settings() == 17
Nice, the test works without needing to create a file in our home directory!
Much later...
One day your test suite fails with an error like:
...
File ".../site-packages/coverage/python.py", line 55, in get_python_source
source_bytes = read_python_source(try_filename)
File ".../site-packages/coverage/python.py", line 39, in read_python_source
return source.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
TypeError: replace() argument 1 must be str, not bytes
What happened!? Coverage.py code runs during your tests, invoked by the
Python interpreter. The mock in the test changed the builtin open, so
any use of it anywhere during the test is affected. In some cases, coverage.py
needs to read your source code to record the execution properly. When that
happens, coverage.py unknowingly uses the mocked open, and bad things
happen.
When you use a mock, patch it where it’s used, not where it’s defined. In this case, the patch would be:
@patch("myproduct.user.open")
def test_add_two_settings(mock_open):
    ... etc ...
With a mock like this, the coverage.py code would be unaffected.
Keep in mind: it’s not just coverage.py that could trip over this mock. There
could be other libraries used by your code, or you might use open
yourself in another part of your product. Mocking the definition means
anything using the object will be affected. Your intent is to only
mock in one place, so target that place.
Postscript
I decided to add some code to coverage.py to defend against this kind of over-mocking. There is a lot of over-mocking out there, and this problem only shows up in coverage.py with Python 3.14. It’s not happening to many people yet, but it will happen more and more as people start testing with 3.14. I didn’t want to have to answer this question many times, and I didn’t want to force people to fix their mocks.
From a certain perspective, I shouldn’t have to do this. They are in the wrong, not me. But this will reduce the overall friction in the universe. And the fix was really simple:
open = open
This is a top-level statement in my module, so it runs when the module is
imported, long before any tests are run. The assignment to open will
create a global in my module, using the current value of open, the one
found in the builtins. This saves the original open for use in my module
later, isolated from how builtins might be changed later.
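The effect is easy to see in a toy module of your own (a hypothetical example, not coverage.py's actual code):

# safe_reader.py (hypothetical)
open = open  # snapshot the builtin open at import time

def read_first_line(path):
    # Name lookup finds the module global above, not builtins.open, so a later
    # patch("builtins.open") in someone's test doesn't affect this function.
    with open(path) as f:
        return f.readline()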
This is an ad-hoc fix: it only defends one builtin. Mocking other builtins
could still break coverage.py. But open is a common one, and this will
keep things working smoothly for those cases. And there’s precedent: I’ve
already been using a more involved technique to defend
against mocking of the os module for ten years.
Even better!
No blog post about mocking is complete without encouraging a number of other best practices, some of which could get you out of the mocking mess:
- Use autospec=True to make your mocks strictly behave like the original object: see Why your mock still doesn't work.
- Make assertions about how your mock was called to be sure everything is connected up properly.
- Use verified fakes instead of auto-generated mocks: Fast tests for slow services: why you should use verified fakes.
- Separate your code so that computing functions like our add_two_settings don't also do I/O. This makes the functions easier to test in the first place. Take a look at Functional Core, Imperative Shell.
- Dependency injection lets you explicitly pass test-specific objects where they are needed instead of relying on implicit access to a mock (sketched briefly below).
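For the last point, here's a tiny sketch built on this post's example (my illustration, not from the original post): pass the settings source in, and the test no longer needs to patch open() at all.

def add_two_settings(get_settings=get_user_settings):
    settings = get_settings()
    return settings["opt1"] + settings["opt2"]

def test_add_two_settings():
    assert add_two_settings(lambda: {"opt1": 10, "opt2": 7}) == 17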
November 16, 2025 12:55 PM UTC
November 15, 2025
Kay Hayen
Nuitka Release 2.8
This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, “download now”.
This release adds a ton of new features and corrections.
Bug Fixes
Standalone: For the “Python Build Standalone” flavor ensured that debug builds correctly recognize all their specific built-in modules, preventing potential errors. (Fixed in 2.7.2 already.)
Linux: Fixed a crash when attempting to modify the RPATH of statically linked executables (e.g., from imageio-ffmpeg). (Fixed in 2.7.2 already.)
Anaconda: Updated PySide2 support to correctly handle path changes in newer Conda packages and improved path normalization for robustness. (Fixed in 2.7.2 already.)
macOS: Corrected handling of QtWebKit framework resources. Previous special handling was removed as symlinking is now default, which also resolved an issue of file duplication. (Fixed in 2.7.2 already.)
Debugging: Resolved an issue in debug builds where an incorrect assertion was done during the addition of distribution metadata. (Fixed in 2.7.1 already.)
Module: Corrected an issue preventing stubgen from functioning with Python versions earlier than 3.6. (Fixed in 2.7.1 already.)
UI: Prevented Nuitka from crashing when --include-module was used with a built-in module. (Fixed in 2.7.1 already.)
Module: Addressed a compatibility issue where the code mode for the constants blob failed with the C++ fallback. This fallback is utilized on very old GCC versions (e.g., default on CentOS7), which are generally not recommended. (Fixed in 2.7.1 already.)
Standalone: Resolved an assertion error that could occur in certain Python setups due to extension module suffix ordering. The issue involved incorrect calculation of the derived module name when the wrong suffix was applied (e.g., using .so to derive a module name like gdbmmodule instead of just gdbm). This was observed with Python 2 on CentOS7 but could potentially affect other versions with unconventional extension module configurations. (Fixed in 2.7.1 already.)
Python 3.12.0: Corrected the usage of an internal structure identifier that is only available in Python 3.12.1 and later versions. (Fixed in 2.7.1 already.)
Plugins: Prevented crashes in Python setups where importing pkg_resources results in a PermissionError. This typically occurs in broken installations, for instance, where some packages are installed with root privileges. (Fixed in 2.7.1 already.)
macOS: Implemented a workaround for data file names that previously could not be signed within app bundles. The attempt in release 2.7 to sign these files inadvertently caused a regression for cases involving illegal filenames. (Fixed in 2.7.1 already.)
Python 2.6: Addressed an issue where staticmethod objects lacked the __func__ attribute. Nuitka now tracks the original function as a distinct value. (Fixed in 2.7.1 already.)
Corrected behavior for orderedset implementations that lack a union method, ensuring Nuitka does not attempt to use it. (Fixed in 2.7.1 already.)
Python 2.6: Ensured compatibility for setups where the _PyObject_GC_IS_TRACKED macro is unavailable. This macro is now used beyond assertions, necessitating support outside of debug mode. (Fixed in 2.7.1 already.)
Python 2.6: Resolved an issue caused by the absence of sys.version_info.releaselevel by utilizing a numeric index instead and adding a new helper function to access it. (Fixed in 2.7.1 already.)
Module: Corrected the __compiled__.main value to accurately reflect the package in which a module is loaded; this was not the case for Python versions prior to 3.12. (Fixed in 2.7.1 already.)
Plugins: Further improved the dill-compat plugin by preventing assertions related to empty annotations and by removing hard-coded module names for greater flexibility. (Fixed in 2.7.1 already.)
Windows: For onefile mode using DLL mode, ensure all necessary environment variables are correctly set for QtWebEngine. Previously, default Qt paths could point incorrectly near the onefile binary. (Fixed in 2.7.3 already.)
PySide6: Fixed an issue where slots defined in base classes might not be correctly handled, leading to them only working for the first class that used them. (Fixed in 2.7.3 already.)
Plugins: Enhanced Qt binding plugin support by checking for module presence without strictly requiring metadata. This improves compatibility with environments like Homebrew or uv where package metadata might be absent. (Fixed in 2.7.3 already.)
macOS: Ensured the apple target is specified during linking to prevent potential linker warnings about using an unknown target in certain configurations. (Fixed in 2.7.3 already.)
macOS: Disabled the use of static libpython with pyenv installations, as this configuration is currently broken. (Fixed in 2.7.3 already.)
macOS: Improved error handling for the --macos-app-protected-resource option by catching cases where a description is not provided. (Fixed in 2.7.3 already.)
Plugins: Enhanced workarounds for PySide6, now also covering single-shot timer callbacks. (Fixed in 2.7.4 already.)
Plugins: Ensured that the Qt binding module is included when using accelerated mode with Qt bindings. (Fixed in 2.7.4 already.)
macOS: Avoided signing through symlinks and minimized their use to prevent potential issues, especially during code signing of application bundles. (Fixed in 2.7.4 already.)
Windows: Implemented path shortening for paths used in onefile DLL mode to prevent issues with long or Unicode paths. This also benefits module mode. (Fixed in 2.7.4 already.)
UI: The options nanny plugin no longer uses a deprecated option for macOS app bundles, preventing potential warnings or issues. (Fixed in 2.7.4 already.)
Plugins: Ensured the correct macOS target architecture is used. This is particularly useful for PySide2 with universal CPython binaries, to prevent compile time crashes, e.g. when cross-compiling for a different architecture. (Fixed in 2.7.4 already.)
UI: Fixed a crash that occurred on macOS if the ccache download was rejected by the user. (Fixed in 2.7.4 already.)
UI: Improved the warning message related to macOS application icons for better clarity. (Added in 2.7.4 already.)
Standalone: Corrected an issue with QML plugins on macOS when using newer PySide6 versions. (Fixed in 2.7.4 already.)
Python 3.10+: Fixed a memory leak where the matched value in pattern matching constructs was not being released. (Fixed in 2.7.4 already.)
Python3: Fixed an issue where exception exits for larger range objects, which are not optimized away, were not correctly annotated by the compiler. (Fixed in 2.7.4 already.)
Windows: Corrected an issue with the automatic use of icons for PySide6 applications on non-Windows if Windows icon options were used. (Fixed in 2.7.4 already.)
Onefile: When using DLL mode there was a load error for the DLL with MSVC 14.2 or earlier, but older MSVC is to be supported. (Fixed in 2.7.5 already.)
Onefile: Fix, the splash screen was showing in DLL mode twice or more; these extra copies couldn’t be stopped. (Fixed in 2.7.5 already.)
Standalone: Fixed an issue where data files were no longer checked for conflicts with included DLLs. The order of data file and DLL copying was restored, and macOS app signing was made a separate step to remove the order dependency. (Fixed in 2.7.6 already.)
macOS: Corrected our workaround using symlinks for files that cannot be signed when --output-directory was used, as it made incorrect assumptions about the dist folder path. (Fixed in 2.7.6 already.)
UI: Prevented checks on onefile target specifications when not actually compiling in onefile mode, e.g. on macOS with --mode=app. (Fixed in 2.7.6 already.)
UI: Improved error messages for data directory options by including the relevant part in the output. (Fixed in 2.7.6 already.)
Plugins: Suppressed UserWarning messages from the pkg_resources module during compilation. (Fixed in 2.7.6 already.)
Python3.11+: Fixed an issue where descriptors for compiled methods were incorrectly exposed for Python 3.11 and 3.12. (Fixed in 2.7.7 already.)
Plugins: Avoided loading modules when checking for data file existence. This prevents unnecessary module loading and potential crashes in broken installations. (Fixed in 2.7.9 already.)
Plugins: The global_change_function anti-bloat feature now operates on what should be the qualified names (__qualname__) instead of just function names, preventing incorrect replacements of methods with the same name in different classes. (Fixed in 2.7.9 already.)
Onefile: The containing_dir attribute of the __compiled__ object was regressed in DLL mode on Windows, pointing to the temporary DLL directory instead of the directory containing the onefile binary. (Fixed in 2.7.10 already, note that the solution in 2.7.9 had a regression.)
Compatibility: Fixed a crash that occurred when an import attempted to go outside its package boundaries. (Fixed in 2.7.11 already.)
macOS: Ignored a warning from codesign when using self-signed certificates. (Fixed in 2.7.11 already.)
Onefile: Fixed an issue in DLL mode where environment variables from other onefile processes (related to temporary paths and process IDs) were not being ignored, which could lead to conflicts. (Fixed in 2.7.12 already.)
Compatibility: Fixed a potential crash that could occur when processing an empty code body. (Fixed in 2.7.13 already.)
Plugins: Ensured that DLL directories created by plugins could be at the top level when necessary, improving flexibility. (Fixed in 2.7.13 already.)
Onefile: On Windows, corrected an issue in DLL mode where original_argv0 was None; it is now properly set. (Fixed in 2.7.13 already.)
macOS: Avoided a warning that appeared on newer macOS versions. (Fixed in 2.7.13 already.)
macOS: Allowed another DLL to be missing for PySide6 to support more setups. (Fixed in 2.7.13 already.)
Standalone: Corrected the existing import workaround for Python 3.12 that was incorrectly renaming existing modules of matching names into sub-modules of the currently imported module. (Fixed in 2.7.14 already.)
Standalone: On Windows, ensured that the DLL search path correctly uses the proper DLL directory. (Fixed in 2.7.14 already.)
Python 3.5+: Fixed a memory leak where the called object could be leaked in calls with keyword arguments following a star dict argument. (Fixed in 2.7.14 already.)
Python 3.13: Fixed an issue where PyState_FindModule was not working correctly with extension modules due to sub-interpreter changes. (Fixed in 2.7.14 already.)
Onefile: Corrected an issue where the process ID (PID) was not set in a timely manner, which could affect onefile operations. (Fixed in 2.7.14 already.)
Compatibility: Fixed a crash that could occur when a function with both a star-list argument and keyword-only arguments was called without any arguments. (Fixed in 2.7.16 already.)
Standalone: Corrected an issue where distribution names were not checked case-insensitively, which could lead to metadata not being included. (Fixed in 2.7.16 already.)
Linux: Avoid using full zlib with extern declarations but instead only the CRC32 functions we need. Otherwise conflicts with OS headers could occur.
Standalone: Fixed an issue where scanning for standard library dependencies was unnecessarily performed.
Plugins: Made the runtime query code robust against modules that in stdout during import
This affected at least
togagiving some warnings on Windows with mere stdout prints. We now have a marker for the start of our output that we look for and safely ignore them.Windows: Do not attempt to attach to the console when running in DLL mode. For onefile with DLL mode, this was unnecessary as the bootstrap already handles it, and for pure DLL mode, it is not desired.
Onefile: Removed unnecessary parent process monitoring in onefile mode, as there is no child process launched.
Anaconda: Determine version and project name for conda packages more reliably
It seems Anaconda is giving variables in package metadata and often no project name, so we derive it from the conda files and its meta data in those cases.
macOS: Make sure the SSL certificates are found when downloading on macOS, ensuring successful downloads.
Windows: Fixed an issue where console mode
attachwas not working in onefile DLL mode.Scons: Fixed an issue where
pragmawas used with oldergccgcccan give warnings about them. This fixes building on older OSes with the system gcc.Compatibility: Fix, need to avoid using filenames with more than 250 chars for long module names.
For cache files, const files, and C files, we need to make sure, we don’t exceed the 255 char limits per path element that literally every OS has.
Also enhanced the check code for legal paths to cover this, so user options are covered from this errors too.
Moved file hashing to file operations where it makes more sense to allow module names to use hashing to provide a legal filename to refer to themselves.
Compatibility: Fixed an issue where walking included compiled packages through the Nuitka loader could produce incorrect names in some cases.
Windows: Fixed wrong calls made when checking
stderrproperties during launch if it wasNone.Debugging: Fixed an issue where the segfault non-deployment disable itself before doing anything else.
Plugins: Fix, the warning to choose a GUI plugin for
matplotlibwas given withtk-interplugin enabled still, which is of course not appropriate.Distutils: Fix, do not recreate the build folder with a
.gitignorefile.We were re-creating it as soon as we looked at what it would be, now it’s created only when asking for that to happen.
No-GIL: Addressed compile errors for the free-threaded dictionary implementation that were introduced by necessary hot-fixes in the version 2.7.
Compatibility: Fixed handling of generic classes and generic type declarations in Python 3.12.
macOS: Fixed an issue where entitlements were not properly provided for code signing.
Onefile: Fixed delayed shutdown for terminal applications in onefile DLL mode.
Was waiting for non-used child processes, which don’t exist and then the timeout for that operation, which is always happening on CTRL-C or terminal shutdown.
Python3.13: Fix, seems interpreter frames with None code objects exist and need to be handled as well.
Standalone: Fix, need to allow for
setuptoolspackage to be user provided.Windows: Avoided using non-encodable dist and build folder names.
Some paths don’t become short, but still be non-encodable from the file system for tools. In these cases, temporary filenames are used to avoid errors from C compilers and other tools.
Python3.13: Fix, ignore the stdlib cgi module that might be left over from previous installs. The module was removed during development, and if you install a newer Python over an old alpha version of 3.13, Nuitka would crash on it.
macOS: Allowed the lib folder for the Python Build Standalone flavor, improving compatibility.
macOS: Allowed libraries for rpath resolution to be found in all Homebrew folders and not just lib.
Onefile: Need to allow .. in paths to allow outside installation paths.
Package Support
Standalone: Introduced support for the nicegui package. (Added in 2.7.1 already.)
Standalone: Extended support to include xgboost.core on macOS. (Added in 2.7.1 already.)
Standalone: Added needed data files for the ursina package. (Added in 2.7.1 already.)
Standalone: Added support for newer versions of the pydantic package. (Added in 2.7.4 already.)
Standalone: Extended libonnxruntime support to macOS, enabling its use in compiled applications on this platform. (Added in 2.7.4 already.)
Standalone: Added necessary data files for the pygameextra package. (Added in 2.7.4 already.)
Standalone: Included GL backends for the arcade package. (Added in 2.7.4 already.)
Standalone: Added more data directories for the ursina and panda3d packages, improving their out-of-the-box compatibility. (Added in 2.7.4 already.)
Standalone: Added support for the newer skimage package. (Added in 2.7.5 already.)
Standalone: Added support for the PyTaskbar package. (Added in 2.7.6 already.)
macOS: Added tk-inter support for Python 3.13 with official CPython builds, which now use framework files for Tcl/Tk. (Added in 2.7.6 already.)
Standalone: Added support for the paddlex package. (Added in 2.7.6 already.)
Standalone: Added support for the jinxed package, which dynamically loads terminal information. (Added in 2.7.6 already.)
Windows: Added support for the ansicon package by including a missing DLL. (Added in 2.7.6 already.)
macOS: Enhanced configuration for the pypylon package, however, it’s not sufficient. (Added in 2.7.6 already.)
Standalone: Added support for newer numpy versions. (Added in 2.7.7 already.)
Standalone: Added support for the older vtk package. (Added in 2.7.8 already.)
Standalone: Added support for newer certifi versions that use importlib.resources. (Added in 2.7.9 already.)
Standalone: Added support for the reportlab.graphics.barcode module. (Added in 2.7.9 already.)
Standalone: Added support for newer versions of the transformers package. (Added in 2.7.11 already.)
Standalone: Added support for newer versions of the sklearn package. (Added in 2.7.12 already.)
Standalone: Added support for newer versions of the scipy package. (Added in 2.7.12 already.)
Standalone: Added support for older versions of the cv2 package (specifically version 4.4). (Added in 2.7.12 already.)
Standalone: Added initial support for the vllm package. (Added in 2.7.12 already.)
Standalone: Ensured all necessary DLLs for the pygame package are included. (Added in 2.7.12 already.)
Standalone: Added support for newer versions of the zaber_motion package. (Added in 2.7.13 already.)
Standalone: Added missing dependencies for the pymediainfo package. (Added in 2.7.13 already.)
Standalone: Added support for newer versions of the sklearn package by including a missing dependency. (Added in 2.7.13 already.)
Standalone: Added support for newer versions of the toga package. (Added in 2.7.14 already.)
Standalone: Added support for the wordninja-enhanced package. (Added in 2.7.14 already.)
Standalone: Added support for the Fast-SSIM package. (Added in 2.7.14 already.)
Standalone: Added a missing data file for the rfc3987_syntax package. (Added in 2.7.14 already.)
Standalone: Added missing data files for the trimesh package. (Added in 2.7.15 already.)
Standalone: Added support for the gdsfactory, klayout, and kfactory packages. (Added in 2.7.15 already.)
Standalone: Added support for the vllm package. (Added in 2.7.16 already.)
Standalone: Added support for newer versions of the tkinterweb package. (Added in 2.7.15 already.)
Standalone: Added support for newer versions of the cmsis_pack_manager package. (Added in 2.7.15 already.)
Standalone: Added missing data files for the idlelib package. (Added in 2.7.15 already.)
Standalone: Avoid including debug binary on non-Windows for Qt Webkit.
Standalone: Add dependencies for pymediainfo package.
Standalone: Added support for the winpty package.
Standalone: Added support for newer versions of the gi package.
Standalone: Added support for newer versions of the litellm package.
Standalone: Added support for the traits and pyface packages.
Standalone: Added support for newer versions of the transformers package.
Standalone: Added data files for the rasterio package.
Standalone: Added support for the ortools package.
Standalone: Added support for the newer vtk package.
New Features
Python3.14: Added experimental support for Python3.14, not recommended for use yet, as this is very fresh and might be missing a lot of fixes.
Release: Added an extra dependency group for the Nuitka build-backend, intended for use in pyproject.toml and other build-system dependencies. To use it, depend on Nuitka[build-wheel] instead of Nuitka. (Added in 2.7.7 already.)
Release: We also added Nuitka[onefile], Nuitka[standalone], and Nuitka[app] as extra dependency groups. If icon conversions are used, e.g. Nuitka[onefile,icon-conversion] adds the necessary packages for that. If you don’t care about what’s being pulled in, Nuitka[all] can be used; by default Nuitka only comes with the bare minimum needed and will inform about missing packages.
macOS: Added --macos-sign-keyring-filename and --macos-sign-keyring-password to automatically unlock a keyring for use during signing. This is very useful for CI where no UI prompt can be used.
Windows: Detect when input cannot be used due to no console or the console not providing proper standard input, and produce a dialog for entry instead. Shells like cmd.exe execute inputs as commands entered when attaching to them. With this, the user is informed to make the input into the dialog instead. In case of no terminal, this just brings up the dialog for GUI mode.
Plugins: Introduced global_change_function to the anti-bloat engine, allowing function replacements across all sub-modules of a package at once. (Added in 2.7.6 already.)
Reports: For Python 3.13+, the compilation report now includes information on GIL usage. (Added in 2.7.7 already.)
macOS: Added an option to prevent an application from running in multiple instances. (Added in 2.7.7 already.)
AIX: Added support for this OS as well; standalone and module mode now work there too.
Scons: When a C compilation fails due to warnings in --debug mode, recognize that and provide the proper extra options to use if you want to ignore that.
Non-Deployment: Added a non-deployment handler to catch modules that error exit on import while assumed to work perfectly. This will give people an indication that the numpy module is expected to work and that maybe just the newest version is not, and we need to be told about it.
Non-Deployment: Added a non-deployment handler for DistributionNotFound exceptions in the main program, which now points the user to the necessary metadata options.
UI: Made --include-data-files-external the primary option for placing data files alongside the created program. This now works with standalone mode too and is no longer onefile specific; the name should reflect that, and people can now use it more broadly.
Plugins: Added support for multiple warnings of the same kind. The dill-compat plugin needs that, as it supports multiple packages.
Plugins: Added a detector for the dill-compat plugin that detects usages of dill, cloudpickle, and ray.cloudpickle.
Standalone: Added support for including the Visual C++ runtime DLLs on Windows. When MSVC (Visual Studio) is installed, we take the runtime DLLs from its folders. We cannot take the ones from the redist packages installed to system folders for license reasons. Gives a warning when these DLLs would be needed but were not found.
We might want to add an option later to exclude them again, for size purposes, but correctness out of the box is more important for now.
UI: Make sure the distribution name is correct for --include-distribution-metadata option values.
Plugins: Added support for configuring re-compilation of extension modules from their source code.
When we have both Python code and an extension module, we only had a global option available on the command line.
This adds --recompile-extension-modules for more fine-grained choices, as it allows specifying names and patterns.
For zmq, we need to enforce that it is never compiled, as it checks at runtime whether it was compiled with Cython, so re-compilation is never possible.
Reports: Include environment flags for the C compiler and linker picked up for the compilation. Sometimes these cause compilation errors, and this will reveal their presence.
Optimization
Enhanced detection of raise statements that use compile-time constant values which are not actual exception instances. This improvement prevents Nuitka from crashing during code generation when encountering syntactically valid but semantically incorrect code, such as raise NotImplemented. While such code is erroneous, it should not cause a compiler crash. (Added in 2.7.1 already.)
With unknown locals dictionary variables, trust very hard values there too.
With this, using hard import names also optimizes inside of classes. This makes gcloud metadata work, which previously wasn’t resolved in their code.
macOS: Enhanced PySide2 support by removing the general requirement for onefile mode. Onefile mode is now only enforced for QtWebEngine due to its specific stability issues when not bundled this way. (Added in 2.7.4 already.)
Scons: Added support for C23 embedding of the constants blob with ClangCL, avoiding the use of resources. Since the onefile bootstrap does not yet honor this for its payload, this feature is not yet complete but could help with size limitations in the future.
Plugins: Overhauled the UPX plugin.
Use better compression than before, and hint the user at disabling onefile compression where applicable to avoid double compression. Output warnings for files that are not considered compressible. Check for the upx binary sooner.
Scons: Avoid compiling hacl code for macOS where it’s not needed.
Anti-Bloat
Improved handling of the astropy package by implementing global replacements instead of per-module ones. Similar global handling has also been applied to IPython to reduce overhead. (Added in 2.7.1 already.)
Avoid docutils usage in the markdown2 package. (Added in 2.7.1 already.)
Reduced compiled size by avoiding the use of docutils within the markdown2 package. (Added in 2.7.1 already.)
Avoid including the testing framework from the langsmith package. (Added in 2.7.6 already.)
Avoid including setuptools from jax.version. (Added in 2.7.6 already.)
Avoid including unittest from the reportlab package. (Added in 2.7.6 already.)
Avoid including IPython for the keras package using a more global approach. (Added in 2.7.11 already.)
Avoid including the triton package when compiling transformers. (Added in 2.7.11 already.)
Avoid a bloat warning for an optional import in the seaborn package. (Added in 2.7.13 already.)
Avoid compiling generated google.protobuf.*_pb2 files. (Added in 2.7.7 already.)
Avoid including triton and setuptools when using the xformers package. (Added in 2.7.16 already.)
Refined dask support to not remove pandas.testing when pytest usage is allowed. (Added in 2.7.16 already.)
Avoid compiling the tensorflow module that is very slow and contains generated code.
Avoid using setuptools in the cupy package.
Avoid a false bloat warning in the seadoc package.
Avoid using dask in the sklearn package.
Avoid using cupy.testing in the cupy package.
Avoid using IPython in the roboflow package.
Avoid including ray for the vllm package.
Avoid using dill in the torch package.
Organizational
UI: Remove obsolete options to control the compilation mode from help output. We are keeping them only to not break existing workflows, but --mode=... should be used now, and these options will start triggering warnings soon.
Python3.13.4: Reject broken CPython official release for Windows.
The link library included is not the one needed for the GIL, and as such it breaks Nuitka heavily and must be errored out on. All smaller or larger micro versions work, but this one does not.
Release: Do not use Nuitka 2.7.9, as it broke data file access via __file__ in onefile mode on Windows. This is a brown paper bag release, with 2.7.10 containing only the fix for that. Sorry for the inconvenience.
Release: Ensured proper handling of newer setuptools versions during Nuitka installation. (Fixed in 2.7.4 already.)
UI: Sort --list-distribution-metadata output and remove duplicates. (Changed in 2.7.8 already.)
Visual Code: Added a Python 2.6 configuration for Win32 to aid in comparisons and legacy testing.
UI: Now lists available Qt plugin families if --include-qt-plugin cannot find one.
UI: Warn about compiling a file named __main__.py, which should be avoided; instead you should specify the package directory in that case.
UI: Make it an error to compile a file named __init__.py for standalone mode.
Debugging: The --edit option now correctly finds files even when using long, non-shortened temporary file paths.
Debugging: The pyside6 plugin now enforces --no-debug-immortal-assumptions when --debug is on, because PySide6 violates these, and we don’t need Nuitka to check for that then, as it will abort when it finds them.
Quality: Avoid writing auto-formatted files with the same contents.
That avoids stirring up tools that listen for changes. For example, the Nuitka website auto-builder otherwise rebuilt each release post on every docs update.
Quality: Use the latest version of deepdiff.
Quality: Added autoformat for JSON files.
Release: The man pages were using outdated options and had no example for standalone or app modes. Also the actual options were no longer included.
GitHub: Use the --mode options in the issue template as well.
GitHub: Enhanced the wording of the bug report template to give more direction and more space for excellent reports to be made.
GitHub: The bug report template now requests the output of our package metadata listing tool, as it provides more insight into how Nuitka perceives the environment.
Debugging: Re-enabled important warnings for Clang, which had gone unnoticed for a long time and prevented a few things from being recognized.
Debugging: Support arbitrary debuggers through --debugger-choice.
Support arbitrary debuggers for use in the --debugger mode; if you specify all of their command line, you can do anything there. Also added a predefined valgrind-memcheck mode for the memory checker tool of Valgrind to be used.
UI: Added rich as a progress bar that can be used. Since it’s available via pip, it can likely be found and requires no inline copy. Added colors and similar behavior for tqdm as well.
UI: Remove obsolete warning for Linux with the upx plugin. We haven’t used appimage for a while now, so its constraints no longer apply.
UI: Add warnings for module-specific options too. The logic to not warn on GitHub Actions was inverted; this restores warnings for normal users.
UI: Output the module name in question for options-nanny plugin and parameter warnings.
UI: When a forbidden import comes from an implicit import, report it properly. Sometimes .pyi files from extension modules cause an import, but it was not clear which one; now it will indicate the module causing it.
UI: Clearer error message in case a Python for Scons was not found.
Actions: Cover debug mode compilation at least once.
Quality: Resolve paths from all OSes in --edit. Sometimes I want to look at a file on a different OS, and there is no need to enforce being on the same one for path resolution to work.
Actions: Updated to a newer Ubuntu version for testing, as the old one could no longer get clang-format installed.
Debugging: Allow for C stack output in signal handlers; this is most useful with the non-deployment handler that catches them, to know where they came from more precisely.
UI: Show no-GIL in output of Python flavor in compilation if relevant.
Tests
Removed Azure CI configuration, as testing has been fully migrated to GitHub Actions. (Changed in 2.7.9 already.)
Improved test robustness against short paths for package-containing directories. (Added in 2.7.4 already.)
Prevented test failures caused by rejected download prompts during test execution, making CI more stable. (Added in 2.7.4 already.)
Refactored common testing code to avoid using doctests, preventing warnings in specific standalone mode test scenarios related to reference counting. (Added in 2.7.4 already.)
Tests: Cover the memory leaking call re-formulation with a reference count test.
Cleanups
Plugins: Improved pkg_resources integration by using the __loader__ attribute of the registering module for loader type registration, avoiding modification of the global builtins dictionary. (Fixed in 2.7.2 already.)
Improved the logging mechanism for module search scans. It is now possible to enable tracing for individual locateModule calls, significantly enhancing readability and aiding debugging efforts.
Scons: Refactored architecture-specific options into dedicated functions to improve code clarity.
Spelling: Various spelling and wording cleanups.
Avoid using #ifdef in C code templates, and let’s just avoid it generally.
Added missing slot function names to the ignored word list.
Renamed variables related to slots to be more verbose and properly spelled as a result, as that makes for better understanding of their use anyway.
Scons: Specify versions supported for Scons by excluding the ones that are not, rather than manually maintaining a list. This adds automatic support for Python 3.14.
Plugins: Removed a useless call to intern, as it did not have the effect it was thought to have.
Attach copyright during code generation for code specializations.
This also enhances the formatting for almost all files by making leading and trailing new lines more consistent.
One C file turns out unused and was removed as a left over from a previous refactoring.
Summary
This release was supposed to focus on scalability, but that didn’t happen again due to a variety of important issues coming up, as well as downtime caused by serious private difficulties after a planned surgery. However, the upcoming release will finally have it.
The onefile DLL mode as used on Windows has driven a lot of need for corrections, some of which are only in the final release, and this is probably the first time it should be usable for everything.
For compatibility, working with the popular (yet not yet recommended) UV-Python, Windows UI fixes for temporary onefile, macOS improvements, as well as improved Android support are excellent.
The next release of Nuitka, however, will have to focus on scalability and maintenance only. But as usual, it is not certain that this can happen.
November 15, 2025 01:52 PM UTC
November 14, 2025
Real Python
The Real Python Podcast – Episode #274: Preparing Data Science Projects for Production
How do you prepare your Python data science projects for production? What are the essential tools and techniques to make your code reproducible, organized, and testable? This week on the show, Khuyen Tran from CodeCut discusses her new book, "Production Ready Data Science."
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
November 14, 2025 12:00 PM UTC
EuroPython Society
Recognising Michael Foord as an Honorary EuroPython Society Fellow
Hi everyone. Today, we are honoured to announce a very special recognition.
The EuroPython Society has posthumously elected Michael Foord (aka voidspace) as an Honorary EuroPython Society Fellow.
Michael Foord (1974–2025)
Michael was a long-time and deeply influential member of the Python community. He began using Python in 2002, became a Python core developer, and left a lasting mark on the language through his work on unittest and the creation of the mock library. He also started the tradition of the Python Language Summits at PyCon US, and he consistently supported and connected the Python community across Europe and beyond.
However, his legacy extends far beyond code. Many of us first met Michael through his writing and tools, but what stayed with people was the example he set through his contributions, and how he showed up for others. He answered questions with patience, welcomed newcomers, and cared about doing the right thing in small, everyday ways. He made space for people to learn. He helped the Python community in Europe grow stronger and more connected. He made our community feel like a community.
His impact was celebrated widely across the community, with many tributes reflecting his kindness, humour, and dedication:
At EuroPython 2025, we held a memorial and kept a seat for him in the Forum Hall:
A lasting tribute
EuroPython Society Fellows are people whose work and care move our mission forward. By naming Michael an Honorary Fellow, we acknowledge his technical contributions and also the kindness and curiosity that defined his presence among us. We are grateful for the example he set, and we miss him.
Our thoughts and thanks are with Michael's friends, collaborators, and family. His work lives on in our tools. His spirit lives on in how we treat each other.
With gratitude,
Your friends at EuroPython Society
November 14, 2025 09:00 AM UTC
November 13, 2025
Paolo Melchiorre
How to use UUIDv7 in Python, Django and PostgreSQL
Learn how to use UUIDv7 today with stable releases of Python 3.14, Django 5.2 and PostgreSQL 18. A step by step guide showing how to generate UUIDv7 in Python, store them in Django models, use PostgreSQL native functions and build time ordered primary keys without writing SQL.
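As a taste of what the guide covers, here is a minimal sketch assuming Python 3.14's new uuid.uuid7() function:

import uuid

# uuid.uuid7() is new in Python 3.14: time-ordered UUIDs per RFC 9562.
u = uuid.uuid7()
print(u)          # the leading bits encode a millisecond timestamp
print(u.version)  # 7

In a Django model, the same callable can presumably be passed as the default of a UUIDField to get time-ordered primary keys, which is the approach the guide walks through step by step.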
November 13, 2025 11:00 PM UTC
Python Engineering at Microsoft
Python in Visual Studio Code – November 2025 Release
We’re excited to announce that the November 2025 release of the Python extension for Visual Studio Code is now available!
This release includes the following announcements:
- Add Copilot Hover Summaries as docstring
- Localized Copilot Hover Summaries
- Convert wildcard imports Code Action
- Debugger support for multiple interpreters via the Python Environments Extension
If you’re interested, you can check the full list of improvements in our changelogs for the Python and Pylance extensions.
Add Copilot Hover Summaries as docstring
You can now add your AI-generated documentation directly into your code as a docstring using the new Add as docstring command in Copilot Hover Summaries. When you generate a summary for a function or class, navigate to the symbol definition and hover over it to access the Add as docstring command, which inserts the summary below your cursor formatted as a proper docstring.
This streamlines the process of documenting your code, allowing you to quickly enhance readability and maintainability without retyping.

Localized Copilot Hover Summaries
GitHub Copilot Hover Summaries inside Pylance now respect your display language within VS Code. When you invoke an AI-generated summary, you’ll get strings in the language you’ve set for your editor, making it easier to understand the generated documentation.

Convert wildcard imports into Code Action
Wildcard imports (from module import *) are often discouraged in Python because they can clutter your namespace and make it unclear where names come from, reducing code clarity and maintainability.
Pylance now helps you clean up modules that still rely on from module import * via a new Code Action. It replaces the wildcard with the explicit symbols, preserving aliases and keeping the import to a single statement. To try it out, you can click on the line with the wildcard import and press Ctrl + . (or Cmd + . on macOS) to select the Convert to explicit imports Code Action.
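Roughly, the Code Action rewrites something like the first import below into the second; the exact symbols kept depend on what your code actually uses:

# Before: the wildcard makes it unclear where names come from.
from math import *

# After "Convert to explicit imports": only the names in use remain.
from math import pi, sqrt

print(sqrt(2) * pi)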

Debugger support for multiple interpreters via the Python Environments Extension
The Python Debugger extension now leverages the APIs from the Python Environments Extension (vscode-python-debugger#849). When enabled, the debugger can recognize and use different interpreters for each project within a workspace. If you have multiple folders configured as projects—each with its own interpreter – the debugger will now respect these selections and use the interpreter shown in the status bar when debugging.
To enable this functionality, set “python.useEnvironmentsExtension”: true in your user settings. The new API integration is only active when this setting is turned on.
Please report any issues you encounter to the Python Debugger repository.
Other Changes and Enhancements
We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include:
- Resolve unexpected blocking during PowerShell command activation (vscode-python-environments#952)
- The Python Environments Extension now respects the existing python.poetryPath user setting to specify which Poetry executable to use (vscode-python-environments#918)
- The Python Environments Extension now detects both requirements.txt and dev-requirements.txt files when creating a new virtual environment for automatic dependency installation (vscode-python-environments#506)
We would also like to extend special thanks to this month’s contributors:
- @iBug: Fixed Python REPL cursor drifting in vscode-python#25521
Try out these new improvements by downloading the Python extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.
The post Python in Visual Studio Code – November 2025 Release appeared first on Microsoft for Python Developers Blog.
November 13, 2025 06:41 PM UTC
November 12, 2025
Python Software Foundation
Python is for everyone: Join in the PSF year-end fundraiser & membership drive!
The Python Software Foundation (PSF) is the charitable organization behind Python, dedicated to advancing, supporting, and protecting the Python programming language and the community that sustains it. That mission and cause are more than just words we believe in. Our tiny but mighty team works hard to deliver the projects and services that allow Python to be the thriving, independent, community-driven language it is today. Some of what the PSF does includes producing PyCon US, hosting the Python Package Index (PyPI), supporting 5 Developers-in-Residence, maintaining critical community infrastructure, and more.
Python is for teaching, learning, playing, researching, exploring, creating, working– the list goes on and on and on! Support this year's fundraiser with your donations and memberships to help the PSF, the Python community, and the language stay strong and sustainable. Because Python is for everyone, thanks to you.
There are two direct ways to join through donate.python.org:
- Donate directly to the PSF! Your donation is a direct way to support and power the future of the Python programming language and community you love. Every donation makes a difference, and we work hard to make a little go a long way.
- Become a PSF Supporting Member! When you sign up as a Supporting Member of the PSF, you become a part of the PSF, are eligible to vote in PSF elections, and help us sustain our mission with your annual support. You can sign up as a Supporting Member at the usual annual rate ($99 USD), or you can take advantage of our sliding scale option (starting at $25 USD)!
>>> Donate or Become a Member Today! <<<
If you already donated and/or you’re already a member, you can:
- Share the fundraiser with your regional and project-based communities: Share this blog post in your Python-related Discords, Slacks, social media accounts- wherever your Python community is! Keep an eye on our social media accounts to see the latest stories and news for the campaign.
- Share your Python story with a call to action: We invite you to share your personal Python, PyCon, or PSF story. What impact has it made in your life, in your community, in your career? Share your story in a blog post or on your social media platform of choice and add a link to donate.python.org.
- Ask your employer to sponsor: If your company is using Python to build its products and services, check to see if they already sponsor the PSF on our Sponsors page. If not, reach out to your organization's internal decision-makers and impress on them just how important it is for us to power the future of Python together, and send them our sponsor prospectus.
Your donations and support:
- Keep Python thriving
- Support CPython and PyPI progress
- Increase security across the Python ecosystem
- Bring the global Python community together
- Make our community more diverse and robust every year
Highlights from 2025:
- Producing another wonderful PyCon US: We welcomed 2,225 attendees for PyCon US 2025– 1,404 of whom were newcomers– at the David L. Lawrence Convention Center in beautiful downtown Pittsburgh. PyCon US 2025 was packed with 9 days of content, education, and networking for the Python community, including 6 Keynote Sessions, 91 Talks, including the Charlas Spanish track, 24 Tutorials, 20 Posters, 30+ Sprint Projects, 146 Open Spaces, and 60 Booths!
- Continuing to enhance Python and PyPI’s security through Developers-in-Residence: The PSF’s PyPI Safety and Security Engineer, Mike Fiedler, has implemented new safeguards, including automation to detect expiring email domains and prevent impersonation attacks, as well as guidance for maintainers to use more secure authentication methods like WebAuthn and Trusted Publishers. The PSF’s Security Developer-in-Residence, Seth Larson, continues to lead efforts to strengthen Python’s security and transparency. His work on PEP 770 introduces standardized Software Bill-of-Materials (SBOMs) within Python packages, improving visibility into dependencies for stronger supply chain security. A new white paper co-authored with Alpha-Omega outlines how these improvements enhance trust and measurability across the ecosystem.
- Adoption of pypistats.org: The PSF infrastructure team has officially adopted the operation of pypistats.org, which had been run by volunteer Christopher Flynn for over six years (thank you, Christopher!). The PSF’s Infrastructure Team now handles the service’s infrastructure, costs, and domain registration– and the service itself remains open source and community-maintained.
- Advancing PyPI Organizations: The rollout of PyPI Organizations is now well underway, marking a major milestone in improving project management and collaboration across the Python ecosystem. With new Terms of Service finalized and supporting tools in place, the PSF has cleared its backlog of requests and approved thousands of organizations—including 2,409 Community and 4,979 Company organizations as of today. Hundreds of these organizations have already begun adding members, transferring projects, and subscribing to the new Company tier, generating sustainable support for the PSF. We’re excited to see how teams are using these new features to better organize and maintain their projects on PyPI.
- Empowering the Python community through Fiscal Sponsorship: We are proud to continue supporting our 20 fiscal sponsoree organizations with their initiatives and events all year round. The PSF provides 501(c)(3) tax-exempt status to fiscal sponsorees such as PyLadies and Pallets, and provides back office support so they can focus on their missions. Consider donating to your favorite PSF Fiscal Sponsoree and check out our Fiscal Sponsorees page to learn more about what each of these awesome organizations is all about!
- Serving our community with grants: The PSF Grants Program awarded approximately $340K to 86 grantees around the world; supporting local conferences, workshops, and community initiatives that keep Python growing and accessible to all. While we had to make the difficult decision to pause the program early to ensure financial sustainability, we would love to reopen it as soon as possible. Your participation in this year’s fundraiser fuels that effort!
- Honoring community leaders: The PSF honored three leaders with Distinguished Service Awards this year. Ewa Jodlowska helped transform the PSF into a professional, globally supportive organization. Thomas Wouters has contributed decades of leadership, guidance, and institutional knowledge. Van Lindberg provided essential legal expertise that guided the PSF through growth and governance. Their dedication has left a lasting impact on the PSF, Python, and its community. The PSF was also thrilled to recognize Katie McLaughlin, Sarah Kuchinsky, and Rodrigo Girão Serrão with Community Service Awards (CSA) for their outstanding contributions to the Python community. Their dedication, creativity, and generosity embody the spirit of Python and strengthen our global community. We recognized Jay Miller with a CSA for his work to improve diversity, inclusion, and equity in the global Python community through founding and sustaining Black Python Devs. We also honored Matt Lebrun and Micaela Reyes with CSA's for their efforts to grow and support the Python community in the Philippines through conferences, meetups, and volunteer programs.
- Finding strength in the Python community: When the PSF shared the news about turning down a NSF grant, the outpouring of support from the Python community was nothing short of incredible. In just one day, you helped raise over $60K and welcomed 125 new Supporting Members- in the week after, that number jumped to $150K+ and 270+ new Supporting Members! A community-led matching campaign and countless messages of support, solidarity, and encouragement reminded us that while some choices are tough, we never face them alone. The PSF Board & Staff are deeply moved and energized by your words, actions, and continued belief in our shared mission. This moment has set the stage for a record-breaking end-of-year fundraiser, and we are so incredibly grateful to be in community with each of you.
November 12, 2025 05:03 PM UTC
Real Python
The Python Standard REPL: Try Out Code and Ideas Quickly
The Python standard REPL (Read-Eval-Print Loop) lets you run code interactively, test ideas, and get instant feedback. You start it by running the python command, which opens an interactive shell included in every Python installation.
In this tutorial, you’ll learn how to use the Python REPL to execute code, edit and navigate code history, introspect objects, and customize the REPL for a smoother coding workflow.
By the end of this tutorial, you’ll understand that:
- You can enter and run simple or compound statements in a REPL session.
- The implicit _ variable stores the result of the last evaluated expression and can be reused in later expressions.
- You can reload modules dynamically with importlib.reload() to test updates without restarting the REPL.
- The modern Python REPL supports auto-indentation, history navigation, syntax highlighting, quick commands, and autocompletion, which improves your user experience.
- You can customize the REPL with a startup file, color themes, and third-party libraries like Rich for a better experience.
With these skills, you can move beyond just running short code snippets and start using the Python REPL as a flexible environment for testing, debugging, and exploring new ideas.
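For example, here's a short session showing the implicit _ variable and importlib.reload() mentioned above (mymod stands in for a hypothetical module of your own):

>>> 40 + 2
42
>>> _ * 2
84
>>> import importlib
>>> import mymod
>>> importlib.reload(mymod)  # picks up edits to mymod.py without restarting
<module 'mymod' from '.../mymod.py'>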
Get Your Code: Click here to download the free sample code that you’ll use to explore the capabilities of Python’s standard REPL.
Take the Quiz: Test your knowledge with our interactive “The Python Standard REPL: Try Out Code and Ideas Quickly” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
The Python Standard REPL: Try Out Code and Ideas Quickly
Test your understanding of the Python standard REPL. The Python REPL allows you to run Python code interactively, which is useful for testing new ideas, exploring libraries, refactoring and debugging code, and trying out examples.
Getting to Know the Python Standard REPL
In computer programming, you’ll find two kinds of programming languages: compiled and interpreted languages. Compiled languages like C and C++ have an associated compiler program that converts the language’s code into machine code.
This machine code is typically saved in an executable file. Once you have an executable, you can run your program on any compatible computer system without needing the compiler or the source code.
In contrast, interpreted languages like Python need an interpreter program. This means that you need to have a Python interpreter installed on your computer to run Python code. Some may consider this characteristic a drawback because it can make your code distribution process much more difficult.
However, in Python, having an interpreter offers one significant advantage that comes in handy during your development and testing process. The Python interpreter allows for what’s known as an interactive Read-Eval-Print Loop (REPL), or shell, which reads a piece of code, evaluates it, and then prints the result to the console in a loop.
The Python REPL is a built-in interactive coding playground that you can start by typing python in your terminal. Once in a REPL session, you can run Python code:
>>> "Python!" * 3
'Python!Python!Python!'
>>> 40 + 2
42
In the REPL, you can use Python as a calculator, but also try any Python code you can think of, and much more! Jump to starting and terminating REPL interactive sessions if you want to get your hands dirty right away, or keep reading to gather more background context first.
Note: In this tutorial, you’ll learn about the CPython standard REPL, which is available in all the installers of this Python distribution. If you don’t have CPython yet, then check out How to Install Python on Your System: A Guide for detailed instructions.
The standard REPL has changed significantly since Python 3.13 was released. Several limitations from earlier versions have been lifted. Throughout this tutorial, version differences are indicated when appropriate.
To dive deeper into the new REPL features, check out these resources:
The Python interpreter can execute Python code in two modes:
- Script, or program
- Interactive, or REPL
In script mode, you use the interpreter to run a source file—typically a .py file—as an executable program. In this case, Python loads the file’s content and runs the code line by line, following the script or program’s execution flow.
Alternatively, interactive mode is when you launch the interpreter using the python command and use it as a platform to run code that you type in directly.
In this tutorial, you’ll learn how to use the Python standard REPL to run code interactively, which allows you to try ideas and test concepts when using and learning Python. Are you ready to take a closer look at the Python REPL? Keep reading!
What Is Python’s Interactive Shell or REPL?
When you run the Python interpreter in interactive mode, you open an interactive shell, also known as an interactive session. In this shell, your keyboard is the input source, and your screen is the output destination.
Note: In this tutorial, the terms interactive shell, interactive session, interpreter session, and REPL session are used interchangeably.
Here’s how the REPL works: it takes input consisting of Python code, which the interpreter parses and evaluates. Next, the interpreter displays the result on your screen, and the process starts again as a loop.
Read the full article at https://realpython.com/python-repl/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
November 12, 2025 02:00 PM UTC
Peter Bengtsson
Using AI to rewrite blog post comments
Using AI to correct and edit blog post comments as part of the moderation process.
November 12, 2025 12:42 PM UTC
Python Morsels
Unnecessary parentheses in Python
Python's ability to use parentheses for grouping can often confuse new Python users into over-using parentheses in ways that they shouldn't be used.
Table of contents
- Parentheses can be used for grouping
- Python's if statements don't use parentheses
- Parentheses can go anywhere
- Parentheses for wrapping lines
- Parentheses that make statements look like functions
- Parentheses can go in lots of places
- Use parentheses sometimes
- Consider readability when adding or removing parentheses
Parentheses can be used for grouping
Parentheses are used for 3 things in Python: calling callables, creating empty tuples, and grouping.
Functions, classes, and other callable objects can be called with parentheses:
>>> print("I'm calling a function")
I'm calling a function
Empty tuples can be created with parentheses:
>>> empty = ()
Lastly, parentheses can be used for grouping:
>>> 3 * (4 + 7)
33
Sometimes parentheses are necessary to convey the order of execution for an expression.
For example, 3 * (4 + 7) is different than 3 * 4 + 7:
>>> 3 * (4 + 7)
33
>>> 3 * 4 + 7
19
Those parentheses around 4 + 7 are for grouping that sub-expression, which changes the meaning of the larger expression.
All confusing and unnecessary uses of parentheses are caused by this third use: grouping parentheses.
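Here's a quick illustration of grouping parentheses that add nothing (my example, not from the sections below):

>>> price, quantity = 3, 4
>>> total = (price * quantity)  # the parentheses change nothing here
>>> total = price * quantity    # same meaning, less noise
>>> total
12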
Python's if statements don't use parentheses
In JavaScript if statements look …
Read the full article: https://www.pythonmorsels.com/unnecessary-parentheses/
November 12, 2025 03:30 AM UTC
Seth Michael Larson
Blogrolls are the Best(rolls)
Happy 6-year blogiversary to me! 🎉 To celebrate I want to talk about other peoples’ blogs, more specifically the magic of “blogrolls”. Blogrolls are “lists of other sites that you read, are a follower of, or recommend”. Any blog can host a blogroll, or sometimes websites can be one big blogroll.
I’ve hosted a blogroll on my own blog since 2023 and encourage other bloggers to do so. My own blogroll is generated from the list of RSS feeds I subscribe to and articles that I “favorite” within my RSS reader. If you want to be particularly fancy you can add an RSS feed (example) to your blogroll that provides readers a method to “subscribe” for future blogroll updates.
Blogrolls are like catnip for me: I cannot resist opening and Ctrl-clicking every link until I can’t see my tabs anymore. The feeling is akin to the first deep breath of air before starting a hike: there’s a rush of new information, topics, and potential new blogs to follow.
Blogrolls can bridge the “effort chasm” I frequently hear as an issue when I recommend folks try an RSS feed reader. We’re not used to empty feeds anymore; self-curating blogs until you receive multiple articles per day takes time and effort. Blogrolls can help here, especially ones that publish using the importable OPML format.
You can instantly populate your feed reader app with hundreds of feeds from blogs that are likely relevant to you. Simply create an account on a feed reader, import the blogroll OPML document from a blogger you enjoy, and watch the articles “roll” in. Blogrolls are almost like Bluesky “Starter Packs” in this way!
Hopefully this has convinced you to either curate your own blogroll or to start looking for (or asking for!) blogrolls from your favorite writers on the Web. Share your favorite blogroll with me on email or social media. Title inspired by “Hexagons are the Best-agons”.
Thanks for keeping RSS alive! ♥
November 12, 2025 12:00 AM UTC
November 11, 2025
Ahmed Bouchefra
Let’s be honest. There’s a huge gap between writing code that works and writing code that’s actually good. It’s the number one thing that separates a junior developer from a senior, and it’s something a surprising number of us never really learn.
If you’re serious about your craft, you’ve probably felt this. You build something, it functions, but deep down you know it’s brittle. You’re afraid to touch it a year from now.
Today, we’re going to bridge that gap. I’m going to walk you through eight design principles that are the bedrock of professional, production-level code. This isn’t about fancy algorithms; it’s about a mindset. A way of thinking that prepares your code for the future.
And hey, if you want a cheat sheet with all these principles plus the code examples I’m referencing, you can get it for free. Just sign up for my newsletter from the link in the description, and I’ll send it right over.
Ready? Let’s dive in.
1. Cohesion & Single Responsibility
This sounds academic, but it’s simple: every piece of code should have one job, and one reason to change.
High cohesion means you group related things together. A function does one thing. A class has one core responsibility. A module contains related classes.
Think about a UserManager class. A junior dev might cram everything in there: validating user input, saving the user to the database, sending a welcome email, and logging the activity. At first glance, it looks fine. But what happens when you want to change your database? Or swap your email service? You have to rip apart this massive, god-like class. It’s a nightmare.
The senior approach? Break it up. You’d have:
- An EmailValidator class.
- A UserRepository class (just for database stuff).
- An EmailService class.
- A UserActivityLogger class.
Then, your main UserService class delegates the work to these other, specialized classes. Yes, it’s more files. It looks like overkill for a small project. I get it. But this is systems-level thinking. You’re anticipating future changes and making them easy. You can now swap out the database logic or the email provider without touching the core user service. That’s powerful.
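Here's a minimal sketch of what that delegation can look like in Python. The class names follow the hypothetical example above; none of this is from a real framework:

class EmailValidator:
    def is_valid(self, email: str) -> bool:
        # One job: decide whether an email address looks acceptable.
        return "@" in email and "." in email.split("@")[-1]

class UserRepository:
    def save(self, email: str) -> None:
        # One job: persistence. Swapping the database only touches this class.
        print(f"Saving {email} to the database")

class EmailService:
    def send_welcome(self, email: str) -> None:
        # One job: outbound email. Swapping providers only touches this class.
        print(f"Sending welcome email to {email}")

class UserService:
    # The coordinator delegates to the specialized collaborators.
    def __init__(self, validator, repository, mailer):
        self.validator = validator
        self.repository = repository
        self.mailer = mailer

    def register(self, email: str) -> None:
        if not self.validator.is_valid(email):
            raise ValueError("Invalid email address")
        self.repository.save(email)
        self.mailer.send_welcome(email)

service = UserService(EmailValidator(), UserRepository(), EmailService())
service.register("ada@example.com")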
2. Encapsulation & Abstraction
This is all about hiding the messy details. You want to expose the behavior of your code, not the raw data.
Imagine a simple BankAccount class. The naive way is to just have public attributes like balance and transactions. What could go wrong? Well, another developer (or you, on a Monday morning) could accidentally set the balance to a negative number. Or set the transactions list to a string. Chaos.
The solution is to protect your internal state. In Python, we use a leading underscore (e.g., _balance) as a signal: “Hey, this is internal. Please don’t touch it directly.”
Instead of letting people mess with the data, you provide methods: deposit(), withdraw(), get_balance(). Inside these methods, you can add protective logic. The deposit() method can check for negative amounts. The withdraw() method can check for sufficient funds.
The user of your class doesn’t need to know how it all works inside. They just need to know they can call deposit(), and it will just work. You’ve hidden the complexity and provided a simple, safe interface.
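As a rough sketch of that idea (not the exact code from any library), a BankAccount might look like this:

class BankAccount:
    def __init__(self):
        self._balance = 0        # leading underscore: internal, don't touch directly
        self._transactions = []

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("Deposit amount must be positive")
        self._balance += amount
        self._transactions.append(("deposit", amount))

    def withdraw(self, amount):
        if amount <= 0:
            raise ValueError("Withdrawal amount must be positive")
        if amount > self._balance:
            raise ValueError("Insufficient funds")
        self._balance -= amount
        self._transactions.append(("withdraw", amount))

    def get_balance(self):
        return self._balance

account = BankAccount()
account.deposit(100)
account.withdraw(30)
print(account.get_balance())  # 70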
3. Loose Coupling & Modularity
Coupling is how tightly connected your code components are. You want them to be as loosely coupled as possible. A change in one part shouldn’t send a ripple effect of breakages across the entire system.
Let’s go back to that email example. A tightly coupled OrderProcessor might create an instance of EmailSender directly inside itself. Now, that OrderProcessor is forever tied to that specific EmailSender class. What if you want to send an SMS instead? You have to change the OrderProcessor code.
The loosely coupled way is to rely on an “interface,” or what Python calls an Abstract Base Class (ABC). You define a generic Notifier class that says, “Anything that wants to be a notifier must have a send() method.”
Then, your OrderProcessor just asks for a Notifier object. It doesn’t care if it’s an EmailNotifier or an SmsNotifier or a CarrierPigeonNotifier. As long as the object you give it has a send() method, it will work. You’ve decoupled the OrderProcessor from the specific implementation of the notification. You can swap them in and out interchangeably.
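Here's one way that can look in Python, using an Abstract Base Class; the names mirror the hypothetical example above:

from abc import ABC, abstractmethod

class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> None:
        ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"Emailing: {message}")

class SmsNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"Texting: {message}")

class OrderProcessor:
    def __init__(self, notifier: Notifier):
        # Depends on the abstract interface, not a concrete sender.
        self.notifier = notifier

    def process(self, order_id: int) -> None:
        # ... order handling would go here ...
        self.notifier.send(f"Order {order_id} processed")

OrderProcessor(EmailNotifier()).process(1)
OrderProcessor(SmsNotifier()).process(2)  # swapped in without touching OrderProcessor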
A quick pause. I want to thank boot.dev for sponsoring this discussion. It’s an online platform for backend development that’s way more interactive than just watching videos. You learn Python and Go by building real projects, right in your browser. It’s gamified, so you level up and unlock content, which is surprisingly addictive. The core content is free, and with the code techwithtim, you get 25% off the annual plan. It’s a great way to put these principles into practice. Now, back to it. —
4. Reusability & Extensibility
This one’s a question you should always ask yourself: Can I add new functionality without editing existing code?
Think of a ReportGenerator function that has a giant if/elif/else block to handle different formats: if format == 'text', elif format == 'csv', elif format == 'html'. To add a JSON format, you have to go in and add another elif. This is not extensible.
The better way is, again, to use an abstract class. Create a ReportFormatter interface with a format() method. Then create separate classes: TextFormatter, CsvFormatter, HtmlFormatter, each with their own format() logic.
Your ReportGenerator now just takes any ReportFormatter object and calls its format() method. Want to add JSON support? You just create a new JsonFormatter class. You don’t have to touch the ReportGenerator at all. It’s extensible without being modified.
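A quick sketch of that, again with made-up names:

import json
from abc import ABC, abstractmethod

class ReportFormatter(ABC):
    @abstractmethod
    def format(self, data: dict) -> str:
        ...

class TextFormatter(ReportFormatter):
    def format(self, data: dict) -> str:
        return "\n".join(f"{key}: {value}" for key, value in data.items())

class JsonFormatter(ReportFormatter):
    # New format added without editing the generator below.
    def format(self, data: dict) -> str:
        return json.dumps(data, indent=2)

class ReportGenerator:
    def generate(self, data: dict, formatter: ReportFormatter) -> str:
        return formatter.format(data)

report = {"orders": 42, "revenue": 1337}
generator = ReportGenerator()
print(generator.generate(report, TextFormatter()))
print(generator.generate(report, JsonFormatter()))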
5. Portability
This is the one everyone forgets. Will your code work on a different machine? On Linux instead of Windows? Without some weird version of C++ installed?
The most common mistake I see is hardcoding file paths. If you write C:\Users\Ahmed\data\input.txt, that code is now guaranteed to fail on every other computer in the world.
The solution is to use libraries like Python’s os and pathlib to build paths dynamically. And for things like API keys, database URLs, and other environment-specific settings, use environment variables. Don’t hardcode them! Create a .env file and load them at runtime. This makes your code portable and secure.
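For example (DATABASE_URL is just an illustrative setting name):

import os
from pathlib import Path

# Build paths relative to this file instead of hardcoding C:\Users\... style paths.
BASE_DIR = Path(__file__).resolve().parent
input_file = BASE_DIR / "data" / "input.txt"

# Read environment-specific settings from the environment, with a safe default.
database_url = os.environ.get("DATABASE_URL", "sqlite:///local.db")

print(input_file)
print(database_url)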
6. Defensibility
Write your code as if an idiot is going to use it. Because someday, that idiot will be you.
This means validating all inputs. Sanitizing data. Setting safe default values. Ask yourself, “What’s the worst that could happen if someone provides bad input?” and then guard against it.
In a payment processor, don’t have debug_mode=True as the default. Don’t set the maximum retries to 100. Don’t forget a timeout. These are unsafe defaults.
And for the love of all that is holy, validate your inputs! Don’t just assume the amount is a number or that the account_number is valid. Check it. Raise clear errors if it’s wrong. Protect your system from bad data.
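In code, that guarding might look roughly like this (a sketch, not a real payment library):

def process_payment(amount, account_number, *, timeout=30, max_retries=3, debug_mode=False):
    # Safe defaults: debugging off, bounded retries, and a timeout.
    if not isinstance(amount, (int, float)) or amount <= 0:
        raise ValueError("amount must be a positive number")
    if not (isinstance(account_number, str) and account_number.isdigit()):
        raise ValueError("account_number must be a string of digits")
    if max_retries > 10:
        raise ValueError("max_retries is capped at 10")
    # ... the actual charge would happen here, respecting timeout and retries ...
    return {"status": "ok", "amount": amount}

print(process_payment(19.99, "12345678"))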
7. Maintainability & Testability
The most expensive part of software isn’t writing it; it’s maintaining it. And you can’t maintain what you can’t test.
Code that is easy to test is, by default, more maintainable.
Look at a complex calculate function that parses an expression, performs the math, handles errors, and writes to a log file all at once. How do you even begin to test that? There are a million edge cases.
The answer is to break it down. Have a separate OperationParser. Have simple add, subtract, multiply functions. Each of these small, pure components is incredibly easy to test. Your main calculate function then becomes a simple coordinator of these tested components.
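Broken down that way, each piece is trivially testable. A rough sketch:

def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

class OperationParser:
    def parse(self, expression):
        # Turns "3 + 4" into (3.0, "+", 4.0); easy to test on its own.
        left, op, right = expression.split()
        return float(left), op, float(right)

def calculate(expression):
    # The coordinator just wires together small, already-tested pieces.
    left, op, right = OperationParser().parse(expression)
    operations = {"+": add, "-": subtract}
    return operations[op](left, right)

def test_parser():
    assert OperationParser().parse("3 + 4") == (3.0, "+", 4.0)

def test_calculate():
    assert calculate("10 - 4") == 6.0

test_parser()
test_calculate()
print(calculate("3 + 4"))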
8. Simplicity (KISS, DRY, YAGNI)
Finally, after all that, the highest goal is simplicity.
- KISS (Keep It Simple, Stupid): Simple code is harder to write than complex code, but it’s a million times easier to understand and maintain. Swallow your ego and write the simplest thing that works.
- DRY (Don’t Repeat Yourself): If you’re doing something more than once, wrap it in a reusable function or component.
- YAGNI (You Aren’t Gonna Need It): This is the counter-balance to all the principles above. Don’t over-engineer. Don’t add a flexible, extensible system if you’re just building a quick prototype to validate an idea. When I was coding my startup, I ignored a lot of these patterns at first because speed was more important. Always ask what the business need is before you start engineering a masterpiece.
Phew, that was a lot. But these patterns are what it takes to level up. It’s a shift from just getting things done to building things that last.
If you enjoyed this, let me know. I’d love to make more advanced videos like this one. See you in the next one.
November 11, 2025 09:03 PM UTC
PyCoder’s Weekly
Issue #708: Debugging Live Code, NiceGUI, Textual, and More (Nov. 11, 2025)
#708 – NOVEMBER 11, 2025
View in Browser »
Debugging Live Code With CPython 3.14
Python 3.14 added new capabilities to attach to and debug a running process. Learn what this means for debugging and examining your running code.
SURISTER
NiceGUI Goes 3.0
Talk Python interviews Rodja Trappe and Falko Schindler, creators of the NiceGUI toolkit. They talk about what it can do and how it works.
TALK PYTHON
AI Code Reviews Without the Noise
Sentry’s AI Code Review has caught more than 30,000 bugs before they hit production. 🤯 What it hasn’t caught: about a million spammy style nitpicks. Plus, it now predicts bugs 50% faster, and provides agent prompts to automate your fixes. Learn more about Sentry’s AI Code Review →
SENTRY sponsor
Building UIs in the Terminal With Python Textual
Learn to build rich, interactive terminal UIs in Python with Textual: a powerful library for modern, event-driven TUIs.
REAL PYTHON course
Python Jobs
Python Video Course Instructor (Anywhere)
Python Tutorial Writer (Anywhere)
Articles & Tutorials
How Often Does Python Allocate?
How often does Python allocate? The answer is “very often”. This post demonstrates how you can see that for yourself. See also the associated HN discussion
ZACK RADISIC
Improving Security and Integrity of Python Package Archives
Python packages are built on top of archive formats like ZIP which can be problematic as features of the format can be abused. A recent white paper outlines dangers to PyPI and what can be done about it.
PYTHON SOFTWARE FOUNDATION
The 2025 AI Stack, Unpacked
Temporal’s industry report explores how teams like Snap, Descript, and ZoomInfo are building production-ready AI systems, including what’s working, what’s breaking, and what’s next. Download today to see how your stack compares →
TEMPORAL sponsor
10 Smart Performance Hacks for Faster Python Code
Some practical optimization hacks, from data structures to built-in modules, that boost speed, reduce overhead, and keep your Python code clean.
DIDO GRIGOROV
Understanding the PSF’s Current Financial Outlook
A summary of the Python Software Foundation’s current financial outlook and what that means to the variety of community groups it supports.
PYTHON SOFTWARE FOUNDATION
__dict__: Where Python Stores Attributes
Most Python objects store their attributes in a __dict__ dictionary. Modules and classes always use __dict__, but not everything does.
TREY HUNNER
My Favorite Django Packages
A descriptive list of Mattias’s favorite Django packages divided into areas, including core helpers, data structures, CMS, PDFs, and more.
MATTHIAS KESTENHOLZ
A Close Look at a FastAPI Example Application
Set up an example FastAPI app, add path and query parameters, and handle CRUD operations with Pydantic for clean, validated endpoints.
REAL PYTHON
Quiz: A Close Look at a FastAPI Example Application
Practice FastAPI basics with path parameters, request bodies, async endpoints, and CORS. Build confidence to design and test simple Python web APIs.
REAL PYTHON
An Annual Release Cycle for Django
Carlton wants Django to move to an annual release cycle. This post explains why he thinks this way and what the benefits might be.
CARLTON GIBSON
Behave: ML Tests With Behavior-Driven Development
This walkthrough shows how to use the Behave library to bring behavior-driven testing to data and machine learning Python projects.
CODECUT.AI • Shared by Khuyen Tran
Polars and Pandas: Working With the Data-Frame
This post compares the syntax of Polars and pandas with a quick peek at the changes coming in pandas 3.0.
JUMPINGRIVERS.COM • Shared by Aida Gjoka
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
November 12, 2025
REALPYTHON.COM
Python Leiden User Group
November 13, 2025
PYTHONLEIDEN.NL
Python Kino-Barcamp Südost
November 14 to November 17, 2025
BARCAMPS.EU
Python Atlanta
November 14, 2025
MEETUP.COM
PyCon Wroclaw 2025
November 15 to November 16, 2025
PYCONWROCLAW.COM
PyCon Ireland 2025
November 15 to November 17, 2025
PYCON.IE
Happy Pythoning!
This was PyCoder’s Weekly Issue #708.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
November 11, 2025 07:30 PM UTC
Daniel Roy Greenfeld
Visiting Tokyo, Japan from November 12 to 24
I'm excited to announce that Audrey and I will be visiting Japan from November 12 to November 24, 2025! This will be our first time in Japan, and we can't wait to explore Tokyo. Yes, we'll be in Tokyo for most of it, near the Shinjuku area, working from coffee shops, meeting some colleagues, and exploring the city during our free time. Our six-year-old daughter is with us, so our explorations will be family-friendly.
Unfortunately, we'll be between Python meetups in the Tokyo area. However, if you are in Tokyo and write software in any shape or form, and would like to get together for coffee or a meal, please let me know!
If you do Brazilian Jiu-Jitsu in Tokyo, please let me know as well! I'd love to drop by a gym while I'm there.
