Planet Python
Last update: November 20, 2025 01:42 AM UTC
November 19, 2025
Django Weblog
Twenty years of Django releases
On November 16th, 2005, Django co-creator Adrian Holovaty announced the first ever Django release, Django 0.90. Twenty years later, here we are shipping the first release candidate of Django 6.0.
Since we're celebrating Django's 20th birthday this year, here are a few release-related numbers that represent Django's history:
- 447 releases over 20 years. That's about 22 per year on average. We're at 38 so far for 2025. Fun fact: 33 of those releases predate PyPI and were published via the Django website only!
- 131 security vulnerabilities addressed in those Django releases. Our security issues archive is a testament to our stellar track record.
- 262,203 releases of Django-related packages. Django's community ecosystem is gigantic: as of 2025, there are tens of Django package releases per day (52 just today), with the caveat that this depends a lot on what you classify as a "Django" package.
This is what decades' worth of a stable framework looks like. Expect more gradual improvements and bug fixes over the next twenty years' worth of releases. And if you like this kind of data, check out the State of Django 2025 report by JetBrains, with lots of statistics on our ecosystem (and there's a few hours left on their Get PyCharm Pro with 30% Off & Support Django offer).
Support Django
If you or your employer counts on Django's 20 years of stability, consider whether you can support the project via donations to our non-profit Django Software Foundation.
- ⚠️ Today only: Get PyCharm Pro for 30% off. All the revenue goes to our Foundation.
- Donate on the Django website
- Donate on GitHub sponsors
- Check out how to become a Corporate Member
Once you've done it, post with #DjangoBirthday and tag us on Mastodon / on Bluesky / on X / on LinkedIn so we can say thank you!
Of our US $300,000.00 goal for 2025, as of November 19th, 2025, we are at:
- 58.7% funded
- $176,098.60 donated
November 19, 2025 03:27 PM UTC
Real Python
Build a Python MCP Client to Test Servers From Your Terminal
Building an MCP client in Python can be a good option when you're coding MCP servers and want a quick way to test them. In this step-by-step project, you'll build a minimal MCP client for the command line. It'll be able to connect to an MCP server through the standard input/output (stdio) transport, list the server's capabilities, and use the server's tools to feed an AI-powered chat.
By the end of this tutorial, you'll understand that:
- You can build an MCP client app for the command line using the MCP Python SDK and argparse.
- You can list a server's capabilities by calling .list_tools(), .list_prompts(), and .list_resources() on a ClientSession instance.
- You can use the OpenAI Python SDK to integrate MCP tool responses into an AI-powered chat session.
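As a taste of what that looks like, here's a minimal sketch of the stdio connection pattern with the MCP Python SDK. The server command (python server.py) is a placeholder, and the exact API surface may vary slightly between SDK versions:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder: point this at your own MCP server script.
SERVER_PARAMS = StdioServerParameters(command="python", args=["server.py"])

async def list_capabilities() -> None:
    # Connect to the server over stdio and open a client session.
    async with stdio_client(SERVER_PARAMS) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Tools:", [tool.name for tool in tools.tools])

asyncio.run(list_capabilities())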
Next, you'll move through setup, client implementation, capability discovery, chat handling, and packaging to test MCP servers from your terminal.
Prerequisites
To get the most out of this coding project, you should have some previous knowledge of how to manage a Python project with uv. You should also know the basics of working with the asyncio and argparse libraries from the standard library.
To satisfy these knowledge requirements, you can take a look at the following resources:
- Managing Python Projects With uv: An All-in-One Solution
- Python's asyncio: A Hands-On Walkthrough
- Build Command-Line Interfaces With Python's argparse
Familiarity with OpenAI's Python API, openai, will also be helpful because you'll use this library to power the chat functionality of your MCP client. You'll also use the Model Context Protocol (MCP) Python SDK.
Don't worry if you don't have all of the prerequisite knowledge before starting this tutorial—that's completely okay! You'll learn through the process of getting your hands dirty as you build the project. If you get stuck, then take some time to review the resources linked above. Then, get back to the code.
You'll also need an MCP server to try your client as you build it. Don't worry if you don't have one available—you can use the server provided in step 2.
In this tutorial, you won't get into the details of creating MCP servers. To learn more about this topic, check out the Python MCP Server: Connect LLMs to Your Data tutorial. Finally, you can download the project's source code and related files by clicking the link below.
Get Your Code: Click here to download the free sample code you'll use to build a Python MCP client to test servers from your terminal.
Take the Quiz: Test your knowledge with our interactive "Build a Python MCP Client to Test Servers From Your Terminal" quiz. You'll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Build a Python MCP Client to Test Servers From Your Terminal: Learn how to create a Python MCP client, start an AI-powered chat session, and run it from the command line. Check your understanding.
Step 1: Set Up the Project and the Environment
To manage your MCP client project, you'll use uv, a command-line tool for Python project management. If you don't have this tool on your current system, then it's worth checking out the Managing Python Projects With uv: An All-in-One Solution tutorial.
Note: If you prefer not to use uv, then you can use a combination of alternative tools such as pyenv, venv, pip, or poetry.
Once you have uv or another tool set up, go ahead and open a terminal window. Then, move to a directory where you typically store your projects. From there, run the following commands to scaffold and initialize a new mcp-client/ project:
$ uv init mcp-client
$ cd mcp-client/
$ uv add mcp openai
The first command creates a new Python project in an mcp-client/ directory. The resulting directory will have the following structure:
mcp-client/
├── .git/
├── .gitignore
├── .python-version
├── README.md
├── main.py
└── pyproject.toml
First, you have the .git/ directory and the .gitignore file, which will help you version-control the project.
The .python-version file contains the default Python version for the current project. This file tells uv which Python version to use when creating a dedicated virtual environment for the project. This file will contain the version number of the Python interpreter you're currently using.
Next, you have an empty README.md file that you can use to provide basic documentation for your project. The main.py file provides a Python script that you can optionally use as the project's entry point. You won't use this file in this tutorial, so feel free to remove it.
Finally, you have the pyproject.toml file, which you'll use to prepare your project for building and distribution.
Read the full article at https://realpython.com/python-mcp-client/ »
November 19, 2025 02:00 PM UTC
PyCharm
At JetBrains, we love seeing the developer community grow and thrive. That's why we support open-source projects that make a real difference — the ones that help developers learn, build, and create better software together. We're proud to back open-source maintainers with free licenses and to contribute to initiatives that strengthen the ecosystem and the people behind it.
In this post, we highlight five open-source projects from different ecosystems, written in established languages like Python and JavaScript or fast-growing ones like Rust. Different as they are, each shares the same goal: elevating the developer experience. Together, they show how the right tools boost productivity and make workflows more enjoyable.
Ratatui
Born as the community-driven successor to the discontinued tui-rs library, Ratatui brings elegance to terminal UIs. It's modular, ergonomic, and designed to help developers build interactive dashboards, widgets, and even embedded interfaces that go beyond the terminal.
JetBrains IDEs help me focus on the code rather than the tooling. They're self-contained, so I don't need to configure much to get started — they just work. With powerful code highlighting, automatic fixes, refactorings, and structural search, I can easily jump around the codebase and make edits.
— Orhun Parmaksız, Ratatui Core Maintainer
The upcoming 0.30.0 release focuses on modularity, splitting the main crate into smaller, independently usable packages. This change simplifies maintenance and makes it easier to use widgets in other contexts. And with new no_std support, Ratatui is expanding to power a wide range of use cases beyond the terminal.
Django
If Ratatui brings usability to the terminal, Django brings it to the web. Originally created in 2003 to meet both fast-paced newsroom deadlines and the demands of experienced developers, Django remains the go-to framework for "perfectionists with deadlines". It eliminates repetitive tasks, enforces clean, pragmatic design, and provides built-in solutions for security, scalability, and database management — helping developers write less code and achieve more.
JetBrains IDEs, especially PyCharm, boost productivity with built-in Django support — including project templates, automatic settings detection, and model-to-database migrations — as well as integrated debugging and testing tools that simplify finding and fixing issues. The version control integration also makes it easier for contributors to refine and polish their work.
— Sarah Boyce, Django Fellow
Backed by a thriving global community, Django's roadmap includes composite primary key support, built-in CSP integration, and a focus on making Django accessible by default. Every eight-month release delivers incremental improvements while maintaining backward compatibility — clear proof that long-term stability and innovation can coexist.
JHipster
Both Django and JHipster help developers move fast, but they take different paths. JHipster began as the "anti-mullet stack" — serious in the back, party in the front — created to help developers quickly bootstrap full-stack applications with Spring on the backend and Angular.js on the frontend. Today, it's still one of the most comprehensive open-source generators, offering a complete full-stack solution with built-in security, performance, and best practices.
JHipster has always been about great productivity and great tooling, so naturally, we've always been IntelliJ IDEA fans — we even have our own JHipster IntelliJ IDEA plugin! What I love most is the clean UI, the performance, and all the plugins that make my life so much easier. I use Maven and Docker support all the time, and they're both absolutely top-notch.
— Julien Dubois, JHipster Creator
The project is now split into two teams: JHipster Classic, which focuses on the original full-stack generator written in JavaScript, and JHipster Lite, which develops a modernized, DDD-oriented version written in Java and targeted primarily at the backend. This structure allows the community to experiment more freely and attract new contributors.
As AI-assisted generation evolves, JHipster's mission remains the same: empowering developers with the latest cutting-edge technology and a true full-stack approach.
Biome
Once the structure is in place, consistency becomes the next challenge. That's where Biome, a modern, all-in-one toolchain for maintaining web projects, comes in. It supports every major web language and maintains a consistent experience between the CLI and the editor. The goal of its creators was simple: make a tool that can handle everything from development to production, with fewer dependencies, less setup time, faster CI runs, and clear, helpful diagnostics.
I'm a long-term user of JetBrains IDEs! RustRover has greatly improved since launch — its debugging features and new JavaScript module mean I can maintain all Biome projects, even our Astro-based website, in a single IDE. It's great that JetBrains really listens to users and their feedback.
— Emanuele Stoppa, Biome Creator
Biome's roadmap includes adding Markdown support, type inference, .d.ts file generation, JSDoc support, and embedded-language support. As a community-led project, Biome welcomes contributions of all kinds — every bit of help makes a difference.
Vuestic UI
When it's time to polish the frontend, Vuestic UI takes over. This open-source project focuses on accessibility, theming, and a delightful developer experience. Built for Vue 3, it offers a flexible, easy-to-use component library that scales effortlessly from quick prototypes to enterprise-grade dashboards.
The right development environment makes a huge difference when building complex open-source tools like Vuestic UI and Vuestic Admin. Our team relies on JetBrains IDEs every day for their best-in-class refactoring tools that let us make bold changes with confidence, fast and reliable code navigation, and rock-solid performance. Most of what we need works right out of the box — no extra plugins or setup required. For us, JetBrains isn't just a preference — it's a productivity multiplier.
— Maxim Kobetz, Senior Vue.js Developer
After 12 years in frontend development, WebStorm — along with IntelliJ IDEA and PyCharm — has always been my trusted toolkit. Even now, when I'm not coding every day, I know I can rely on WebStorm for quick tweaks — every update feels smooth and never disrupts my workflow. It's intuitive, beautiful, and just works the way I expect it to. I know switching IDEs is always a time sink, but with JetBrains, it's absolutely worth it — you'll never want to switch again.
— Anastasiia Zvenigorodskaia, Community Manager at Vuestic UI & Vuestic Admin
These projects showcase a common truth: Great developer experience happens when tools get out of your way. With JetBrains IDEs enhancing everything from code navigation to collaboration, these teams turn ideas into usable, elegant tools.
Whether you're crafting a command-line tool, prototyping a web service, or polishing a UI library, these tools highlight a shared goal — making development faster, more intuitive, and more enjoyable. Explore them, contribute if you can, and keep shaping the kind of developer experience you'd want for yourself.
November 19, 2025 01:40 PM UTC
Django Weblog
Django 6.0 release candidate 1 released
Django 6.0 release candidate 1 is now available. It represents the final opportunity for you to try out a mosaic of modern tools and thoughtful design before Django 6.0 is released.
The release candidate stage marks the string freeze and the call for translators to submit translations. Provided no major bugs are discovered that can't be solved in the next two weeks, Django 6.0 will be released on or around December 3. Any delays will be communicated on the Django forum.
Please use this opportunity to help find and fix bugs (which should be reported to the issue tracker). You can grab a copy of the release candidate package from our downloads page or on PyPI.
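If you want to try the release candidate in a fresh virtual environment, one option (a sketch, assuming pip) is to opt in to pre-releases explicitly:

$ python -m pip install --pre Django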
The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E
November 19, 2025 12:00 PM UTC
Real Python
Quiz: Build a Python MCP Client to Test Servers From Your Terminal
In this quiz, you’ll test your understanding of how to Build a Python MCP Client to Test Servers From Your Terminal.
By working through this quiz, you’ll revisit how to add a minimal chat interface, create an AI handler to power the chat, handle runtime errors, and update the entry point to run the chat from the command line.
You will confirm when to initialize the AI handler and how to surface clear error messages to users. For a guided review, see the linked tutorial.
November 19, 2025 12:00 PM UTC
Django Weblog
Going build-free with native JavaScript modules
For the last decade and more, we've been bundling CSS and JavaScript files. These build tools allowed us to utilize new browser capabilities in CSS and JS while still supporting older browsers. They also helped with client-side network performance, minimizing the content to be as small as possible and combining files into one large bundle to reduce network handshakes. We've gone through a lot of build tool iterations in the process: from Grunt (2012) to Gulp (2013) to Webpack (2014) to Parcel (2017) to esbuild (2020) and Vite (2020).
And with modern browser technologies there is less need for these build tools.
- Modern CSS natively supports many of the features that build tools were created for: CSS nesting to organize code, variables, and @supports for feature detection.
- JavaScript ES6 / ES2015 was a big step forward, and the language has been progressing steadily ever since. It now has native module support with the import / export keywords.
- Meanwhile, with HTTP/2 performance improvements, parallel requests can be made over the same connection, removing the constraints of the HTTP/1.x protocol.
These build processes are complex, particularly for beginners to Django. The tools and associated best practices move quickly. There is a lot to learn and you need to understand how to utilize them with your Django project. You can build a workflow that stores the build results in your static folder, but there is no core Django support for a build pipeline, so this largely requires selecting from a number of third party packages and integrating them into your project.
The benefit this complexity adds is no longer as clear cut, especially for beginners. There are still advantages to build tools, but you can create professional results without having to use or learn any build processes.
Build-free JavaScript tutorial
To demonstrate modern capabilities, let's expand Django's polls tutorial with some newer JavaScript. We'll use modern JS modules, and we won't require a build system.
To give us a reason to need JS, let's add a new requirement to the polls: allow our users to add their own suggestions, instead of only being able to vote on the existing options. We update our form to have a new option under the selection code:
or add your own <input type="text" name="choice_text" maxlength="200" />
Now our users can add their own options to polls if the existing ones don't fit. We can update the voting view to handle this new option. We add a new choice_text input, and if there is no vote selection we will potentially handle adding the new option, while still providing an error message if neither is supplied. We also provide an error if both are selected.
def vote(request, question_id):
    question = get_object_or_404(Question, pk=question_id)
    # Reject submissions that both select an existing choice and provide a new one.
    if request.POST.get('choice') and request.POST.get('choice_text'):
        return render(request, 'polls/detail.html', {
            'question': question,
            'error_message': "You can't vote and provide a new option.",
        })
    try:
        selected_choice = question.choice_set.get(pk=request.POST['choice'])
    except (KeyError, Choice.DoesNotExist):
        # No existing choice was selected: create a new one if text was supplied.
        if request.POST.get('choice_text'):
            selected_choice = Choice.objects.create(
                question=question,
                choice_text=request.POST['choice_text'],
            )
        else:
            return render(request, 'polls/detail.html', {
                'question': question,
                'error_message': "You didn't select a choice or provide a new one.",
            })
    selected_choice.votes += 1
    selected_choice.save()
    return HttpResponseRedirect(reverse('polls:results', args=(question.id,)))
Now that our logic is a bit more complex it would be nicer if we had some JavaScript to do this. We can build a script that handles some of the form validation for us.
function noChoices(choices, choice_text) {
  // True when the user selected a radio option or typed a new choice.
  return (
    Array.from(choices).some((radio) => radio.checked) ||
    (choice_text[0] && choice_text[0].value.trim() !== "")
  );
}

function allChoices(choices, choice_text) {
  // True when the user both selected a radio option and typed a new choice.
  return (
    Array.from(choices).some((radio) => radio.checked) &&
    choice_text[0] &&
    choice_text[0].value.trim() !== ""
  );
}

export default function initFormValidation() {
  document.getElementById("polls").addEventListener("submit", function (e) {
    const choices = this.querySelectorAll('input[name="choice"]');
    const choice_text = this.querySelectorAll('input[name="choice_text"]');
    if (!noChoices(choices, choice_text)) {
      e.preventDefault();
      alert("You didn't select a choice or provide a new one.");
    }
    if (allChoices(choices, choice_text)) {
      e.preventDefault();
      alert("You can't select a choice and also provide a new option.");
    }
  });
}
Note how we use export default in the above code. This means form_validation.js is a JavaScript module. When we create our main.js file, we can import it with the import statement:
import initFormValidation from "./form_validation.js";
initFormValidation();
Lastly, we add the script to the bottom of our detail.html file, using Django's usual static template tag. Note the type="module" attribute: it's needed to tell the browser we'll be using import/export statements.
<script type="module" src="{% static 'polls/js/main.js' %}"></script>
That's it! We got the modularity benefits of modern JavaScript without needing any build process. The browser handles the module loading for us. And thanks to HTTP/2's parallel requests, this can scale to many modules without a performance hit.
In production
To deploy, all we need is Django's support for collecting static files into one place and its support for adding hashes to filenames. In production, it's a good idea to use the ManifestStaticFilesStorage storage backend. It stores the files it handles under names with the MD5 hash of each file's content appended. This allows you to set far-future cache expiries, which is good for performance, while still guaranteeing that new versions of a file will make it to users' browsers.
This backend is also able to update the reference to form_validation.js in the import statement with its new versioned file name.
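For reference, here's a minimal settings sketch, assuming Django 4.2 or later, where static files storage is configured through the STORAGES setting:

# settings.py (sketch): hash static filenames in production (Django 4.2+).
STORAGES = {
    "default": {
        "BACKEND": "django.core.files.storage.FileSystemStorage",
    },
    "staticfiles": {
        "BACKEND": "django.contrib.staticfiles.storage.ManifestStaticFilesStorage",
    },
}

You then run python manage.py collectstatic as usual, and the generated manifest maps each original file name to its hashed counterpart.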
Future work
ManifestStaticFilesStorage works, but a lot of its implementation details get in the way. It could be easier to use as a developer.
- The support for import/export with hashed files is not very robust.
- Comments in CSS with references to files can lead to errors during collectstatic.
- Circular dependencies in CSS/JS cannot be processed.
- Errors during collectstatic when files are missing are not always clear.
- Differences between the implementations of StaticFilesStorage and ManifestStaticFilesStorage can lead to errors in production that don't show up in development (like #26329, about leading slashes).
- Configuring common options means subclassing the storage when we could use the existing OPTIONS dict.
- Collecting static files could be faster if it used parallelization (pull request: #19935 Used a threadpool to parallelise collectstatic).
We discussed those possible improvements at the Django on the Med sprints, and I'm hopeful we can make progress.
I built django-manifeststaticfiles-enhanced to attempt to fix all of these. The core work is to switch to a lexer for CSS and JS, based on Ned Batchelder's JsLex that was previously used in Django. It was expanded to cover modern JS and CSS by working with Claude Code to do the grunt work of covering the syntax.
It also switches to using a topological sort to find dependencies, whereas before we used a more brute-force approach of repeated processing until we saw no more changes, which led to more work, particularly on storages that used the network. It also meant we couldn't handle circular dependencies.
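To illustrate the idea (this is not the package's actual code), Python's standard-library graphlib can produce a processing order where every file comes after the files it references, and it reports cycles explicitly. The file names below are just the ones from this tutorial:

from graphlib import CycleError, TopologicalSorter

# Hypothetical dependency map: each static file lists the files it imports.
dependencies = {
    "polls/js/main.js": {"polls/js/form_validation.js"},
    "polls/js/form_validation.js": set(),
}

try:
    # Dependencies come first, so imported files are hashed before
    # the files that reference them.
    print(list(TopologicalSorter(dependencies).static_order()))
except CycleError as exc:
    print("Circular dependency detected:", exc)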
To validate that it works, I ran a performance benchmark on 50+ projects. It's been tested with similar (often improved) performance, and on average, it's about 30% faster.
While those improvements would be welcome, go ahead and try build-free JavaScript and CSS in your Django projects today! Modern browsers make it possible to create great frontend experiences without the complexity.
November 19, 2025 08:13 AM UTC
Python GUIs
Getting Started With DearPyGui for GUI Development — Your First Steps With the DearPyGui Library for Desktop Python GUIs
Getting started with a new GUI framework can feel daunting. This guide walks you through the essentials of DearPyGui, from installation and your first app to widgets, layouts, theming, and advanced tooling.
With DearPyGui, you can quickly build modern, high-performance desktop interfaces using Python.
Getting to Know DearPyGui
DearPyGui is a GPU-accelerated and cross-platform GUI framework for Python, built on Dear ImGui with a retained-mode Python API. It renders all UI using the GPU rather than native OS widgets, ensuring consistent, high-performance UI across Windows, Linux, macOS, and even Raspberry Pi 4.
Note that official wheels for Raspberry Pi may lag behind. Users sometimes compile from source.
DearPyGui's key features include the following:
- Modern, consistent UI across platforms
- High performance via GPU rendering and C/C++ core
- Customizable styles/themes and full developer tools
- Over 70 widgets, including plots, node editors, and tables
- Built-in demo app, theme inspector, logging, metrics, and debugger
This GUI framework is ideal for building interfaces ranging from simple utilities to real-time dashboards, data-science tools, or interactive games.
Installing and Setting Up DearPyGui
You can install DearPyGui from PyPI using pip:
$ pip install dearpygui
This command installs DearPyGui from PyPI.
Writing Your First GUI App
In general, DearPyGui apps follow this structure:
- dpg.create_context() — Initialize DearPyGui; call it before anything else
- dpg.create_viewport() — Create the main application window (viewport)
- Define UI widgets within windows or groups — Add and configure widgets and containers to build your interface
- dpg.setup_dearpygui() — Set up DearPyGui internals and resources before showing the viewport
- dpg.show_viewport() — Make the viewport window visible to the user
- dpg.start_dearpygui() — Start the DearPyGui main event and render loop
- dpg.destroy_context() — Clean up and release all DearPyGui resources on exit
Here's a quick application displaying a window with basic widgets:
import dearpygui.dearpygui as dpg

def main():
    dpg.create_context()
    dpg.create_viewport(title="Viewport", width=300, height=100)

    with dpg.window(label="DearPyGui Demo", width=300, height=100):
        dpg.add_text("Hello, World!")

    dpg.setup_dearpygui()
    dpg.show_viewport()
    dpg.start_dearpygui()
    dpg.destroy_context()

if __name__ == "__main__":
    main()
Inside main(), we initialize the library with dpg.create_context(), create a window (viewport) via dpg.create_viewport(), define the GUI, set up the library with dpg.setup_dearpygui(), show the viewport with dpg.show_viewport(), and run the render loop using dpg.start_dearpygui(). When you close the window, dpg.destroy_context() cleans up resources.
You define the GUI itself inside a dpg.window() context block, which parents a text label displaying the "Hello, World!" text.
Always follow the lifecycle order: create context → viewport → setup → show → start → destroy. Otherwise, the app may crash.
Run it! Here's what your first app looks like.
DearPyGui first app
Exploring Widgets
DearPyGui includes a wide variety of widgets:
- Basic widgets, including buttons, text input, sliders, and checkboxes
- Containers like windows, groups (horizontal and vertical grouping), tabs, collapsing headers, and menus
- Interactive widgets, such as color pickers, combo boxes, tables, and menus
Here's an example that showcases some basic DearPyGui widgets:
import dearpygui.dearpygui as dpg

def main():
    dpg.create_context()
    dpg.create_viewport(title="Widgets Demo", width=400, height=450)

    with dpg.window(
        label="Common DearPyGui Widgets",
        width=380,
        height=420,
        pos=(10, 10),
    ):
        dpg.add_text("Static label")
        dpg.add_input_text(
            label="Text Input",
            default_value="Type some text here...",
            tag="widget_input",
        )
        dpg.add_button(label="Click Me!")
        dpg.add_checkbox(label="Check Me!")
        dpg.add_radio_button(
            ("DearPyGui", "PyQt6", "PySide6"),
        )
        dpg.add_slider_int(
            label="Int Slider",
            default_value=5,
            min_value=0,
            max_value=10,
        )
        dpg.add_slider_float(
            label="Float Slider",
            default_value=0.5,
            min_value=0.0,
            max_value=1.0,
        )
        dpg.add_combo(
            ("DearPyGui", "PyQt6", "PySide6"),
            label="GUI Library",
        )
        dpg.add_color_picker(label="Pick a Color")
        dpg.add_progress_bar(
            label="Progress",
            default_value=0.5,
            width=250,
        )

    dpg.setup_dearpygui()
    dpg.show_viewport()
    dpg.start_dearpygui()
    dpg.destroy_context()

if __name__ == "__main__":
    main()
This code uses the following functions to add the widgets to the GUI:
- add_text(): A label for static text or instructions
- add_input_text(): A single-line text entry field
- add_button(): A clickable button for user actions
- add_checkbox(): A toggle for Boolean values
- add_radio_button(): A group of radio buttons for selecting one from several options
- add_slider_int(), add_slider_float(): Sliders with integer and floating-point steps
- add_combo(): A dropdown selection widget
- add_color_picker(): A color picker widget
- add_progress_bar(): A progress bar widget to display visual progress
Run it! Here's what the app will look like.
DearPyGui basic widgets
Laying Out the GUI
By default, DearPyGui stacks widgets vertically. However, positioning options include the following:
- Horizontal grouping using with dpg.group(horizontal=True):
- Vertical spacing using dpg.add_spacer()
- Indentation using the per-item indent keyword argument, like in dpg.add_checkbox(label="Option A", indent=30), or after creation with dpg.configure_item(tag, indent)
- Absolute positioning via pos=(x, y) when creating items, or with dpg.set_item_pos(tag, (x, y)) after creation
Widgets go inside containers like dpg.window(). You can nest containers to build complex GUI layouts:
import dearpygui.dearpygui as dpg

def main():
    dpg.create_context()
    dpg.create_viewport(title="Layout Demo", width=520, height=420)

    with dpg.window(
        label="Layout Demo",
        width=500,
        height=380,
        pos=(10, 10),
    ):
        dpg.add_text("1) Vertical layout:")
        dpg.add_button(label="Top")
        dpg.add_button(label="Middle")
        dpg.add_button(label="Bottom")
        dpg.add_spacer(height=12)

        dpg.add_text("2) Horizontal layout:")
        with dpg.group(horizontal=True):
            dpg.add_button(label="Left")
            dpg.add_button(label="Center")
            dpg.add_button(label="Right")
        dpg.add_spacer(height=12)

        dpg.add_text("3) Indentation:")
        dpg.add_checkbox(label="Indented at creation (30px)", indent=30)
        dpg.add_checkbox(label="Indented after creation (35px)", tag="indent_b")
        dpg.configure_item("indent_b", indent=35)
        dpg.add_spacer(height=12)

        dpg.add_text("4) Absolute positioning:")
        dpg.add_text("Positioned at creation: (x=100, y=300)", pos=(100, 300))
        dpg.add_text("Positioned after creation: (x=100, y=320)", tag="move_me")
        dpg.set_item_pos("move_me", [100, 320])

    dpg.setup_dearpygui()
    dpg.show_viewport()
    dpg.start_dearpygui()
    dpg.destroy_context()

if __name__ == "__main__":
    main()
In this example, we create an app that showcases basic layout options in DearPyGui. The first section shows the default vertical stacking by adding three buttons one after another. Then, we use dpg.add_spacer(height=12) to insert vertical whitespace between sections.
Next, we create a horizontal row of buttons with dpg.group(horizontal=True), which groups items side by side. After that, the indentation section demonstrates how to indent widgets at creation (indent=30) and after creation using dpg.configure_item().
Finally, we use absolute positioning by placing one text item at a fixed coordinate using pos=(100, 300) and moving another after creation with dpg.set_item_pos(). These patterns are all part of DearPyGui’s container and item-configuration model, which we can use to arrange the widgets in a user-friendly GUI.
Run it! You'll get a window like the following.
DearPyGui layouts
Event Handling with Callbacks
DearPyGui uses callbacks to handle events. Most widgets accept a callback argument, which is executed when we interact with the widget itself.
The example below provides a text input and a button. When you click the button, it launches a dialog with the input text:
import dearpygui.dearpygui as dpg

def on_click_callback(sender, app_data, user_data):
    text = dpg.get_value("input_text")
    dpg.set_value("dialog_text", f'You typed: "{text}"')
    dpg.configure_item("dialog", show=True)

def main() -> None:
    dpg.create_context()
    dpg.create_viewport(title="Callback Example", width=270, height=120)

    with dpg.window(label="Callback Example", width=250, height=80, pos=(10, 10)):
        dpg.add_text("Type something and press Click Me!")
        dpg.add_input_text(label="Input", tag="input_text")
        dpg.add_button(label="Click Me!", callback=on_click_callback)

    with dpg.window(
        label="Dialog",
        modal=True,
        show=False,
        width=230,
        height=80,
        tag="dialog",
        no_close=True,
        pos=(10, 10),
    ):
        dpg.add_text("", tag="dialog_text")
        dpg.add_button(
            label="OK",
            callback=lambda s, a, u: dpg.configure_item("dialog", show=False),
        )

    dpg.setup_dearpygui()
    dpg.show_viewport()
    dpg.start_dearpygui()
    dpg.destroy_context()

if __name__ == "__main__":
    main()
The button takes the on_click_callback() callback as an argument. When we click the button, DearPyGui invokes the callback with three standard arguments:
- sender, which holds the button's ID
- app_data, which holds extra data specific to certain widgets
- user_data, which holds custom data you could have supplied
Inside the callback, we pull the current text from the input widget using dpg.get_value(), and finally, we display the input text in a modal window.
Run it! You'll get a window like the following.
DearPyGui callbacks
To see this app in action, type some text into the input and click the Click Me! button.
Drawing Shapes and Plotting
DearPyGui comes with powerful plotting capabilities, including high-performance line, bar, scatter, and histogram plots. These plots allow interactive zoom and pan and real-time data updates, making them excellent for scientific visualizations and dashboards.
Here's a quick example of how to create a plot using DearPyGui's plotting widgets:
import dearpygui.dearpygui as dpg
import numpy as np

def main() -> None:
    dpg.create_context()
    dpg.create_viewport(title="Plotting Example", width=420, height=320)

    x = np.linspace(0, 2 * np.pi, 100)
    y1 = np.sin(x)
    y2 = np.cos(x)

    with dpg.window(label="Plot Window", width=400, height=280, pos=(10, 10)):
        with dpg.plot(label="Sine and Cosine Plot", height=200, width=360):
            dpg.add_plot_legend()
            dpg.add_plot_axis(dpg.mvXAxis, label="X")
            with dpg.plot_axis(dpg.mvYAxis, label="Y"):
                dpg.add_line_series(x.tolist(), y1.tolist(), label="sin(x)")
                dpg.add_line_series(x.tolist(), y2.tolist(), label="cos(x)")

    dpg.setup_dearpygui()
    dpg.show_viewport()
    dpg.start_dearpygui()
    dpg.destroy_context()

if __name__ == "__main__":
    main()
In this example, we create two line series: sine and cosine curves. To plot them, we use NumPy-generated data. We also add X and Y axes, plus a legend for clarity. You can update the series in a callback for live data dashboards.
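As a sketch of that live-update idea (extending the example above and assuming the sine series was created with tag="sin_series"), you can push new data into a tagged series with dpg.set_value():

import dearpygui.dearpygui as dpg
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)

def refresh_series(sender, app_data, user_data):
    # Replace the data of the tagged series; the plot redraws automatically.
    phase = np.random.uniform(0, np.pi)
    dpg.set_value("sin_series", [x.tolist(), np.sin(x + phase).tolist()])

# For example, wire it to a button inside the window:
# dpg.add_button(label="Refresh", callback=refresh_series)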
Run it! You'll get a plot like the one shown below.
DearPyGui plotting demo
Conclusion
DearPyGui offers a powerful and highly customizable GUI toolkit for desktop Python applications. With a rich widget set, interactive plotting, node editors, and built-in developer tools, it's a great choice for both simple and complex interfaces.
Try building your first DearPyGui app and experimenting with widgets, callbacks, layouts, and other interesting features!
November 19, 2025 08:00 AM UTC
November 18, 2025
The Python Coding Stack
I Don't Like Magic • Exploring The Class Attributes That Aren't Really Class Attributes • [Club]
I don’t like magic. I don’t mean the magic of the Harry Potter kind—that one I’d like if only I could have it. It’s the “magic” that happens behind the scenes when a programming language like Python does things out of sight. You’ll often find things you have to “just learn” along the Python learning journey. “That’s the way things are,” you’re told.
That’s the kind of magic I don’t like. I want to know how things work. So let me take you back to when I first learnt about named tuples—the NamedTuple in the typing module, not the other one—and data classes. They share a similar syntax, and it’s this shared syntax that confused me at first. I found these topics harder to understand because of this.
Their syntax is different from other stuff I had learnt up to that point. And I could not reconcile it with the stuff I knew. That bothered me. It also made me doubt the stuff I already knew. Here’s what I mean. Let’s look at a standard class first:
class Person:
    classification = "Human"

    def __init__(self, name, age, profession):
        self.name = name
        self.age = age
        self.profession = profession

You define a class attribute, .classification, inside the class block, but outside any of the special methods. All instances will share this class attribute. Then you define the .__init__() special method and create three instance attributes: .name, .age, and .profession. Each instance will have its own versions of these instance attributes. If you're not familiar with class attributes and instance attributes, you can read my seven-part series on object-oriented programming: A Magical Tour Through Object-Oriented Programming in Python • Hogwarts School of Codecraft and Algorithmancy
Now, let’s assume you don’t actually need the class attribute and that this class will only store data. It won’t have any additional methods. You decide to use a data class instead:
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int
    profession: str

Or you prefer to use a named tuple, and you reach out for typing.NamedTuple:
from typing import NamedTuple

class Person(NamedTuple):
    name: str
    age: int
    profession: str

The syntax is similar. I'll tell you why I used to find this confusing soon.
Whichever option you choose, you can create an instance using Person("Matthew", 30, "Python Programmer"). And each instance you create will have its own instance attributes .name, .age, and .profession.
But wait a minute! The data class and the named tuple use syntax that’s similar to creating class attributes. You define these just inside the class block and not in an .__init__() method. How come they create instance attributes? “That’s just how they work” is not good enough for me.
These aren’t class attributes. Not yet. There’s no value associated with these identifiers. Therefore, they can’t be class attributes, even though you write them where you’d normally add class attributes in a standard class. However, they can be class attributes if you include a default value:
@dataclass
class Person:
    name: str
    age: int
    profession: str = "Python Programmer"

The .profession attribute now has a string assigned to it. In a data class, this represents the default value. But if this weren't a data class, you'd look at .profession and recognise it as a class attribute. But in a data class, it's not a class attribute, it's an instance attribute, as are .name and .age, which look like…what do they look like, really? They're just type hints. Yes, type hints without any object assigned. Python type hints allow you to do this:
>>> first_name: str

This line is valid in Python. It does not create the variable name. You can confirm this:
>>> first_name
Traceback (most recent call last):
  File "<input>", line 1, in <module>
NameError: name 'first_name' is not defined

Although you cannot just write first_name if the identifier doesn't exist, you can use first_name: str. This creates an annotation which serves as the type hint. Third-party tools now know that when you create the variable first_name and assign it a value, it ought to be a string.
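You can see where that annotation ends up. At module level, including in the REPL, it's recorded in the __annotations__ dictionary rather than creating a variable:

>>> first_name: str
>>> __annotations__
{'first_name': <class 'str'>}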
So, let's go back to the latest version of the Person data class with the default value for one of the attributes:

@dataclass
class Person:
    name: str
    age: int
    profession: str = "Python Programmer"

But let's ignore the @dataclass decorator for now. Indeed, let's remove this decorator:
class Person:
    name: str
    age: int
    profession: str = "Python Programmer"

You define a class with one class attribute, .profession, and three type hints:

- name: str
- age: int
- profession: str
How can we convert this information into instance attributes when creating an instance of the class? I won’t try to reverse engineer NamedTuple or data classes here. Instead, I’ll explore my own path to get a sense of what might be happening in those tools.
Let’s start hacking away…
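One possible direction, sketched here as an illustration rather than the full solution: the class's __annotations__ mapping already holds everything needed to generate an __init__() that creates instance attributes.

def add_init(cls):
    # Build an __init__() from the names recorded in the class's annotations.
    fields = list(cls.__annotations__)

    def __init__(self, *args):
        if len(args) != len(fields):
            raise TypeError(f"expected {len(fields)} arguments, got {len(args)}")
        for name, value in zip(fields, args):
            setattr(self, name, value)

    cls.__init__ = __init__
    return cls

@add_init
class Person:
    name: str
    age: int
    profession: str

matthew = Person("Matthew", 30, "Python Programmer")
print(matthew.name, matthew.age, matthew.profession)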
November 18, 2025 10:01 PM UTC
PyCoder's Weekly
Issue #709: deepcopy(), JIT, REPL Tricks, and More (Nov. 18, 2025)
#709 – NOVEMBER 18, 2025
View in Browser »
Why Python’s deepcopy Can Be So Slow
“Pythonâs copy.deepcopy() creates a fully independent clone of an object, traversing every nested element of the object graph.” That can be expensive. Learn what it is doing and how you can sometimes avoid the cost.
SAURABH MISRA
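A quick illustration of why the deep version can cost so much more than a shallow copy:

import copy

config = {"servers": [{"host": "a"}, {"host": "b"}]}

shallow = copy.copy(config)   # the nested list and dicts are shared
deep = copy.deepcopy(config)  # every nested object is cloned

config["servers"][0]["host"] = "changed"
print(shallow["servers"][0]["host"])  # 'changed': the shallow copy shares nested objects
print(deep["servers"][0]["host"])     # 'a': the deep copy is fully independent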
A Plan for 5-10%* Faster Free-Threaded JIT by Python 3.16
Just In Time compilation is under active development in the CPython interpreter. This blog post outlines the targets for the next two Python releases.
KEN JIN
Fast Container Builds: 202 - Check out the Deep Dive
This blog explores the causes and consequences of slow container builds, with a focus on understanding how BuildKit's capabilities support faster container builds.
DEPOT sponsor
The Python Standard REPL: Try Out Code and Ideas Quickly
The Python REPL gives you instant feedback as you code. Learn to use this powerful tool to type, run, debug, edit, and explore Python interactively.
REAL PYTHON
Python Jobs
Python Video Course Instructor (Anywhere)
Python Tutorial Writer (Anywhere)
Articles & Tutorials
Preparing Data Science Projects for Production
How do you prepare your Python data science projects for production? What are the essential tools and techniques to make your code reproducible, organized, and testable? This week on the show, Khuyen Tran from CodeCut discusses her new book, “Production Ready Data Science.”
REAL PYTHON podcast
Becoming a Core Developer
Throughout your open source journey, you have no doubt been interacting with the core development team of the projects to which you have been contributing. Have you ever wondered how people become core developers of a project?
STEFANIE MOLIN
Modern, Self-Hosted Authentication
Keep your users, your data, and your stack with PropelAuth BYO. Easily add enterprise authentication features like Enterprise SSO, SCIM, and session management. Keep your sales team happy and give your CISO peace of mind.
PROPELAUTH sponsor
38 Things Python Developers Should Learn in 2025
Talk Python interviews Peter Wang and Calvin Hendrix-Parker and they discuss loads of things in the Python ecosystem that are worth learning, including free-threaded CPython, MCP, DuckDB, Arrow, and much more.
TALK PYTHON podcast
Trusted Publishing for GitLab Self-Managed and Organizations
The Trusted Publishing system for PyPI is seeing rapid adoption. This post talks about its growth along with the next steps: adding GitLab and handling organizations.
MIKE FIELDER
Decompression Is Up to 30% Faster in CPython 3.15
Zstandard compression got added in Python 3.14, but work is on-going. Python 3.15 is showing performance improvements in both zstd and other compression modules.
EMMA SMITH
__slots__ for Optimizing Classes
Most Python objects store their attributes in __dict__, which is a dictionary. Modules and classes always use __dict__, but not everything does.
TREY HUNNER
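A tiny example of the difference:

class Plain:
    def __init__(self, x):
        self.x = x

class Slotted:
    __slots__ = ("x",)

    def __init__(self, x):
        self.x = x

print(Plain(1).__dict__)                # {'x': 1}
print(hasattr(Slotted(1), "__dict__"))  # False: attributes live in fixed slots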
Convert Documents Into LLM-Ready Markdown
Get started with Python MarkItDown to turn PDFs, Office files, images, and URLs into clean, LLM-ready Markdown in seconds.
REAL PYTHON
Quiz: Convert Documents Into LLM-Ready Markdown
Practice MarkItDown basics. Convert PDFs, Word documents, Excel documents, and HTML documents to Markdown. Try the quiz.
REAL PYTHON
Convert Scikit-learn Pipelines into SQL Queries with Orbital
Orbital is a new library that converts Scikit-learn pipelines into SQL queries, enabling machine learning model inference directly within SQL databases.
POSIT sponsor
Python Operators and Expressions
Operators let you combine objects to create expressions that perform computations – the core of how Python works.
REAL PYTHON course
A Generator, Duck Typing, and a Branchless Conditional Walk Into a Bar
What’s your favorite line of code? Rodrigo expounds about generators, duck typing, and branchless conditionals.
RODRIGO GIRÃO SERRÃO
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
November 19, 2025
REALPYTHON.COM
DELSU Tech Invasion 3.0
November 19 to November 21, 2025
HAMPLUSTECH.COM
PyData Bristol Meetup
November 20, 2025
MEETUP.COM
PyLadies Dublin
November 20, 2025
PYLADIES.COM
Python Sul 2025
November 21 to November 24, 2025
PYTHON.ORG.BR
Happy Pythoning!
This was PyCoder’s Weekly Issue #709.
View in Browser »
[ Subscribe to PyCoder's Weekly – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
November 18, 2025 07:30 PM UTC
Real Python
Break Out of Loops With Python's break Keyword
In Python, the break statement lets you exit a loop prematurely, transferring control to the code that follows the loop. This tutorial guides you through using break in both for and while loops. You’ll also briefly explore the continue keyword, which complements break by skipping the current loop iteration.
By the end of this video course, you’ll understand that:
- A break in Python is a keyword that lets you exit a loop immediately, stopping further iterations.
- Using break outside of loops doesn't make sense because it's specifically designed to exit loops early.
- The break statement doesn't exit all loops, only the innermost loop that contains it.
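For example, here's a minimal sketch showing that break only stops the innermost loop:

for row in range(3):
    for col in range(3):
        if col == 1:
            break  # exits only the inner loop
        print(row, col)
# Prints (0, 0), (1, 0), and (2, 0): the outer loop keeps running.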
November 18, 2025 02:00 PM UTC
Mike Driscoll
Black Friday Python Deals Came Early
Black Friday deals came early this year. You can get 50% off of any of my Python books or courses until the end of November. You can use this coupon code at checkout: BLACKISBACK
The following links already have the discount applied:
Python eBooks
- Python 101
- Python 201: Intermediate Python
- The Python Quiz Book
- Automating Excel with Python
- Python Logging
- Pillow: Image Processing with Python
- Creating GUI Applications with wxPython
- JupyterLab 101
- Creating TUI Applications with Textual and Python
Python Courses
- Python 101 Video Series
- Automating Excel with Python Video series + eBook
- Python Logging Video Course
The post Black Friday Python Deals Came Early appeared first on Mouse Vs Python.
November 18, 2025 01:41 PM UTC
PyCharm
Open Source in Focus: Projects We're Proud to Support
November 18, 2025 12:07 PM UTC
PyCon
Join us in "Trailblazing Python Security" at PyCon US 2026
PyCon US 2026 is coming to Long Beach, California! PyCon US is the premier conference for the Python programming language in North America. Python experts and enthusiasts from around the globe will gather in Long Beach to discuss and learn about the latest developments in the Python programming language and its massive ecosystem of Python projects.
Brand new this year are two themed talk tracks: "Trailblazing Python Security" and "Python and the Future of AI". We want to hear talks from you! The PyCon US Call for Proposals (CFP) for PyCon US 2026 is now open through December 19th, 2025. Don't wait to submit your talks: the earlier you submit, the better.
If your company or organization would like to show support for a more secure Python ecosystem by sponsoring the "Trailblazing Python Security" talk track, check out the PyCon US 2026 Sponsor Prospectus or reach out via email to sponsors@python.org. We've made three sponsor slots available for the track: one lead sponsor and two co-sponsors, so act fast!
We're also looking for mentors! If you're an experienced speaker and want to help someone with their proposal, the PyCon US Proposal Mentorship Program is for you! We typically get twice as many mentees seeking support as we do volunteer mentors. Sign up to mentor via this form by November 21, 2025.
If you're interested in Python and security: why should you attend PyCon US 2026?
Many Pythonistas use the Open Source software available on the Python Package Index (PyPI). PyCon US is the flagship conference hosted by the Python Software Foundation, the stewards of the Python Package Index. Many brand-new security features are announced and demoed live at PyCon US, such as "PyPI Digital Attestations", "PyPI Organizations", and "Trusted Publishers".
You'll be among the first Pythonistas to hear about these new features and chat with the developers and maintainers of PyPI.
Open Space about handling vulnerabilities in Python projects
PyCon US always has many opportunities to learn about the latest in Python security. Last year, PyCon US 2025 hosted a "Python Security Mini-Summit" Open Space with speakers discussing the Cyber Resilience Act (CRA), CVEs and Open Source, and supply-chain security within the Scientific Python community. Expect even more security content this year!
The conference talk schedule includes many talks about using memory-safe systems programming languages like Rust with Python, authentication with popular web frameworks, how to handle security vulnerabilities as an Open Source project, and how the "Phantom Dependency" problem affects the Python package ecosystem.
We hope you'll consider joining us in Long Beach, CA for PyCon US 2026. See you there!
November 18, 2025 11:17 AM UTC
Seth Michael Larson
BrotliCFFI has two new maintainers
Quick post announcing that the Python package brotlicffi has two new maintainers: Nathan Goldbaum and Christian Clauss. Thank you both for stepping up to help me with this package.
Both these folks (along with a few others) have shown up and gotten straight to work adding support for Python 3.14, free-threaded Python, and the latest release of Brotli. I've given myself the task of changing the PyPI publishing workflow so that a group of contributors can make new releases, mostly to get out of their way!
I'm grateful that the project lives under the python-hyper organization, which is structured in such a way that allows onboarding new contributors quickly after they've shown interest in contributing to a project meaningfully.
Thanks for keeping RSS alive! ♥
November 18, 2025 12:00 AM UTC
November 17, 2025
Rodrigo Girão Serrão
Floodfill algorithm in Python
Learn how to implement and use the floodfill algorithm in Python.
What is the floodfill algorithm?
Click the image below to randomly colour the region you click.
Go ahead, try it!
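Under the hood, the effect is a breadth-first fill over a grid of pixels. Here's a minimal standalone sketch of that idea in plain Python (a simplified grid version, not the canvas code that powers the interactive demo):

from collections import deque

def flood_fill(grid, x, y, new_colour):
    """Fill the 4-connected region containing (x, y) with new_colour."""
    old_colour = grid[y][x]
    if old_colour == new_colour:
        return
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if not (0 <= cy < len(grid) and 0 <= cx < len(grid[cy])):
            continue  # outside the grid
        if grid[cy][cx] != old_colour:
            continue  # not part of the original region (or already filled)
        grid[cy][cx] = new_colour
        queue.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])

image = [
    [0, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
]
flood_fill(image, 0, 0, 2)
print(image)  # [[2, 2, 1], [2, 1, 1], [1, 1, 0]]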
November 17, 2025 03:49 PM UTC
Real Python
How to Serve a Website With FastAPI Using HTML and Jinja2
By the end of this guide, you'll be able to serve dynamic websites from FastAPI endpoints using Jinja2 templates powered by CSS and JavaScript. By leveraging FastAPI's HTMLResponse, StaticFiles, and Jinja2Templates classes, you'll use FastAPI like a traditional Python web framework.
You'll start by returning basic HTML from your endpoints, then add Jinja2 templating for dynamic content, and finally create a complete website with external CSS and JavaScript files to copy hex color codes:
To follow along, you should be comfortable with Python functions and have a basic understanding of HTML and CSS. Experience with FastAPI is helpful but not required.
Get Your Code: Click here to download the free sample code that shows you how to serve a website with FastAPI using HTML and Jinja2.
Take the Quiz: Test your knowledge with our interactive "How to Serve a Website With FastAPI Using HTML and Jinja2" quiz. You'll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Serve a Website With FastAPI Using HTML and Jinja2: Review how to build dynamic websites with FastAPI and Jinja2, and serve HTML, CSS, and JS with HTMLResponse and StaticFiles.
Prerequisites
Before you start building your HTML-serving FastAPI application, you'll need to set up your development environment with the required packages. You'll install FastAPI along with its standard dependencies, including the ASGI server you need to run your application.
Select your operating system below and install FastAPI with all the standard dependencies inside a virtual environment:
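For example, on Linux or macOS the commands look roughly like this (the Windows commands differ only in the activation step):

$ python -m venv venv
$ source venv/bin/activate
(venv) $ python -m pip install "fastapi[standard]"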
These commands create and activate a virtual environment, then install FastAPI along with Uvicorn as the ASGI server, and additional dependencies that enhance FastAPI's functionality. The standard option ensures you have everything you need for this tutorial, including Jinja2 for templating.
Step 1: Return Basic HTML Over an API Endpoint
When you take a close look at a FastAPI example application, you commonly encounter functions returning dictionaries, which the framework transparently serializes into JSON responses.
However, FastAPI's flexibility allows you to serve various custom responses besides that. For example, HTMLResponse returns content with the text/html type, which your browser interprets as a web page.
To explore returning HTML with FastAPI, create a new file called main.py and build your first HTML-returning endpoint:
main.py
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()

@app.get("/", response_class=HTMLResponse)
def home():
    html_content = """
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Home</title>
    </head>
    <body>
        <h1>Welcome to FastAPI!</h1>
    </body>
    </html>
    """
    return html_content
The HTMLResponse class tells FastAPI to return your content with the text/html content type instead of the default application/json response. This ensures that browsers interpret your response as HTML rather than plain text.
Before you can visit your home page, you need to start your FastAPI development server to see the HTML response in action:
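With the standard install, one way to do that is with the FastAPI CLI (running uvicorn main:app --reload works as well):

(venv) $ fastapi dev main.py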
Read the full article at https://realpython.com/fastapi-jinja2-template/ »
November 17, 2025 02:00 PM UTC
Quiz: How to Serve a Website With FastAPI Using HTML and Jinja2
In this quiz, you’ll test your understanding of building dynamic websites with FastAPI and Jinja2 Templates.
By working through this quiz, you’ll revisit how to return HTML with HTMLResponse, serve assets with StaticFiles, render Jinja2 templates with context, and include CSS and JavaScript for interactivity like copying hex color codes.
If you are new to FastAPI, review Get Started With FastAPI. You can also brush up on Python functions and HTML and CSS.
November 17, 2025 12:00 PM UTC
Python Bytes
#458 I will install Linux on your computer
Topics covered in this episode:
- Possibility of a new website for Django
- aiosqlitepool
- deptry
- browsr
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=s2HlckfeBCs

About the show

Sponsored by us! Support our work through:
- Our courses at Talk Python Training: https://training.talkpython.fm/
- The Complete pytest Course: https://courses.pythontest.com/p/the-complete-pytest-course
- Patreon Supporters: https://www.patreon.com/pythonbytes

Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Brian #1: Possibility of a new website for Django
- Current Django site: djangoproject.com
- Adam Hill's in-progress redesign idea: django-homepage.adamghill.com
- Commentary in the "Want to work on a homepage site redesign?" discussion: https://forum.djangoproject.com/t/want-to-work-on-a-homepage-site-redesign/42909/35

Michael #2: aiosqlitepool (https://github.com/slaily/aiosqlitepool)
- A resilient, high-performance asynchronous connection pool layer for SQLite, designed for efficient and scalable database operations.
- About 2x better than regular SQLite.
- Pairs with aiosqlite (https://github.com/omnilib/aiosqlite)
- aiosqlitepool in three points:
  - Eliminates connection overhead: It avoids repeated database connection setup (syscalls, memory allocation) and teardown (syscalls, deallocation) by reusing long-lived connections.
  - Faster queries via "hot" cache: Long-lived connections keep SQLite's in-memory page cache "hot." This serves frequently requested data directly from memory, speeding up repetitive queries and reducing I/O operations.
  - Maximizes concurrent throughput: Allows your application to process significantly more database queries per second under heavy load.

Brian #3: deptry (https://deptry.com)
- "deptry is a command line tool to check for issues with dependencies in a Python project, such as unused or missing dependencies. It supports projects using Poetry, pip, PDM, uv, and more generally any project supporting PEP 621 specification."
- "Dependency issues are detected by scanning for imported modules within all Python files in a directory and its subdirectories, and comparing those to the dependencies listed in the project's requirements."
- Note: if you use project.optional-dependencies, for example

  [project.optional-dependencies]
  plot = ["matplotlib"]
  test = ["pytest"]

  you have to set a config setting to get it to work right:

  [tool.deptry]
  pep621_dev_dependency_groups = ["test", "docs"]

Michael #4: browsr (https://github.com/juftin/browsr)
- browsr is a pleasant file explorer in your terminal. It's a command line TUI (text-based user interface) application that empowers you to browse the contents of local and remote filesystems with your keyboard or mouse.
- You can quickly navigate through directories and peek at files whether they're hosted locally, in GitHub, over SSH, in AWS S3, Google Cloud Storage, or Azure Blob Storage.
- View code files with syntax highlighting, format JSON files, render images, convert data files to navigable datatables, and more.

Extras

Brian:
- Understanding the MICRO
- TDD chapter coming out later today or maybe tomorrow, but it's close.

Michael:
- Peacock is excellent (https://marketplace.visualstudio.com/items?itemName=johnpapa.vscode-peacock)

Joke: I will find you (https://x.com/thatstraw/status/1977317574779048171)
November 17, 2025 08:00 AM UTC
November 16, 2025
Ned Batchelder
Why your mock breaks later
In Why your mock doesn't work I explained this rule of mocking:
Mock where the object is used, not where it’s defined.
That blog post explained why that rule was important: often a mock doesn’t work at all if you do it wrong. But in some cases, the mock will work even if you don’t follow this rule, and then it can break much later. Why?
Let’s say you have code like this:
# user.py
import json
from pathlib import Path

def get_user_settings():
    with open(Path("~/settings.json").expanduser()) as f:
        return json.load(f)

def add_two_settings():
    settings = get_user_settings()
    return settings["opt1"] + settings["opt2"]
You write a simple test:
def test_add_two_settings():
    # NOTE: need to create ~/settings.json for this to work:
    # {"opt1": 10, "opt2": 7}
    assert add_two_settings() == 17
As the comment in the test points out, the test will only pass if you create the correct settings.json file in your home directory. This is bad: you don’t want to require finicky environments for your tests to pass.
The thing we want to avoid is opening a real file, so it’s a natural impulse
to mock out open():
# test_user.py
from io import StringIO
from unittest.mock import patch

@patch("builtins.open")
def test_add_two_settings(mock_open):
    mock_open.return_value = StringIO('{"opt1": 10, "opt2": 7}')
    assert add_two_settings() == 17
Nice, the test works without needing to create a file in our home directory!
Much later...
One day your test suite fails with an error like:
...
File ".../site-packages/coverage/python.py", line 55, in get_python_source
source_bytes = read_python_source(try_filename)
File ".../site-packages/coverage/python.py", line 39, in read_python_source
return source.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
TypeError: replace() argument 1 must be str, not bytes
What happened!? Coverage.py code runs during your tests, invoked by the
Python interpreter. The mock in the test changed the builtin open, so
any use of it anywhere during the test is affected. In some cases, coverage.py
needs to read your source code to record the execution properly. When that
happens, coverage.py unknowingly uses the mocked open, and bad things
happen.
When you use a mock, patch it where it’s used, not where it’s defined. In this case, the patch would be:
@patch("myproduct.user.open")
def test_add_two_settings(mock_open):
... etc ...
With a mock like this, the coverage.py code would be unaffected.
Keep in mind: it’s not just coverage.py that could trip over this mock. There
could be other libraries used by your code, or you might use open
yourself in another part of your product. Mocking the definition means
anything using the object will be affected. Your intent is to only
mock in one place, so target that place.
Postscript
I decided to add some code to coverage.py to defend against this kind of over-mocking. There is a lot of over-mocking out there, and this problem only shows up in coverage.py with Python 3.14. It’s not happening to many people yet, but it will happen more and more as people start testing with 3.14. I didn’t want to have to answer this question many times, and I didn’t want to force people to fix their mocks.
From a certain perspective, I shouldn’t have to do this. They are in the wrong, not me. But this will reduce the overall friction in the universe. And the fix was really simple:
open = open
This is a top-level statement in my module, so it runs when the module is
imported, long before any tests are run. The assignment to open will
create a global in my module, using the current value of open, the one
found in the builtins. This saves the original open for use in my module
later, isolated from how builtins might be changed later.
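To see that mechanism in isolation, here's a tiny self-contained sketch; textutils.py and the pytest test are invented names for illustration, not part of coverage.py:

# textutils.py - a hypothetical module using the same trick
open = open  # capture the builtin open at import time

def read_text(path):
    # Uses the module-global "open" saved above, not whatever builtins.open is later
    with open(path) as f:
        return f.read()

# test_textutils.py - the saved reference survives a builtins-level mock
from unittest.mock import patch
import textutils

def test_read_text_survives_builtin_mock(tmp_path):
    path = tmp_path / "hello.txt"
    path.write_text("hi")
    with patch("builtins.open"):  # breaks the builtin for everything else
        assert textutils.read_text(path) == "hi"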
This is an ad-hoc fix: it only defends one builtin. Mocking other builtins
could still break coverage.py. But open is a common one, and this will
keep things working smoothly for those cases. And there’s precedent: I’ve
already been using a more involved technique to defend
against mocking of the os module for ten years.
Even better!
No blog post about mocking is complete without encouraging a number of other best practices, some of which could get you out of the mocking mess:
- Use autospec=True to make your mocks strictly behave like the original object: see Why your mock still doesn't work. (There's a small sketch after this list.)
- Make assertions about how your mock was called to be sure everything is connected up properly.
- Use verified fakes instead of auto-generated mocks: Fast tests for slow services: why you should use verified fakes.
- Separate your code so that computing functions like our add_two_settings don't also do I/O. This makes the functions easier to test in the first place. Take a look at Functional Core, Imperative Shell.
- Dependency injection lets you explicitly pass test-specific objects where they are needed instead of relying on implicit access to a mock.
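Combining the autospec tip with the patch-where-it's-used rule, a minimal sketch (reusing the user.py example above, with myproduct.user as an assumed module path) might look like:

from unittest.mock import patch
from myproduct.user import add_two_settings

# Mocking the settings loader instead of open() leaves builtins untouched;
# autospec=True makes the mock reject calls the real function wouldn't accept.
@patch("myproduct.user.get_user_settings", autospec=True)
def test_add_two_settings(mock_get_settings):
    mock_get_settings.return_value = {"opt1": 10, "opt2": 7}
    assert add_two_settings() == 17
    mock_get_settings.assert_called_once_with()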
November 16, 2025 12:55 PM UTC
November 15, 2025
Kay Hayen
Nuitka Release 2.8
This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, "download now".
This release adds a ton of new features and corrections.
Bug Fixes
Standalone: For the "Python Build Standalone" flavor ensured that debug builds correctly recognize all their specific built-in modules, preventing potential errors. (Fixed in 2.7.2 already.)
Linux: Fixed a crash when attempting to modify the RPATH of statically linked executables (e.g., from imageio-ffmpeg). (Fixed in 2.7.2 already.)
Anaconda: Updated PySide2 support to correctly handle path changes in newer Conda packages and improved path normalization for robustness. (Fixed in 2.7.2 already.)
macOS: Corrected handling of QtWebKit framework resources. Previous special handling was removed as symlinking is now default, which also resolved an issue of file duplication. (Fixed in 2.7.2 already.)
Debugging: Resolved an issue in debug builds where an incorrect assertion was done during the addition of distribution metadata. (Fixed in 2.7.1 already.)
Module: Corrected an issue preventing stubgen from functioning with Python versions earlier than 3.6. (Fixed in 2.7.1 already.)
UI: Prevented Nuitka from crashing when --include-module was used with a built-in module. (Fixed in 2.7.1 already.)
Module: Addressed a compatibility issue where the code mode for the constants blob failed with the C++ fallback. This fallback is utilized on very old GCC versions (e.g., default on CentOS7), which are generally not recommended. (Fixed in 2.7.1 already.)
Standalone: Resolved an assertion error that could occur in certain Python setups due to extension module suffix ordering. The issue involved incorrect calculation of the derived module name when the wrong suffix was applied (e.g., using .so to derive a module name like gdbmmodule instead of just gdbm). This was observed with Python 2 on CentOS7 but could potentially affect other versions with unconventional extension module configurations. (Fixed in 2.7.1 already.)
Python 3.12.0: Corrected the usage of an internal structure identifier that is only available in Python 3.12.1 and later versions. (Fixed in 2.7.1 already.)
Plugins: Prevented crashes in Python setups where importing
pkg_resourcesresults in aPermissionError. This typically occurs in broken installations, for instance, where some packages are installed with root privileges. (Fixed in 2.7.1 already.)macOS: Implemented a workaround for data file names that previously could not be signed within app bundles. The attempt in release 2.7 to sign these files inadvertently caused a regression for cases involving illegal filenames. (Fixed in 2.7.1 already.)
Python 2.6: Addressed an issue where
staticmethodobjects lacked the__func__attribute. Nuitka now tracks the original function as a distinct value. (Fixed in 2.7.1 already.)Corrected behavior for
orderedsetimplementations that lack aunionmethod, ensuring Nuitka does not attempt to use it. (Fixed in 2.7.1 already.)Python 2.6: Ensured compatibility for setups where the
_PyObject_GC_IS_TRACKEDmacro is unavailable. This macro is now used beyond assertions, necessitating support outside of debug mode. (Fixed in 2.7.1 already.)Python 2.6: Resolved an issue caused by the absence of
sys.version_info.releaselevelby utilizing a numeric index instead and adding a new helper function to access it. (Fixed in 2.7.1 already.)Module: Corrected the
__compiled__.main value to accurately reflect the package in which a module is loaded; this was not the case for Python versions prior to 3.12. (Fixed in 2.7.1 already.)
Plugins: Further improved the
dill-compatplugin by preventing assertions related to empty annotations and by removing hard-coded module names for greater flexibility. (Fixed in 2.7.1 already.)Windows: For onefile mode using DLL mode, ensure all necessary environment variables are correctly set for
QtWebEngine. Previously, default Qt paths could point incorrectly near the onefile binary. (Fixed in 2.7.3 already.)PySide6: Fixed an issue with
PySide6where slots defined in base classes might not be correctly handled, leading to them only working for the first class that used them. (Fixed in 2.7.3 already.)Plugins: Enhanced Qt binding plugin support by checking for module presence without strictly requiring metadata. This improves compatibility with environments like Homebrew or
uvwhere package metadata might be absent. (Fixed in 2.7.3 already.)macOS: Ensured the
appletarget is specified during linking to prevent potential linker warnings about using anunknowntarget in certain configurations. (Fixed in 2.7.3 already.)macOS: Disabled the use of static
libpythonwithpyenvinstallations, as this configuration is currently broken. (Fixed in 2.7.3 already.)macOS: Improved error handling for the
--macos-app-protected-resourceoption by catching cases where a description is not provided. (Fixed in 2.7.3 already.)Plugins: Enhanced workarounds for
PySide6, now also covering single-shot timer callbacks. (Fixed in 2.7.4 already.)Plugins: Ensured that the Qt binding module is included when using accelerated mode with Qt bindings. (Fixed in 2.7.4 already.)
macOS: Avoided signing through symlinks and minimized their use to prevent potential issues, especially during code signing of application bundles. (Fixed in 2.7.4 already.)
Windows: Implemented path shortening for paths used in onefile DLL mode to prevent issues with long or Unicode paths. This also benefits module mode. (Fixed in 2.7.4 already.)
UI: The options nanny plugin no longer uses a deprecated option for macOS app bundles, preventing potential warnings or issues. (Fixed in 2.7.4 already.)
Plugins: Ensured the correct macOS target architecture is used. This is particularly useful for
PySide2with universal CPython binaries, to prevent compile time crashes e.g. when cross-compiling for a different architecture. (Fixed in 2.7.4 already.)UI: Fixed a crash that occurred on macOS if the
ccachedownload was rejected by the user. (Fixed in 2.7.4 already.)UI: Improved the warning message related to macOS application icons for better clarity. (Added in 2.7.4 already.)
Standalone: Corrected an issue with QML plugins on macOS when using newer
PySide6versions. (Fixed in 2.7.4 already.)Python 3.10+: Fixed a memory leak where the matched value in pattern matching constructs was not being released. (Fixed in 2.7.4 already.)
Python3: Fixed an issue where exception exits for larger
rangeobjects, which are not optimized away, were not correctly annotated by the compiler. (Fixed in 2.7.4 already.)Windows: Corrected an issue with the automatic use of icons for
PySide6applications on non-Windows if Windows icon options were used. (Fixed in 2.7.4 already.)Onefile: When using DLL mode there was a load error for the DLL with MSVC 14.2 or earlier, but older MSVC is to be supported. (Fixed in 2.7.5 already.)
Onefile: Fix, the splash screen was showing in DLL mode twice or more; these extra copies couldnât be stopped. (Fixed in 2.7.5 already.)
Standalone: Fixed an issue where data files were no longer checked for conflicts with included DLLs. The order of data file and DLL copying was restored, and macOS app signing was made a separate step to remove the order dependency. (Fixed in 2.7.6 already.)
macOS: Corrected our workaround using symlinks for files that cannot be signed. When
--output-directorywas used, as it made incorrect assumptions about thedistfolder path. (Fixed in 2.7.6 already.)UI: Prevented checks on onefile target specifications when not actually compiling in onefile mode, e.g. on macOS with
--mode=app. (Fixed in 2.7.6 already.)
UI: Improved error messages for data directory options by including the relevant part in the output. (Fixed in 2.7.6 already.)
Plugins: Suppressed
UserWarningmessages from thepkg_resourcesmodule during compilation. (Fixed in 2.7.6 already.)Python3.11+: Fixed an issue where descriptors for compiled methods were incorrectly exposed for Python 3.11 and 3.12. (Fixed in 2.7.7 already.)
Plugins: Avoided loading modules when checking for data file existence. This prevents unnecessary module loading and potential crashes in broken installations. (Fixed in 2.7.9 already.)
Plugins: The
global_change_functionanti-bloat feature now operates on what should be the qualified names (__qualname__) instead of just function names, preventing incorrect replacements of methods with the same name in different classes. (Fixed in 2.7.9 already.)Onefile: The
containing_dirattribute of the__compiled__object was regressed in DLL mode on Windows, pointing to the temporary DLL directory instead of the directory containing the onefile binary. (Fixed in 2.7.10 already, note that the solution in 2.7.9 had a regression.)Compatibility: Fixed a crash that occurred when an import attempted to go outside its package boundaries. (Fixed in 2.7.11 already.)
macOS: Ignored a warning from
codesignwhen using self-signed certificates. (Fixed in 2.7.11 already.)Onefile: Fixed an issue in DLL mode where environment variables from other onefile processes (related to temporary paths and process IDs) were not being ignored, which could lead to conflicts. (Fixed in 2.7.12 already.)
Compatibility: Fixed a potential crash that could occur when processing an empty code body. (Fixed in 2.7.13 already.)
Plugins: Ensured that DLL directories created by plugins could be at the top level when necessary, improving flexibility. (Fixed in 2.7.13 already.)
Onefile: On Windows, corrected an issue in DLL mode where
original_argv0wasNone; it is now properly set. (Fixed in 2.7.13 already.)macOS: Avoided a warning that appeared on newer macOS versions. (Fixed in 2.7.13 already.)
macOS: Allowed another DLL to be missing for
PySide6to support more setups. (Fixed in 2.7.13 already.)Standalone: Corrected the existing import workaround for Python 3.12 that was incorrectly renaming existing modules of matching names into sub-modules of the currently imported module. (Fixed in 2.7.14 already.)
Standalone: On Windows, ensured that the DLL search path correctly uses the proper DLL directory. (Fixed in 2.7.14 already.)
Python 3.5+: Fixed a memory leak where the called object could be leaked in calls with keyword arguments following a star dict argument. (Fixed in 2.7.14 already.)
Python 3.13: Fixed an issue where
PyState_FindModulewas not working correctly with extension modules due to sub-interpreter changes. (Fixed in 2.7.14 already.)Onefile: Corrected an issue where the process ID (PID) was not set in a timely manner, which could affect onefile operations. (Fixed in 2.7.14 already.)
Compatibility: Fixed a crash that could occur when a function with both a star-list argument and keyword-only arguments was called without any arguments. (Fixed in 2.7.16 already.)
Standalone: Corrected an issue where distribution names were not checked case-insensitively, which could lead to metadata not being included. (Fixed in 2.7.16 already.)
Linux: Avoid using full zlib with extern declarations but instead only the CRC32 functions we need. Otherwise conflicts with OS headers could occur.
Standalone: Fixed an issue where scanning for standard library dependencies was unnecessarily performed.
Plugins: Made the runtime query code robust against modules that print to stdout during import.
This affected at least
togagiving some warnings on Windows with mere stdout prints. We now have a marker for the start of our output that we look for and safely ignore them.Windows: Do not attempt to attach to the console when running in DLL mode. For onefile with DLL mode, this was unnecessary as the bootstrap already handles it, and for pure DLL mode, it is not desired.
Onefile: Removed unnecessary parent process monitoring in onefile mode, as there is no child process launched.
Anaconda: Determine version and project name for conda packages more reliably
It seems Anaconda is giving variables in package metadata and often no project name, so we derive it from the conda files and its meta data in those cases.
macOS: Make sure the SSL certificates are found when downloading on macOS, ensuring successful downloads.
Windows: Fixed an issue where console mode
attachwas not working in onefile DLL mode.Scons: Fixed an issue where
pragma was used with older gcc; gcc can give warnings about them. This fixes building on older OSes with the system gcc.
Compatibility: Fix, need to avoid using filenames with more than 250 chars for long module names.
For cache files, const files, and C files, we need to make sure we don't exceed the 255 char limit per path element that literally every OS has.
Also enhanced the check code for legal paths to cover this, so user options are covered against these errors too.
Moved file hashing to file operations where it makes more sense to allow module names to use hashing to provide a legal filename to refer to themselves.
Compatibility: Fixed an issue where walking included compiled packages through the Nuitka loader could produce incorrect names in some cases.
Windows: Fixed wrong calls made when checking
stderrproperties during launch if it wasNone.Debugging: Fixed an issue where the segfault non-deployment disable itself before doing anything else.
Plugins: Fix, the warning to choose a GUI plugin for
matplotlibwas given withtk-interplugin enabled still, which is of course not appropriate.Distutils: Fix, do not recreate the build folder with a
.gitignorefile.We were re-creating it as soon as we looked at what it would be, now itâs created only when asking for that to happen.
No-GIL: Addressed compile errors for the free-threaded dictionary implementation that were introduced by necessary hot-fixes in the version 2.7.
Compatibility: Fixed handling of generic classes and generic type declarations in Python 3.12.
macOS: Fixed an issue where entitlements were not properly provided for code signing.
Onefile: Fixed delayed shutdown for terminal applications in onefile DLL mode.
Was waiting for non-used child processes, which donât exist and then the timeout for that operation, which is always happening on CTRL-C or terminal shutdown.
Python3.13: Fix, seems interpreter frames with None code objects exist and need to be handled as well.
Standalone: Fix, need to allow for
setuptoolspackage to be user provided.Windows: Avoided using non-encodable dist and build folder names.
Some paths donât become short, but still be non-encodable from the file system for tools. In these cases, temporary filenames are used to avoid errors from C compilers and other tools.
Python3.13: Fix, ignore stdlib
cgimodule that might be left over from previous installsThe module was removed during development, and if you install over an old alpha version of 3.13 a newer Python, Nuitka would crash on it.
macOS: Allowed the
libfolder for the Python Build Standalone flavor, improving compatibility.macOS: Allowed libraries for
rpathresolution to be found in all Homebrew folders and not justlib.Onefile: Need to allow
..in paths to allow outside installation paths.
Package Support
Standalone: Introduced support for the
niceguipackage. (Added in 2.7.1 already.)Standalone: Extended support to include
xgboost.coreon macOS. (Added in 2.7.1 already.)Standalone: Added needed data files for
ursinapackage. (Added in 2.7.1 already.)Standalone: Added support for newer versions of the
pydanticpackage. (Added in 2.7.4 already.)Standalone: Extended
libonnxruntimesupport to macOS, enabling its use in compiled applications on this platform. (Added in 2.7.4 already.)Standalone: Added necessary data files for the
pygameextrapackage. (Added in 2.7.4 already.)Standalone: Included GL backends for the
arcadepackage. (Added in 2.7.4 already.)Standalone: Added more data directories for the
ursinaandpanda3dpackages, improving their out-of-the-box compatibility. (Added in 2.7.4 already.)Standalone: Added support for newer
skimagepackage. (Added in 2.7.5 already.)Standalone: Added support for the
PyTaskbarpackage. (Added in 2.7.6 already.)macOS: Added
tk-intersupport for Python 3.13 with official CPython builds, which now use framework files for Tcl/Tk. (Added in 2.7.6 already.)Standalone: Added support for the
paddlexpackage. (Added in 2.7.6 already.)Standalone: Added support for the
jinxedpackage, which dynamically loads terminal information. (Added in 2.7.6 already.)Windows: Added support for the
ansiconpackage by including a missing DLL. (Added in 2.7.6 already.)macOS: Enhanced configuration for the
pypylonpackage, however, itâs not sufficient. (Added in 2.7.6 already.)Standalone: Added support for newer
numpyversions. (Added in 2.7.7 already.)Standalone: Added support for older
vtkpackage. (Added in 2.7.8 already.)Standalone: Added support for newer
certifiversions that useimportlib.resources. (Added in 2.7.9 already.)Standalone: Added support for the
reportlab.graphics.barcodemodule. (Added in 2.7.9 already.)Standalone: Added support for newer versions of the
transformerspackage. (Added in 2.7.11 already.)Standalone: Added support for newer versions of the
sklearnpackage. (Added in 2.7.12 already.)Standalone: Added support for newer versions of the
scipypackage. (Added in 2.7.12 already.)Standalone: Added support for older versions of the
cv2package (specifically version 4.4). (Added in 2.7.12 already.)Standalone: Added initial support for the
vllmpackage. (Added in 2.7.12 already.)Standalone: Ensured all necessary DLLs for the
pygamepackage are included. (Added in 2.7.12 already.)Standalone: Added support for newer versions of the
zaber_motionpackage. (Added in 2.7.13 already.)Standalone: Added missing dependencies for the
pymediainfopackage. (Added in 2.7.13 already.)Standalone: Added support for newer versions of the
sklearnpackage by including a missing dependency. (Added in 2.7.13 already.)Standalone: Added support for newer versions of the
togapackage. (Added in 2.7.14 already.)Standalone: Added support for the
wordninja-enhancedpackage. (Added in 2.7.14 already.)Standalone: Added support for the
Fast-SSIMpackage. (Added in 2.7.14 already.)Standalone: Added a missing data file for the
rfc3987_syntaxpackage. (Added in 2.7.14 already.)Standalone: Added missing data files for the
trimeshpackage. (Added in 2.7.15 already.)Standalone: Added support for the
gdsfactory,klayout, andkfactorypackages. (Added in 2.7.15 already.)Standalone: Added support for the
vllmpackage. (Added in 2.7.16 already.)Standalone: Added support for newer versions of the
tkinterwebpackage. (Added in 2.7.15 already.)Standalone: Added support for newer versions of the
cmsis_pack_managerpackage. (Added in 2.7.15 already.)Standalone: Added missing data files for the
idlelibpackage. (Added in 2.7.15 already.)Standalone: Avoid including debug binary on non-Windows for Qt Webkit.
Standalone: Add dependencies for pymediainfo package.
Standalone: Added support for the
winptypackage.Standalone: Added support for newer versions of the
gipackage.Standalone: Added support for newer versions of the
litellmpackage.Standalone: Added support for the
traitsandpyfacepackages.Standalone: Added support for newer versions of the
transformerspackage.Standalone: Added data files for
rasteriopackage.Standalone: Added support for
ortoolspackage.Standalone: Added support newer âvtkâ package
New Features
Python3.14: Added experimental support for Python3.14, not recommended for use yet, as this is very fresh and might be missing a lot of fixes.
Release: Added an extra dependency group for the Nuitka build-backend, intended for use in
pyproject.tomland other build-system dependencies. To use it depend inNuitka[build-wheel]instead of Nuitka. (Added in 2.7.7 already.)For release we also added
Nuitka[onefile],Nuitka[standalone],Nuitka[app]as extra dependency groups. If icon conversions are used, e.g.Nuitka[onefile,icon-conversion]adds the necessary packages for that. If you donât care about whatâs being pulled inNuitka[all]can be used, by defaultNuitkaonly comes with the bare minimum needed and will inform about missing packages.macOS: Added
--macos-sign-keyring-filenameand--macos-sign-keyring-passwordto automatically unlock a keyring for use during signing. This is very useful for CI where no UI prompt can be used.Windows: Detect when
inputcannot be used due to no console or the console not providing proper standard input and produce a dialog for entry instead. Shells likecmd.exeexecute inputs as commands entered when attaching to them. With this, the user is informed to make the input into the dialog instead. In case of no terminal, this just brings up the dialog for GUI mode.Plugins: Introduced
global_change_functionto the anti-bloat engine, allowing function replacements across all sub-modules of a package at once. (Added in 2.7.6 already.)Reports: For Python 3.13+, the compilation report now includes information on GIL usage. (Added in 2.7.7 already.)
macOS: Added an option to prevent an application from running in multiple instances. (Added in 2.7.7 already.)
AIX: Added support for this OS as well, now standalone and module mode work there too.
Scons: When a C compilation fails due to warnings in --debug mode, recognize that and provide the proper extra options to use if you want to ignore that.
Non-Deployment: Added a non-deployment handler to catch modules that error exit on import, while assumed to work perfectly.
This will give people an indication that the
numpymodule is expected to work and that maybe just the newest version is not and we need to be told about it.Non-Deployment: Added a non-deployment handler for
DistributionNotFoundexceptions in the main program, which now points the user to the necessary metadata options.UI: Made
--include-data-files-externalthe primary option for placing data files alongside the created program.This now works with standalone mode too, and is no longer onefile specific, the name should reflect that and people can now use it more broadly.
Plugins: Added support for multiple warnings of the same kind. The
dill-compatplugin needs that as it supports multiple packages.Plugins: Added detector for the
dill-compatplugin that detects usages ofdill,cloudpickleandray.cloudpickle.Standalone: Add support for including Visual Code runtime dlls on Windows.
When MSVC (Visual Studio) is installed, we take the runtime DLLs from its folders. We cannot take the ones from the
redistpackages installed to system folders for license reasons.Gives a warning when these DLLs would be needed, but were not found.
We might want to add an option later to exclude them again, for size purposes, but correctness out of the box is more important for now.
UI: Make sure the distribution name is correct for
--include-distribution-metadataoption values.Plugins: Added support for configuring re-compilation of extension modules from their source code.
When we have both Python code and an extension module, we only had a global option available on the command line.
This adds
--recompile-extension-modulesfor more fine grained choices as it allows to specify names and patterns.For
zmq, we need to enforce it to never be compiled, as it checks if it is compiled with Cython at runtime, so re-compilation is never possible.
Reports: Include environment flags for C compiler and linker picked up for the compilation. Sometimes these cause compilation errors, and this will reveal their presence.
Optimization
Enhanced detection of
raisestatements that use compile-time constant values which are not actual exception instances.This improvement prevents Nuitka from crashing during code generation when encountering syntactically valid but semantically incorrect code, such as
raise NotImplemented. While such code is erroneous, it should not cause a compiler crash. (Added in 2.7.1 already.)With unknown locals dictionary variables trust very hard values there too.
With this using hard import names also optimize inside of classes.
This makes
gcloudmetadata work, which previously wasnât resolved in their code.
macOS: Enhanced
PySide2support by removing the general requirement for onefile mode. Onefile mode is now only enforced forQtWebEnginedue to its specific stability issues when not bundled this way. (Added in 2.7.4 already.)Scons: Added support for C23 embedding of the constants blob with ClangCL, avoiding the use of resources. Since the onefile bootstrap does not yet honor this for its payload, this feature is not yet complete but could help with size limitations in the future.
Plugins: Overhauled the UPX plugin.
Use better compression than before, hint the user at disabling onefile compression where applicable to avoid double compression. Output warnings for files that are not considered compressible. Check for
upxbinary sooner.Scons: Avoid compiling
haclcode for macOS where itâs not needed.
Anti-Bloat
Improved handling of the
astropypackage by implementing global replacements instead of per-module ones. Similar global handling has also been applied toIPythonto reduce overhead. (Added in 2.7.1 already.)Avoid
docutilsusage in themarkdown2package. (Added in 2.7.1 already.)Reduced compiled size by avoiding the use of âdocutilsâ within the
markdown2package. (Added in 2.7.1 already.)Avoid including the testing framework from the
langsmithpackage. (Added in 2.7.6 already.)Avoid including
setuptoolsfromjax.version. (Added in 2.7.6 already.)Avoid including
unittestfrom thereportlabpackage. (Added in 2.7.6 already.)Avoid including
IPythonfor thekeraspackage using a more global approach. (Added in 2.7.11 already.)Avoid including the
tritonpackage when compilingtransformers. (Added in 2.7.11 already.)Avoid a bloat warning for an optional import in the
seabornpackage. (Added in 2.7.13 already.)Avoid compiling generated
google.protobuf.*_pb2files. (Added in 2.7.7 already.)Avoid including
tritonandsetuptoolswhen using thexformerspackage. (Added in 2.7.16 already.)Refined
dasksupport to not removepandas.testingwhenpytestusage is allowed. (Added in 2.7.16 already.)Avoid compiling the
tensorflowmodule that is very slow and contains generated code.Avoid using
setuptoolsincupypackage.Avoid false bloat warning in
seadocpackage.Avoid using
daskinsklearnpackage.Avoid using
cupy.testingin thecupypackage.Avoid using
IPythonin theroboflowpackage.Avoid including
rayfor thevllmpackage.Avoid using
dillin thetorchpackage.
Organizational
UI: Remove obsolete options to control the compilation mode from help output. We are keeping them only to not break existing workflows, but
--mode=...should be used now, and these options will start triggering warnings soon.Python3.13.4: Reject broken CPython official release for Windows.
The link library included is not the one needed for GIL, and as such it breaks Nuitka heavily and must be errored out on, all smaller or larger micro versions work, but this one does not.
Release: Do not use Nuitka 2.7.9 as it broke data file access via
__file__in onefile mode on Windows. This is a brown paper bag release, with 2.7.10 containing only the fix for that. Sorry for the inconvenience.Release: Ensured proper handling of newer
setuptoolsversions during Nuitka installation. (Fixed in 2.7.4 already.)UI: Sort
--list-distribution-metadataoutput and remove duplicates. (Changed in 2.7.8 already.)Visual Code: Added a Python 2.6 configuration for Win32 to aid in comparisons and legacy testing.
UI: Now lists available Qt plugin families if
--include-qt-plugincannot find one.UI: Warn about compiling a file named
__main__.pywhich should be avoided, instead you should specify the package directory in that case.UI: Make it an error to compile a file named
__init__.pyfor standalone mode.
Debugging: The
--editoption now correctly finds files even when using long, non-shortened temporary file paths.Debugging: The
pyside6plugin now enforces--no-debug-immortal-assumptionswhen--debugis on because PySide6 violates these and we donât need Nuitka to check for that then as it will abort when it finds them.Quality: Avoid writing auto-formatted files with same contents
That avoids stirring up tools that listen to changes.
For example the Nuitka website auto-builder otherwise rebuilt per release post on docs update.
Quality: Use latest version of
deepdiff.Quality: Added autoformat for JSON files.
Release: The man pages were using outdated options and had no example for standalone or app modes. Also the actual options were no longer included.
GitHub: Use the
--modeoptions in the issue template as well.GitHub: Enhanced wordings for bug report template to give more directions and more space for excellent reports to be made.
GitHub: The bug report template now requests the output of our package metadata listing tool, as it provides more insight into how Nuitka perceives the environment.
Debugging: Re-enabled important warnings for Clang, which had unnoticed for a long time and prevented a few things from being recognized.
Debugging: Support arbitrary debuggers through --debugger choice.
Support arbitrary debuggers for use in the
--debuggermode, if you specify all of their command line you can do anything there.Also added predefined
valgrind-memcheckmode for memory checker tool of Valgrind to be used.UI: Added rich as a progress bar that can be used. Since itâs available via pip, it can likely be found and requires no inline copy. Added colors and similar behavior for
tqdmas well.UI: Remove obsolete warning for Linux with
upxplugin.We donât use
appimageanymore for a while now, so its constraints no longer apply.UI: Add warnings for module specific options too. The logic to not warn on GitHub Actions was inverted, this restores warnings for normal users.
UI: Output the module name in question for
options-nannyplugin and parameter warnings.UI: When a forbidden import comes from an implicit import, report it properly.
Sometimes
.pyifiles from extension modules cause an import, but it was not clear which one; now it will indicate the module causing it.UI: More clear error message in case a Python for scons was not found.
Actions: Cover debug mode compilation at least once.
Quality: Resolve paths from all OSes in
--edit. Sometime I want to look at a file on a different OS, and there is no need to enforce being on the same one for path resolution to work.Actions: Updated to a newer Ubuntu version for testing, as to get
clang-formatinstalled anymore.Debugging: Allow for C stack output in signal handlers, this is most useful when doing the non-deployment handler that catches them to know where they came from more precisely.
UI: Show no-GIL in output of Python flavor in compilation if relevant.
Tests
Removed Azure CI configuration, as testing has been fully migrated to GitHub Actions. (Changed in 2.7.9 already.)
Improved test robustness against short paths for package-containing directories. (Added in 2.7.4 already.)
Prevented test failures caused by rejected download prompts during test execution, making CI more stable. (Added in 2.7.4 already.)
Refactored common testing code to avoid using
doctests, preventing warnings in specific standalone mode test scenarios related to reference counting. (Added in 2.7.4 already.)Tests: Cover the memory leaking call re-formulation with a reference count test.
Cleanups
Plugins: Improved
pkg_resourcesintegration by using the__loader__attribute of the registering module for loader type registration, avoiding modification of the globalbuiltinsdictionary. (Fixed in 2.7.2 already.)Improved the logging mechanism for module search scans. It is now possible to enable tracing for individual
locateModulecalls, significantly enhancing readability and aiding debugging efforts.Scons: Refactored architecture specific options into dedicated functions to improve code clarity.
Spelling: Various spelling and wording cleanups.
Avoid using
#ifdefin C code templates, and letâs just avoid it generally.Added missing slot function names to the ignored word list.
Renamed variables related to slots to be more verbose and proper spelling as a result, as thatâs for better understanding of their use anyway.
Scons: Specify versions supported for Scons by excluding the ones that are not, rather than manually maintaining a list. This adds automatic support for Python 3.14.
Plugins: Removed a useless call to
internas it did not have thought it does.Attach copyright during code generation for code specializations
This also enhances the formatting for almost all files by making leading and trailing new lines more consistent.
One C file turns out unused and was removed as a left over from a previous refactoring.
Summary
This release was supposed to focus on scalability, but that didn't happen again, due to a variety of important issues coming up as well as downtime caused by serious private difficulties following a planned surgery. However, the upcoming release will finally have it.
The onefile DLL mode as used on Windows has driven a lot of need for corrections, some of which are only in the final release, and this is probably the first time it should be usable for everything.
On the compatibility side, working with the popular (yet not yet recommended) UV-Python, Windows UI fixes for temporary onefile, macOS improvements, as well as improved Android support are excellent.
The next release of Nuitka, however, will have to focus on scalability and maintenance only. But as usual, it is not certain that it can happen.
November 15, 2025 01:52 PM UTC
November 14, 2025
Real Python
The Real Python Podcast â Episode #274: Preparing Data Science Projects for Production
How do you prepare your Python data science projects for production? What are the essential tools and techniques to make your code reproducible, organized, and testable? This week on the show, Khuyen Tran from CodeCut discusses her new book, "Production Ready Data Science."
November 14, 2025 12:00 PM UTC
EuroPython Society
Recognising Michael Foord as an Honorary EuroPython Society Fellow
Hi everyone. Today, we are honoured to announce a very special recognition.
The EuroPython Society has posthumously elected Michael Foord (aka voidspace) as an Honorary EuroPython Society Fellow.
Michael Foord (1974–2025)
Michael was a long-time and deeply influential member of the Python community. He began using Python in 2002, became a Python core developer, and left a lasting mark on the language through his work on unittest and the creation of the mock library. He also started the tradition of the Python Language Summits at PyCon US, and he consistently supported and connected the Python community across Europe and beyond.
However, his legacy extends far beyond code. Many of us first met Michael through his writing and tools, but what stayed with people was the example he set through his contributions, and how he showed up for others. He answered questions with patience, welcomed newcomers, and cared about doing the right thing in small, everyday ways. He made space for people to learn. He helped the Python community in Europe grow stronger and more connected. He made our community feel like a community.
His impact was celebrated widely across the community, with many tributes reflecting his kindness, humour, and dedication:
At EuroPython 2025, we held a memorial and kept a seat for him in the Forum Hall:
A lasting tribute
EuroPython Society Fellows are people whose work and care move our mission forward. By naming Michael an Honorary Fellow, we acknowledge his technical contributions and also the kindness and curiosity that defined his presence among us. We are grateful for the example he set, and we miss him.
Our thoughts and thanks are with Michael's friends, collaborators, and family. His work lives on in our tools. His spirit lives on in how we treat each other.
With gratitude,
Your friends at EuroPython Society
November 14, 2025 09:00 AM UTC
November 13, 2025
Paolo Melchiorre
How to use UUIDv7 in Python, Django and PostgreSQL
Learn how to use UUIDv7 today with stable releases of Python 3.14, Django 5.2 and PostgreSQL 18. A step-by-step guide showing how to generate UUIDv7 in Python, store them in Django models, use PostgreSQL native functions, and build time-ordered primary keys without writing SQL.
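As a taste of what the guide covers, here's a minimal sketch of a time-ordered primary key; the Order model is a made-up example, and it assumes Python 3.14's new uuid.uuid7() together with Django's UUIDField:

import uuid

from django.db import models

class Order(models.Model):
    # uuid.uuid7() is new in Python 3.14; the values embed a timestamp,
    # so rows sort roughly by creation time.
    id = models.UUIDField(primary_key=True, default=uuid.uuid7, editable=False)
    created_at = models.DateTimeField(auto_now_add=True)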
November 13, 2025 11:00 PM UTC
Python Engineering at Microsoft
Python in Visual Studio Code â November 2025 Release
We’re excited to announce that the November 2025 release of the Python extension for Visual Studio Code is now available!
This release includes the following announcements:
- Add Copilot Hover Summaries as docstring
- Localized Copilot Hover Summaries
- Convert wildcard imports Code Action
- Debugger support for multiple interpreters via the Python Environments Extension
If you’re interested, you can check the full list of improvements in our changelogs for the Python and Pylance extensions.
Add Copilot Hover Summaries as docstring
You can now add your AI-generated documentation directly into your code as a docstring using the new Add as docstring command in Copilot Hover Summaries. When you generate a summary for a function or class, navigate to the symbol definition and hover over it to access the Add as docstring command, which inserts the summary below your cursor formatted as a proper docstring.
This streamlines the process of documenting your code, allowing you to quickly enhance readability and maintainability without retyping.

Localized Copilot Hover Summaries
GitHub Copilot Hover Summaries inside Pylance now respect your display language within VS Code. When you invoke an AI-generated summary, you’ll get strings in the language you’ve set for your editor, making it easier to understand the generated documentation.

Convert wildcard imports into Code Action
Wildcard imports (from module import *) are often discouraged in Python because they can clutter your namespace and make it unclear where names come from, reducing code clarity and maintainability.
Pylance now helps you clean up modules that still rely on from module import * via a new Code Action. It replaces the wildcard with the explicit symbols, preserving aliases and keeping the import to a single statement. To try it out, you can click on the line with the wildcard import and press Ctrl + . (or Cmd + . on macOS) to select the Convert to explicit imports Code Action.
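For example, applying the Code Action to a module that only uses pi and sqrt would rewrite the import roughly like this (the exact names depend on which symbols your code actually references):

# Before: the wildcard import hides where pi and sqrt come from
from math import *

radius = 2.0
area = pi * radius**2
side = sqrt(area)

# After "Convert to explicit imports"
from math import pi, sqrt

radius = 2.0
area = pi * radius**2
side = sqrt(area)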

Debugger support for multiple interpreters via the Python Environments Extension
The Python Debugger extension now leverages the APIs from the Python Environments Extension (vscode-python-debugger#849). When enabled, the debugger can recognize and use different interpreters for each project within a workspace. If you have multiple folders configured as projects, each with its own interpreter, the debugger will now respect these selections and use the interpreter shown in the status bar when debugging.
To enable this functionality, set “python.useEnvironmentsExtension”: true in your user settings. The new API integration is only active when this setting is turned on.
Please report any issues you encounter to the Python Debugger repository.
Other Changes and Enhancements
We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include:
- Resolve unexpected blocking during PowerShell command activation (vscode-python-environments#952)
- The Python Environments Extension now respects the existing python.poetryPath user setting to specify which Poetry executable to use (vscode-python-environments#918)
- The Python Environments Extension now detects both requirements.txt and dev-requirements.txt files when creating a new virtual environment for automatic dependency installation (vscode-python-environments#506)
We would also like to extend special thanks to this month’s contributors:
- @iBug: Fixed Python REPL cursor drifting in vscode-python#25521
Try out these new improvements by downloading the Python extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or Cmd + Shift + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.
The post Python in Visual Studio Code – November 2025 Release appeared first on Microsoft for Python Developers Blog.
November 13, 2025 06:41 PM UTC
November 12, 2025
Python Software Foundation
Python is for everyone: Join in the PSF year-end fundraiser & membership drive!
The Python Software Foundation (PSF) is the charitable organization behind Python, dedicated to advancing, supporting, and protecting the Python programming language and the community that sustains it. That mission and cause are more than just words we believe in. Our tiny but mighty team works hard to deliver the projects and services that allow Python to be the thriving, independent, community-driven language it is today. Some of what the PSF does includes producing PyCon US, hosting the Python Package Index (PyPI), supporting 5 Developers-in-Residence, maintaining critical community infrastructure, and more.
Python is for teaching, learning, playing, researching, exploring, creating, working; the list goes on and on and on! Support this year's fundraiser with your donations and memberships to help the PSF, the Python community, and the language stay strong and sustainable. Because Python is for everyone, thanks to you.
There are two direct ways to join through donate.python.org:
- Donate directly to the PSF! Your donation is a direct way to support and power the future of the Python programming language and community you love. Every donation makes a difference, and we work hard to make a little go a long way.
- Become a PSF Supporting Member! When you sign up as a Supporting Member of the PSF, you become a part of the PSF, are eligible to vote in PSF elections, and help us sustain our mission with your annual support. You can sign up as a Supporting Member at the usual annual rate ($99 USD), or you can take advantage of our sliding scale option (starting at $25 USD)!
>>> Donate or Become a Member Today! <<<
If you already donated and/or you're already a member, you can:
- Share the fundraiser with your regional and project-based communities: Share this blog post in your Python-related Discords, Slacks, social media accounts- wherever your Python community is! Keep an eye on our social media accounts to see the latest stories and news for the campaign.
- Share your Python story with a call to action: We invite you to share your personal Python, PyCon, or PSF story. What impact has it made in your life, in your community, in your career? Share your story in a blog post or on your social media platform of choice and add a link to donate.python.org.
- Ask your employer to sponsor: If your company is using Python to build its products and services, check to see if they already sponsor the PSF on our Sponsors page. If not, reach out to your organization's internal decision-makers and impress on them just how important it is for us to power the future of Python together, and send them our sponsor prospectus.
Your donations and support:
- Keep Python thriving
- Support CPython and PyPI progress
- Increase security across the Python ecosystem
- Bring the global Python community together
- Make our community more diverse and robust every year
Highlights from 2025:
- Producing another wonderful PyCon US: We welcomed 2,225 attendees for PyCon US 2025â 1,404 of whom were newcomersâ at the David L. Lawrence Convention Center in beautiful downtown Pittsburgh. PyCon US 2025 was packed with 9 days of content, education, and networking for the Python community, including 6 Keynote Sessions, 91 Talks, including the Charlas Spanish track, 24 Tutorials, 20 Posters, 30+ Sprint Projects, 146 Open Spaces, and 60 Booths!
- Continuing to enhance Python and PyPIâs security through Developers-in-Residence: The PSFâs PyPI Safety and Security Engineer, Mike Fiedler, has implemented new safeguards, including automation to detect expiring email domains and prevent impersonation attacks, as well as guidance for maintainers to use more secure authentication methods like WebAuthn and Trusted Publishers. The PSFâs Security Developer-in-Residence, Seth Larson, continues to lead efforts to strengthen Pythonâs security and transparency. His work on PEP 770 introduces standardized Software Bill-of-Materials (SBOMs) within Python packages, improving visibility into dependencies for stronger supply chain security. A new white paper co-authored with Alpha-Omega outlines how these improvements enhance trust and measurability across the ecosystem.
- Adoption of pypistats.org: The PSF infrastructure team has officially adopted the operation of pypistats.org, which had been run by volunteer Christopher Flynn for over six years (thank you, Christopher!). The PSF's Infrastructure Team now handles the service's infrastructure, costs, and domain registration, and the service itself remains open source and community-maintained.
- Advancing PyPI Organizations: The rollout of PyPI Organizations is now well underway, marking a major milestone in improving project management and collaboration across the Python ecosystem. With new Terms of Service finalized and supporting tools in place, the PSF has cleared its backlog of requests and approved thousands of organizations, including 2,409 Community and 4,979 Company organizations as of today. Hundreds of these organizations have already begun adding members, transferring projects, and subscribing to the new Company tier, generating sustainable support for the PSF. We're excited to see how teams are using these new features to better organize and maintain their projects on PyPI.
- Empowering the Python community through Fiscal Sponsorship: We are proud to continue supporting our 20 fiscal sponsoree organizations with their initiatives and events all year round. The PSF provides 501(c)(3) tax-exempt status to fiscal sponsorees such as PyLadies and Pallets, and provides back office support so they can focus on their missions. Consider donating to your favorite PSF Fiscal Sponsoree and check out our Fiscal Sponsorees page to learn more about what each of these awesome organizations is all about!
- Serving our community with grants: The PSF Grants Program awarded approximately $340K to 86 grantees around the world, supporting local conferences, workshops, and community initiatives that keep Python growing and accessible to all. While we had to make the difficult decision to pause the program early to ensure financial sustainability, we would love to reopen it as soon as possible. Your participation in this year's fundraiser fuels that effort!
- Honoring community leaders: The PSF honored three leaders with Distinguished Service Awards this year. Ewa Jodlowska helped transform the PSF into a professional, globally supportive organization. Thomas Wouters has contributed decades of leadership, guidance, and institutional knowledge. Van Lindberg provided essential legal expertise that guided the PSF through growth and governance. Their dedication has left a lasting impact on the PSF, Python, and its community. The PSF was also thrilled to recognize Katie McLaughlin, Sarah Kuchinsky, and Rodrigo Girão Serrão with Community Service Awards (CSA) for their outstanding contributions to the Python community. Their dedication, creativity, and generosity embody the spirit of Python and strengthen our global community. We recognized Jay Miller with a CSA for his work to improve diversity, inclusion, and equity in the global Python community through founding and sustaining Black Python Devs. We also honored Matt Lebrun and Micaela Reyes with CSAs for their efforts to grow and support the Python community in the Philippines through conferences, meetups, and volunteer programs.
- Finding strength in the Python community: When the PSF shared the news about turning down an NSF grant, the outpouring of support from the Python community was nothing short of incredible. In just one day, you helped raise over $60K and welcomed 125 new Supporting Members; in the week after, that number jumped to $150K+ and 270+ new Supporting Members! A community-led matching campaign and countless messages of support, solidarity, and encouragement reminded us that while some choices are tough, we never face them alone. The PSF Board & Staff are deeply moved and energized by your words, actions, and continued belief in our shared mission. This moment has set the stage for a record-breaking end-of-year fundraiser, and we are so incredibly grateful to be in community with each of you.
