Planet Python
Last update: April 22, 2026 09:43 PM UTC
April 22, 2026
Kay Hayen
Nuitka Release 4.0
This is to inform you about the new stable release of Nuitka, the extremely compatible Python compiler.
This release is a major release with many new features and the long-wanted improvements to the scalability of Python compilation.
Bug Fixes
Accelerated: The enhanced detection for uninstalled Anaconda and WinPython was not fully working. (Fixed in 2.8.1 already.)
Onefile: Fixed an issue in DLL mode where signal handlers were not being registered, which could prevent proper program termination on signals like CTRL-C. (Fixed in 2.8.1 already.)
Windows: Fixed incorrect handling of forward slashes in cache directory paths, which caused issues with Nuitka-Action. (Fixed in 2.8.1 already.)
UI: The `--output-dir` option was not being honored in accelerated mode when `--output-filename` was also provided. (Fixed in 2.8.2 already.)
UI: The `--output-filename` option help said it wouldn’t work for standalone mode when in fact it had for a while already. (Fixed in 2.8.2 already.)
Onefile: On Windows, fixed a crash when using `--output-dir` where it was checking for the wrong folder to exist. (Fixed in 2.8.2 already.)
macOS: Fixed a crash that could occur when many package-specific directories were used, which could lead to the `otool` command line being too long. (Fixed in 2.8.2 already.)
Standalone: For the “Python Build Standalone” flavor, ensured that debug builds correctly recognize all their specific built-in modules, preventing potential errors. (Fixed in 2.8.4 already.)
macOS: Fixed an issue where `$ORIGIN` r-paths were set but ended up unused, which in some cases caused errors by exhausting the header space and preventing the build entirely. (Fixed in 2.8.5 already.)
macOS: Fixed an issue to ensure the system `xattr` binary is used. Otherwise, using `arch -x86_64 python` for compilation could fail when some installed packages provide `xattr` as well, because that might be an `arm64`-only binary and would not work. (Fixed in 2.8.5 already.)
UI: Fixed a misleading typo in the rejection message for unsupported Python 3.13.4. (Fixed in 2.8.5 already.)
Accelerated: The runner scripts `.cmd` or `.sh` are now also placed respecting the `--output-filename` and `--output-dir` options. (Fixed in 2.8.5 already.)
Plugins: Ensured that plugins detected by namespace usage are also activated in module mode. (Fixed in 2.8.5 already.)
Standalone: Fixed an issue where non-existent packages listed in `top_level.txt` files could cause errors during metadata collection. (Fixed in 2.8.6 already.)
Standalone: Corrected the classification of the `site` module, which was previously treated as a standard library module in some cases. (Fixed in 2.8.6 already.)
Windows: Ensured that temporary link libraries and export files created during compilation are properly deleted, preventing them from being included in the standalone distribution. (Fixed in 2.8.6 already.)
Python 3.14: Adapted to core changes by no longer inlining `hacl` code for this version. (Fixed in 2.8.6 already.)
Python 3.14: Follow allocator changes and immortal flags changes.
Python 3.14: Follow GC changes for compiled frames as well.
Python 3.14: Catch attempts to clear a compiled suspended frame object.
Fixed a potential mis-optimization for uses of `locals()` when transforming the variable name reference call. (Fixed in 2.8.6 already.)
Module: Fixed `pkgutil.iter_modules` not working when loading a module into a namespace. (Fixed in 2.8.7 already.)
Reports: Fixed a crash when creating the compilation report before the source directory is created. (Fixed in 2.8.7 already.)
Standalone: Fixed ignoring of non-existent packages from `top_level.txt` for metadata. (Fixed in 2.8.7 already.)
UI: The `--no-progress-bar` option was not disabling the Scons progress bars. (Fixed in 2.8.7 already.)
UI: Fixed an exception in the `tqdm` progress bar during process shutdown. (Fixed in 2.8.7 already.)
Windows: Fixed incorrect `sys.executable` value in onefile DLL mode. (Fixed in 2.8.9 already.)
Python 3.14: Added missing implicit dependency for `_ctypes` on Windows. (Fixed in 2.8.9 already.)
Python 3.13+: Fixed missing export of the `PyInterpreter_*` API.
Python 3.14: Adapted to the change in evaluation order of `__exit__` and `__enter__`.
Multiprocessing: Fixed an issue where `sys.argv` was not yet corrected when `argparse` was used early in spawned processes.
Scons: Fixed an issue where Zig was not used as a fallback when MinGW64 was present but unusable.
Windows: Made onefile binary work on systems without runtime DLLs installed as well.
Scons: Made tracing robust against threaded outputs.
Python3.12+: Enhanced workaround for loading of extension modules with sub-packages to cover more cases.
Scons: Fixed missing Zig version output.
Scons: Fixed Zig detection to enforce PATH or CC usage on macOS instead of downloading, since a download is not available there.
UI: Fixed normalization of user paths, improving macOS support for reporting.
Linux: Fixed the workaround for the `memset` zero-length warning, which was wrongly applied to Clang. Only GCC requires it, and Clang complained about it.
Linux: More robust fallback to `g++` when `gcc` is too old for C11 support.
Compatibility: Fixed a bug where `del` of a subscript could cause wrong runtime behavior due to missing control flow escape annotations for the subscript value itself and the index.
macOS: Fixed an issue where `Info.plist` user-facing entitlements keys mapping to multiple internal entitlements were not handled correctly.
UI: Ensured tracing uses at least 80 characters for very narrow terminals to maintain readability.
Compatibility: Fixed an issue where nested loops could have incorrect traces, potentially leading to mis-optimizations.
Linux: Fixed an issue where `_XOPEN_SOURCE` was mistakenly appended for Clang, causing warnings.
Scons: Improved passed variables handling by detecting `None` or invalid types earlier.
Fixed a bug where propagating class dictionaries needed extra micro passes to ensure proper optimization of their traces for the new variables.
Scons: Fixed an issue with process spawning when using `rusage` capture.
Scons: Followed the file closing behavior of the standard library `communicate` more closely to avoid potential hangs.
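One of the fixed items above concerns `pkgutil.iter_modules`. For readers unfamiliar with the API, here is a minimal sketch of what it does, using the stdlib `email` package purely as an illustration; compiled programs rely on this kind of introspection working:

```python
import pkgutil
import email  # a stdlib package with submodules, used purely for illustration

# pkgutil.iter_modules lists the modules and subpackages found on the given
# search path; code like this must keep working after compilation.
names = sorted(module.name for module in pkgutil.iter_modules(email.__path__))
print("mime" in names, "parser" in names)
```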
Package Support
Anti-Bloat: Avoided a warning during program shutdown when using a compiled `xgboost` package. (Fixed in 2.8.1 already.)
Standalone: Added support for the `oracledb` package. (Fixed in 2.8.2 already.)
macOS: Added support for newer `PySide6` versions. (Fixed in 2.8.4 already.)
Standalone: Added support for including more metadata for the `transformers` package. (Fixed in 2.8.5 already.)
Standalone: Metadata from Nuitka Package Configuration is now only included if the corresponding package is part of the compilation. (Fixed in 2.8.5 already.)
Standalone: Added support for the `win32ctypes` package. (Fixed in 2.8.6 already.)
Standalone: Added support for newer versions of the `dask` package. (Fixed in 2.8.6 already.)
Standalone: Added support for the `dataparser` package. (Added in 2.8.7 already.)
Standalone: Added support for `puremagic`, `pygments.lexers`, and `tomli` in standalone mode.
Standalone: Added automatic detection of `mypyc` runtime dependencies; there is no need to manually configure that anymore. Our configuration was also often only correct for a single OS and single upstream version, which is now fixed for packages that had it before.
Standalone: Added support for the newer `av` (PyAV) package version.
Standalone: Added support for the `sentry_sdk`, `jedi`, `parso`, and `line_profiler` packages.
Standalone: Added support for newer `pandas` versions.
New Features
UI: Added support for the `--project` parameter to build using configuration from `pyproject.toml` (e.g. Poetry, Setuptools).
With this, you can simply run `python -m nuitka --project --mode=onefile` and it will use the `pyproject.toml` or `setup.py`/`setup.cfg` files to get the configuration and build the Nuitka binary.
Previously Nuitka could only be used for building wheels with the `build` package, and for building wheels that is still the best way.
The `--project` option is currently compatible with `build` and `poetry` and detects the used build system automatically.
Zig: Added experimental support for using the Zig project’s `zig cc` as a C compiler backend for Nuitka. This can be enabled by setting the `CC` environment variable to point to the `zig` or `zig.exe` executable.
Reports: Started capturing `rusage` for OSes that support it.
Only POSIX-compliant OSes will do it (Linux, macOS, and all BSD variants), but Android does not.
Not yet part of the actual report, as we need to figure out how to use and present the information.
Scons: Added experimental support for enabling Thin LTO with the Clang compiler.
Standalone: Honor `--nofollow-import-to` for stdlib modules as well.
This allows users to manually reduce standard library usage, but it can also cause crashes from extension modules not prepared for the absence of standard library modules.
Onefile: Allowed disabling the onefile timeout and hard killing on CTRL-C entirely by providing `--onefile-child-grace-time=infinity`.
Scons: Added a newer inline copy of Scons which supports Visual Studio 2026. (Added in 2.8.7 already.)
Scons: Allowed using Python versions only partially supported by Nuitka with Scons. (Added in 2.8.7 already.)
UI: Added the option `--devel-profile-compilation` for compile time profiling. Also renamed the old runtime profiling option `--profile` to `--debug-profile-runtime`, which is however still broken.
Reports: Included CPU instruction and cycle counters in timing on native Linux.
With appropriate configuration on Linux, this allows getting very precise timing information, so we can judge even small compile time improvements correctly. We then don’t need many runs to average out noise from other effects.
For steps that do no I/O, such as module optimization, use process time instead of wall clock time for more accurate values; it is, however, still not very precise.
Python 3.12+: Added support for function type syntax (generics).
Python 3.14: Added groundwork for deferred evaluation of function annotations.
Python 3.14: Added support for uncompiled generator integration, which is crucial for `asyncio` correctness and general usability with modern frameworks.
Debugging: Added `--debug-self-forking` to debug fork bombs.
Windows: Added the `--include-windows-runtime-dlls` option to control inclusion of Windows C runtime DLLs. Defaults to `auto`.
Python 3.14: Added experimental support for deferred annotations.
Plugins: Added the option `--qt-debug-plugins` for debugging Qt plugin loading.
DLLs: Added support for DLL tags to potentially control inclusion with more granularity.
macOS: Added support for many more protected resource entitlements (Siri, Bluetooth, HomeKit, etc.) to the bundle details.
Python: Added support for the `@nuitka_ignore` decorator to exclude functions from compilation.

@nuitka_ignore
def my_cpython_func():
    # This function is not compiled, but stays bytecode
    ...
UI: Added support for merging user and standard YAML Nuitka package configurations, currently only including proper merging of implicit imports.
Optimization
Avoid making duplicate hard imports by dropping assignments if the variable was already assigned to the same value.
Found previous assignment traces faster.
The assignment and `del` nodes were using functions to find what they already knew from the last micro pass. The `self.variable_trace` already kept track of the previous value trace situation.
For matching unescaped traces we will do something similar, but it’s not really used right now, so it remains only a TODO, as that will eventually be very similar.
This also speeds up the first micro pass even more, because it doesn’t have to search and do other things. If no previous trace exists, none is attempted to be used.
Also, the common check for whether no by-name uses or merges of a value occurred was always used inverted; it should now be slightly faster to use and allow short-circuiting.
While this accelerated the per-assignment work of the first micro pass by a lot, it is mainly a cleanup of the design such that traces are easier to re-recognize. This is a first step with immediate impact.
Much faster Python passes.
The “Escape” and “Unknown” traces now have their own number spaces. This allows doing some quick checks for a trace without using the actual object, but just its number.
Narrow the scope of variables to the outline scope that uses them, so that they don’t need to be dealt with in merging later code where they don’t ever change anymore and are not used at all.
When checking for unused variables, do not ask the trace collection to filter its traces. Instead it works off the ones attached to the variable already. This avoids a lot of searching work. It also uses a method to decide if a trace constitutes usage rather than a long `elif` chain.
Faster variable trace maintenance.
We now trace variables in trace collection as a dictionary per variable with a dictionary of the versions, which is closer to our frequent per-variable usage.
That makes it a lot easier to update variables after the tracing is finished to know their users and writers.
Requires a lot less work, but also makes work less memory local such that the performance gain is relatively small despite less work being done.
It also avoids having to maintain a per-variable set for its using scopes.
Decide presence of writing traces for parameter variables faster.
Avoid unnecessary micro passes.
Detect variable references discarded sooner for better micro-pass efficiency. We were spending an extra pass on the whole module to stabilize the variable usage, which can end up being a lot of work.
After a module optimization pass found no changes, we no longer make an extra micro pass to avoid stabilization bugs, but only check against it not happening in debug mode. Depending on the number of micro passes, this can be a relatively high performance gain. For the `telethon.tl.types` module this was a 13% performance gain on top.
For “PASS 1” of `telethon.tl.types`, which has been one of the known troublemakers with many classes and type annotations, all changes combined improve the compilation time by 1500%.
Faster code generation.
Indentation in generated C code is no longer performed, to speed up code generation. To restore readability, use the new option `--devel-generate-readable-code`, which will use `clang-format` to format the C code.
Recognized module variable usages inside outlined functions that are in a loop, which improves the effectiveness of caching at run-time. (Added in 2.8.6 already.)
Standalone: Partially solved a TODO of minimizing intermediate directories in r-paths of ELF platforms, by only putting them there if the directory they point to will contain DLLs or binaries. This removes unused elements and reduces r-path size.
Windows: Made the caching of external paths effective, which significantly speeds up DLL resolution in subsequent compilations. (Fixed in 2.8.6 already.)
macOS: Removed extended attributes from data files as well, improving performance. (Fixed in 2.8.7 already.)
Scons: Stopped detecting installed MinGW to avoid overhead as it is not supported. (Fixed in 2.8.9 already.)
Scons: Added caching for MSVC information to reduce compilation time, and if already available, use that to detect the Windows SDK location rather than running `vswhere.exe` each time.
Avoid computing large `%` string interpolations at compile time. These could cause constants to be included in the binary as a result.
Avoid including `importlib._bootstrap` and `importlib._bootstrap_external`, as they are available as frozen modules.
Fixed un-hashable dictionary keys not being properly optimized, forcing runtime handling.
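To illustrate the `%` string interpolation item above: the concern is module-level code like the following sketch (the names are hypothetical, not from Nuitka), where everything is a compile-time constant, so folding the interpolation would embed one large constant string in the binary.

```python
# Hypothetical module-level code: every value here is a compile-time constant,
# so folding the % interpolation would embed the full banner in the binary.
line = "-" * 5000
banner = "%s\nREPORT\n%s" % (line, line)
print(len(banner))  # 10008: two 5000-char lines plus "\nREPORT\n" (8 chars)
```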
Anti-Bloat
Avoid including `tzdata` on non-Windows platforms. (Fixed in 2.8.7 already.)
Avoid including `pyparsing.testing` in the `pyparsing` package.
Added configuration to avoid compiling via C for large generated files in the `sqlfluff` package.
Organizational
UI: Don’t say that `--include-data-files-external` doesn’t work in standalone mode.
It actually has worked for a while, and we have since renamed that option, but the help still said it wouldn’t work in standalone mode.
Debugging: Added assertions for code object creation.
We were getting assertions from Python when built with Zig, and these are supposed to provide those as well.
Debugging: In case of tool commands failing, output the too long command line if that was the error given.
Anti-Bloat: Don’t allow custom `nofollow` modes; point the user to the correct option instead. This was never needed, and two ways of providing this user decision make no sense.
UI: The help text for `--include-data-files-external` was updated to reflect that it works in standalone mode. (Fixed in 2.8.5 already.)
Release: Use lowercase names for source archives in PyPI uploads. (Fixed in 2.8.7 already.)
Quality: Fixed an issue where “assume yes” was not being passed for downloads in the commit hook.
UI: Improved wording for missing C compiler message.
Debugging: More clear verbose trace for dropped expressions.
Debugging: Output what module had extra changes during debug extra micro pass.
Quality: Manage more development tools (`clang-format`, etc.) via private pip space for better consistency and isolation.
AI: Enhanced pull request template with directions for AI-driven PRs.
AI: Added the agent command `create-mre` to assist in creating a minimal reproduction example (MRE).
User Manual: Added documentation about redistribution requirements for Python 3.12-3.14.
Quality: Added the `--un-pushed` argument to the auto-format tool for checking only un-pushed changes.
Scons: Improved the error message to point to Zig support if no C compiler is found.
MonolithPy: Follow rename of our Python fork to MonolithPy to avoid confusion with the Nuitka compiler project itself.
Scons: Prefer English output with MSVC and warn the user about a missing English language pack in case outputs are made.
UI: When running non-interactively, print the default response that is assumed for user queries to stdout as well, so it becomes visible in the logs.
UI: Warn when using protected resources options without standalone/bundle mode enabled on macOS.
Reports: Sort DLLs and entry points in compilation reports by destination path for deterministic output.
Quality: Skip files with `spell-checker: disable` in `codespell` checks.
Release: Avoid compiling bytecode for inline copies that are not compatible with the running Python version during install.
Visual Studio: Ignored names in backticks and code blocks in ReST for spelling checks.
Actions: Ensured compilation reports are always recorded, even in case of errors, as they are most useful then.
AI: Added a workflow `create-mre` to assist in creating a Minimal Reproducible Example from a larger file triggering a Nuitka bug. This has guidance on avoiding standalone mode and instructions for reducing code to produce an MRE that is really small.
AI: Added a workflow `fix-module-not-found-error` for solving simple `ModuleNotFoundError` errors at runtime.
AI: Added further strategies for Minimal Reproducible Example (MRE) reduction to the agent workflow.
UI: Reject input paths from standard library locations to prevent compiling files from there as main files.
Tests
Added support for `--all` with the `--max-failures` option to the test runner, to stop after a specified number of failures, or to just run all tests and output the failed tests at the end.
Also, the tests specified can be a glob pattern to match multiple tests, not just a test name.
Added examples to the help output of the runner to guide developers in its usage.
Ignore multiline source code outputs of Python 3.14 in tracebacks for output comparison; Nuitka won’t produce those.
Added test cases for poetry and distutils. Also verify that standalone mode works with `--project` for the supported build systems.
Made the distutils test cases much more consistent.
Watch: Improved binary name detection from compilation reports for better mode support beyond standalone mode.
Allow downloading tools (like `clang-format`) for all test cases.
Added options to enforce Zig or Clang usage for C compiling.
Suppress `pip` output when not running interactively to avoid test output differences.
Added `nuitka.format` and `nuitka.package_config` to self-compilation tests.
Added colorization to test comparison diffs if a tty is available.
Avoided using `--nofollow-imports` in tests, as some Python flavors do not work with it when using `--mode=standalone`.
Cleanups
Moved options to the new `nuitka.options` package.
Python 3.14: Fixed a type mismatch warning seen with MSVC. (Fixed in 2.8.9 already.)
Massive amounts of spelling cleanups. Correct spelling in more and more places allows identifying bugs more immediately, therefore these are very worthwhile.
Code cleanup and style improvements in the `Errors` and `OutputDirectories` modules.
Replaced usages of `os.environ.get` with `os.getenv` for consistency and denser code.
Moved MSVC redist detection to `DllDependenciesWin32`.
Release: Don’t install `zstandard` by default anymore.
UI: Tone down the complaint about checksum mismatches.
Static source files are now provided by Nuitka directly.
Renamed the C function `modulecode_` to `module_code_` for consistency.
Summary
This release is finally a breakthrough for scalability. We will continue the push for scalability in the next release as well, but with more of a focus on the C compilation step, to generate C code that is easier for the backend compiler.
Also, this release finally addresses many usability problems. The non-deployment hooks for imports not found, which were actively excluded, are one such thing. The start of `--project` enables far easier adoption of Nuitka for existing projects.
Other huge improvements are related to generics; they are now much better supported, closing gaps in the Python 3.12 support.
The onefile DLL mode as used on Windows is finally perfect and should have no issues anymore, while enabling big future improvements.
Unfortunately, Python 3.14 support is not yet ready and will have to be delayed until the next release.
Real Python
Altair: Declarative Charts With Python
There’s a moment many data analysts know well: you have a new dataset and a clear question, and you open a notebook only to find yourself writing boilerplate axis and figure setup before you’ve even looked at the data. Matplotlib gives you fine-grained control, but that control comes with a cost. Altair takes a completely different approach to data visualization in Python.
Instead of scripting every visual detail, you describe what your data means. This includes specifying which column goes on which axis, what should be colored, and what should be interactive. Altair then generates the visualization.
If you’re wondering whether it’s worth adding another visualization library to your toolkit, here’s how Altair and Matplotlib compare:
| Use Case | Pick Altair | Pick Matplotlib |
|---|---|---|
| Interactive exploratory charts in notebooks | ✅ | — |
| Pixel-precise publication figures or 3D plots | — | ✅ |
Altair generates web-native charts. The output is HTML and JavaScript, which means charts render right in your notebook and can be saved as standalone HTML files or embedded in web pages. It’s not a replacement for Matplotlib, and it doesn’t try to be. Think of them as tools you reach for in different situations.
Get Your Code: Click here to download the free sample code you’ll use to build interactive Python charts the declarative way with Altair.
Take the Quiz: Test your knowledge with our interactive “Altair: Declarative Charts With Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Altair: Declarative Charts With Python
Test your knowledge of Altair, the declarative data visualization library for Python that turns DataFrames into interactive charts.
Start Using Altair in Python
It’s a good idea to install Altair in a dedicated virtual environment. It pulls in several dependencies like pandas and the Vega-Lite renderer, and a virtual environment keeps them from interfering with your other projects. Create one and install Altair with pip:
$ python -m venv altair-venv
$ source altair-venv/bin/activate
(altair-venv) $ python -m pip install altair
This tutorial uses Python 3.14 and Altair 6.0. All the code runs inside a Jupyter notebook, which is the most common environment for interactive data exploration with Altair. If you prefer a different JavaScript-capable environment like VS Code, Google Colab, or JupyterLab, feel free to use that instead. To launch a Jupyter notebook, run the following:
(altair-venv) $ python -m pip install notebook
(altair-venv) $ jupyter notebook
The second command launches the Jupyter Notebook server in your browser. Create a new notebook and enter the following code, which builds a bar chart from a small DataFrame containing daily step counts for one week:
import altair as alt
import pandas as pd
steps = pd.DataFrame({
    "Day": ["1-Mon", "2-Tue", "3-Wed", "4-Thu", "5-Fri", "6-Sat", "7-Sun"],
    "Steps": [6200, 8400, 7100, 9800, 5500, 9870, 3769],
})

weekly_steps = alt.Chart(steps).mark_bar().encode(
    x="Day",
    y="Steps",
)

weekly_steps
You should see a bar chart displaying daily step counts:
Step Counts as a Bar Chart
The dataset is intentionally minimal because data isn’t the main focus: it has seven rows for seven days, and two columns for the day name and step count. Notice how the weekly_steps chart is constructed. Every Altair chart follows this same pattern. It’s built from these three building blocks:
- Data: A pandas DataFrame handed to `alt.Chart()`.
- Mark: The visual shape you want, chosen via `.mark_*()`. Here, `.mark_bar()` draws bars. Other options include `.mark_point()`, `.mark_line()`, and `.mark_arc()`.
- Encode: The mapping from data columns to visual properties, declared inside `.encode()`. Here, `Day` goes to the x-axis and `Steps` to the y-axis.
This is Altair’s core grammar in action: Data → Mark → Encode. You’ll use it every time.
Read the full article at https://realpython.com/altair-python/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
PyCharm
PyCharm for Django Fundraiser: Why Django Matters in the AI Era – And Why We’re Supporting It
Spend a few minutes around developer content, and it’s easy to come away with the impression that web apps now almost write themselves.
Everything that follows – review, verification, refactoring, debugging, and the open-source frameworks that make those apps dependable – gets less attention. AI can speed up code generation, but it does not remove the need for stable foundations. A lot of AI-generated code works because it’s built on top of mature open-source frameworks, libraries, and documentation.
AI can scaffold a web app in thirty seconds. Django is what keeps it running for ten years. That gap is only getting more valuable.
Will Vincent, former Django Board Member, co-host of the Django Chat podcast and co-writer of the weekly Django News newsletter
As AI makes OSS easier to consume, it can also make the work behind it easier to overlook. But OSS still needs support – perhaps more than ever.
PyCharm for Django Fundraiser
PyCharm has supported Django through fundraising campaigns and ongoing collaboration with the Django Software Foundation (DSF). This year, we’re doing it again.
Together with the Django community, this campaign raised $350,000 for Django from 2016 to 2025. That support helps keep Django secure, stable, relevant, and sustainable, while also supporting community programs such as Django Girls and official events. Previous PyCharm fundraisers accounted for approximately 25% of the DSF budget, according to Django’s official blog.
Django is the rare framework that rewards you the longer you use it: mature, dependable, and still innovating. Best-in-class software, matched by one of the most welcoming communities in open source.
Will Vincent, former Django Board Member, co-host of the Django Chat podcast and co-writer of the weekly Django News newsletter
If Django has helped you learn, ship, or maintain real web products, this is a direct way to give back.
You can donate to the Django Software Foundation directly, or you can support Django through this fundraiser and get a tool you’ll rely on every day.
Django’s ‘batteries included’ philosophy was built for humans who wanted to ship fast. Turns out it’s perfect for AI agents too — fewer decisions, fewer dependencies, and fewer ways to go wrong.
Will Vincent, former Django Board Member, co-host of the Django Chat podcast and co-writer of the weekly Django News newsletter
The offer
During this campaign, get 30% off PyCharm Pro, with 100% of the proceeds going to the DSF. Or you can bundle PyCharm Pro with the JetBrains AI Pro plan and get 40% off PyCharm Pro.
This campaign ends in less than two weeks, so act now!
Why PyCharm Pro
Perfect for your workflow
The hard part of modern development is often not writing code from scratch – it’s understanding the whole project well enough to change it safely.
That’s where PyCharm Pro proves its value:
- Navigate and refactor across your entire Django project, from templates to databases.
- Work with databases without leaving the IDE.
- Build and debug Django templates with full awareness of your context.
- Develop frontend code with built-in support for JavaScript, TypeScript, and major frameworks.
- Run and debug remote and Docker-based environments with ease.
No editor understands Django like PyCharm does — from template tags to ORM queries to migrations, it sees the whole stack the way you do.
Will Vincent, former Django Board Member, co-host of the Django Chat podcast and co-writer of the weekly Django News newsletter
For Django work, I think PyCharm is one of the best tools available. I use it every day. If you haven’t given it a try, this campaign is a great opportunity – AND it supports the Django Software Foundation!
Sarah Boyce, Django Fellow and Djangonaut Space co-organizer
AI on your terms
If you want AI in PyCharm, you can start with JetBrains AI directly in the IDE. You can also shape it to fit your workflow. Bring your own key, sign in with a supported provider, use third-party or local models, or connect compatible agents such as Claude Code and Codex via ACP.
That gives you more control over how you work with AI, instead of locking you into a single workflow, model, or provider. And if AI isn’t what you need, you can simply turn it off.
Support the framework you use every day
If Django is part of how you build, this purchase can improve your workflow while also investing in the framework behind it.
Happy coding!
Real Python
Quiz: SQLite and SQLAlchemy in Python: Move Your Data Beyond Flat Files
In this quiz, you’ll test your understanding of the concepts in the video course SQLite and SQLAlchemy in Python: Move Your Data Beyond Flat Files.
By working through this quiz, you’ll revisit how Python, SQLite, and SQLAlchemy work together to give your programs reliable data storage. You’ll also check your grasp of primary and foreign keys, SQLAlchemy’s Core and ORM layers, and the many-to-many relationships that tie your data together.
Python GUIs
Checkboxes in Table Views with a Custom Model — Show checkboxes for boolean values in PyQt/PySide table views
I have a QTableView with a custom QAbstractTableModel, and I want to add a column of checkboxes. Should I create a custom delegate class for the checkbox, or is there a simpler way to do this?
You can use a custom delegate to draw a checkbox widget, but you don't have to. Qt provides a built-in mechanism for this: Qt.CheckStateRole. By returning Qt.Checked or Qt.Unchecked from your model's data() method, Qt will render a checkbox automatically — no delegate required.
Let's walk through how this works, starting with a simple display and then adding some interactivity.
Displaying checkboxes using Qt.CheckStateRole
The simplest way to add checkboxes to a QTableView is to handle Qt.CheckStateRole in your model's data() method. When Qt asks your model for data with this role, returning Qt.Checked or Qt.Unchecked tells Qt to draw a checkbox in that cell.
Here's a minimal example that shows a checked checkbox in every cell:
def data(self, index, role):
    if role == Qt.DisplayRole:
        value = self._data[index.row()][index.column()]
        return str(value)

    if role == Qt.CheckStateRole:
        return Qt.Checked
This produces a table where every cell has both text and a checked checkbox:
In a real application, you would return Qt.Checked or Qt.Unchecked based on actual boolean values in your data. You might also restrict checkboxes to a specific column — for example, one that holds True/False values — rather than showing them everywhere.
Making checkboxes toggleable
Displaying checkboxes is a good start, but users will expect to be able to click them. To make checkboxes interactive, you need three things:
- A data store for the check state — a list (or column) that tracks which items are checked.
- Qt.ItemIsUserCheckable returned from flags() — this tells Qt that the cell supports toggling.
- A setData() implementation for Qt.CheckStateRole — this stores the updated state when the user clicks a checkbox.
Let's put all of this together in a complete example.
import sys

from PyQt6 import QtCore, QtGui, QtWidgets
from PyQt6.QtCore import Qt


class TableModel(QtCore.QAbstractTableModel):
    def __init__(self, data, checked):
        super().__init__()
        self._data = data
        self._checked = checked

    def data(self, index, role):
        if role == Qt.ItemDataRole.DisplayRole:
            value = self._data[index.row()][index.column()]
            return str(value)

        if role == Qt.ItemDataRole.CheckStateRole:
            checked = self._checked[index.row()][index.column()]
            if checked:
                return Qt.CheckState.Checked
            return Qt.CheckState.Unchecked

    def setData(self, index, value, role):
        if role == Qt.ItemDataRole.CheckStateRole:
            checked = value == Qt.CheckState.Checked.value
            self._checked[index.row()][index.column()] = checked
            self.dataChanged.emit(index, index, [role])
            return True
        return False

    def rowCount(self, index):
        return len(self._data)

    def columnCount(self, index):
        return len(self._data[0])

    def flags(self, index):
        return (
            Qt.ItemFlag.ItemIsSelectable
            | Qt.ItemFlag.ItemIsEnabled
            | Qt.ItemFlag.ItemIsUserCheckable
        )


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        self.table = QtWidgets.QTableView()
        data = [
            [1, 9, 2],
            [1, 0, -1],
            [3, 5, 2],
            [3, 3, 2],
            [5, 8, 9],
        ]
        checked = [
            [True, True, True],
            [False, False, False],
            [True, False, False],
            [True, False, True],
            [False, True, True],
        ]
        self.model = TableModel(data, checked)
        self.table.setModel(self.model)
        self.setCentralWidget(self.table)


app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
Run this and you'll see a table with checkboxes next to every value. Clicking any checkbox toggles it on and off, and the underlying checked list is updated accordingly.
Storing check state separately
The checked list mirrors the structure of the data list — each cell has a corresponding True or False value. This keeps the boolean check state separate from the data.
You could also store the check state in the same data structure, for example as [bool, value] pairs (nested lists or tuples), if you prefer.
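As a plain-Python sketch of that alternative layout (the helper names here are illustrative, not part of the example above), each cell holds a [checked, value] pair and the model methods would index into it:

```python
# Hypothetical alternative layout: each cell stores [checked, value]
# instead of keeping two parallel 2D lists.
rows = [
    [[True, 1], [True, 9], [True, 2]],
    [[False, 1], [False, 0], [False, -1]],
]

def cell_value(row, column):
    # What data() would return for DisplayRole.
    return rows[row][column][1]

def cell_checked(row, column):
    # What data() would consult for CheckStateRole.
    return rows[row][column][0]

def set_cell_checked(row, column, checked):
    # What setData() would store for CheckStateRole.
    rows[row][column][0] = checked

set_cell_checked(1, 0, True)
print(cell_value(0, 1), cell_checked(1, 0))  # 9 True
```

The trade-off is one lookup structure instead of two, at the cost of slightly noisier indexing in data() and setData().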
Returning the check state in data()
When Qt asks for Qt.ItemDataRole.CheckStateRole, we look up the boolean value for that cell and return either Qt.CheckState.Checked or Qt.CheckState.Unchecked:
if role == Qt.ItemDataRole.CheckStateRole:
    checked = self._checked[index.row()][index.column()]
    if checked:
        return Qt.CheckState.Checked
    return Qt.CheckState.Unchecked
For these return X if some condition, otherwise return Y patterns, you can also use a conditional expression: X if condition else Y.
if role == Qt.ItemDataRole.CheckStateRole:
    checked = self._checked[index.row()][index.column()]
    return Qt.CheckState.Checked if checked else Qt.CheckState.Unchecked
Handling user clicks in setData()
When the user clicks a checkbox, Qt calls setData() with the new value and the Qt.ItemDataRole.CheckStateRole role. We compare the incoming value to Qt.CheckState.Checked.value to determine whether the box was checked or unchecked, then store the result:
def setData(self, index, value, role):
    if role == Qt.ItemDataRole.CheckStateRole:
        checked = value == Qt.CheckState.Checked.value
        self._checked[index.row()][index.column()] = checked
        self.dataChanged.emit(index, index, [role])
        return True
    return False
Notice the self.dataChanged.emit(...) call — this notifies the view that the data has changed so it can redraw the cell. Always emit this signal when you modify data in setData().
Enabling user interaction with flags()
The flags() method tells Qt what the user can do with each cell. Including Qt.ItemFlag.ItemIsUserCheckable is what makes the checkbox clickable:
def flags(self, index):
    return (
        Qt.ItemFlag.ItemIsSelectable
        | Qt.ItemFlag.ItemIsEnabled
        | Qt.ItemFlag.ItemIsUserCheckable
    )
Without this flag, the checkbox will still appear (because you're returning data for CheckStateRole), but the user won't be able to toggle it.
Showing checkboxes in only one column
In many applications, you only want checkboxes in a specific column. You can achieve this by checking index.column() in your data() and flags() methods. For example, to show checkboxes only in column 2:
def data(self, index, role):
    if role == Qt.ItemDataRole.DisplayRole:
        value = self._data[index.row()][index.column()]
        return str(value)

    if role == Qt.ItemDataRole.CheckStateRole:
        if index.column() == 2:
            checked = self._checked[index.row()]
            if checked:
                return Qt.CheckState.Checked
            return Qt.CheckState.Unchecked

def flags(self, index):
    flags = Qt.ItemFlag.ItemIsSelectable | Qt.ItemFlag.ItemIsEnabled
    if index.column() == 2:
        flags |= Qt.ItemFlag.ItemIsUserCheckable
    return flags
In this case, self._checked would be a simple one-dimensional list (one boolean per row) rather than a 2D list.
Summary
To add checkboxes to a QTableView with a custom QAbstractTableModel:
- Handle Qt.ItemDataRole.CheckStateRole in data() to display checkboxes based on boolean values.
- Return Qt.ItemFlag.ItemIsUserCheckable from flags() to make checkboxes interactive.
- Implement setData() for Qt.ItemDataRole.CheckStateRole to store the updated state when the user clicks, and emit dataChanged to keep the view in sync.
This approach works natively with Qt's model/view architecture and avoids the complexity of writing a custom delegate. For a more complete guide to displaying data in table views — including using numpy and pandas data sources — see our QTableView with ModelViews tutorial. If you want to show only an icon without text in specific cells, see how to show only an icon in a QTableView cell. You can also learn how to create your own custom widgets for more advanced UI needs.
For an in-depth guide to building Python GUIs with PyQt6 see my book, Create GUI Applications with Python & Qt6.
April 21, 2026
PyCoder’s Weekly
Issue #731: Visualize ML, Vector DBs, Type Checker Comparison, and More (April 21, 2026)
#731 – APRIL 21, 2026
View in Browser »
Machine Learning Visualized
This is a series of Jupyter notebooks that help visualize the algorithms that are used in machine learning. Learn more about neural networks, regression, k-means clustering, and more.
GAVING HUNG
Vector Databases and Embeddings With ChromaDB
Learn how to use ChromaDB, an open-source vector database, to store embeddings and give context to large language models in Python.
REAL PYTHON course
Wallaby for Python runs Tests as you Type and Streams Results Next to Code, Plus AI Context
Wallaby brings pytest / unittest results, runtime values, coverage, errors, and time-travel debugging into VS Code, so you can fix Python faster and give Copilot, Cursor, or Claude the execution context they need to stop guessing. Try it free, now in beta →
WALLABY TEAM sponsor
Python Type Checker Comparison: Speed and Memory Usage
A benchmark comparison of speed and memory usage across Python type checkers including Pyrefly, Ty, Pyright, and Mypy.
AARON POLLACK
PEP 831: Frame Pointers Everywhere: Enabling System-Level Observability for Python (Draft)
This PEP proposes two things:
PYTHON.ORG
Discussions
Articles & Tutorials
Reassessing the LLM Landscape & Summoning Ghosts
What are the current techniques being employed to improve the performance of LLM-based systems? How is the industry shifting from post-training towards context engineering and multi-agent orchestration? This week on the show, Jodie Burchell, data scientist and Python Advocacy Team Lead at JetBrains, returns to discuss the current AI coding landscape.
REAL PYTHON podcast
Security Best Practices Featuring uv and pip
This collection of security practices explains how to best use your package management tools to help you avoid malicious packages. Example: implement a cool-down period; most malicious packages are found quickly, so by not installing on the day of a release, your chances of getting something bad go down.
GITHUB.COM/LIRANTAL
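The cool-down idea is easy to prototype yourself when vetting dependencies. A minimal sketch, using made-up release timestamps rather than real PyPI data:

```python
from datetime import datetime, timedelta, timezone

# Sketch of a cool-down check: only accept versions released more than
# `cooldown` ago. The timestamps below are illustrative sample data.
cooldown = timedelta(days=7)
now = datetime(2026, 4, 21, tzinfo=timezone.utc)

releases = {
    "1.0.0": datetime(2026, 3, 1, tzinfo=timezone.utc),
    "1.1.0": datetime(2026, 4, 20, tzinfo=timezone.utc),  # one day old
}

eligible = [v for v, ts in releases.items() if now - ts >= cooldown]
print(eligible)  # ['1.0.0']
```

In practice, release timestamps would come from a package index's metadata; the filtering logic stays the same.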
Beyond Basic RAG: Build Persistent AI Agents
Master next-gen AI with Python notebooks for agentic reasoning, memory engineering, and multi-agent orchestration. Scale apps using production-ready patterns for LangChain, LlamaIndex, and high-performance vector search. Explore & Star on GitHub →
ORACLE sponsor
The Economics of Software Teams
Subtitled “Why Most Engineering Organizations Are Flying Blind”, this article is a breakdown of what software development teams actually cost, what they need to generate to be financially viable, and why most organizations have no visibility into either number.
VIKTOR CESSAN
OWASP Top 10 (2025 List) for Python Devs
The OWASP Top 10 is a list of common security vulnerabilities in code, like SQL injection. The list has recently been updated and Talk Python interviews Tanya Janca to discuss all the big changes this time around.
TALK PYTHON podcast
Textual: An Intro to DOM Queries
The Textual TUI framework uses a tree structure to store all of the widgets on the page. This DOM is query-able, giving you the ability to find widgets on the fly in your code.
MIKE DRISCOLL
Reflecting on 5 Years as the Developer in Residence
Łukasz Langa is stepping down as the Python Software Foundation’s first CPython Developer in Residence. This post talks about his experience there and everything he accomplished.
PYTHON SOFTWARE FOUNDATION
Decoupling Your Business Logic From the Django ORM
Where should I keep my business logic? This is a perennial topic in Django. This article proposes a continuum of cases, each with increasing complexity.
CARLTON GIBSON
How to Add Features to a Python Project With Codex CLI
Learn how to use Codex CLI to add features to Python projects via the terminal. Master AI-powered coding without needing a browser or IDE plugins.
REAL PYTHON
PyPI Has Completed Its Second Audit
PyPI has completed its second external security audit. This post shows all the things found and what they’re doing about each of them.
MIKE FIEDLER
New Technical Governance: Request for Community Feedback
The Django Steering Council has proposed a new governance mechanism and is looking for feedback from the community.
DJANGO SOFTWARE FOUNDATION
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
April 22, 2026
REALPYTHON.COM
The Carpentries
April 22 to April 24, 2026
INSTATS.ORG
AgentCamp Amsterdam 2026
April 23, 2026
MEETUP.COM
North Bay Python 2026
April 25 to April 27, 2026
NORTHBAYPYTHON.ORG
Python Sheffield
April 28, 2026
GOOGLE.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #731.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Tryton News
Tryton Release 8.0
We are proud to announce the 8.0 LTS release of Tryton.
This release provides many bug fixes, performance improvements and some fine tuning.
You can give it a try on the demo server, use the docker image or download it here.
As usual, upgrading from the previous series is fully supported.
Here is a list of the most noticeable changes:
Changes for the User
Client
There is now a visual hint on the widgets of modified fields. This way the user can see what was modified before saving.
Web
The tabs can now be reordered.
The logout action has been moved into the same menu as the notifications, and an entry for the help was also added. This visually simplifies the content of the header.
Accounting
The reconciliation of accounting lines can now be automated by a scheduled task.
The general ledger now displays only flat balances, which is more comprehensible for a flat list of accounts.
We added the option to calculate rounding for cash using the opposite rounding method. This existed already for standard currency rounding but was missing for cash rounding.
It is now possible to define some payment means on the invoice.
They can be set manually or using a rule engine.
For now, the supported payment means are bank transfer and direct debit.
When invoices are created from an external source (like PEPPOL), we check the amounts against the source when the invoice is validated or posted.
This makes it possible to create invoices whose calculation differs in Tryton from the source, and lets the user correct them manually.
Tryton now warns when the user tries to create an over-payment.
Europe
We added all the VAT exemption codes to the taxes, which can be used in electronic invoices.
Belgium
The taxes are now set up with UNECE and VATEX codes, which is useful for generating and parsing UBL invoices, such as on PEPPOL.
Document Incoming
The OCR module now supports the supplier’s payment reference. It is stored on the supplier invoice and used when making a payment.
E-document
We added a button on the PEPPOL document to trigger a status update, since users do not always want to wait for the scheduled task to update the status.
We enforce that the unit price of any invoice line sent to PEPPOL is not negative. This is a rule of the PEPPOL network that is better enforced before posting the invoice.
The UBL invoice template has been extended to render the buyer’s item identification, the allowance and charges, the billing reference, the payment means, VATEX codes and prepaid amounts.
The UBL invoice parser now supports the payment means.
The UN/CEFACT invoice template renders the payment means and the VATEX codes.
The UNECE module now stores the allowance, charge, and special service codes on the product, and the UNCL4461 code on the payment means.
Incoterm
The incoterm now defines who, between the buyer and the seller, is responsible for the export and import duties.
Party
These identifiers have been added: Slovenian Corporate Registration Number, Belgian Social Security Number, Spanish Activity Establishment Code, Russian Primary State Registration Number, Mozambique Tax Number, French Trade Registration Number, Azerbaijan Tax Number and Senegal Tax Number.
The identifiers are now formatted to ease reading.
We added a new menu entry that lists all the party identifiers.
An “Attn” field has been added to the address. This is useful for managing delivery addresses for web shops when the customer ships to another party.
Production
The “Cancellation Stock” group can now also cancel running and done productions.
It is now possible to define whether the cost from the timesheets must be included in the production cost calculation. This is defined per work center.
Project
The work efforts are now numbered to ease the communication between employees.
An origin field has been added to the work efforts.
Purchasing
The invoice method “on shipment” has been renamed to “on fulfillment” to be more generic.
The quantities to invoice are now calculated for each purchase line, and purchases with at least one line to invoice are marked as “To invoice”.
Quality
We added a reference field to the inspections. This allows storing an external number when the inspection was performed by an external service.
Sales
The invoice method “on shipment” has been renamed to “on fulfillment” to be more generic.
The quantities to ship and to invoice are now calculated for each sale line, and sales with at least one line to ship or to invoice are marked as “To ship” or “To invoice”.
Stock
We added a special group which is allowed to cancel done shipments and moves. This is useful to correct mistakes.
The shipments now have a wizard to ease the creation of packages. It simplifies operations like putting a package inside another package, putting only part of a move’s quantity into a package, and so on.
For the UPS carrier, the module charges the duties and taxes to the shipper according to the incoterm.
We store now the original planned date of the requested internal shipments and the requested production. This is useful to find late requests.
Shop
The sales now store the URL of the corresponding order on the web shop.
Shopify
We now support the payment terms from Shopify. When a sale has a payment term, it is always confirmed in Tryton.
Gift cards are now supported with Shopify: when a gift card is sold on Shopify, a gift card is created in Tryton, and when the gift card is used on Shopify, it appears as a payment from the gift_card gateway.
The actions from Shopify are now logged on the sale order.
The pick-up delivery method is now supported for Shopify orders. When the shipment is packed in Tryton, it is marked as prepared for pickup on Shopify.
New Modules
Account Payment Check
The Account Payment Check Module allows managing and printing checks as payments.
Account Stock EU Excise
The Account Stock EU Excise Module is used to generate the excise duties declaration for European countries.
Production Ethanol
The Production Ethanol Module calculates the gain or loss of alcohol volumes in production.
Sale Project Task Module
The Sale Project Task Module adds the option to create tasks when selling services. The fulfillment of the sales is linked to the progression of these tasks.
Stock Ethanol Module
The Stock Ethanol Module helps to track alcohol in warehouses.
Removed Modules
Those modules have been removed:
- account_de_skr03
- account_es
- account_es_sii
- google_maps
You may find alternatives published by the community.
Changes for the System Administrator
Client
Web
The build of the web client does not require bower anymore.
The session is now stored as a cookie. This prevents the session from being leaked in case of a security issue in our JavaScript code.
The web client now uses relative paths to perform server requests. This allows serving the web client from a sub-directory.
Server
Basic authentication is now also supported for user applications. This is useful when the consumer of a user application cannot support bearer authentication.
Document Incoming
The Typless module now requires defining all the fields set on the service in order to generate complete feedback, even for fields that Typless did not recognize.
E-document
It is now possible to set up a webhook for Peppyrus. This allows receiving the PEPPOL invoices as soon as they land in the inbox.
Inbound Email
The inbound email gains an action to handle replies to chat channels. The text content above a specific line is added as a message to the corresponding channel.
Changes for the Developer
This release removes the support for Python 3.9 and adds Python 3.14.
Server
It is now possible to filter the users notified by a scheduled task using a domain. This is useful, for example, when the notification applies only to users with access to a specific company.
The report engine can now use MJML as a base format and convert it to HTML. This simplifies the creation of email templates that are compatible with most common email clients.
Field.sql_column now receives a tables dictionary and the Model as arguments.
Field.sql_column can now be overridden by a method on the Model named column_<field name>(tables).
This extends the possibilities for more complex types of fields.
With those improvements, we can now support Function fields without a getter but only an SQL expression via column_<field name>. Those fields are automatically searchable and sortable without the need to define domain_<field name> or order_<field name> methods.
A last_modified field has been added to ModelSQL to avoid duplicating the write_date or create_date logic.
The Many2One field can now be based on a Function field.
We have upgraded the PostgreSQL backend to use Psycopg 3. By default, Tryton uses server-side binding, which removes the limitation on the size of the list of IDs that can be passed, by using arrays for the in operators.
Thus the reduce_ids and grouped_slice (without size) tools have been deprecated, and Database.IN_MAX has been replaced by backend.MAX_QUERY_PARAMS.
The delete and delete_many methods have been added to the FileStore API, which allows removing the files of Binary fields when they are deleted or updated.
The button states are now checked when a button is executed with access checks. This ensures that a client cannot execute a button that should be disabled.
A notify_user method has been added to ModelStorage to ease the notification of a user.
A contextual _log key can be used to force the logging of events even if they do not originate from a user.
New routes have been added to manage the login/logout with cookie.
It is now possible to include sub-directories in the tryton.cfg. This is useful for splitting a large module into sub-directories during development.
A new attribute in the XML data allows defining a value as a path relative to the location of the XML file. This feature works in combination with the sub-directories to avoid repeating the directory name.
The chat channel now sends new messages by email to followers who subscribed with their email address.
It is now possible to mount the WSGI application under a prefix.
The RPC calls are now prefixed by /rpc/.
A generic REST API has been added as a user application.
It allows searching, retrieving, updating, and deleting any record of a ModelStorage, with the user’s access rights enforced, and launching any RPC action and report.
The ModelStorage.__json__ method defines the default fields to include in the response based on usage but the client can also explicitly request the fields (with dotted notation).
The context is passed as value of the X-Tryton-Context header encoded in JSON. The language is selected from the Accept-Language header of the client. And the search can be paginated using the Range header.
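As a hedged illustration of those headers (the host, database path, and token below are assumptions for the sketch, not documented endpoints), a client request could be assembled like this; nothing is actually sent:

```python
import json
import urllib.request

# Illustrative only: the URL and token are placeholders, not from the
# release notes. We only build the request object; no network call is made.
context = {"company": 1}
request = urllib.request.Request(
    "https://tryton.example.com/db/data/party.party",
    headers={
        "Authorization": "Bearer <token>",
        # The context is passed JSON-encoded in the X-Tryton-Context header.
        "X-Tryton-Context": json.dumps(context),
        # Language selection uses the standard Accept-Language header.
        "Accept-Language": "en",
        # Pagination uses the standard Range header.
        "Range": "items=0-99",
    },
)
print(request.headers["X-tryton-context"])  # {"company": 1}
```

Note that urllib normalizes stored header names with str.capitalize(), hence the lowercase lookup key.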
Naiad
Naiad is a new Python library to access Tryton’s REST API.
Accounting
We removed the default value for the invoice’s type. It must now be set explicitly.
An origin invoices field has been added to the invoice.
The Stripe payment module now uses the 2025-09-30.clover version of the API.
E-document
The UBL template now filters the additional documents per MIME type.
Web Shop
Shopify
We replaced the unmaintained ShopifyAPI library with the new shopifyapp.
Real Python
Leverage OpenAI's API in Your Python Projects
Python’s openai library provides the tools you need to integrate the ChatGPT API into your Python applications. With it, you can send text prompts to the API and receive AI-generated responses. You can also guide the AI’s behavior with developer role messages and handle both simple text generation and more complex code creation tasks.
After watching this video course, you’ll understand how examples like this work under the hood. You’ll learn the fundamentals of using the ChatGPT API from Python and have code examples you can adapt for your own projects.
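As a sketch of the role-based message structure the course covers (the model name is a placeholder, and no request is sent here):

```python
# Sketch only: the shape of a chat-style request body with role-based
# messages. The model name is a placeholder; nothing is sent to any API.
messages = [
    # A developer-role message guides the assistant's overall behavior.
    {"role": "developer", "content": "You are a concise Python tutor."},
    # The user-role message carries the actual prompt.
    {"role": "user", "content": "Summarize what a list comprehension is."},
]
payload = {"model": "<model-name>", "messages": messages}
print([m["role"] for m in payload["messages"]])  # ['developer', 'user']
```

In a real call, this payload would be passed to the openai client along with your API key.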
Quiz: Leverage OpenAI's API in Your Python Projects
In this quiz, you’ll test your understanding of Leverage OpenAI’s API in Your Python Projects.
By working through this quiz, you’ll revisit key concepts like setting up
authentication, sending prompts with the openai library, controlling AI
behavior with role-based messages, and structuring outputs with Pydantic models.
Quiz: uv vs pip: Python Packaging and Dependency Management
In this quiz, you’ll test your understanding of uv vs pip: Python Packaging and Dependency Management.
By working through this quiz, you’ll revisit key differences between uv and pip, including package installation speed, dependency management, reproducible environments, and governance.
April 20, 2026
Real Python
Gemini CLI vs Claude Code: Which to Choose for Python Tasks
When comparing Gemini CLI vs Claude Code, the answer to “which one is better?” is usually it depends. Both tools boost productivity for Python developers, but they have different strengths. Choosing the right one depends on your budget, workflow, and what you value most in generated code.
Gemini CLI, for instance, is known for its generous free tier, while Claude Code is a paid tool known for its production-ready output.
In this tutorial, you’ll explore features such as user experience, performance, code quality, and usage cost to help make that decision easier. The AI coding assistance these tools provide right in your terminal generally makes writing Python code much more seamless, helping you save time and be more productive.
This table highlights the key differences at a glance:
| Use Case | Gemini CLI | Claude Code |
|---|---|---|
| You need generous free usage limits | ✅ | — |
| You need Google Cloud integration | ✅ | — |
| You need faster task completion | — | ✅ |
| You need code close to production quality | — | ✅ |
You can see that Gemini CLI is a promising choice if you’re looking for free usage limits and prefer Google Cloud integration. However, if you want to complete tasks faster, Claude Code has an edge. Both tools produce code of good quality, but Claude Code generates code that is closer to production quality. If you’d like a more thorough comparison, then read on.
Get Your Code: Click here to download the free sample code for the to-do app projects built with Gemini CLI and Claude Code in this tutorial.
Take the Quiz: Test your knowledge with our interactive “Gemini CLI vs Claude Code: Which to Choose for Python Tasks” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Gemini CLI vs Claude Code: Which to Choose for Python TasksCompare Gemini CLI and Claude Code across user experience, performance, code quality, and cost to find the right AI coding tool for you.
Metrics Comparison: Gemini CLI vs Claude Code
To ground the comparisons in hands-on data, both tools are tested using the same prompt throughout this tutorial:
Prompt
Build a CLI-based mini to-do application in Python. It should allow users to create tasks, mark tasks as completed, list tasks with filtering for completed and pending tasks, delete tasks, include error handling, persist tasks to a local JSON file, and include basic unit tests.
For a fair comparison, Gemini CLI is tested on its free tier using Gemini 3 Flash Preview, which is the default model the free tier provides access to. Claude Code is tested on the Pro plan using Claude Sonnet 4.6, which is the model Claude Code primarily uses for everyday interactions on that plan.
Each tool will run this prompt three times. Completion time, token usage, and the quality of the generated code are recorded from the runs and are referenced in the Performance, Code Quality, and Usage Cost sections of this tutorial.
Note: If you want to learn more about these tools so you can compare them yourself, Real Python has you covered. The How to Use Google’s Gemini CLI for AI Code Assistance tutorial covers installation, authentication, and hands-on usage, while the Getting Started With Claude Code video course walks you through setup and core features.
You should also be comfortable using your terminal, since both Gemini CLI and Claude Code are command-line tools.
The table below provides more detailed metrics to help with each comparison:
| Metric | Gemini CLI | Claude Code |
|---|---|---|
| User Experience | Intuitive, browser-based auth, terminal-native | Minimal setup, terminal-native, strong project awareness |
| Performance | Good performance, however slower generation speed | Good performance, code is generated generally faster |
| Code Quality | Solid, better for exploratory tasks | Strong, better for production-grade work |
| Usage Cost | Free tier available; paid plans for heavier use | Requires a paid subscription to get started |
The following sections explore each metric in detail, so you can decide which tool fits your workflow best.
User Experience
When writing Python programs, it helps to be able to comfortably use your tools without dealing with unintuitive interfaces. Both Gemini CLI and Claude Code prioritize a smooth terminal experience, but user experience goes beyond the interface itself—installation, setup, available models, and features offered are also part of it.
Installation and Setup
A few differences exist between Gemini CLI and Claude Code during installation. Gemini CLI requires a Google account for authentication. Claude Code doesn’t need a Google account. Instead, it requires an Anthropic subscription or API key.
Gemini CLI is first installed using npm:
$ npm install -g @google/gemini-cli
You can also install Gemini CLI with Anaconda, MacPorts, or Homebrew, which you can find in the Gemini CLI documentation.
When installing Claude Code, you run the following commands:
Read the full article at https://realpython.com/gemini-cli-vs-claude-code/ »
Mike Driscoll
Textual – Logging to File and to Textual Console
When you are developing a user interface, it can be valuable to have a log of what’s going on. Creating a log in Textual, a text-based user interface framework, is even easier than creating one for wxPython or Tkinter. Why? Because Textual includes a logger that is compatible with Python’s own logging module, so it’s almost plug-and-play to hook it all up!
You’ll learn how to do this in this short tutorial!
Logging to File and the Console
Textual includes a built-in logging-type handler that you can use with Python’s own logging module called TextualHandler. Python has many built-in logging handler objects that you can use to write to stdout, a file, or even to an email address!
You can hook up multiple handlers to a logger object and write to all of them at once, which gives you a lot of flexibility.
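That multiple-handler pattern looks like this in plain Python, independent of Textual (the file name here is illustrative):

```python
import logging

# Sketch of attaching two handlers to one logger: every record the logger
# accepts is passed to both destinations.
logger = logging.getLogger("multi_handler_example")
logger.setLevel(logging.INFO)

formatter = logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)

# Handler 1: write to a file.
file_handler = logging.FileHandler("example.log")
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)

# Handler 2: write to the console (stderr).
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(formatter)
logger.addHandler(stream_handler)

# One call, two destinations.
logger.info("Hello from both handlers")
```

The Textual example below uses exactly this mechanism, with TextualHandler taking the place of the StreamHandler.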
To see how this works in Textual, you will create a very simple application that contains only two buttons. Go ahead and open your favorite Python IDE or text editor and create a new file called log_to_file.py. Then enter the following code into it:
# log_to_file.py
import logging

from textual.app import App, ComposeResult
from textual.logging import TextualHandler
from textual.widgets import Button


class LogExample(App):
    def __init__(self) -> None:
        super().__init__()
        self.logger = logging.getLogger(name="log_example")
        self.logger.setLevel(logging.INFO)

        file_handler = logging.FileHandler("tui.log")
        self.logger.addHandler(file_handler)
        formatter = logging.Formatter(
            "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
        )
        file_handler.setFormatter(formatter)

        textual_handler = TextualHandler()
        self.logger.addHandler(textual_handler)

    def compose(self) -> ComposeResult:
        yield Button("Toggle Dark Mode", classes="dark mode")
        yield Button("Exit", id="exit")

    def on_button_pressed(self, event: Button.Pressed) -> None:
        if event.button.id == "exit":
            self.logger.info("User exited")
            self.exit()
        elif event.button.has_class("dark", "mode"):
            self.theme = (
                "textual-dark" if self.theme == "textual-light" else "textual-light"
            )
            self.logger.info(f"User toggled app theme to {self.theme}")


if __name__ == "__main__":
    app = LogExample()
    app.run()
As you can see, you have just two buttons for the user to interact with:
- Toggle Dark Mode – for toggling dark or light mode
- Exit – for exiting the application
No matter which button the user presses, the application will log something. By default, Textual logs to stdout, but you cannot see it because your application takes over the screen. If you want to see the logs, you will need to use the Textual Console application, which is part of Textual’s devtools. If you do not have the dev tools installed, you can do so by running this command:
pip install textual-dev
Now that you have the dev tools handy, open up a new terminal window or tab and run this command:
textual console
To get Textual to send the log messages to the console, you need to run your Textual application in developer mode. You will run it in a different terminal than the one running Textual Console!
Here’s the special command:
textual run --dev log_to_file.py
You will see various events and other logged metadata appear in the Textual Console regardless of whether you log anything explicitly. However, if you call self.log or use Python’s print() function, you will see those messages appear in the console as well.
You will also see your log messages in your log file (tui.log), though it won’t include all the extra metadata that Textual Console displays. Only what you explicitly log gets written to your log file.
Wrapping Up
And there you have it. You now know how to use Textual’s own built-in logging handler in conjunction with Python’s logging module. Remember, you can use Textual’s logging handler alongside one or more of Python’s other logging handlers. You can format the output any way you want, too!
Learn More About Logging
If you want to learn more about logging in Python, you might find my book, Python Logging, helpful.
Purchase the book today on Gumroad, Leanpub or Amazon!
The post Textual – Logging to File and to Textual Console appeared first on Mouse Vs Python.
Real Python
Quiz: How to Conceptualize Python Fundamentals for Greater Mastery
In this quiz, you’ll test your understanding of How to Conceptualize Python Fundamentals for Greater Mastery.
By working through this quiz, you’ll revisit a framework for forming a clear mental picture of Python concepts, including defining ideas in your own words, finding real-world and software analogies, comparing similar concepts, and learning by teaching.
With this framework in hand, you’ll be better equipped to approach new Python topics with confidence.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python Bytes
#477 Lazy, Frozen, and 31% Lighter
<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://django-modern-rest.readthedocs.io/en/latest/?featured_on=pythonbytes">Django Modern Rest</a></strong></li> <li><strong>Already playing with Python 3.15</strong></li> <li><strong><a href="https://mkennedy.codes/posts/cutting-python-web-app-memory-over-31-percent/?featured_on=pythonbytes">Cutting Python Web App Memory Over 31%</a></strong></li> <li><strong><a href="https://tryke.dev?featured_on=pythonbytes">tryke - A Rust-based Python test runner with a Jest-style API</a></strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=WmJtmS5Fn7U' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="477">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a> <strong>Connect with the hosts</strong></li> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> (bsky) Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be 
part of the audience. Usually <strong>Monday</strong> at 11am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</li> </ul> <p><strong>Michael #1: <a href="https://django-modern-rest.readthedocs.io/en/latest/?featured_on=pythonbytes">Django Modern Rest</a></strong></p> <ul> <li>Modern REST framework for Django with types and async support</li> <li>Supports Pydantic, Attrs, and msgspec</li> <li>Has AI coding support with llms.txt</li> <li>See an example at the <a href="https://django-modern-rest.readthedocs.io/en/latest/pages/getting-started.html#showcase">“showcase” section</a></li> </ul> <p><strong>Brian #2: Already playing with Python 3.15</strong></p> <ul> <li><a href="https://blog.python.org/2026/04/python-3150a8-3144-31313/?featured_on=pythonbytes">3.15.0a8, 3.14.4 and 3.13.13 are out</a> <ul> <li>Hugo von Kemenade</li> </ul></li> <li>beta comes in May, RCs in Sept, and Final planned for October</li> <li>But still, there’s awesome stuff here already, here’s what I’m looking forward to: <ul> <li><a href="https://docs.python.org/3.15/whatsnew/3.15.html#whatsnew315-lazy-imports"><strong>PEP 810</strong></a>: Explicit lazy imports</li> <li><a href="https://docs.python.org/3.15/whatsnew/3.15.html#whatsnew315-frozendict"><strong>PEP 814</strong></a>: <code>frozendict</code> built-in type</li> <li><a href="https://docs.python.org/3.15/whatsnew/3.15.html#whatsnew315-unpacking-in-comprehensions"><strong>PEP 798</strong></a>: Unpacking in comprehensions with <code>*</code> and <code>**</code></li> <li><a href="https://docs.python.org/3.15/whatsnew/3.15.html#whatsnew315-utf8-default"><strong>PEP 686</strong></a>: Python now uses UTF-8 as the default encoding</li> </ul></li> </ul> <p><strong>Michael #3: <a 
href="https://mkennedy.codes/posts/cutting-python-web-app-memory-over-31-percent/?featured_on=pythonbytes">Cutting Python Web App Memory Over 31%</a></strong></p> <ul> <li>I cut 3.2 GB of memory usage from our Python web apps using five techniques: <ul> <li>async workers</li> <li>import isolation</li> <li>the Raw+DC database pattern</li> <li>local imports for heavy libraries</li> <li>disk-based caching</li> </ul></li> <li><a href="https://mkennedy.codes/posts/cutting-python-web-app-memory-over-31-percent/?featured_on=pythonbytes">See the full article</a> for details.</li> </ul> <p><strong>Brian #4: <a href="https://tryke.dev?featured_on=pythonbytes">tryke - A Rust-based Python test runner with a Jest-style API</a></strong></p> <ul> <li>Justin Chapman</li> <li>Watch mode, Native async support, Fast test discovery, In-source testing, Support for doctests, Client/server mode for fast editor integrations, Pretty, per-assertion diagnostics, Filtering and marks, Changed mode (like pytest-picked), Concurrent tests, Soft assertions,</li> <li>JSON, JUnit, Dot, and LLM reporters</li> <li>Honestly haven’t tried it yet, but you know, I’m kinda a fan of thinking outside the box with testing strategies so I welcome new ideas.</li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li><a href="https://aleyan.com/blog/2026-why-arent-we-uv-yet/?featured_on=pythonbytes">Why aren’t we uv yet?</a> <ul> <li>Interesting take on the “agents prefer pip”</li> <li>Problems with the analysis: <ul> <li>Many projects are libraries and don’t publish a uv.lock file</li> <li>Even with uv, it’s still often seen as a developer preference for non-libraries. You can still use uv with requirements.txt</li> </ul></li> </ul></li> <li><a href="https://us.pycon.org/2026/schedule/talks/?featured_on=pythonbytes">PyCon US 2026 talks schedule is up</a> <ul> <li>Interesting that there’s an AI track now. I won’t be attending, but I might have a bot watch the videos and summarize for me. 
:)</li> </ul></li> <li><a href="https://justinjackson.ca/tech-done-to-us?featured_on=pythonbytes">What has technology done to us?</a> <ul> <li>Justin Jackson</li> </ul></li> <li><a href="https://courses.pythontest.com/lean-tdd/?featured_on=pythonbytes">Lean TDD new cover</a> <ul> <li>Also, 0.6.1 is so ready for me to start f-ing reading the audio book and get on with this shipping the actual f-ing book and yes I realize I seem like I’m old because I use “f-ing” while typing. Michael:</li> </ul></li> <li><a href="https://docs.python.org/release/3.14.4/whatsnew/changelog.html?featured_on=pythonbytes">Python 3.14.4 is out</a></li> <li><a href="https://github.com/BeanieODM/beanie/releases/tag/2.1.0?featured_on=pythonbytes">Beanie 2.1 release</a></li> </ul> <p><strong>Joke: <a href="https://motherduck.com/humandb/?featured_on=pythonbytes">HumanDB</a> - Blazingly slow. Emotionally consistent.</strong></p>
April 19, 2026
Django Weblog
DSF member of the month - Rob Hudson
For April 2026, we welcome Rob Hudson as our DSF member of the month! ⭐

Rob is the creator of django-debug-toolbar (DDT), a tool used by more than 100,000 people around the world. He introduced Content-Security-Policy (CSP) support in Django and contributes to many open source packages. He has been a DSF member since February 2024.
You can learn more about Rob by visiting Rob's website and his GitHub Profile.
Let’s spend some time getting to know Rob better!
Can you tell us a little about yourself
I'm a backend Python engineer based in Oregon, USA. I studied biochemistry in college, where software was just a curiosity and hobby on the side, but I'm grateful that my curiosity turned into a career in tech. My earliest memory of that curiosity was taking apart my Speak & Spell as a kid to see how it worked and never quite getting it back together again.
How did you start using Django?
I followed the path of the "P"s: Perl, then PHP, then Python. When Ruby on Rails arrived it was getting a lot of attention, but I was already enjoying Python, so when Django was announced I was immediately drawn to it. I started building small apps on my own, then eventually led a broader tech stack modernization at work, a health education company where we were building database-driven learning experiences with quizzes and a choose-your-own-adventure flow through health content. Django, Git, and GitHub all came together around that same time as part of that transition. Fun fact: my GitHub user ID is 1106.
What other framework do you know and if there is anything you would like to have in Django if you had magical powers?
I've been building a few projects with FastAPI lately and have really come to appreciate the type-based approach to validation via Pydantic. The way typing syntax influences the validation logic is something I'd love to see influence Django more over time.
Erlang has a feature called the crash dump: when something goes wrong, the runtime writes out the full state of every process to a file you can open and inspect after the fact. As someone who built a debug toolbar because I wanted to see what was going on under the hood, being handed a freeze frame of the exact moment things went wrong, full state intact and ready to inspect, sounds like magic.
The Rust-based tooling emerging in the Python ecosystem is fascinating to watch. Tools like uv, ruff, and efforts around template engines, JSON encoders, ASGI servers, etc. The potential for significant speed improvements without losing what makes Django Django is an interesting space.
What projects are you working on now?
I have a couple of personal fintech projects I'm playing with, one using FastAPI and one using Django. I've been enjoying exploring and wiring up django-bolt for the Django project. I'm impressed with the speed and developer friendliness.
On the django-debug-toolbar front, I recently contributed a cache storage backend and have a longer term idea to add an API layer and a TUI interface that I'd love to get back to working on someday.
Which Django libraries are your favorite (core or 3rd party)?
Django Debug Toolbar (I may be slightly biased). Beyond that: whitenoise and dj-database-url are great examples of libraries that do one thing well and get out of your way. I'd also add granian, a Rust-based ASGI server. And django-allauth, which I'm somehow only just trying for the first time. For settings management I've cycled through a few libraries over the years and am currently eyeing pydantic-settings for a 12-factor approach to configuration.
What are the top three things in Django that you like?
The community. I've been part of it for a long time and it has a quality that's hard to put into words. It feels close knit, genuinely welcoming to newcomers, and there's a rising tide lifts all boats mentality that I don't think you find everywhere. People care about helping each other succeed. The sprints and hallway track at DjangoCon have been a wonderful extension of that.
The ORM. Coming from writing a lot of raw SQL, I appreciate the syntax of Django's ORM which hits a sweet spot of simplicity and power for most use cases.
Stability, documentation, and the batteries included philosophy. I appreciate a framework that at its core doesn't chase trends, has a predictable release cycle, amazingly well written docs (which makes sense coming from its journalism background), and there's enough built in to get surprisingly far without reaching for third party packages.
You are the creator of Django Debug Toolbar, this tool is really popular! What made you create the tool and publish the package?
The inspiration came from Symfony, a PHP framework that had a debug toolbar built in. At the time, I was evaluating frameworks for a tech stack transition at work and thought, why doesn't Django have one of these? So I started hacking on a simple middleware that collected basic stats and SQL queries and injected the toolbar HTML into the rendered page. The first commit was August 2008.
The SQL piece was personally important. Coming from PHP where I wrote a lot of raw SQL by hand, I wanted to see what the ORM was actually generating.
The nudge to actually release it came at the first DjangoCon in 2008 at Google's headquarters. Cal Henderson gave a keynote called "Why I Hate Django" and showed a screenshot of Pownce's debug toolbar in the page header, then talked about internal tooling at Flickr similar to what the Django debug toolbar has currently. Seeing those motivated me to tweet out what I was working on that same day. Apparently I wasn't the only one who wanted to see what the ORM was doing.
It has been created in 2008, what are your reflections on it after so many years?
Mostly gratitude. I had a young family at the time and life got busy, so I stepped back from active maintenance earlier than I would have liked. Watching it flourish under the maintainers who stepped up has been really wonderful to see. They've improved it, kept up with releases, supported the community, and have done a better job of it than I was in a position to do at the time, so I'm grateful to all who carried the torch.
At this point I contribute to it like any other project, which might sound strange for something I created, but it's grown bigger than my early involvement and that feels right. I still follow along and it makes me happy to see it continuing to grow and evolve.
What I didn't anticipate was what it gave back. It helped launch my career as a Django backend developer and I'm fairly certain it played a role in landing me a job at Mozilla. All from a middleware I hacked together just to see what the ORM was doing.
Being a maintainer is not always easy despite the fact it can be really amazing. Do you have any advice for current and future maintainers in open source?
For what it's worth, what worked for me was building things for fun and to learn rather than setting out to build something popular. I also didn't worry too much about perfection or polish early on.
If life gets busy or your interests move on, I'd say trust the community. Have fun, and if it stops being fun, find some enthusiastic people who still think it's fun and hand it to them gracefully. That worked out better than I could have hoped in my case.
I'm genuinely curious about how AI changes open source. If simple utilities can be generated on the fly rather than installed as packages, what does that mean for small focused libraries? My hope is that the value of open source was never just the code anyway. The collaboration, the issue discussions, the relationships. AI can generate code but it can't replicate those things.
One thing I've noticed is newer developers using AI to generate patches they don't fully understand and submitting them as contributions. I get the impulse, but I'd encourage using AI as a tool for curiosity rather than a shortcut. Let it suggest a fix, then dig into why it works, ask it questions, iterate, which is something I often do myself.
You have introduced CSP support in Django core, congratulations and thank you for this addition! How did the process of creating this contribution go for you?
I picked up django-csp at Mozilla because it had become unmaintained and was blocking upgrades to newer Python and Django versions. What started as a simple maintenance task turned into a bit of a yak shave, but a good one. Getting up to speed on CSP led to ticket triage, which led to a refactor, which eventually led me to a 14 year old Django issue requesting CSP in core. Once the refactor was done I made the mistake of actually reading that 14 year old ticket and then felt personally responsible for it.
The more I worked in the space the clearer the ecosystem problem became. As a third party package, django-csp couldn't provide a standardized API that other packages could reliably depend on. If a third party library needed to add nonces to their own templates, they couldn't assume django-csp was installed. Seeing that friction play out in projects like debug toolbar and Wagtail convinced me that CSP support made sense in core.
Working with the Django fellows through the process was a genuine pleasure and I have enormous respect for what they do. They are patient, kind, and shaped what landed in core immensely. What surprised me most was how much they handle behind the scenes and how gracefully they manage the constant demands on their attention. Huge props to Natalia in particular for guiding a large and complex feature through to completion.
Do you remember your first contribution to open source?
Before Django I'd been tinkering on the web for years. I built tastybrew, an online homebrew recipe calculator and sharing site, partly to scratch my own itch and partly to get deeper with PHP and hosting my own projects. Back then open source collaboration wasn't what it is today. Before GitHub there was Freshmeat, SourceForge, emailed patches, maybe your own server with a tarball to download.
My first Django contribution was a small fix to the password reset view in 2006. Over the next several years there were around 40 or so contributions like docs corrections, admin improvements, email handling, security fixes. Contributing felt natural because the code was open and the community was welcoming.
I joined Mozilla in 2011 and shifted focus for a while. Mozilla was quietly contributing quite a bit back to the Django ecosystem during those years, with many 3rd party Django libraries, like django-csp. One of my favorite open source contributions was when I collaborated with a colleague on a Python DSL for Elasticsearch that eventually became the basis for Elastic's official Python client.
What are your hobbies or what do you do when you’re not working?
Reading, cooking, and getting outside when I can. I try to eat a whole food plant based diet and enjoy cooking in that style. Not sure it counts as a hobby but I enjoy wandering grocery stores, browsing what's new, reading ingredients, curious about flavors, thinking about what I could recreate at home.
Getting away from screens is important to me. Gardening, hiking, camping, long walks, travel when possible. Petrichor after rain. Puzzles while listening to audiobooks or podcasts. I brew oolong tea every day, a quiet ritual where the only notification is my tea timer.
Code has always felt more like curiosity than work to me, so I'm not sure where hobby ends and the rest begins.
Anything else you'd like to share?
If you have a Django codebase that needs some love, I'm available for contract work. I genuinely enjoy the unglamorous stuff: upgrading legacy codebases, adding CSP support, and refactoring for simplicity and long term maintainability. There's something satisfying about stepping back, seeing the bigger picture, and leaving things cleaner than you found them. You can find me on GitHub at robhudson.
Doing this interview was a nice way to reflect on my career. I can see that curiosity and adaptation have been pretty good companions. I'm grateful Django and its community have been a big part of that journey.
Thank you for doing the interview, Rob!
April 18, 2026
EuroPython
Humans of EuroPython: Nikoś (nikoshell)
EuroPython wouldn't exist without our dedicated volunteers who work tirelessly behind the scenes. They design our website, set up the call for proposals system, review hundreds of submissions, carefully select talks, coordinate speakers, and handle countless logistical details. Every aspect of the conference reflects their passion and expertise. Thank you for making EuroPython possible! 🎉
Below is our conversation with Nikoshell, who worked on the EuroPython 2025 website as well as a part of Communications & Design and Sponsorship teams.
We're grateful for your work on the conference, Nikoshell!
Nikoś aka Nikoshell, contributor and website developer at EuroPython 2025
EP: What was your primary role as a volunteer, and what did a typical day of contributing look like for you?
I quickly found a rhythm. Using a streamlined Linux setup with terminal-first tools, I focused on solving problems instead of fighting my tools. I’d catch the European team early, fix blockers like design assets or sponsor content, ship changes, and get feedback within hours. Morning performance fixes allowed richer assets by afternoon, and sponsor updates became social content automatically.
EP: Had you attended EuroPython before, or was volunteering your first experience with it?
First time organizing. Writing Python for 15+ years is one thing; seeing how a conference this size works is different. One code change could impact thousands. It was the most rewarding Python work I’ve done in years.
EP: What's one task you handled that attendees might not realize happens behind the scenes at EuroPython?
I automated sponsor data into social media graphics, saving hours of repetitive work.
EP: Was there a moment when you felt your contribution really made a difference?
Whenever a fix cleared busywork and let the team focus on creative work.
EP: Is there anything you took away from the experience that you still use today?
Collaboration patterns. Fast, trusting, distributed teams set a new bar. I still use those workflows and stay connected with the team.
EP: What would you say to someone considering volunteering at EuroPython but feeling hesitant?
Time matters less than impact. You gain skills, cleaner workflows, and strong connections. Just be willing to learn and ship.
EP: What connects Capture The Flag competitions (CTFs), AI automated solutions, and volunteering for EuroPython in your opinion?
Same mental model: find bottlenecks, remove friction, ship. I’ve competed in CTFs—Capture The Flag cybersecurity challenges—with my team justCatTheFish (ranked #1 in Poland, top 10 worldwide), contributed to pwndbg, and built security infrastructure. EuroPython felt like a CTF challenge solved with a high-speed, aligned team.
EP: Thank you for your contributions, Nikoshell!
Seth Michael Larson
More thoughts on Nintendo Switch 2 storage prices
Since my last post about Nintendo Switch 2 storage and prices three major things have happened affecting Switch 2 game prices:
- Nintendo published a new digital game pricing strategy where digital first-party games would be priced $10 USD less than physical games. This puts the American game market in line with the rest of the world. We'll see below why this change makes sense.
- microSD Express cards have increased drastically in price. The Lexar 1TB microSDXC card cost $200 USD in July 2025 and today is being sold for $335 USD by the same retailer. This means that the price-per-GB has increased by roughly $0.13 for the highest-capacity cards.
- Nintendo appears to be manufacturing Switch 2 game cartridges with capacities smaller than the typical 64GB. “MIO: Memories in Orbit” released on a physical cartridge with a $30 price tag. This will hopefully mean fewer games being published to “Game Key cards”, especially smaller or indie games.
I created a small Python script which produces tables of data comparing physical and digital prices comparing different microSD Express cards and their price-per-GB ratios across different Nintendo Switch 2 games.
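The script itself isn't reproduced here, but the core calculation can be sketched in a few lines. In this hypothetical sketch, the per-GB prices are rounded values from the tables in this post (so totals may differ by a few cents), and the function name is my own:

```python
# Hypothetical sketch of the price comparison: a digital copy costs the
# game's price plus the incremental storage needed to hold the download.
CARD_PRICE_PER_GB = {
    "Lexar 1TB (Costco)": 0.18,  # USD per GB, rounded
    "Lexar 512GB": 0.29,
    "SanDisk 128GB": 0.55,
}

def digital_total(game_price: float, game_size_gb: float, price_per_gb: float) -> float:
    """Total cost of a digital copy, including its share of storage."""
    return round(game_price + game_size_gb * price_per_gb, 2)

# Mario Kart World: $80.00 digital, ~22 GB download.
for card, per_gb in CARD_PRICE_PER_GB.items():
    print(f"{card}: ${digital_total(80.00, 22, per_gb):.2f}")
```

The same function works for any of the games below; only the game price and download size change.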
Mario Kart World
This is the game people think of for the Switch 2, and the $80 USD price tag across both digital and physical provided some sticker shock for many. I did not understand how the $60 USD standard across all games hung on for as long as it did.
The table below includes both the price of the game and the incremental price of storage (depending on which storage device you purchase) to compare the cost of physical versus digital.
| Edition | Storage | Total Price | Game Price | Storage Price | Game Size |
|---|---|---|---|---|---|
| Physical | Cartridge | $80.00 | $80.00 | --- | --- |
| Digital | Lexar 1TB (Costco) | $83.87 | $80.00 | $3.87 ($0.18/GB) | 22 GB |
| Digital | Lexar 512GB | $86.45 | $80.00 | $6.45 ($0.29/GB) | 22 GB |
| Digital | Lexar 1TB | $87.20 | $80.00 | $7.20 ($0.33/GB) | 22 GB |
| Digital | Lexar 256GB | $87.73 | $80.00 | $7.73 ($0.35/GB) | 22 GB |
| Digital | SanDisk 512GB | $87.73 | $80.00 | $7.73 ($0.35/GB) | 22 GB |
| Digital | SanDisk 256GB | $88.59 | $80.00 | $8.59 ($0.39/GB) | 22 GB |
| Digital | SanDisk 128GB | $92.03 | $80.00 | $12.03 ($0.55/GB) | 22 GB |
Yoshi and the Mysterious Book
Now we look at the first game with the new pricing structure in the USA: “Yoshi and the Mysterious Book”. The game is priced at $70 USD physically and $60 USD digitally. Compared to Mario Kart World where all digital editions were more expensive than physical when storage costs are factored in: almost all digital editions are cheaper for Yoshi!
| Edition | Storage | Total Price | Game Price | Storage Price | Game Size |
|---|---|---|---|---|---|
| Physical | Cartridge | $70.00 | $70.00 | --- | --- |
| Digital | Lexar 1TB (Costco) | $63.62 | $60.00 | $3.62 ($0.18/GB) | 20.6 GB |
| Digital | Lexar 512GB | $66.04 | $60.00 | $6.04 ($0.29/GB) | 20.6 GB |
| Digital | Lexar 1TB | $66.74 | $60.00 | $6.74 ($0.33/GB) | 20.6 GB |
| Digital | Lexar 256GB | $67.24 | $60.00 | $7.24 ($0.35/GB) | 20.6 GB |
| Digital | SanDisk 512GB | $67.24 | $60.00 | $7.24 ($0.35/GB) | 20.6 GB |
| Digital | SanDisk 256GB | $68.05 | $60.00 | $8.05 ($0.39/GB) | 20.6 GB |
| Digital | SanDisk 128GB | $71.27 | $60.00 | $11.27 ($0.55/GB) | 20.6 GB |
MIO: Memories in Orbit
MIO is the cheapest game to date that is published on a non-“Game Key card” cartridge for the Switch 2 at $30 USD physically and $20 USD digitally. The game being only 4GB means the digital edition is much cheaper than the physical edition.
| Edition | Storage | Total Price | Game Price | Storage Price | Game Size |
|---|---|---|---|---|---|
| Physical | Cartridge | $30.00 | $30.00 | --- | --- |
| Digital | Lexar 1TB (Costco) | $20.77 | $20.00 | $0.77 ($0.18/GB) | 4.4 GB |
| Digital | Lexar 512GB | $21.29 | $20.00 | $1.29 ($0.29/GB) | 4.4 GB |
| Digital | Lexar 1TB | $21.44 | $20.00 | $1.44 ($0.33/GB) | 4.4 GB |
| Digital | Lexar 256GB | $21.55 | $20.00 | $1.55 ($0.35/GB) | 4.4 GB |
| Digital | SanDisk 512GB | $21.55 | $20.00 | $1.55 ($0.35/GB) | 4.4 GB |
| Digital | SanDisk 256GB | $21.72 | $20.00 | $1.72 ($0.39/GB) | 4.4 GB |
| Digital | SanDisk 128GB | $22.41 | $20.00 | $2.41 ($0.55/GB) | 4.4 GB |
Final Fantasy VII Remake Intergrade
And finally, we look at FF7 Remake Intergrade, which according to its Nintendo page is planned to be over 90GB total. This massive game size makes the cost of storing the game a significant percentage of its total price.
| Edition | Storage | Total Price | Game Price | Storage Price | Game Size |
|---|---|---|---|---|---|
| Digital | Lexar 1TB (Costco) | $55.89 | $40.00 | $15.89 ($0.18/GB) | 90.4 GB |
| Digital | Lexar 512GB | $66.48 | $40.00 | $26.48 ($0.29/GB) | 90.4 GB |
| Digital | Lexar 1TB | $69.57 | $40.00 | $29.57 ($0.33/GB) | 90.4 GB |
| Digital | Lexar 256GB | $71.78 | $40.00 | $31.78 ($0.35/GB) | 90.4 GB |
| Digital | SanDisk 512GB | $71.78 | $40.00 | $31.78 ($0.35/GB) | 90.4 GB |
| Digital | SanDisk 256GB | $75.31 | $40.00 | $35.31 ($0.39/GB) | 90.4 GB |
| Digital | SanDisk 128GB | $89.44 | $40.00 | $49.44 ($0.55/GB) | 90.4 GB |
It will be interesting to see how the availability of new cartridge types changes whether companies use Game Key cards for their games. I suspect the pressure to use Game Key cards will remain high as the cost of storage continues to increase for publishers and cuts into their margins.
None of these tables include the benefits and down-sides of each medium. Many digital game buyers like not having to worry about lost or stolen games while in transit or not having to physically store the boxes and cartridges. Many players may not need to increase their Switch 2 storage if they only play a handful of games. And who knows, maybe the price of storage will decrease in the future?
I hope this information helps you make an informed choice when selecting digital or physical Nintendo Switch 2 games in the future. Happy gaming!
Thanks for keeping RSS alive! ♥
April 17, 2026
Mike Driscoll
Textual – An Intro to DOM Queries (Part I)
In this article, you will learn how to query the DOM in Textual. You will discover that the DOM keeps track of all the widgets in your application. By running queries against the DOM, you can find widgets quickly and update them, too.
You will be learning the following topics related to the DOM:
- The query one method
- Textual queries
You will learn more in the second part of this series next week!
You will soon see the value of working with DOM queries and the power that these queries give you. Let’s get started!
The Query One Method
You will find the query_one() method throughout the Textual documentation and many Textual applications on GitHub. You may use query_one() to retrieve a single widget that matches a CSS selector or a widget type.
You can pass in up to two parameters to query_one():
- The CSS selector
- The widget type
- Or both at the same time
If you pass both, pass the CSS selector first, with the widget type as the second parameter.
Try some of this out. Open up your Python editor and create a file named query_input.py. Then enter this code in it:
# query_input.py
from textual.app import App, ComposeResult
from textual.widgets import Button, Input

class QueryInput(App):
    def compose(self) -> ComposeResult:
        yield Input()
        yield Button("Update Input")

    def on_button_pressed(self) -> None:
        input_widget = self.query_one(Input)
        new_string = f"You entered: {input_widget.value}"
        input_widget.value = new_string

if __name__ == "__main__":
    app = QueryInput()
    app.run()
Your code creates an Input and a Button widget. Enter some text in the Input widget and press the button. Your on_button_pressed() method gets called. There, you call query_one() and pass it the Input class. Then, you update the returned Input widget's value with a new string.
Here is what the application might look like:

Now, you will try writing a new piece of code where you use query_one() with a CSS selector. Create a new file called query_one_same_ids.py and use this code:
# query_one_same_ids.py
from textual.app import App, ComposeResult
from textual.widgets import Button, Label

class QueryApp(App):
    def compose(self) -> ComposeResult:
        yield Label("Press a button", id="label")
        yield Button("Test", id="button")

    def on_button_pressed(self) -> None:
        widget = self.query_one("#label")
        widget.update("You pressed the button!")

if __name__ == "__main__":
    app = QueryApp()
    app.run()
In this example, you create two widgets with different IDs. Then you use query_one() to select the Label widget and update its text.
If you call query_one() and there are no matches, you will get a NoMatches exception. On the other hand, if there is more than one match, the method returns the first widget that matches.
What will the following code do if you put it in your example above?
self.query_one("#label", Button)
If you guessed that Textual will raise an exception, you should congratulate yourself. You have good intuition! If the widget matches the CSS selector but not the widget type, then you will get a WrongType exception raised.
Textual Queries
Textual has more than one way to query the DOM. You may also use the query() method, which finds multiple widgets at once. When you call query(), it returns a DOMQuery object, which behaves as a list-like container of widgets.
You can see how this works by writing some code. Create a new Python file named query_all.py and add this code to it:
# query_all.py
from textual.app import App, ComposeResult
from textual.widgets import Button, Label

class QueryApp(App):
    def compose(self) -> ComposeResult:
        yield Label("Press a button", id="label")
        yield Button("Test", id="button")

    def on_button_pressed(self) -> None:
        widgets = self.query()
        s = ""
        for widget in widgets:
            s += f"{widget}\n"
        label = self.query_one("#label")
        label.update(s)

if __name__ == "__main__":
    app = QueryApp()
    app.run()
The idea is to get all the widgets in your application and print them out. Of course, you can’t print out anything when your terminal application is blocking stdout, so instead, you create a string of widgets separated by new lines and update the Label widget.
Here is an example of what you might get if you run the code and press the button on your machine:

You might be surprised by that output. Perhaps you thought you would only see a Label and a Button widget in that list? If so, you forgot that a Screen widget is always lurking in the background. But there are also two more: a ToastRack and a Tooltip widget. These come with all your applications. The ToastRack positions Toast widgets, which you use to display a notification message. A Tooltip is a message that appears when you hover your mouse over a widget.
You do not need to know more about those extra widgets now.
Also note that all query methods can be used on both the App and Widget subclasses, which is very handy.
You can use CSS selectors with query() in much the same way as you can with query_one(). The difference, of course, is that query() always returns an iterable DOMQuery.
Let’s pretend you want to get all the Button widgets in your application and iterate over them. Create a new Python script called query_buttons.py with this code:
# query_buttons.py
from textual.app import App, ComposeResult
from textual.widgets import Button, Label

class QueryApp(App):
    def compose(self) -> ComposeResult:
        yield Label("Press a button", id="label")
        yield Button("One", id="one")
        yield Button("Two", id="two")
        yield Button("Three")

    def on_button_pressed(self) -> None:
        s = ""
        for widget in self.query("Button"):
            s += f"{widget}\n"
        label = self.query_one("#label")
        label.update(s)

if __name__ == "__main__":
    app = QueryApp()
    app.run()
Here you are passing the string "Button" to query(). Earlier, with query_one(), you passed the Button type directly instead. Regardless, when you run this code and press a button, you will see the following:

That worked great! This time, you queried the DOM and returned all the Button widgets in your application.
What if you wanted to find all the disabled buttons in your code? You can disable a widget by setting its disabled attribute. If your disabled buttons also carry a "disabled" CSS class, you can find them by updating the query like this: widgets = self.query("Button.disabled").
The query objects in Textual also provide a results() method that you can use as an alternative way of iterating over the widgets. For example, you can use results() to rewrite the query above that would retrieve all the disabled buttons to be something like this:
widgets = self.query(".disabled").results(Button)
s = ""
for widget in widgets:
    s += f"{widget}\n"
This code combines the last example query with the last full code example. Although this latter version is more verbose, you might find it easier to read than the original query for disabled widgets.
Another benefit of using results() is that Python type checkers, such as Mypy, can use it to determine the widget type in the loop. When you do not use results(), then Mypy will only know that you are looping over a Widget object, rather than a Button object.
Wrapping Up
You learned the basics of using Textual’s DOM query methods in this article. You can use these query methods to access one or more widgets in your user interface.
Specifically, you learned about the following:
- The query one method
- Textual queries
Textual is a great way to create a user interface with Python. You should check it out today!
Learn More
The post Textual – An Intro to DOM Queries (Part I) appeared first on Mouse Vs Python.
Real Python
The Real Python Podcast – Episode #291: Reassessing the LLM Landscape & Summoning Ghosts
What are the current techniques being employed to improve the performance of LLM-based systems? How is the industry shifting from post-training towards context engineering and multi-agent orchestration? This week on the show, Jodie Burchell, data scientist and Python Advocacy Team Lead at JetBrains, returns to discuss the current AI coding landscape.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Working With Python Virtual Environments
Test your understanding of the Working With Python Virtual Environments video course.
You’ll revisit why virtual environments matter, how to create and activate them, and how to install and manage packages inside an isolated Python environment.
April 16, 2026
Talk Python to Me
#545: OWASP Top 10 (2025 List) for Python Devs
The OWASP Top 10 just got a fresh update, and there are some big changes: supply chain attacks, exceptional condition handling, and more. Tanya Janca is back on Talk Python to walk us through every single one of them. And we're not just talking theory, we're going to turn Claude Code loose on a real open source project and see what it finds. Let's do it.
Episode sponsors:
- Temporal: https://talkpython.fm/temporal-replay
- Talk Python Courses: https://talkpython.fm/training
Links from the show:
- DevSec Station Podcast: https://www.devsecstation.com
- SheHacksPurple Newsletter: https://newsletter.shehackspurple.ca
- OWASP: https://owasp.org
- OWASP Top 10 (2025): https://owasp.org/Top10/2025/
- awesome-selfhosted: https://github.com/awesome-selfhosted/awesome-selfhosted
- Kinto: https://github.com/Kinto/kinto
- A01:2025 Broken Access Control: https://owasp.org/Top10/2025/A01_2025-Broken_Access_Control/
- A02:2025 Security Misconfiguration: https://owasp.org/Top10/2025/A02_2025-Security_Misconfiguration/
- ASP.NET: https://ASP.NET
- A03:2025 Software Supply Chain Failures: https://owasp.org/Top10/2025/A03_2025-Software_Supply_Chain_Failures/
- A04:2025 Cryptographic Failures: https://owasp.org/Top10/2025/A04_2025-Cryptographic_Failures/
- A05:2025 Injection: https://owasp.org/Top10/2025/A05_2025-Injection/
- A06:2025 Insecure Design: https://owasp.org/Top10/2025/A06_2025-Insecure_Design/
- A07:2025 Authentication Failures: https://owasp.org/Top10/2025/A07_2025-Authentication_Failures/
- A08:2025 Software or Data Integrity Failures: https://owasp.org/Top10/2025/A08_2025-Software_or_Data_Integrity_Failures/
- A09:2025 Security Logging and Alerting Failures: https://owasp.org/Top10/2025/A09_2025-Security_Logging_and_Alerting_Failures/
- A10:2025 Mishandling of Exceptional Conditions: https://owasp.org/Top10/2025/A10_2025-Mishandling_of_Exceptional_Conditions/
- Shannon: https://github.com/KeygraphHQ/shannon
- Anthropic and Mozilla Firefox security: https://www.anthropic.com/news/mozilla-firefox-security
- Claude mythos, what it means for your business: https://www.generalpurpose.com/the-distillation/claude-mythos-what-it-means-for-your-business
- Python Example Concepts: https://blobs.talkpython.fm/owasp-top-10-2025-python-example-concepts.html
Watch this episode on YouTube: https://www.youtube.com/watch?v=ffid3jWA0JE
Episode #545 deep-dive: https://talkpython.fm/episodes/show/545/owasp-top-10-2025-list-for-python-devs#takeaways-anchor
Episode transcripts: https://talkpython.fm/episodes/transcript/545/owasp-top-10-2025-list-for-python-devs
Theme Song: Developer Rap, 🥁 Served in a Flask 🎸: https://talkpython.fm/flasksong
Don't be a stranger:
- YouTube: https://talkpython.fm/youtube
- Bluesky: https://bsky.app/profile/talkpython.fm
- Mastodon: @talkpython@fosstodon.org
- X.com: https://x.com/talkpython
- Michael on Bluesky: https://bsky.app/profile/mkennedy.codes
- Michael on Mastodon: @mkennedy@fosstodon.org
- Michael on X.com: https://x.com/mkennedy
Django Weblog
New Technical Governance - request for community feedback
Hello Django community,
The Steering Council is excited to share our proposed new technical governance and ask for your feedback. Last year we suspended the formal voting process of the Steering Council. The updates we’re proposing would bring how we’ve been operating into alignment with the written governance.
From the motivation section:
This is a revisitation of Django's technical governance in which a simplification and reduction was made to make it more approachable to more people. The goals of these changes are the following:
- Make it easier to enact our governance.
- Make it easier for others to understand our governance.
- Make the governance more flexible, allowing more action with less procedure.
You can read DEP 0019 here.
Adoption plan
The goal is to have this governance accepted and in place by 2026-07-01. Our timeline is as follows, but may change depending on feedback.
- 2026-04-16: Announce new technical governance, solicit feedback
- 2026-05-07: Merge in minor feedback changes
- 2026-05-28: Resolve major feedback concerns
- 2026-06-11: Steering Council and DSF Board vote on and approve DEP
What we need from you
We would like to know if we are achieving our goals with this document. For example: Does this make our governance easier to understand? Do you have a better understanding of who is eligible to run for the Steering Council? Is it clear how Django operates from a process perspective?
Beyond that, if you have other feedback around the changes, please share it. This has gone through a high degree of review from the Steering Council and Board over the past 5 months, but that doesn’t mean there aren't areas where it can be improved.
Anyone can participate in this process on the Forum thread here.
PyCharm & Django annual fundraiser
For another year, we are thrilled to partner with our friends at JetBrains on the annual "Buy PyCharm, Support Django" campaign. This is the first of two fundraisers we're running with JetBrains this year, and it's one of the most impactful ways the community can support the Django Software Foundation.
"JetBrains is a cornerstone in the Django community, consistently helping us understand our evolving landscape. Their annual survey provides invaluable insights into the community's needs, trends, and tools, ensuring we stay on the pulse of what matters most."
Jeff Triplett, President, Django Software Foundation
Your support of this campaign helps fund key initiatives such as:
- Django Fellows: Ensuring the rapid development and maintenance of Django.
- Djangonaut Space: Onboarding new contributors to the Django project.
- Django Girls: Making the Django community accessible to programming beginners around the world.
- International events and conferences: Supporting DjangoCons, one-day events, meetups, and other community gatherings around the world.
How the campaign works
From today to May 1, when you purchase PyCharm at a 30% discount through our special campaign link, JetBrains will donate an equal amount to the Django Software Foundation. You get a professional IDE that's trusted by Django developers worldwide, and the DSF receives a matched contribution.
Get 30% off PyCharm, Support Django
Thank you, JetBrains
Beyond this campaign, JetBrains contributes to the Django ecosystem in ways that are easy to overlook but hard to overstate. Their Django Developers Survey, State of Django report, and broader Python Developers Survey give the entire community a clearer picture of where Django and Python are heading each year.
"JetBrains is one of our most generous fundraising partners year after year, helping us sustain and grow the Django ecosystem. We deeply appreciate their commitment, leadership, and collaboration."
Thank you to JetBrains for another year of partnership, and thank you to everyone who participates in this campaign. Together, we can ensure the continued success and growth of the framework we all rely on.
Other ways to donate
If you would like to donate in another way, especially if you are already a PyCharm customer, here are other ways to donate to the DSF:
- On our website via credit card
- Via GitHub Sponsors
- Benevity Workplace Giving Program - If your employer participates, you can make donations to the DSF via payroll deduction.
- For those able to make a larger donation as corporate sponsors ($2000+), check out our corporate sponsors form
Python Software Foundation
Announcing Python Software Foundation Fellow Members for Q4 2025! 🎉
The PSF is pleased to announce its fourth batch of PSF Fellows for 2025! Let us welcome the new PSF Fellows for Q4! The following people continue to do amazing things for the Python community:
Chris Brousseau
Website, LinkedIn, GitHub, Mastodon, X, PyBay, PyBay GitHub
Dave Forgac
Website, Mastodon, GitHub, LinkedIn
Inessa Pawson
James Abel
Website, LinkedIn, GitHub, Bluesky
Karen Dalton
Mia Bajić
Tatiana Andrea Delgadillo Garzofino
Website, GitHub, LinkedIn, Instagram
Thank you for your continued contributions. We have added you to our Fellows Roster.
The above members help support the Python ecosystem by being phenomenal leaders, sustaining the growth of the Python scientific community, maintaining virtual Python communities, maintaining Python libraries, creating educational material, organizing Python events and conferences, starting Python communities in local regions, and overall being great mentors in our community. Each of them continues to help make Python more accessible around the world. To learn more about the new Fellow members, check out their links above.
Let's continue recognizing Pythonistas all over the world for their impact on our community. The criteria for Fellow members is available on our PSF Fellow Membership page. If you would like to nominate someone to be a PSF Fellow, please send a description of their Python accomplishments and their email address to psf-fellow at python.org. We are accepting nominations for Quarter 1 of 2026 through February 20th, 2026.
Are you a PSF Fellow and want to help the Work Group review nominations? Contact us at psf-fellow at python.org.
Real Python
Quiz: Welcome to Real Python!
In this quiz, you’ll test your understanding of Welcome to Real Python!
By working through this quiz, you’ll revisit key platform features like video courses, written tutorials, interactive quizzes, Learning Paths, and the Slack community.
You’ll also review strategies for learning effectively, including immersion, daily progress, and building a habit.

