Planet Python
Last update: April 24, 2026 10:43 AM UTC
April 24, 2026
The Python Coding Stack
Doubling Down on Python in The Age of AI
If you’ve been wondering where I’ve been, yes, it’s been quieter than usual around here. No dramatic reason. Just life, work, the usual stuff.
But one thing kept catching my attention: all the noise outside. Everyone’s talking about AI writing code, agents shipping products, the death of programming. And every so often, someone asks me — usually with that slightly guilty look of someone who thinks they’re about to insult my livelihood — “but do you really still need to learn Python? In this age?”
So I’ve been thinking about it. Properly. And I wanted to share where I’ve landed, because I think the answer matters and it’s different from what some hot takes suggest.
Do I still need to learn to code?
The way we write computer programs is changing. I don’t have a crystal ball for what programming looks like in five or ten years.
But here’s the thing: neither does anyone else.
Here’s what I know from watching others and experimenting with AI tools in my own work: right now, the people getting the most out of AI are the ones who already know how to code.
Here’s the hierarchy as I see it today:
Where we stand today:
No coding knowledge = Little benefit from AI
Little coding knowledge = Intermediate benefit from AI
Intermediate coding knowledge = Great benefit from AI
Great coding knowledge = Superpower-like benefit from AI
We’re in an era where some coding knowledge takes you much further than it could have a few years ago. That’s not an argument against learning to code. It’s an argument for it.
“But what about those vibe coding people? They seem to be shipping things.” Some are. I’ll be honest about that.
The projects I’ve seen from pure vibe coders tend to be smaller, tend to follow well-trodden patterns, and often hit a ceiling when something goes slightly wrong or slightly off-piste. Which is fine for a side project. I just created a useful dashboard to help me organise my day the way I want to using this approach.
But it tells you something: the AI does the heavy lifting on the known stuff. The moment something needs genuine thinking, you need the human who knows what’s going on beneath the surface.
AI-Assisted Human Coding and Human-Assisted AI Coding
Most serious work right now is a partnership.
Sometimes it’s AI-assisted human coding. The human drives, AI assists.
Sometimes it’s human-assisted AI programming. The AI writes most of the code, but the human knows what to ask for, how to steer it toward good design, how to evaluate whether the output actually makes sense.
Even when the coding looks like it’s done by AI, the person prompting and reviewing it is generally an experienced programmer. They’ve learned enough Python to know what’s reasonable, what’s a red flag, and when the AI is confidently wrong.
I’ve been experimenting with agentic AI over the past few weeks. I’ve tackled side projects I’d never have had the time for before: projects I wouldn’t have had time to start, let alone finish. Some of that output will show up in other places — stay tuned! But not here. The Python Coding Stack is the place for my writing.
The Programming Mindset When Talking to AI Agents
Here’s one thing I noticed. Talking to AI agents isn’t like talking to humans. But it isn’t like talking to computers (a.k.a. programming) either.
You need both qualities at the same time.
You need the clarity and communication skills you’d use with a person — explaining context, setting direction, knowing what matters, using clear language.
And you need the precision you’d use when programming — no ambiguity, clear intent, structure.
A good programmer who’s also a good communicator is the best human to work with AI agents.
That’s not coincidental. The same thought habits that make you an effective programmer also make you an effective prompter and reviewer when AI is involved.
I’ll share some of the prompts I’m using in a future post and analyse them to discuss why I wrote what I wrote. Learning to code well gives you an unfair advantage in this new world.
There’s Never Been a Better Time to Learn Python
Here’s what I’ve convinced myself of after all this. There’s never been a better time to learn Python.
A few years ago, you needed to reach an intermediate-to-advanced level before you could do something genuinely useful with Python. The bar was high. Now, with AI assistance, the bar is lower. Less Python knowledge takes you further than ever.
What used to need expertise can now be explored with curiosity and a bit of intermediate-ish-level Python.
That’s not replacing deeper learning. It’s making the entry point more accessible. And once you’re in, you can go as deep as you want.
So yes, I’ll keep coding in Python. Sometimes with a bit of help from AI. Sometimes with a lot of help from AI.
And yes, I’ll keep writing about Python here, as I’ve always done.
The Fun Factor
Here’s the thing we don’t talk about enough when discussing programming. It’s fun. It’s challenging. It’s rewarding. It’s fulfilling. It’s stimulating. It keeps my brain active.
I code because I enjoy it. I’ll keep writing about Python because I enjoy that, too, and because I find value in sharing here — for myself and, hopefully, for you too.
Normal service resumes here. More Python posts coming. And maybe, just maybe, some of the AI things I’m learning will make their way in, too.
Psst–did you know you can become a premium member to be a part of The Club? It would mean so much to me!
Join The Club, the exclusive area for paid subscribers for more Python posts, videos, a members’ forum, and more.
You can also support this publication by making a one-off contribution of any amount you wish.
For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!
Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.
And you can find out more about me at stephengruppetta.com
Python GUIs
Streamlit Widgets — An Overview of Commonly Used Widgets in Streamlit
Streamlit is a powerful Python library designed to build interactive web apps with minimal code. One of its core features is an extensive collection of widgets that allow users to interact with the app in various ways, such as providing inputs, triggering actions, or visualizing data. Streamlit makes it easy to create these elements with simple, intuitive syntax.
In the previous tutorial, we saw how to get started with Streamlit and run it on your local host. Here, we'll cover the main widgets available in Streamlit, explaining how they work, and how to customize them using various examples.
Getting started with Streamlit widgets
Widgets in Streamlit are simple yet customizable. By combining multiple widgets together in different layouts, you can create interactive dashboards, data visualizations, and forms. Whether you want to include buttons, sliders, checkboxes, or display tables and plots, Streamlit offers a wide range of widgets that cater to different needs. Adding a widget to your Streamlit app is as easy as calling a single function, and customizing its behavior requires only minimal code.
In this guide, we will explore the various widgets that Streamlit offers, from basic input elements like text boxes and radio buttons to more complex visual components like plots and tables. We'll also dive into how to customize the behavior of these widgets to suit your app's specific requirements, such as adjusting slider ranges, modifying button text, and adding captions to images. By the end of this article, you'll have a solid understanding of how to leverage Streamlit widgets to enhance the interactivity and functionality of your applications.
Let's start by exploring the basic widgets Streamlit provides and how you can easily integrate them into your app. Make sure you have installed the streamlit module on your system and imported it in your Python file.
Buttons in Streamlit
Buttons are one of the most essential and commonly used components in any interactive application. In Streamlit, the st.button() widget provides an easy and effective way to allow users to trigger actions, interact with your app, or make decisions. With just a few lines of Python code, you can integrate buttons into your Streamlit app to perform tasks like data processing, changing app state, or displaying content.
The st.button() widget creates a clickable button on the interface. When clicked, it returns True, which can be used to trigger specific actions. If the button is not clicked, it returns False.
The basic syntax for creating a button in Streamlit is given below:
import streamlit as st
if st.button('Click Me'):
    st.write("Button clicked!")
A Streamlit button widget
In this simple example, the button's label is "Click Me". When clicked, the app prints "Button clicked!" to the interface.
The most common customization for a button is its label — the text that appears on the button. The label is the first argument you pass to the st.button() function. For example, we can change the "Click Me" to a "Submit" button.
import streamlit as st
if st.button('Submit'):
    st.write("Form submitted successfully!")
A Streamlit button widget with a different label
You can customize this label to suit the action you want to convey to the user. Whether it's Submit, Cancel, Run, or any custom text, Streamlit will display it as the button text.
Moreover, Streamlit makes it simple to handle multiple buttons on the same page. You can define multiple st.button() widgets, each with its own label and action. Here is an example of how we can add multiple buttons.
import streamlit as st
if st.button('Button A'):
    st.write("Button A clicked!")

if st.button('Button B'):
    st.write("Button B clicked!")
Multiple Streamlit buttons
Similarly, you can add as many buttons as you wish.
Checkboxes in Streamlit
Checkboxes are a fundamental widget in Streamlit that allows users to toggle between two states: checked (True) or unchecked (False). They are ideal for scenarios where you want users to make binary choices, such as showing or hiding content, enabling or disabling features, or making Yes/No decisions. Unlike radio buttons, several checkboxes can be checked at the same time. The st.checkbox() widget is incredibly versatile and easy to implement, making it a key component in creating interactive applications.
As mentioned, the st.checkbox() widget creates a simple checkbox in your Streamlit app. When a user checks the box, it returns True, and when the box is unchecked, it returns False. Here is a basic example of checkboxes in Streamlit.
show_text = st.checkbox('Show text')

if show_text:
    st.write("You checked the box!")
Streamlit checkbox
In this example, the checkbox is labeled "Show text". When the user checks the box, the app displays "You checked the box!" on the interface.
The most common customization for a checkbox is its label, which appears next to the checkbox itself. This label should clearly indicate what action will occur when the checkbox is checked.
subscribe = st.checkbox('Subscribe to our newsletter')

if subscribe:
    st.write("Thanks for subscribing!")
Streamlit checkbox with custom label
Here, the checkbox label is customized to "Subscribe to our newsletter", and when the user checks it, a thank-you message is displayed.
By default, checkboxes in Streamlit are unchecked (False). However, you can change this by setting the value parameter to True, so that the checkbox is pre-checked when the app loads.
subscribe = st.checkbox('Subscribe to our newsletter', value=True)

if subscribe:
    st.write("Thanks for subscribing!")
In this case, the checkbox is checked by default, and the welcome message is shown immediately when the app starts.
Furthermore, Streamlit makes it easy to handle multiple checkboxes, each controlling different parts of your app. You can create several checkboxes, and based on their states, you can conditionally display content or execute logic.
option1 = st.checkbox('Enable Feature 1')
option2 = st.checkbox('Enable Feature 2')

if option1:
    st.write("Feature 1 is enabled!")

if option2:
    st.write("Feature 2 is enabled!")
Multiple Streamlit checkboxes
Sometimes, you may need to generate checkboxes dynamically, especially if the number of checkboxes depends on user input or the result of some computation.
options = ['Apple', 'Banana', 'Cherry']
selected = []

for fruit in options:
    if st.checkbox(fruit):
        selected.append(fruit)

st.write(f'Selected fruits: {", ".join(selected)}')
Getting Streamlit checkboxes selection state
In this example, a list of fruit names is used to dynamically create a set of checkboxes. When a user checks a box, the corresponding fruit is added to a list, and the selected fruits are displayed.
Radio buttons in Streamlit
Radio buttons are an essential widget in Streamlit that allow users to select a single option from a predefined list of choices. They are perfect for scenarios where only one selection can be made at a time, such as selecting a category, choosing between modes, or answering questions. The st.radio() widget provides an intuitive and simple way for users to interact with your Streamlit application.
The st.radio() widget creates a list of radio buttons where the user can select only one option at a time. It returns the selected option, which can be used to drive various actions in the app. Let us have a look at a very simple example of a radio button.
choice = st.radio('Choose an option:', ['Option 1', 'Option 2', 'Option 3'])
st.write(f'You selected: {choice}')
Streamlit radio buttons
In this example:
- A label "Choose an option:" is provided.
- The user can select one of three options: "Option 1", "Option 2", or "Option 3".
- The app displays the selected option below the radio buttons.
By default, the first option in the list of radio buttons is selected. However, you can customize this by setting the index parameter, which specifies which option should be selected when the app loads.
travel_mode = st.radio('Preferred mode of travel:', ['Car', 'Bike', 'Plane'], index=2)
st.write(f'You selected: {travel_mode}')
Streamlit radio buttons with default selection
The index=2 means the third option ("Plane") is selected by default when the app is loaded.
Select box in Streamlit
A select box in Streamlit is a widget that lets users choose a single option from a dropdown list. This widget is perfect for situations where you have a predefined list of options but want to save space on your interface by not displaying all the options upfront. The st.selectbox() widget is highly customizable and can be used for anything from simple selections to dynamically populated lists. It's ideal for situations where you want to offer multiple choices, but only display the currently selected item.
Let us create a simple dropdown menu with three options.
option = st.selectbox('Choose an option:', ['Option 1', 'Option 2', 'Option 3'])
st.write(f'You selected: {option}')
Streamlit select box
In this example:
- A label "Choose an option:" is provided.
- The user can select one of the options from the dropdown list: "Option 1", "Option 2", or "Option 3".
- The app displays the selected option.
By default, the first item in the list of options is selected in a select box. However, you can specify a different default selection by setting the index parameter. The index corresponds to the zero-based position of the option in the list.
city = st.selectbox('Select your city:', ['New York', 'London', 'Paris', 'Tokyo'], index=2)
st.write(f'You selected: {city}')
When you run this code, you will notice that Paris is selected as the city because it is at index 2.
Slider in Streamlit
Sliders are a popular widget in Streamlit that allows users to select values by dragging a handle across a range. They are perfect for collecting numerical inputs or setting parameters like dates, time, and ranges. Streamlit's st.slider() widget provides a flexible and easy-to-use interface for adding sliders to your app, enabling users to interactively choose values with precision. It can handle integers, floats, dates, and times, making it highly versatile for various use cases.
Let us create a simple slider where a user can select any number from 0 to 100.
value = st.slider('Select a value:', 0, 100)
st.write(f'You selected: {value}')
Streamlit slider widget
By default, the slider handle is set to the minimum value of the range, but you can specify a default value by providing a value argument. This is useful when you want the slider to start at a specific position.
temperature = st.slider('Set the temperature:', -50, 50, value=20)
st.write(f'Temperature set to: {temperature}°C')
Streamlit slider widget with default value
By default, the slider starts at 20, the value passed as the default temperature.
Sliders in Streamlit can handle both integers and floating-point numbers. To use floating-point values, you simply specify a range with float values. You can also control the step size between values using the step parameter.
price = st.slider('Select a price:', 0.0, 1000.0, step=0.5)
st.write(f'Price selected: ${price}')
Streamlit float slider widget
In this example:
- The slider lets the user select a price between 0.0 and 1000.0.
- The step=0.5 ensures that the slider increments or decrements by 0.5 units.
Streamlit's slider widget also allows users to select a range of values, which is particularly useful when you need two values (e.g., a start and end date, or a minimum and maximum range). To do this, provide a tuple as the value argument.
salary_range = st.slider('Select a salary range:', 20000, 100000, (30000, 80000))
st.write(f'Selected salary range: ${salary_range[0]} - ${salary_range[1]}')
Streamlit range slider widget
Here, the slider has a range from 20000 to 100000. The user can select a minimum and maximum value for the salary range (initially set between 30000 and 80000).
Apart from that, Streamlit sliders can also handle date and time values, which is especially useful when users need to select a specific day or time range. You can create sliders with datetime.date and datetime.time objects to allow users to make date-based selections.
import streamlit as st
import datetime
date = st.slider('Select a date:', datetime.date(2020, 1, 1), datetime.date(2024, 12, 31), value=datetime.date(2023, 1, 1))
Streamlit date slider
In this example, the user can select a date between January 1, 2020, and December 31, 2024. The default value is set to January 1, 2023.
You can use sliders with any other widgets or functions to make it more dynamic and interactive. For example, we can combine them with conditional logic to adjust the behavior or content of an app based on the selected value. This is particularly useful for creating dynamic and interactive experiences.
rating = st.slider('Rate our service:', 1, 5)

if rating <= 2:
    st.write('We are sorry to hear that. Please let us know how we can improve.')
else:
    st.write('Thank you for your feedback!')
Streamlit slider interactivity
The slider allows users to rate a service between 1 and 5. Based on the rating, different messages are displayed.
Different input options in Streamlit
In Streamlit, input options are essential for creating interactive applications, allowing users to interact with data and visualizations dynamically. Streamlit offers a variety of input widgets that cater to different types of data and user interactions. Below are the main input options that we will be discussing in this section:
- Text Input: The st.text_input() widget allows users to enter single-line text data. It's useful for collecting short information like names, email addresses, or any single-line text input.
- Text Area: For multi-line input, st.text_area() is the preferred choice. It provides users with a larger space for entering longer content, like descriptions, notes, or code snippets.
- Number Input: Streamlit provides the st.number_input() widget for numerical inputs. Users can specify integer or float values within a defined range and step size, which makes it suitable for settings like entering age, prices, or percentages.
- Date and Time Input: For handling date and time, Streamlit provides the st.date_input() and st.time_input() widgets, allowing users to pick dates and times easily. This is useful for scheduling or filtering data by time ranges.
Now, let us discuss each of these options in detail with examples.
Date and Time Input
Among its many features, Streamlit provides robust support for handling date and time inputs, allowing developers to easily integrate date and time pickers into their applications. This is useful in a wide variety of contexts, such as scheduling applications, time series analysis, filtering data based on time ranges, or tracking events.
The st.date_input() widget in Streamlit provides a simple and intuitive interface for users to select dates. This widget can be used to input a single date or a range of dates. The basic syntax of the date_input() function is given below with its possible parameter values.
st.date_input("Date", value=None, min_value=None, max_value=None, key=None, help=None, on_change=None)
Streamlit date input
Let us explore the parameters in turn:
- label: The label to display alongside the widget.
- value: The default date(s) to show in the widget. This can be a single date or a tuple of two dates for range selection. Defaults to today's date.
- min_value: The earliest date that can be selected. Defaults to no minimum.
- max_value: The latest date that can be selected. Defaults to no maximum.
- key: An optional key that uniquely identifies this widget.
- help: A tooltip that displays when the user hovers over the widget.
- on_change: A callback function that runs when the input changes.
In addition to that, you can also use st.date_input() to allow users to pick a range of dates by passing a tuple of two datetime.date objects as the default value.
import streamlit as st
import datetime
# Date range input
start_date = datetime.date(2023, 9, 1)
end_date = datetime.date(2023, 9, 30)
date_range = st.date_input("Select a date range", (start_date, end_date))
st.write(f"Start date: {date_range[0]}")
st.write(f"End date: {date_range[1]}")
Streamlit date range input
In this case, the user can select a range of dates. The widget will return a tuple containing the start and end dates.
On the other hand, the st.time_input() widget allows users to select a specific time. This widget is useful in scenarios like scheduling events or setting alarms. Here is the simple syntax of the time_input() function with its possible parameter values.
st.time_input(label="Time", value=None, key=None, help=None, on_change=None)
Streamlit time input
The parameters in the time_input functions:
- label: The label displayed next to the widget.
- value: The default time shown in the widget. This can be a datetime.time object. Defaults to the current time.
- key: An optional key that uniquely identifies the widget.
- help: Tooltip displayed when the user hovers over the widget.
- on_change: A callback function that runs when the input changes.
In many applications, you'll need both date and time inputs together. Although Streamlit doesn't provide a single widget for selecting both date and time, you can combine the st.date_input() and st.time_input() widgets to achieve this functionality.
import streamlit as st
import datetime
# Date input
date = st.date_input("Pick a date", datetime.date.today())
# Time input
time = st.time_input("Pick a time", datetime.time(9, 00))
# Combine date and time
selected_datetime = datetime.datetime.combine(date, time)
st.write(f"Selected date and time: {selected_datetime}")
Combining Streamlit widgets for selecting date and time
This code lets the user select both a date and a time, then combines them into a single datetime.datetime object.
Text and area input
One of the essential features of any web app is gathering user input, and Streamlit provides several widgets to capture user input easily. Among them, text input widgets play a crucial role in allowing users to input free-form text data. In this article, we will dive into Streamlit's text input widgets and explore their capabilities, practical applications, and customization options.
Streamlit offers two primary widgets for capturing text input:
- Single-Line Text Input: Captured using the st.text_input() widget.
- Multi-Line Text Area: Captured using the st.text_area() widget.
Both widgets are used to gather text-based input from users but differ in the amount of text they are designed to handle. Let's take a closer look at each of these widgets.
The st.text_input() widget allows users to input a single line of text. This is ideal for cases where you want to gather short responses such as names, email addresses, usernames, search queries, or small pieces of data.
# Single-line text input
name = st.text_input("Enter your name")
st.write(f"Hello, {name}!")
Streamlit text input widget
This example creates a simple input box where users can enter their names. The app then displays a message using the entered text.
In some cases, we may need to limit the total number of characters entered by the user. So, we can use the max_chars parameter as shown below:
# Text input with character limit
username = st.text_input("Enter your username", max_chars=15)
st.write(f"Your username is: {username}")
This is similar to the previous example, but this time the user is not allowed to enter more than 15 characters. In the text_input() function, we can also specify the type of text we want the user to enter. For example, if we want a user to enter an email and password, we can specify those as shown below:
email = st.text_input("Enter your email")
st.write(f"Email entered: {email}")
password = st.text_input("Enter your password", type='password')
st.write(f"Password length: {len(password)} characters")
Streamlit password input widget
As shown above, when we enter the password, it will not be visible.
The st.text_area() widget is ideal when you need to capture longer inputs or multi-line text, such as feedback, code snippets, or detailed descriptions. It provides users with a resizable text area where they can enter more extensive information.
# Multi-line text area
feedback = st.text_area("Your feedback", "Enter your comments here...")
st.write(f"Your feedback: {feedback}")
Streamlit text area for text input
In this example, the text area allows the user to enter multiple lines of text. The entered feedback is then displayed. As with text_input(), you can limit the maximum number of characters by assigning a value to the max_chars parameter.
Number Input in Streamlit
The st.number_input() widget allows users to input numbers, offering several customization options like setting minimum and maximum values, adjusting step increments, and choosing between integers and floating-point numbers. This widget is particularly useful for inputs like prices, quantities, percentages, or any scenario where numerical precision is required.
The simplest form of the st.number_input() widget involves asking the user to input a number without setting any constraints like minimum or maximum values.
age = st.number_input("Enter your age")
st.write(f"Your age is: {age}")
Streamlit numeric input
In this example, the widget accepts any number, and the user's input is displayed back on the screen.
You can restrict the input to a certain range by specifying the min_value and max_value parameters. This is particularly useful when you need to validate the input against specific boundaries.
# Number input with a range
rating = st.number_input("Rate your experience", min_value=1, max_value=5)
st.write(f"Your rating is: {rating}")
Numeric input with validation
In this example, the user can only select a rating between 1 and 5, ensuring valid input. You can set a default value that appears in the input box when the app first loads, and you can also define how much the value should increase or decrease when the user interacts with the widget.
# Number input with default value and step size
quantity = st.number_input("Select quantity", min_value=0, max_value=100, value=10, step=5)
st.write(f"You have selected: {quantity} units")
Numeric input with validation and step size
In this example, the widget starts with a default value of 10, and the user can adjust the value in increments of 5, from 0 to 100.
File uploader in Streamlit
Streamlit provides the st.file_uploader() widget, which allows users to upload files directly into your Streamlit app. This functionality is vital in many applications, such as data analysis tools, machine learning models, and document processing systems. The widget supports files of various types, such as text files, CSVs, images, PDFs, and more. Once uploaded, the files can be processed or analyzed directly within the Streamlit app.
The simplest use case of st.file_uploader() is to upload a single file of a specific type.
import streamlit as st
import pandas as pd

# Single file uploader for CSV files
uploaded_file = st.file_uploader("Upload a CSV file", type="csv")

if uploaded_file is not None:
    df = pd.read_csv(uploaded_file)
    st.write(df)
Upload CSV files
In this example, the user can upload a CSV file, and the app reads and displays the contents using the pandas library.
We can also upload images and show them in our web app. For images, we need to specify the type of images as shown in the example below:
import streamlit as st
from PIL import Image
# Upload an image file
uploaded_image = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"])
if uploaded_image is not None:
    image = Image.open(uploaded_image)
    st.image(image, caption="Uploaded Image", use_column_width=True)
Upload image files
Here, users can upload image files in formats like PNG, JPG, or JPEG. Once uploaded, the app uses the Pillow library to open and display the image.
Furthermore, you can use the accept_multiple_files parameter to allow users to upload several files at once. This is useful when handling bulk uploads or cases where multiple files need to be processed together.
# Multiple file uploader for text files
uploaded_files = st.file_uploader("Upload multiple text files", type="txt", accept_multiple_files=True)
if uploaded_files:
    for uploaded_file in uploaded_files:
        st.write(f"File name: {uploaded_file.name}")
        content = uploaded_file.read().decode("utf-8")
        st.write(content)
Uploading multiple files
In this example, the user can upload multiple text files. The app reads and displays the contents of each file.
Streamlit also allows you to validate uploaded files based on specific conditions, such as file size, format, or content.
uploaded_file = st.file_uploader("Upload a file")

if uploaded_file is not None:
    # Check file size (less than 2 MB)
    file_size = uploaded_file.size
    if file_size > 2 * 1024 * 1024:
        st.error("File size exceeds 2 MB limit!")
    else:
        st.success("File uploaded successfully.")
        st.write(f"File size: {file_size} bytes")
Uploading file too large
As you can see, the file was not uploaded because it exceeds the max size.
Conclusion
Streamlit's widget system is a powerful and intuitive way to add interactivity to web applications. From simple text and number inputs to more complex file uploaders and sliders, Streamlit provides a wide range of widgets that allow users to seamlessly interact with your app. These widgets can be easily integrated into data-driven applications, enabling users to input data, upload files, and adjust parameters in real-time.
Streamlit widgets are designed to be highly customizable, offering various configuration options like minimum and maximum values, step sizes, file type restrictions, and dynamic callbacks. With minimal code, developers can create sophisticated, interactive applications that enhance user experience and make complex workflows more accessible.
April 23, 2026
PyCon
Asking the Key Questions: Q&A with the PyCon US 2026 keynote speakers: Rachell Calhoun and Tim Schilling
Welcome to our annual blog series where we ask each of our PyCon US 2026 keynote speakers about their journey into tech, how excited they are for PyCon US, and any tips they can provide for an awesome conference experience!
Thank you to Rachell and Tim for this interview! You can learn more about their keynote on the PyCon US Keynote Speakers page, and you can also attend their meet and greet at the PSF Booth in the Expo Hall on Friday, May 15, from 1 to 2 pm PT.
Without giving away too many spoilers, can you tell us what your keynote is about?
Tim: Did Rachell answer this question already? Can I cheat off her response? No?
Hmm... Well, it's about Djangonaut Space, a contributor mentorship program for members of the Django community. It'll talk about why the founders created it, what it does, how it works, and why it works.
If you're a part of an open-source community, or want to be, you may find the talk interesting. We've tackled some really hard freaking problems: improving diversity, finding regular contributors, and helping them grow into community leaders.
When we sat down to discuss what we wanted to speak about (thanks again, Jon, for advocating for us!), we decided we wanted to focus on the human element. We lean pretty hard on that in our community, and our keynote reflects that.
So, what did Rachell say?
Rachell: Tim and I want to talk about what happens when you invest in the people behind the commits. We're going to take you inside our open source mentorship program, Djangonaut Space. The success stories, the ripple effects, and how one person's growth becomes someone else's opportunity. Oh yeah, and there's some open source in there too!
How did you get started in tech/Python? Did you have a friend or a mentor that helped you?
Rachell: I had been teaching English for a while in South Korea and I wanted to do something else because I had kind of hit a plateau with where I was. So I tried to get a job at an e-publishing company that wanted someone with English teaching experience, some tech, and some business. I had one out of three.
I remember one of the questions they asked was if I knew what HTML was. I said it had something to do with the web. I had no idea beyond that. Kind of thankfully, I did not get that job, but it was the moment I realized tech touches everything, so I started learning.
Did I have mentors and friends? Yes, lots, and I would not be here without them. The most impactful learning experience from one was when I asked a friend a question and he said “I don’t know, let’s google it”. And I was shocked that he didn’t know everything. This normalized not knowing everything and made me feel a lot more comfortable while I was learning and even today!
Tim: As a child, like a 5-year-old child, my mom was supportive of me messing around on the home computer. I still have this fuzzy memory of me at school "fixing" the computer in kindergarten. In reality, I was just unplugging and plugging things back in, but hey, it worked!
I got into programming in college when I was accidentally put into the computer engineering program. I noticed it about two weeks before courses started but figured it couldn't be too much different than the business school's management information systems degree. After my freshman year, I got an internship at a local company where I learned so much about web development, the software development lifecycle, and working in a corporate environment. That was incredibly helpful to my development.
A year after college, I decided to build my own SaaS (and move across the country). That's when I finally picked up Python and Django. I got started with "Learn Python the Hard Way" and was lucky enough to work with Greg Newman on a project early on.
Though to specifically answer your question, outside the few years I worked in a corporate environment with a manager, I haven't had a stereotypical mentor. I've absorbed knowledge and wisdom from friends, colleagues, and the old standby, the trial and error approach.
What do you think the most important work you’ve ever done is? Or if you think it might still be in the future, can you tell us something about your plans?
Rachell: Hands down it's helping people find their opportunity for upward mobility. It's so easy to get in the weeds with a language, a framework, or even industry specific stuff, especially at conferences. But the point I always come back to with any community work that I do or have done is that this work is giving people their opportunity for upward mobility in a world where that is extremely difficult.
More important than Python, or this package, or a conference talk is helping people be financially stable. Housing, food on the table, health insurance, not constantly stressed about making ends meet. People can't thrive and volunteer their time if they are just surviving.
For me personally, I was a late career transitioner, and the upward mobility I experienced didn’t just help me but also my extended family. It's so impactful. I even need reminders sometimes of how impactful community work can be because I get so deep into the details about, ya know, “what is the exact hex of the logo on this t-shirt for an event”.
I know folks who have directly benefited from the community organizing I've been a part of in Django Girls, Djangonaut Space, and even just personally. Some have transitioned careers, others found a job after a long time struggling. A good friend attributes the ability to have a kid (a very cute one!) to me because without the career transition into tech, she wouldn't have been able to afford to raise one.
And this motivates me and keeps me organizing. Not everyone who shows up to a Djangonaut Space session is going to have some life-changing experience, but maybe one will. And that's enough.
Tim: I'm a middle child, so I have literally grown up with a complex to please people, mediate situations, and help things move forward. It's also probably why I crave praise but absolutely hate getting it. I'm pretty sure that's why my answer to this is helping people contribute to open-source software.
It's something I've been doing in one way or another for a while. In 2020, my efforts picked up quite a bit with Underdog Devs and then getting way more involved in the Django community.
I think the reason why I find it important was solidified when I read my friend Eric Matthes' post "Coding is Political". Programming, software development, and software engineering are something anyone can learn; you don't need a degree for them, and you can get paid a lot to do them. Contributing to open source isn't the only way to learn these skills, but it's freely accessible and provides a benefit to others.
If I can help cultivate organizational cultures and systems that support more people contributing to open source so they learn skills that help them get paid, well, that'd be pretty cool.
Have you been to PyCon US before? What are you looking forward to?
Rachell: I’ve been to PyCon US a few times, but it’s been a while! It’ll be nice to be back. I sometimes feel a bit out of my depth because there are just so many people, but I counter that with finding pockets of smaller spaces to really connect with people. Which is why I like the open spaces, they’re so fun and I always meet new people.
Tim: No, this will be my first PyCon US. Back in December, I was so excited to come to my first PyCon, blend into the background, and enjoy a new conference purely as an attendee. In fact, I had bought my plane tickets, lodging, and my conference tickets. Then Elaine and Jon invited us to keynote, which threw all those plans right out the window.
What I'm looking forward to is meeting a bunch of new people and experiencing the PyCon US culture. I'm excited to see the similarities and differences between DjangoCon US and PyCon US.
Do you have any advice for first-time conference goers or any general conference tips?
Rachell: Think carefully about what kind of experience you want, and what you want to get out of the conference. If you want to meet people, for example, a great way to do that is to volunteer, or find some interesting open spaces to attend. I love volunteering at registration because you get to see and meet so many people and they have to come to you! It makes saying hi super easy.
If you are interested in certain topics, find those talks and go there! Other people attending the talk are likely just as interested.
Lastly, drink a lot of water, take breaks when you need them. For example, go outside for a short walk to get some fresh air, it can really recharge you in a mid-day energy slump.
Tim: In general, eat some protein in the morning. I've been contemplating bringing protein powder to supplement my breakfast. Though this may be a me thing since I tend to not eat when I get nervous.
The better advice is to reflect on your goals for the conference before the conference. Kojo Idrissa shared this wisdom at a DjangoCon US and it made so much sense. By knowing what you want to come away from the conference with, you can put yourself in a position to make that happen. For example, at that event, I wanted to get to know people. So I hung out in the common areas just before dinner. I ate dinner with different groups each night with zero planning. One of those nights, I was in the back of a van with the hosts of my favorite podcast.
Can you tell us about an open source or open culture project that you think not enough people know about?
Rachell: Have you heard of Outreachy? It's an open source mentorship program that pays!
"Outreachy provides internships in open source. Outreachy provides internships to anyone from any background who faces underrepresentation, systemic bias, or discrimination in the technical industry where they are living."
Djangonaut Space was not built in a vacuum, and this was definitely one of the organizations we pulled inspiration from in creating it. Although we aren’t there yet, I think paying people for their time is amazing and really important in lowering the barriers to participating in open source. Working for free is a huge barrier.
Tim: Yes, I can! I'd love to shout out the Django Commons organization. It's a home for community-maintained Django packages. It seeks to improve the maintenance experience for package maintainers.
What excites me about this organization is that it's working to provide a framework for making the Django ecosystem (and by extension the Python ecosystem) more robust. Maintaining an open-source package can feel like a lonely endeavor at times, but this is a community that wants to support you.
It does so by:
- Providing a home for community-maintained packages and supporting easy transitions of maintainers
- Managing teams and permissions for contributors, maintainers, and administrators of each package
- Having multiple organization administrators (we just brought new people on in April!)
- Automating actions as much as possible
- Providing best practices for packages
- Providing a mechanism for being paid for open-source work
If you maintain a Python or Django package and are looking for a more community-based maintenance approach, consider transitioning your project to Django Commons. Or if you're looking to get started contributing to open source and find the larger projects a bit intimidating, consider contributing to one of our packages!
Python Software Foundation
Announcing Python Software Foundation Fellow Members for Q1 2026! 🎉
The PSF is pleased to announce its first batch of PSF Fellows for 2026. Let us welcome the new PSF Fellows for Q1! The following people continue to do amazing things for the Python community:
Bill Deegan
El-karece Asiedu
(James) Kanin Kearpimy
Jonas Obrist
Kristen McIntyre
Lucie Anglade
Phebe Polk
Philippe Gagnon
Sarah Kuchinsky
Simon Charette
Sony Valdez
Stan Ulbrych
Steve Yonkeu
Thank you for your continued contributions. We have added you to our Fellows Roster.
The above members help support the Python ecosystem by being phenomenal leaders, sustaining the growth of the Python scientific community, maintaining virtual Python communities, maintaining Python libraries, creating educational material, organizing Python events and conferences, starting Python communities in local regions, and overall being great mentors in our community. Each of them continues to help make Python more accessible around the world. To learn more about the new Fellow members, check out their links above.
Let's continue recognizing Pythonistas all over the world for their impact on our community. The criteria for Fellow members is available on our PSF Fellow Membership page. If you would like to nominate someone to be a PSF Fellow, please send a description of their Python accomplishments and their email address to psf-fellow at python.org. We are accepting nominations for Quarter 2 of 2026 through May 20th, 2026.
Are you a PSF Fellow and want to help the Work Group review nominations? Contact us at psf-fellow at python.org.
EuroPython
April Newsletter: First Keynote Speakers Announcements
Hi all Pythonistas! 👋
As the spring sun charges us all with new energy, the EuroPython team has been busy translating that spring buzz into exciting progress with the conference. But don’t take our word for it—see for yourself:
🗣️ Keynote Speakers
We are thrilled to announce that these three speakers will be returning to EuroPython:
Guido van Rossum, Creator of Python
Guido van Rossum created Python in 1990 while working at CWI in Amsterdam. He was the language's BDFL until he stepped down in 2018. He has held various tech jobs, including Senior Staff Engineer at Google and Principal Engineer at Dropbox. He is currently a Distinguished Engineer at Microsoft, where he is still actively involved in Python's development. Born and raised in the Netherlands, he moved to the US in 1995 and currently lives with his family in the Bay Area.
Pablo Galindo Salgado, CPython Core Developer
Pablo Galindo Salgado works in the Python team at Hudson River Trading. He is a CPython core developer and a theoretical physicist specialising in general relativity and black hole physics. He serves on the Python Steering Council, having been re-elected for his 6th term in 2026, and was the release manager for Python 3.10 and 3.11. He also has a cat, though it does not code.
Łukasz Langa, Creator of Black
Łukasz Langa is a failed comedian. Wannabe musician. Python guy at Meta. Co-host of the core.py podcast. Former CPython Developer in Residence at the Python Software Foundation. Former Python release manager. Creator of Black.
Stay tuned for more information about EuroPython 2026 keynotes!
💸 Financial Aid
There is still time to apply for Financial Aid to attend EuroPython 2026, whether you want to attend in-person, or remotely. The first round of applications is now closed, however you have until 11 May to submit to the second round.
We strongly encourage anyone who needs support to attend the event to apply. Results from the first round of applications should be hitting inboxes very soon - but don’t worry, if you were not accepted, we will automatically consider you for the second round. 🤗
👉 For full details, including how to apply, visit https://ep2026.europython.eu/finaid/
✉️ Visa Support Letters
If you are attending EuroPython in person this year and you need a visa to enter Poland, then we are able to provide a letter in support of your visa application. Poland is part of the EU and Schengen Area, and we recommend referring to the official Polish government guidance to verify entry requirements before confirming your travel arrangements.
If you do need a visa, please book an appointment to obtain one as soon as possible, and let us know at least one week before your appointment so that we can prepare the letter for you.
👉 To request a visa support letter, and for links to official guidance on entry to Poland, please visit https://ep2026.europython.eu/visa/
🚀 Startup Row: A New Opportunity for Startups
New this year, Startup Row gives early-stage companies a focused way to get in front of a highly engaged developer audience. Enjoy a 3-day exhibition space to share what you’re building and connect with the Python community. Spots are limited—secure yours early.
👉 Learn more & get in touch: https://ep2026.europython.eu/sponsorship/sponsor/
👩🏫 Speaker Mentorship Program: Orientation
Do you want to improve your stage presence, learn how to confidently handle audience questions, and make sure that your talk is as engaging as possible? Our experienced community members are keen to support you! 💚
EuroPython 2026 Speaker Mentorship Programme orientation meeting
The Speaker Mentorship Team is running an online workshop on 3 June 2026 at 18:00 CEST. Whilst newer speakers are particularly encouraged to attend, we’ve designed the session for people of all experience levels, and we welcome speakers from other conferences, too! 🐍
👉 Register now to confirm your place: https://forms.gle/uZKwuAiBkUSmx7gn7
💰 Sponsorship: Packages Selling Out
While the Gold and Platinum sponsorship packages are almost sold out (wow!), we still have a range of add-ons to help your company really connect with the EuroPython audience:
- Social Event Sponsor
- Hackathon Sponsor
- Speaker Dinner Sponsor
- Financial Aid Sponsor
🌐 More information at: https://ep2026.europython.eu/sponsorship/sponsor/#optional-add-ons
👉 Contact us at sponsoring@europython.eu
🤝 Community Partners
Here's the latest from our partners:
Warsaw Python Pizza
Warsaw Python Pizza is a community-driven micro conference for Python enthusiasts featuring short, practical talks and great pizza. It will take place on 9 May, 2026, at PJAIT, Koszykowa 86 in Warsaw. Ticket sales are planned to start on April 24, and they expect to announce the speaker lineup on April 27.
👉 Reach Warsaw Python Pizza: warsawpythonpizza@gmail.com
🌐 For more info visit https://warsaw.python.pizza/
PyData Trójmiasto
PyData Trójmiasto is an event that brings together AI/ML enthusiasts. Originally based in the Gdańsk area, currently it is held monthly in Gdynia. From the very beginning, it has focused on building a local community through the exchange of knowledge and experience. Organizers are actively looking for speakers and sponsors for future editions.
👉 Get in touch with PyData Trójmiasto via kontakt@pydata-trojmiasto.pl or their social media.
🌐 Join them on Meetup: https://www.meetup.com/PyData-Trojmiasto
DevOpsDays Kraków 2026
DevOpsDays Kraków 2026 is back on 4 July at ECHO Miasta, Kraków — the Call for Proposals is open until 10 May.
👉 Submit a 30-minute talk or a 5-minute lightning talk at https://devopsdays.org/krakow. Real stories from production beat polished decks every time.
🔗 You can find more information about EuroPython 2026 Community Partners at https://ep2026.europython.eu/community-partners/
📣 Community Outreach: From Lithuania To Texas
April’s been a proper whirlwind for the community, and we’ve been right there cheering on our fellow organisers:
DjangoCon Europe
Several members of the EuroPython Society attended DjangoCon Europe in Athens between 15 and 19 April. This was the first DjangoCon Europe to be held in Greece, and the location of the conference - immediately adjacent to the Lyceum of Aristotle and the National Gardens - felt like the perfect backdrop for three incredible days of talks. ✨
Andrew Northall, member of the EuroPython 2026 Communications Team, promoting EuroPython at DjangoCon Europe 2026
PyCon DE & PyData
We headed over to Darmstadt to support the community at the joint PyCon DE & PyData conference. This four-day event was packed with talks, masterclasses, sprints, and PyLadies sessions. EuroPython Society was one of the Diversity Sponsors of the conference.
PyCon Lithuania
We also supported the 15th edition of PyCon Lithuania, held in Vilnius from 8 to 10 May. It’s a three-day event and the largest Python and PyData gathering in the Baltic and Nordic regions, with over 600 participants.
PyTexas
Last but not least, we also visited PyTexas, a three-day conference in Austin, which celebrated its 20th anniversary. In a lightning talk, we invited attendees to join us in Kraków in July to enjoy pierogi, meet European friends, and get to know the lovely city.
That EuroPython Society banner made the trip all the way to Austin for PyTexas with Ege Akman
🎁 Sponsors Spotlight
We'd like to thank Manychat for sponsoring EuroPython.
Manychat builds AI-powered chat automation for 1M+ creators and brands at real production scale.
View job openings at Manychat
👋 Stay Connected
Follow us on social media and subscribe to our newsletter for all the updates:
👉 Sign up for the newsletter: https://blog.europython.eu/portal/signup
- LinkedIn: https://www.linkedin.com/company/europython/
- X/Twitter: https://x.com/europython
- Mastodon: https://fosstodon.org/@europython
- Bluesky: https://bsky.app/profile/europython.eu
- Instagram: https://www.instagram.com/europython/
- YouTube: https://www.youtube.com/@EuroPythonConference
Tickets are going on sale next week, and we’ll send you an email with the link. Our Programme team’s busy finalising all the talks, workshops, and bits and bobs, so we can share the full lineup with you soon. We have a few exciting months ahead of us. See you all in Kraków! 🐍❤️
Cheers,
The EuroPython Team
Subscribe to the EuroPython Blog
The official blog of everything & anything EuroPython! EuroPython 2026 13-19 July, Kraków
No spam. Unsubscribe anytime.
Armin Ronacher
Equity for Europeans
If you spend enough time in US business or finance conversations, one word keeps showing up: equity.
Coming from a German-speaking, central European background, I found it surprisingly hard to fully internalize what that word means. More than that, I find it very hard to talk with other Europeans about it. Worst of all, it's almost impossible to explain it in German without either sounding overly technical or losing an important part of the meaning.
This post is in English, but it is written mostly for readers in Germany, Austria, and Switzerland, and more broadly for people from continental Europe. I move between “German-speaking” and “continental European” a bit. They are not the same thing, of course, but many continental European countries share a civil-law background that differs sharply from the English common-law and equity tradition. The words differ by language and jurisdiction, but the conceptual gap I am interested in shows up in similar ways.
In US usage, the word “equity” appears everywhere:
- real estate: “build equity in your home”
- startups: “employees get equity”
- public markets: “equity investors”
- private deals: “take an equity stake”
- personal finance: “negative equity in a car”
- social policy: “diversity, equity, and inclusion”
If you try to translate this into German, you have to choose words. Of course we can say Eigenkapital, Beteiligung, Anteil, Vermögen, Nettovermögen, or sometimes Substanzwert. In narrow contexts, each can be correct, but none of them carries the full concept. I find that gap interesting, because language affects default behavior and how we think about things.
One Word, Shared Meanings
In English, “equity” often carries multiple things at once. I believe these are the most important:
- A legal-fairness dimension: historically tied to equity in law
- A financial-accounting dimension: residual value after debt
- A cultural dimension: ownership as a path to wealth and agency
If you open Wikipedia, you will find many more distinct meanings of equity, but they all relate to much the same concept, just from different angles.
German, on the other hand, can express each of these layers precisely, including the subtleties within each, but it uses different words and there is no common, everyday umbrella word that naturally bundles all three.
When a concept has one short, reusable, positive word, people can move it across contexts very easily. When the concept is split into technical fragments, it tends to stay technical, and people do not necessarily think of these things as related at all in a continental European context.
How Equity Got Here
What is hard for Europeans to understand is how the financial meaning of equity appeared, because it did not appear out of nowhere. The word’s original meaning comes from fairness or impartiality, and it made it to modern English via Old French and Latin (equité / aequitas).
Historically, English law had separate traditions: common law courts and courts of equity (especially the Court of Chancery). Equity in law was about fairness, conscience, and remedies where strict common law rules were too rigid. Take mortgages for instance: in older English practice, a mortgage could transfer title as security. Under strict common law, missing a deadline could mean losing the property entirely. Courts of equity developed the “equity of redemption”: a borrower could still redeem by paying what was owed.
That equitable interest became foundational for how ownership and claims were understood. In finance, equity came to mean not just a number, but a claim: the residual owner’s stake after prior claims are satisfied.
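The financial-accounting reading is just subtraction: equity is what remains of an asset's value after the prior claims against it. A toy illustration (all numbers invented):

```python
# Equity as the residual claim: asset value minus prior claims (debt).
home_value = 400_000        # current market value of the home
mortgage_balance = 250_000  # outstanding debt secured by the home

equity = home_value - mortgage_balance
print(equity)  # → 150000

# "Negative equity" is the same arithmetic with the sign flipped:
car_value = 12_000
car_loan = 15_000
print(car_value - car_loan)  # → -3000
```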
The European Split
German and continental European legal development took a different path. Civil law systems did not build the same separate institutional track of “equity courts” versus common law courts. Fairness principles absolutely exist, but inside the codified system, not as a parallel jurisdiction with its own language and mythology.
As a result, German vocabulary has many different words, and they are highly domain-specific. There are equivalents in other languages, and to some degree they exist in English too:
- company balance sheet: Eigenkapital
- ownership share: Beteiligung, Anteil
- unrealized asset value: stille Reserven
- household wealth: Vermögen, Nettovermögen
- investment action: Anlage, Investition
- residual net assets: Reinvermögen
This precision is useful for legal drafting and accounting. But it also means we have less of the shared mental package that many Americans get from “equity”: own a piece, carry risk, participate in upside, build wealth.
Schuld Is Not Just Debt
There is another linguistic oddity worth noting: in German, “Schuld” can mean both debt/liability and guilt, and I think that too has changed how we think about equity.
“Schuld” in everyday language makes debt feel more morally charged than it does in the US. Indebtedness is often framed as a burden, and it is not thought of as a tool at all.
US financial language, by contrast, often frames debt more instrumentally and pairs it with an explicit positive counterpart: equity. Equity is what is yours after debt, what can appreciate, what can be transferred, and what can give you control.
In American financial language, debt is not as morally burdened, and equity is more than the absence of debt: it is the positive claim on the balance sheet — ownership, optionality, control, and upside.
Practical Matters
If you grew up with German-speaking framing, many US statements around equity can sound ideological or naive. From a continental European lens, they can sound like imported jargon or hollow. But if we ignore the concept, we lose something practical:
- We discuss salaries in cash terms but under-discuss ownership.
- We treat employee participation as exotic instead of normal.
- We under-explain compounding and intergenerational transfer.
- We miss a language for talking about agency through ownership.
I am not saying German-speaking Europeans are incapable of this mindset. Obviously we are not. But we clearly tend to think about these things differently.
Normalize Equity
When you hear “equity,” it helps to think of it as a rightful stake. Historically, it is connected to fairness and the recognition of a claim where strict rules would be too rigid. Financially, it is the part that remains after prior obligations. Culturally, it is something that can grow into control, agency, and upside.
That is not a perfect definition, but it captures why the term is so sticky in American discourse. It combines a present claim with a future possibility. It is not just what remains after debt; it is the part that can grow, compound, and give you agency.
If Europeans want to talk more seriously about entrepreneurship, retirement, housing, and wealth building, we would benefit from a stronger everyday vocabulary for exactly this idea. We need a longing for equity so that ownership does not remain something for founders, lawyers, accountants, and wealthy families, but becomes a normal part of how people think about work, risk, and their future.
Not because we should imitate America, but because this mental model helps people make clearer decisions about ownership, incentives, and long-term agency. For Europe, that shift feels long overdue.
April 22, 2026
Kay Hayen
Nuitka Release 4.0
This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, “download now”.
This is a major release with many new features and the long-wanted improvements to the scalability of Python compilation.
Bug Fixes
Accelerated: The enhanced detection for uninstalled Anaconda and WinPython was not fully working. (Fixed in 2.8.1 already.)
Onefile: Fixed an issue in DLL mode where signal handlers were not being registered, which could prevent proper program termination on signals like CTRL-C. (Fixed in 2.8.1 already.)
Windows: Fixed incorrect handling of forward slashes in cache directory paths, which caused issues with Nuitka-Action. (Fixed in 2.8.1 already.)
UI: The --output-dir option was not being honored in accelerated mode when --output-filename was also provided. (Fixed in 2.8.2 already.)
UI: The --output-filename option help said it wouldn’t work for standalone mode when in fact it did for a while already. (Fixed in 2.8.2 already.)
Onefile: On Windows, fixed a crash when using --output-dir where it was checking for the wrong folder to exist. (Fixed in 2.8.2 already.)
macOS: Fixed a crash that could occur when many package-specific directories were used, which could lead to the otool command line being too long. (Fixed in 2.8.2 already.)
Standalone: For the “Python Build Standalone” flavor, ensured that debug builds correctly recognize all their specific built-in modules, preventing potential errors. (Fixed in 2.8.4 already.)
macOS: Fixed an issue where $ORIGIN r-paths were set but ended up unused, which in some cases caused errors by exhausting the header space and preventing the build entirely. (Fixed in 2.8.5 already.)
macOS: Fixed an issue to ensure the system xattr binary is used. Otherwise, using arch -x86_64 python for compilation could fail when some packages are installed that provide xattr as well, because that might be an arm64 binary only and would not work. (Fixed in 2.8.5 already.)
UI: Fixed a misleading typo in the rejection message for unsupported Python 3.13.4. (Fixed in 2.8.5 already.)
Accelerated: The runner scripts .cmd or .sh are now also placed respecting the --output-filename and --output-dir options. (Fixed in 2.8.5 already.)
Plugins: Ensured that plugins detected by namespace usage are also activated in module mode. (Fixed in 2.8.5 already.)
Standalone: Fixed an issue where non-existent packages listed in top_level.txt files could cause errors during metadata collection. (Fixed in 2.8.6 already.)
Standalone: Corrected the classification of the site module, which was previously treated as a standard library module in some cases. (Fixed in 2.8.6 already.)
Windows: Ensured that temporary link libraries and export files created during compilation are properly deleted, preventing them from being included in the standalone distribution. (Fixed in 2.8.6 already.)
Python 3.14: Adapted to core changes by no longer inlining hacl code for this version. (Fixed in 2.8.6 already.)
Python 3.14: Follow allocator changes and immortal flags changes.
Python 3.14: Follow GC changes for compiled frames as well.
Python 3.14: Catch attempts to clear a compiled suspended frame object.
Fixed a potential mis-optimization for uses of locals() when transforming the variable name reference call. (Fixed in 2.8.6 already.)
Module: Fixed pkgutil.iter_modules not working when loading a module into a namespace. (Fixed in 2.8.7 already.)
Reports: Fixed a crash when creating the compilation report before the source directory is created. (Fixed in 2.8.7 already.)
Standalone: Fixed ignoring of non-existent packages from top_level.txt for metadata. (Fixed in 2.8.7 already.)
UI: The --no-progress-bar option was not disabling the Scons progress bars. (Fixed in 2.8.7 already.)
UI: Fixed an exception in the tqdm progress bar during process shutdown. (Fixed in 2.8.7 already.)
Windows: Fixed incorrect sys.executable value in onefile DLL mode. (Fixed in 2.8.9 already.)
Python 3.14: Added missing implicit dependency for _ctypes on Windows. (Fixed in 2.8.9 already.)
Python 3.13+: Fixed missing export of PyInterpreter_* API.
Python 3.14: Adapted to change in evaluation order of __exit__ and __enter__.
Multiprocessing: Fixed issue where sys.argv was not yet corrected when argparse was used early in spawned processes.
Scons: Fixed an issue where Zig was not used as a fallback when MinGW64 was present but unusable.
Windows: Made onefile binary work on systems without runtime DLLs installed as well.
Scons: Made tracing robust against threaded outputs.
Python3.12+: Enhanced workaround for loading of extension modules with sub-packages to cover more cases.
Scons: Fixed missing Zig version output.
Scons: Fixed Zig detection to enforce PATH or CC usage on macOS instead of download, since it’s not available.
UI: Fixed normalization of user paths, improving macOS support for reporting.
Linux: Fixed the workaround for the
memsetzero length warning, which was wrongly applied to Clang. Only GCC requires it, and Clang complained about it.Linux: More robust fallback to
g++whengccis too old for C11 support.Compatibility: Fixed a bug where
delof a subscript could cause wrong runtime behavior due to missing control flow escape annotations for the subscript value itself and the index.macOS: Fixed an issue where
Info.plistuser-facing entitlements keys mapping to multiple internal entitlements were not handled correctly.UI: Ensured tracing uses at least 80 characters for very narrow terminals to maintain readability.
Compatibility: Fixed an issue where nested loops could have incorrect traces, potentially leading to mis-optimizations.
Linux: Fixed an issue where
_XOPEN_SOURCEwas mistakenly appended for Clang, causing warnings.Scons: Improved passed variables handling by detecting
Noneor invalid types earlier.Fixed a bug where propagating class dictionaries needed extra micro passes to ensure proper optimization of their traces for the new variables.
Scons: Fixed an issue with process spawning when using
rusagecapture.Scons: Followed the file closing behavior of the standard communicate closer to avoid potential hangs.
Package Support
- Anti-Bloat: Avoided a warning during program shutdown when using a compiled `xgboost` package. (Fixed in 2.8.1 already.)
- Standalone: Added support for the `oracledb` package. (Fixed in 2.8.2 already.)
- macOS: Added support for newer `PySide6` versions. (Fixed in 2.8.4 already.)
- Standalone: Added support for including more metadata for the `transformers` package. (Fixed in 2.8.5 already.)
- Standalone: Metadata from Nuitka Package Configuration is now only included if the corresponding package is part of the compilation. (Fixed in 2.8.5 already.)
- Standalone: Added support for the `win32ctypes` package. (Fixed in 2.8.6 already.)
- Standalone: Added support for newer versions of the `dask` package. (Fixed in 2.8.6 already.)
- Standalone: Added support for the `dataparser` package. (Added in 2.8.7 already.)
- Standalone: Added support for `puremagic`, `pygments.lexers` and `tomli` in standalone mode.
- Standalone: Added automatic detection of `mypyc` runtime dependencies, so there is no need to manually configure that anymore. Our configuration was also often only correct for a single OS and a single upstream version, which is now fixed for packages that had it before.
- Standalone: Added support for the newer `av` (PyAV) package version.
- Standalone: Added support for the `sentry_sdk`, `jedi`, `parso`, and `line_profiler` packages.
- Standalone: Added support for newer `pandas` versions.
New Features
- UI: Added support for the `--project` parameter to build using configuration from `pyproject.toml` (e.g. Poetry, Setuptools). With this, you can simply run `python -m nuitka --project --mode=onefile` and it will use the `pyproject.toml` or `setup.py`/`setup.cfg` files to get the configuration and build the Nuitka binary. Previously Nuitka could only be used for building wheels with the `build` package, and for building wheels that is still the best way. The `--project` option is currently compatible with `build` and `poetry` and detects the used build system automatically.
- Zig: Added experimental support for using the Zig project’s `zig cc` as a C compiler backend for Nuitka. This can be enabled by setting the `CC` environment variable to point to the `zig` or `zig.exe` executable.
- Reports: Started capturing `rusage` for OSes that support it. Only POSIX-compliant OSes will do it (Linux, macOS, and all BSD variants), but Android does not. This is not yet part of the actual report, as we need to figure out how to use and present the information.
- Scons: Added experimental support for enabling Thin LTO with the Clang compiler.
- Standalone: Honor `--nofollow-import-to` for stdlib modules as well. This allows users to manually reduce standard library usage, but it can also cause crashes from extension modules not prepared for the absence of standard library modules.
- Onefile: Allowed disabling the onefile timeout and hard killing on CTRL-C entirely by providing `--onefile-child-grace-time=infinity`.
- Scons: Added a newer inline copy of Scons which supports Visual Studio 2026. (Added in 2.8.7 already.)
- Scons: Allowed using Python versions only partially supported for Nuitka with Scons. (Added in 2.8.7 already.)
- UI: Added option `--devel-profile-compilation` for compile time profiling. Also renamed the old runtime profiling option `--profile` to `--debug-profile-runtime`; that one is however still broken.
- Reports: Included CPU instruction and cycle counters in timing on native Linux. With appropriate configuration on Linux, this allows very precise timing, so we can judge even small compile time improvements correctly without needing many runs to average out noise from other effects. For steps that do no IO, like module optimization, process time is used instead of wall clock for more accurate values; it is still not very accurate, though.
- Python 3.12+: Added support for function type syntax (generics).
- Python 3.14: Added groundwork for deferred evaluation of function annotations.
- Python 3.14: Added support for uncompiled generator integration, which is crucial for `asyncio` correctness and general usability with modern frameworks.
- Debugging: Added `--debug-self-forking` to debug fork bombs.
- Windows: Added the `--include-windows-runtime-dlls` option to control inclusion of Windows C runtime DLLs. Defaults to `auto`.
- Python 3.14: Added experimental support for deferred annotations.
- Plugins: Added option `--qt-debug-plugins` for debugging Qt plugin loading.
- DLLs: Added support for DLL tags to potentially control inclusion with more granularity.
- macOS: Added support for many more protected resource entitlements (Siri, Bluetooth, HomeKit, etc.) to the bundle details.
- Python: Added support for the `@nuitka_ignore` decorator to exclude functions from compilation:

```python
@nuitka_ignore
def my_cpython_func():
    # This function is not compiled, but stays bytecode
    ...
```

- UI: Added support for merging user and standard YAML Nuitka package configurations, currently only including proper merging of implicit imports.
Optimization
- Avoid making duplicate hard imports by dropping assignments if the variable was already assigned the same value.
- Found previous assignment traces faster. The assignment and `del` nodes were using functions to find what they already knew from the last micro pass; `self.variable_trace` already kept track of the previous value trace situation. For matching unescaped traces we will do similar, but it’s not really used right now, so it is only a TODO, as that will eventually be very similar. This also speeds up the first micro pass even more, because it doesn’t have to search and do other things: if no previous trace exists, none is attempted to be used. Also, the common check whether no by-name uses or merges of a value occurred was always used inverted and now should be slightly faster to use and allow short-circuiting. While this accelerated the per-assignment work of the first micro pass by a lot, it mainly cleans up the design so that traces are easier to re-recognize, and this is a first step with immediate impact.
- Much faster Python passes. The “Escape” and “Unknown” traces now have their own number spaces, which allows some quick checks for a trace without using the actual object, just its number. Variables are narrowed to the outline scope that uses them, so they don’t need to be dealt with when merging later code where they never change anymore and are not used at all. When checking for unused variables, the trace collection is no longer asked to filter its traces; instead, the check works off the traces attached to the variable already, which avoids a lot of searching work. It also uses a method to decide if a trace constitutes usage rather than a long `elif` chain.
- Faster variable trace maintenance. We now trace variables in the trace collection as a dictionary per variable with a dictionary of the versions, which is closer to our frequent per-variable usage. That makes it a lot easier to update variables after tracing is finished to know their users and writers. It requires a lot less work, but also makes the work less memory-local, such that the performance gain is relatively small despite less work being done. It also avoids having to maintain a per-variable set of its using scopes.
- Decide the presence of writing traces for parameter variables faster.
- Avoid unnecessary micro passes. Discarded variable references are detected sooner for better micro-pass efficiency; we were spending an extra pass on the whole module to stabilize the variable usage, which can end up being a lot of work. After a module optimization pass found no changes, we no longer make an extra micro pass to guard against stabilization bugs, but only check that it doesn’t happen in debug mode. Depending on the number of micro passes, this can be a relatively high performance gain; for the `telethon.tl.types` module this was a 13% performance gain on top.
- For “PASS 1” of `telethon.tl.types`, which has been one of the known troublemakers with many classes and type annotations, all changes combined improve the compilation time by 1500%.
- Faster code generation. Indentation in generated C code is no longer performed, to speed up code generation. To restore readability, use the new option `--devel-generate-readable-code`, which will use `clang-format` to format the C code.
- Recognized module variable usages inside outlined functions that are in a loop, which improves the effectiveness of caching at run-time. (Added in 2.8.6 already.)
- Standalone: Partially solved a TODO of minimizing intermediate directories in r-paths of ELF platforms, by only putting them there if the directory they point to will contain DLLs or binaries. This removes unused elements and reduces r-path size.
- Windows: Made the caching of external paths effective, which significantly speeds up DLL resolution in subsequent compilations. (Fixed in 2.8.6 already.)
- macOS: Removed extended attributes from data files as well, improving performance. (Fixed in 2.8.7 already.)
- Scons: Stopped detecting installed MinGW to avoid overhead, as it is not supported. (Fixed in 2.8.9 already.)
- Scons: Added caching for MSVC information to reduce compilation time and, if already available, use that to detect the Windows SDK location rather than using `vswhere.exe` each time.
- Avoid computing large `%` string interpolations at compile time. These could cause constants to be included in the binary as a result.
- Avoid including `importlib._bootstrap` and `importlib._bootstrap_external`, as they are available as frozen modules.
- Fixed un-hashable dictionary keys not being properly optimized, forcing runtime handling.
Anti-Bloat
- Avoid including `tzdata` on non-Windows platforms. (Fixed in 2.8.7 already.)
- Avoid including `pyparsing.testing` in the `pyparsing` package.
- Added configuration to avoid compilation via C for large generated files in the `sqlfluff` package.
Organizational
- UI: Don’t say `--include-data-files-external` doesn’t work in standalone mode. It actually has worked for a while, and we have since renamed that option, but the help still said it wouldn’t work in standalone mode.
- Debugging: Added assertions for code object creation. We were getting assertions from Python when built with Zig, and these are supposed to provide those as well.
- Debugging: In case of tool commands failing, output the too-long command line if that was the error given.
- Anti-Bloat: Don’t allow custom `nofollow` modes; point the user to the correct option instead. This was never needed, and two ways of providing this user decision make no sense.
- UI: The help text for `--include-data-files-external` was updated to reflect that it works in standalone mode. (Fixed in 2.8.5 already.)
- Release: Use lowercase names for source archives in PyPI uploads. (Fixed in 2.8.7 already.)
- Quality: Fixed an issue where “assume yes” was not being passed for downloads in the commit hook.
- UI: Improved wording of the missing C compiler message.
- Debugging: Clearer verbose trace for dropped expressions.
- Debugging: Output which module had extra changes during the debug extra micro pass.
- Quality: Manage more development tools (`clang-format`, etc.) via private pip space for better consistency and isolation.
- AI: Enhanced pull request template with directions for AI-driven PRs.
- AI: Added agent command `create-mre` to assist in creating a minimal reproduction example (MRE).
- User Manual: Added documentation about redistribution requirements for Python 3.12-3.14.
- Quality: Added `--un-pushed` argument to the auto-format tool for checking only un-pushed changes.
- Scons: Improved error message to point to Zig support if no C compiler is found.
- MonolithPy: Follow the rename of our Python fork to MonolithPy to avoid confusion with the Nuitka compiler project itself.
- Scons: Prefer English output and warn the user about a missing English language pack with MSVC in case outputs are made.
- UI: When running non-interactively, print the default response that is assumed for user queries to stdout as well, so it becomes visible in the logs.
- UI: Warn when using protected resources options without standalone/bundle mode enabled on macOS.
- Reports: Sort DLLs and entry points in compilation reports by destination path for deterministic output.
- Quality: Skip files with `spell-checker: disable` in `codespell` checks.
- Release: Avoid compiling bytecode for inline copies that are not compatible with the running Python version during install.
- Visual Studio: Ignored names in backticks and code blocks in ReST for spelling checks.
- Actions: Ensured compilation reports are always recorded, even in case of errors, as they are most useful then.
- AI: Added a workflow `create-mre` to assist in creating a Minimal Reproducible Example from a larger file triggering a Nuitka bug. This has guidance on avoiding standalone mode and instructions for reducing code to produce an MRE that is really small.
- AI: Added a workflow `fix-module-not-found-error` for solving simple `ModuleNotFoundError` runtime errors.
- AI: Added further strategies for Minimal Reproducible Example (MRE) reduction to the agent workflow.
- UI: Reject input paths from standard library locations to prevent compiling files from there as main files.
Tests
- Added support for `--all` with the `--max-failures` option to the test runner, to stop after a specified number of failures, or to just run all tests and output the failed tests at the end. Also, the tests specified can be a glob pattern to match multiple tests, not just a test name. Added examples to the help output of the runner to guide developers’ usage.
- Ignore multiline source code outputs of Python 3.14 in tracebacks for output comparison; Nuitka won’t do those.
- Added test cases for poetry and distutils. Also verify that standalone mode works with `--project` for the supported build systems. Made the distutils test cases much more consistent.
- Watch: Improved binary name detection from compilation reports for better mode support beyond standalone mode.
- Allow downloading tools (like `clang-format`) for all test cases.
- Added options to enforce Zig or Clang usage for C compiling.
- Suppress `pip` output when not running interactively to avoid test output differences.
- Added `nuitka.format` and `nuitka.package_config` to self-compilation tests.
- Added colorization to test comparison diffs if a tty is available.
- Avoided using `--nofollow-imports` in tests, as some Python flavors do not work with it when using `--mode=standalone`.
Cleanups
- Moved options to the new `nuitka.options` package.
- Python 3.14: Fixed a type mismatch warning seen with MSVC. (Fixed in 2.8.9 already.)
- Massive amounts of spelling cleanups. Correct spelling in more and more places allows bugs to be identified more immediately, so these are very worthwhile.
- Code cleanup and style improvements in the `Errors` and `OutputDirectories` modules.
- Replaced usages of `os.environ.get` with `os.getenv` for consistency and denser code.
- Moved MSVC redist detection to `DllDependenciesWin32`.
- Release: Don’t install `zstandard` by default anymore.
- UI: Toned down the complaint about checksum mismatches.
- Static source files are now provided by Nuitka directly.
- Renamed C function `modulecode_` to `module_code_` for consistency.
Summary
This release is finally a breakthrough for scalability. We will continue the push for scalability in the next release as well, but with more of a focus on the C compilation step, to generate C code that is easier for the backend compiler.
Also, this release finally addresses many usability problems. The non-deployment hooks for imports not found, which were actively excluded, are one such thing. The start of `--project` enables far easier adoption of Nuitka for existing projects.
Other huge improvements are related to generics; they are now much better supported, closing gaps in the Python 3.12 support.
The onefile DLL mode as used on Windows is finally solid and should have no issues anymore, while enabling big future improvements.
Unfortunately 3.14 support is not yet ready and will have to be delayed until the next release.
Real Python
Altair: Declarative Charts With Python
There’s a moment many data analysts know well: you have a new dataset and a clear question, and you open a notebook only to find yourself writing boilerplate axis and figure setup before you’ve even looked at the data. Matplotlib gives you fine-grained control, but that control comes with a cost. Altair takes a completely different approach to data visualization in Python.
Instead of scripting every visual detail, you describe what your data means. This includes specifying which column goes on which axis, what should be colored, and what should be interactive. Altair then generates the visualization.
If you’re wondering whether it’s worth adding another visualization library to your toolkit, here’s how Altair and Matplotlib compare:
| Use Case | Pick Altair | Pick Matplotlib |
|---|---|---|
| Interactive exploratory charts in notebooks | ✅ | — |
| Pixel-precise publication figures or 3D plots | — | ✅ |
Altair generates web-native charts. The output is HTML and JavaScript, which means charts render right in your notebook and can be saved as standalone HTML files or embedded in web pages. It’s not a replacement for Matplotlib, and it doesn’t try to be. Think of them as tools you reach for in different situations.
Get Your Code: Click here to download the free sample code you’ll use to build interactive Python charts the declarative way with Altair.
Take the Quiz: Test your knowledge with our interactive “Altair: Declarative Charts With Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Altair: Declarative Charts With PythonTest your knowledge of Altair, the declarative data visualization library for Python that turns DataFrames into interactive charts.
Start Using Altair in Python
It’s a good idea to install Altair in a dedicated virtual environment. It pulls in several dependencies like pandas and the Vega-Lite renderer, and a virtual environment keeps them from interfering with your other projects. Create one and install Altair with pip:
$ python -m venv altair-venv
$ source altair-venv/bin/activate
(altair-venv) $ python -m pip install altair
This tutorial uses Python 3.14 and Altair 6.0. All the code runs inside a Jupyter notebook, which is the most common environment for interactive data exploration with Altair. If you prefer a different JavaScript-capable environment like VS Code, Google Colab, or JupyterLab, feel free to use that instead. To launch a Jupyter notebook, run the following:
(altair-venv) $ python -m pip install notebook
(altair-venv) $ jupyter notebook
The second command launches the Jupyter Notebook server in your browser. Create a new notebook and enter the following code, which builds a bar chart from a small DataFrame containing daily step counts for one week:
import altair as alt
import pandas as pd
steps = pd.DataFrame({
    "Day": ["1-Mon", "2-Tue", "3-Wed", "4-Thu", "5-Fri", "6-Sat", "7-Sun"],
    "Steps": [6200, 8400, 7100, 9800, 5500, 9870, 3769],
})

weekly_steps = alt.Chart(steps).mark_bar().encode(
    x="Day",
    y="Steps",
)

weekly_steps
You should see a bar chart displaying daily step counts:
Step Counts as a Bar Chart
The dataset is intentionally minimal because data isn’t the main focus: it has seven rows for seven days, and two columns for the day name and step count. Notice how the weekly_steps chart is constructed. Every Altair chart follows this same pattern. It’s built from these three building blocks:
- Data: A pandas DataFrame handed to `alt.Chart()`.
- Mark: The visual shape you want, chosen via `.mark_*()`. Here, `.mark_bar()` draws bars. Other options include `.mark_point()`, `.mark_line()`, and `.mark_arc()`.
- Encode: The mapping from data columns to visual properties, declared inside `.encode()`. Here, `Day` goes to the x-axis and `Steps` to the y-axis.
This is Altair’s core grammar in action: Data → Mark → Encode. You’ll use it every time.
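Under the hood, this declaration compiles to a Vega-Lite specification, which you can inspect with weekly_steps.to_dict(). The core of the spec for the chart above looks roughly like the following sketch (abbreviated here; the real output also embeds the dataset and a schema reference, and the exact mark syntax can vary by Altair version):

```json
{
  "mark": "bar",
  "encoding": {
    "x": {"field": "Day", "type": "nominal"},
    "y": {"field": "Steps", "type": "quantitative"}
  }
}
```

Each mark and encode call maps onto keys in this spec, which is why the API feels declarative: you describe the chart, and the Vega-Lite renderer does the drawing.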
Read the full article at https://realpython.com/altair-python/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
PyCharm
PyCharm for Django Fundraiser: Why Django Matters in the AI Era – And Why We’re Supporting It
Spend a few minutes around developer content, and it’s easy to come away with the impression that web apps now almost write themselves.
Everything that follows – review, verification, refactoring, debugging, and the open-source frameworks that make those apps dependable – gets less attention. AI can speed up code generation, but it does not remove the need for stable foundations. A lot of AI-generated code works because it’s built on top of mature open-source frameworks, libraries, and documentation.
AI can scaffold a web app in thirty seconds. Django is what keeps it running for ten years. That gap is only getting more valuable.
Will Vincent, former Django Board Member, co-host of the Django Chat podcast and co-writer of the weekly Django News newsletter
As AI makes OSS easier to consume, it can also make the work behind it easier to overlook. But OSS still needs support – perhaps more than ever.
PyCharm for Django Fundraiser
PyCharm has supported Django through fundraising campaigns and ongoing collaboration with the Django Software Foundation (DSF). This year, we’re doing it again.
Together with the Django community, this campaign raised $350,000 for Django from 2016 to 2025. That support helps keep Django secure, stable, relevant, and sustainable, while also supporting community programs such as Django Girls and official events. Previous PyCharm fundraisers accounted for approximately 25% of the DSF budget, according to Django’s official blog.
Django is the rare framework that rewards you the longer you use it: mature, dependable, and still innovating. Best-in-class software, matched by one of the most welcoming communities in open source.
Will Vincent, former Django Board Member, co-host of the Django Chat podcast and co-writer of the weekly Django News newsletter
If Django has helped you learn, ship, or maintain real web products, this is a direct way to give back.
You can donate to the Django Software Foundation directly, or you can support Django through this fundraiser and get a tool you’ll rely on every day.
Django’s ‘batteries included’ philosophy was built for humans who wanted to ship fast. Turns out it’s perfect for AI agents too — fewer decisions, fewer dependencies, and fewer ways to go wrong.
Will Vincent, former Django Board Member, co-host of the Django Chat podcast and co-writer of the weekly Django News newsletter
The offer
During this campaign, get 30% off PyCharm Pro, with 100% of the proceeds going to the DSF. Or you can bundle PyCharm Pro with the JetBrains AI Pro plan and get 40% off PyCharm Pro.
This campaign ends in less than two weeks, so act now!
Why PyCharm Pro
Perfect for your workflow
The hard part of modern development is often not writing code from scratch – it’s understanding the whole project well enough to change it safely.
That’s where PyCharm Pro proves its value:
- Navigate and refactor across your entire Django project, from templates to databases.
- Work with databases without leaving the IDE.
- Build and debug Django templates with full awareness of your context.
- Develop frontend code with built-in support for JavaScript, TypeScript, and major frameworks.
- Run and debug remote and Docker-based environments with ease.
No editor understands Django like PyCharm does — from template tags to ORM queries to migrations, it sees the whole stack the way you do.
Will Vincent, former Django Board Member, co-host of the Django Chat podcast and co-writer of the weekly Django News newsletter
For Django work, I think PyCharm is one of the best tools available. I use it every day. If you haven’t given it a try, this campaign is a great opportunity – AND it supports the Django Software Foundation!
Sarah Boyce, Django Fellow and Djangonaut Space co-organizer
AI on your terms
If you want AI in PyCharm, you can start with JetBrains AI directly in the IDE. You can also shape it to fit your workflow. Bring your own key, sign in with a supported provider, use third-party or local models, or connect compatible agents such as Claude Code and Codex via ACP.
That gives you more control over how you work with AI, instead of locking you into a single workflow, model, or provider. And if AI isn’t what you need, you can simply turn it off.
Support the framework you use every day
If Django is part of how you build, this purchase can improve your workflow while also investing in the framework behind it.
Happy coding!
Real Python
Quiz: SQLite and SQLAlchemy in Python: Move Your Data Beyond Flat Files
In this quiz, you’ll test your understanding of the concepts in the video course SQLite and SQLAlchemy in Python: Move Your Data Beyond Flat Files.
By working through this quiz, you’ll revisit how Python, SQLite, and SQLAlchemy work together to give your programs reliable data storage. You’ll also check your grasp of primary and foreign keys, SQLAlchemy’s Core and ORM layers, and the many-to-many relationships that tie your data together.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python GUIs
Checkboxes in Table Views with a Custom Model — Show checkboxes for boolean values in PyQt/PySide table views
I have a QTableView with a custom QAbstractTableModel, and I want to add a column of checkboxes. Should I create a custom delegate class for the checkbox, or is there a simpler way to do this?
You can use a custom delegate to draw a checkbox widget, but you don't have to. Qt provides a built-in mechanism for this: Qt.CheckStateRole. By returning Qt.Checked or Qt.Unchecked from your model's data() method, Qt will render a checkbox automatically — no delegate required.
Let's walk through how this works, starting with a simple display and then adding some interactivity.
Displaying checkboxes using Qt.CheckStateRole
The simplest way to add checkboxes to a QTableView is to handle Qt.CheckStateRole in your model's data() method. When Qt asks your model for data with this role, returning Qt.Checked or Qt.Unchecked tells Qt to draw a checkbox in that cell.
Here's a minimal example that shows a checked checkbox in every cell:
def data(self, index, role):
    if role == Qt.ItemDataRole.DisplayRole:
        value = self._data[index.row()][index.column()]
        return str(value)

    if role == Qt.ItemDataRole.CheckStateRole:
        return Qt.CheckState.Checked
This produces a table where every cell has both text and a checked checkbox:

In a real application, you would return Qt.Checked or Qt.Unchecked based on actual boolean values in your data. You might also restrict checkboxes to a specific column — for example, one that holds True/False values — rather than showing them everywhere.
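That per-column restriction is just a conditional in data(). The sketch below mimics the dispatch logic in pure Python so it can run without Qt; the DISPLAY_ROLE, CHECK_ROLE, and CHECK_COLUMN names are invented stand-ins for illustration, not real Qt identifiers:

```python
# Pure-Python sketch of data()-style role dispatch, with string stand-ins
# for the Qt role enums. Only one column reports a check state.
DISPLAY_ROLE = "display"    # stands in for Qt.ItemDataRole.DisplayRole
CHECK_ROLE = "checkstate"   # stands in for Qt.ItemDataRole.CheckStateRole
CHECK_COLUMN = 2            # hypothetical: only this column shows checkboxes

def data(table, row, col, role):
    if role == DISPLAY_ROLE:
        return str(table[row][col])
    if role == CHECK_ROLE and col == CHECK_COLUMN:
        # A real model would map this to Qt.CheckState.Checked/Unchecked.
        return bool(table[row][col])
    return None

rows = [[1, 9, True], [3, 5, False]]
print(data(rows, 0, 2, CHECK_ROLE))  # True: this cell gets a checked box
print(data(rows, 0, 0, CHECK_ROLE))  # None: no checkbox in this column
```

In a real model, returning nothing for the check-state role of the other columns is exactly what makes Qt skip drawing a checkbox there.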
Making checkboxes toggleable
Displaying checkboxes is a good start, but users will expect to be able to click them. To make checkboxes interactive, you need three things:
- A data store for the check state — a list (or column) that tracks which items are checked.
- Qt.ItemIsUserCheckable returned from flags() — this tells Qt that the cell supports toggling.
- A setData() implementation for Qt.CheckStateRole — this stores the updated state when the user clicks a checkbox.
Let's put all of this together in a complete example.
import sys

from PyQt6 import QtCore, QtGui, QtWidgets
from PyQt6.QtCore import Qt


class TableModel(QtCore.QAbstractTableModel):
    def __init__(self, data, checked):
        super().__init__()
        self._data = data
        self._checked = checked

    def data(self, index, role):
        if role == Qt.ItemDataRole.DisplayRole:
            value = self._data[index.row()][index.column()]
            return str(value)

        if role == Qt.ItemDataRole.CheckStateRole:
            checked = self._checked[index.row()][index.column()]
            if checked:
                return Qt.CheckState.Checked
            return Qt.CheckState.Unchecked

    def setData(self, index, value, role):
        if role == Qt.ItemDataRole.CheckStateRole:
            checked = value == Qt.CheckState.Checked.value
            self._checked[index.row()][index.column()] = checked
            self.dataChanged.emit(index, index, [role])
            return True
        return False

    def rowCount(self, index):
        return len(self._data)

    def columnCount(self, index):
        return len(self._data[0])

    def flags(self, index):
        return (
            Qt.ItemFlag.ItemIsSelectable
            | Qt.ItemFlag.ItemIsEnabled
            | Qt.ItemFlag.ItemIsUserCheckable
        )


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()

        self.table = QtWidgets.QTableView()

        data = [
            [1, 9, 2],
            [1, 0, -1],
            [3, 5, 2],
            [3, 3, 2],
            [5, 8, 9],
        ]
        checked = [
            [True, True, True],
            [False, False, False],
            [True, False, False],
            [True, False, True],
            [False, True, True],
        ]

        self.model = TableModel(data, checked)
        self.table.setModel(self.model)
        self.setCentralWidget(self.table)


app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
Run this and you'll see a table with checkboxes next to every value. Clicking any checkbox toggles it on and off, and the underlying checked list is updated accordingly.
Storing check state separately
The checked list mirrors the structure of the data list — each cell has a corresponding True or False value. This keeps the boolean check state separate from the data.
You could store it in the same data structure, as a [bool, data_value] nested list, or tuple if you like.
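As a quick sketch of that combined-storage alternative (plain Python only, no Qt required; the helper names are purely illustrative), each cell becomes a [checked, value] pair, and the model's data() and setData() would read index 0 for the check state and index 1 for the display value:

```python
# Each cell stores [checked, value] instead of two parallel lists.
data = [
    [[True, 1], [True, 9], [True, 2]],
    [[False, 1], [False, 0], [False, -1]],
]

def cell_value(data, row, column):
    # What data() would return for DisplayRole.
    return data[row][column][1]

def cell_checked(data, row, column):
    # What data() would consult for CheckStateRole.
    return data[row][column][0]

def toggle(data, row, column):
    # What setData() would do when the user clicks the checkbox.
    data[row][column][0] = not data[row][column][0]

toggle(data, 1, 2)
print(cell_checked(data, 1, 2))  # True
print(cell_value(data, 1, 2))    # -1
```

The trade-off is that every lookup now needs the extra index, so keeping a separate checked list, as in the example above, often reads more cleanly.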
Returning the check state in data()
When Qt asks for Qt.ItemDataRole.CheckStateRole, we look up the boolean value for that cell and return either Qt.CheckState.Checked or Qt.CheckState.Unchecked:
if role == Qt.ItemDataRole.CheckStateRole:
    checked = self._checked[index.row()][index.column()]
    if checked:
        return Qt.CheckState.Checked
    return Qt.CheckState.Unchecked
For this return X if checked, otherwise return Y pattern, you can also use a conditional expression of the form X if condition else Y.
if role == Qt.ItemDataRole.CheckStateRole:
    checked = self._checked[index.row()][index.column()]
    return Qt.CheckState.Checked if checked else Qt.CheckState.Unchecked
Handling user clicks in setData()
When the user clicks a checkbox, Qt calls setData() with the new value and the Qt.ItemDataRole.CheckStateRole role. We compare the incoming value to Qt.CheckState.Checked.value to determine whether the box was checked or unchecked, then store the result:
def setData(self, index, value, role):
    if role == Qt.ItemDataRole.CheckStateRole:
        checked = value == Qt.CheckState.Checked.value
        self._checked[index.row()][index.column()] = checked
        self.dataChanged.emit(index, index, [role])
        return True
    return False
Notice the self.dataChanged.emit(...) call — this notifies the view that the data has changed so it can redraw the cell. Always emit this signal when you modify data in setData().
Enabling user interaction with flags()
The flags() method tells Qt what the user can do with each cell. Including Qt.ItemFlag.ItemIsUserCheckable is what makes the checkbox clickable:
def flags(self, index):
    return (
        Qt.ItemFlag.ItemIsSelectable
        | Qt.ItemFlag.ItemIsEnabled
        | Qt.ItemFlag.ItemIsUserCheckable
    )
Without this flag, the checkbox will still appear (because you're returning data for CheckStateRole), but the user won't be able to toggle it.
Showing checkboxes in only one column
In many applications, you only want checkboxes in a specific column. You can achieve this by checking index.column() in your data() and flags() methods. For example, to show checkboxes only in column 2:
def data(self, index, role):
    if role == Qt.ItemDataRole.DisplayRole:
        value = self._data[index.row()][index.column()]
        return str(value)

    if role == Qt.ItemDataRole.CheckStateRole:
        if index.column() == 2:
            checked = self._checked[index.row()]
            if checked:
                return Qt.CheckState.Checked
            return Qt.CheckState.Unchecked

def flags(self, index):
    flags = Qt.ItemFlag.ItemIsSelectable | Qt.ItemFlag.ItemIsEnabled
    if index.column() == 2:
        flags |= Qt.ItemFlag.ItemIsUserCheckable
    return flags
In this case, self._checked would be a simple one-dimensional list (one boolean per row) rather than a 2D list.
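To see how setData() would pair with that one-dimensional list, here is a hypothetical, Qt-free sketch of just the toggle logic; in the real model the row, column, and new check state would come from the QModelIndex and value that Qt passes in, and CHECKBOX_COLUMN is an assumed name for the column guard:

```python
# Hypothetical sketch: per-row check state, checkboxes only in one column.
CHECKBOX_COLUMN = 2

checked = [True, False, True]  # one boolean per row, not per cell

def set_check_state(checked, row, column, is_checked):
    """Mirror of setData(): only accept toggles in the checkbox column."""
    if column != CHECKBOX_COLUMN:
        return False  # setData() returns False for cells it doesn't handle
    checked[row] = is_checked
    # A real model would also emit dataChanged here.
    return True

print(set_check_state(checked, 1, 2, True))   # True: column 2 is checkable
print(set_check_state(checked, 0, 1, False))  # False: other columns rejected
```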
Summary
To add checkboxes to a QTableView with a custom QAbstractTableModel:
- Handle Qt.ItemDataRole.CheckStateRole in data() to display checkboxes based on boolean values.
- Return Qt.ItemFlag.ItemIsUserCheckable from flags() to make checkboxes interactive.
- Implement setData() for Qt.ItemDataRole.CheckStateRole to store the updated state when the user clicks, and emit dataChanged to keep the view in sync.
This approach works natively with Qt's model/view architecture and avoids the complexity of writing a custom delegate. For a more complete guide to displaying data in table views — including using numpy and pandas data sources — see our QTableView with ModelViews tutorial. If you want to show only an icon without text in specific cells, see how to show only an icon in a QTableView cell. You can also learn how to create your own custom widgets for more advanced UI needs.
For an in-depth guide to building Python GUIs with PyQt6 see my book, Create GUI Applications with Python & Qt6.
April 21, 2026
PyCoder’s Weekly
Issue #731: Visualize ML, Vector DBs, Type Checker Comparison, and More (April 21, 2026)
#731 – APRIL 21, 2026
View in Browser »
Machine Learning Visualized
This is a series of Jupyter notebooks that help visualize the algorithms that are used in machine learning. Learn more about neural networks, regression, k-means clustering, and more.
GAVING HUNG
Vector Databases and Embeddings With ChromaDB
Learn how to use ChromaDB, an open-source vector database, to store embeddings and give context to large language models in Python.
REAL PYTHON course
Wallaby for Python runs Tests as you Type and Streams Results Next to Code, Plus AI Context
Wallaby brings pytest / unittest results, runtime values, coverage, errors, and time-travel debugging into VS Code, so you can fix Python faster and give Copilot, Cursor, or Claude the execution context they need to stop guessing. Try it free, now in beta →
WALLABY TEAM sponsor
Python Type Checker Comparison: Speed and Memory Usage
A benchmark comparison of speed and memory usage across Python type checkers including Pyrefly, Ty, Pyright, and Mypy.
AARON POLLACK
PEP 831: Frame Pointers Everywhere: Enabling System-Level Observability for Python (Draft)
This PEP proposes two things:
PYTHON.ORG
Discussions
Articles & Tutorials
Reassessing the LLM Landscape & Summoning Ghosts
What are the current techniques being employed to improve the performance of LLM-based systems? How is the industry shifting from post-training towards context engineering and multi-agent orchestration? This week on the show, Jodie Burchell, data scientist and Python Advocacy Team Lead at JetBrains, returns to discuss the current AI coding landscape.
REAL PYTHON podcast
Security Best Practices Featuring uv and pip
This collection of security practices explains how to best use your package management tools to help avoid malicious packages. For example, implement a cool-down period: most malicious packages are found quickly, so by not installing on the day of a release, your chances of getting something bad go down.
GITHUB.COM/LIRANTAL
Beyond Basic RAG: Build Persistent AI Agents
Master next-gen AI with Python notebooks for agentic reasoning, memory engineering, and multi-agent orchestration. Scale apps using production-ready patterns for LangChain, LlamaIndex, and high-performance vector search. Explore & Star on GitHub →
ORACLE sponsor
The Economics of Software Teams
Subtitled “Why Most Engineering Organizations Are Flying Blind”, this article is a breakdown of what software development teams actually cost, what they need to generate to be financially viable, and why most organizations have no visibility into either number.
VIKTOR CESSAN
OWASP Top 10 (2025 List) for Python Devs
The OWASP Top 10 is a list of common security vulnerabilities in code, like SQL injection. The list has recently been updated and Talk Python interviews Tanya Janca to discuss all the big changes this time around.
TALK PYTHON podcast
Textual: An Intro to DOM Queries
The Textual TUI framework uses a tree structure to store all of the widgets on the page. This DOM is query-able, giving you the ability to find widgets on the fly in your code.
MIKE DRISCOLL
Reflecting on 5 Years as the Developer in Residence
Łukasz Langa is stepping down as the Python Software Foundation’s first CPython Developer in Residence. This post talks about his experience there and everything he accomplished.
PYTHON SOFTWARE FOUNDATION
Decoupling Your Business Logic From the Django ORM
Where should I keep my business logic? This is a perennial topic in Django. This article proposes a continuum of cases, each with increasing complexity.
CARLTON GIBSON
How to Add Features to a Python Project With Codex CLI
Learn how to use Codex CLI to add features to Python projects via the terminal. Master AI-powered coding without needing a browser or IDE plugins.
REAL PYTHON
PyPI Has Completed Its Second Audit
PyPI has completed its second external security audit. This post shows all the things found and what they’re doing about each of them.
MIKE FIEDLER
New Technical Governance: Request for Community Feedback
The Django Steering Council has proposed a new governance mechanism and is looking for feedback from the community.
DJANGO SOFTWARE FOUNDATION
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
April 22, 2026
REALPYTHON.COM
The Carpentries
April 22 to April 24, 2026
INSTATS.ORG
AgentCamp Amsterdam 2026
April 23, 2026
MEETUP.COM
North Bay Python 2026
April 25 to April 27, 2026
NORTHBAYPYTHON.ORG
Python Sheffield
April 28, 2026
GOOGLE.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #731.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Tryton News
Tryton Release 8.0
We are proud to announce the 8.0 LTS release of Tryton.
This release provides many bug fixes, performance improvements and some fine tuning.
You can give it a try on the demo server, use the docker image or download it here.
As usual upgrading from previous series is fully supported.
Here is a list of the most noticeable changes:
Changes for the User
Client
There is now a visual hint on the widgets of modified fields. This way the user can see what was modified before saving.
Web
The tabs can now be reordered.
The logout action has been moved into the same menu as the notifications, and an entry for the help was also added. This visually simplifies the content of the header.
Accounting
The reconciliation of accounting lines can now be automated by a scheduled task.
The general ledger now displays only the flat balance, which is easier to understand for a flat list of accounts.
We added the option to calculate rounding for cash using the opposite rounding method. This existed already for standard currency rounding but was missing for cash rounding.
It is now possible to define some payment means on the invoice.
They can be set manually or using a rule engine.
For now, the supported payment means are bank transfer and direct debit.
When invoices are created from an external source (like PEPPOL), we check the amounts against the source when the invoice is validated or posted.
This makes it possible to create invoices whose calculation differs between Tryton and the source, and lets the user correct them manually.
Tryton now warns when the user tries to create an over-payment.
Europe
We added all the VAT exemption codes to the taxes, which can be used in electronic invoices.
Belgium
The taxes are now set up with UNECE and VATEX codes, which is useful for generating and parsing UBL invoices such as on PEPPOL.
Document Incoming
The OCR module now supports the payment reference of the supplier. It is stored on the supplier invoice and used when making a payment.
E-document
We added a button on the PEPPOL document to trigger an update of the status. The users do not always want to wait for the scheduled task to update the status.
We enforce that the unit price of any invoice line sent to PEPPOL is not negative. This is a rule of the PEPPOL network that is better enforced before posting the invoice.
The UBL invoice template has been extended to render the buyer’s item identification, the allowance and charges, the billing reference, the payment means, VATEX codes and prepaid amounts.
The UBL invoice parser supports now the payment means.
The UN/CEFACT invoice template renders the payment means and the VATEX codes.
The UNECE module now stores the allowance, charge, and special service codes on the product, and it stores the UNCL4461 code on the payment means.
Incoterm
The incoterm now defines whether the buyer or the seller is responsible for the export and import duties.
Party
These identifiers have been added: Slovenian Corporate Registration Number, Belgian Social Security Number, Spanish Activity Establishment Code, Russian Primary State Registration Number, Mozambique Tax Number, French Trade Registration Number, Azerbaijan Tax Number and Senegal Tax Number.
The identifiers are now formatted for easier reading.
We added a new menu entry that lists all the party identifiers.
An “Attn” field has been added to the address. This is useful for managing delivery addresses for web shops when the customer ships to another party.
Production
The “Cancellation Stock” group can now also cancel running and done productions.
It is now possible to define whether the cost from the time sheets must be included in the production cost calculation. This is defined per work center.
Project
The work efforts are now numbered to ease the communication between employees.
An origin field has been added to the work efforts.
Purchasing
The invoice method “on shipment” has been renamed to “on fulfillment” to be more generic.
The quantities to invoice are now calculated for each purchase line, and purchases with at least one line to invoice are marked as “To invoice”.
Quality
We added a reference field to the inspections. This allows storing an external number when the inspection was performed by an external service.
Sales
The invoice method “on shipment” has been renamed to “on fulfillment” to be more generic.
The quantities to ship and to invoice are now calculated for each sale line, and sales with at least one line to ship or to invoice are marked as “To ship” or “To invoice”.
Stock
We added a special group which is allowed to cancel done shipments and moves. This is useful to correct mistakes.
The shipments now have a wizard to ease the creation of packages. It simplifies operations like putting a package inside another package, putting only part of a move’s quantity into a package, and so on.
For the UPS carrier, the module charges the duties and taxes to the shipper according to the incoterm.
We now store the original planned date of requested internal shipments and requested productions. This is useful for finding late requests.
Shop
Sales now store the URL of the corresponding order on the web shop.
Shopify
We now support payment terms from Shopify. When a sale has a payment term, it is always confirmed in Tryton.
Gift cards are now supported with Shopify: when a gift card is sold on Shopify, no gift card is created in Tryton, and when the gift card is used on Shopify, it appears as a payment from the gift_card gateway.
The actions from Shopify are now logged on the sale order.
The pick-up delivery method is now supported for Shopify orders. When the shipment is packed in Tryton, it is marked as prepared for pickup on Shopify.
New Modules
Account Payment Check
The Account Payment Check Module allows managing and printing checks as payments.
Account Stock EU Excise
The Account Stock EU Excise Module is used to generate the excise duties declaration for European countries.
Production Ethanol
The Production Ethanol Module calculates the gain or loss of alcohol volumes in production.
Sale Project Task Module
The Sale Project Task Module adds the option to create tasks when selling services. The fulfillment of the sales is linked to the progression of these tasks.
Stock Ethanol Module
The Stock Ethanol Module helps to track alcohol in warehouses.
Removed Modules
Those modules have been removed:
account_de_skr03, account_es, account_es_sii, google_maps
You may find alternatives published by the community.
Changes for the System Administrator
Client
Web
The build of the web client no longer requires bower.
The session is now stored as a cookie. This prevents the session from being leaked in case of a security issue in our JavaScript code.
The web client now uses relative paths to perform server requests. This allows serving the web client from a sub-directory.
Server
Basic authentication is now also supported for user applications. This is useful when the consumer of the user application cannot support bearer authentication.
Document Incoming
The Typless module now requires defining all the fields set on the service in order to generate complete feedback, even for fields that Typless did not recognize.
E-document
It is now possible to set up a webhook for Peppyrus. This allows receiving the PEPPOL invoices as soon as they land in the inbox.
Inbound Email
The inbound email gains an action to handle replies to chat channels. The text content above a specific line is added as a message to the corresponding channel.
Changes for the Developer
This release removes support for Python 3.9 and adds support for Python 3.14.
Server
It is now possible to filter the users notified by a scheduled task using a domain. This is useful, for example, when the notification applies only to users with access to a specific company.
The report engine can now use MJML as a base format and convert it to HTML. This simplifies the creation of email templates that are compatible with most common email clients.
Field.sql_column now receives a tables dictionary and the Model as arguments.
Field.sql_column can now be overridden by a method on the Model named column_<field name>(tables).
This extends the possibilities for more complex types of fields.
With those improvements, we can now support Function fields without a getter, using only an SQL expression via column_<field name>. Those fields are automatically searchable and sortable without the need to define domain_<field name> or order_<field name> methods.
A last_modified field has been added to ModelSQL to avoid duplicating code based on write_date or create_date.
The Many2One field can now be based on a Function field.
We have upgraded the PostgreSQL backend to use Psycopg 3. By default, Tryton uses server-side binding, which removes the limitation on the size of the list of IDs that can be passed, by using arrays for IN operators.
As a result, the reduce_ids and grouped_slice (without size) tools have been deprecated, and Database.IN_MAX has been replaced by backend.MAX_QUERY_PARAMS.
The delete and delete_many methods have been added to the FileStore API, which allows removing the files of Binary fields when they are deleted or updated.
The button states are now checked when a button is executed with access checking. This ensures that a client cannot execute a button that should be disabled.
A notify_user method has been added to ModelStorage to ease the notification of a user.
A contextual _log key can be used to force the logging of events even if they do not originate from a user.
New routes have been added to manage the login/logout with cookie.
It is now possible to include sub-directories in tryton.cfg. This is useful when developing a large module, to split it into sub-directories.
A new attribute in the XML data allows defining a value as a path relative to the location of the XML file. This feature works in combination with the sub-directories to avoid repeating the directory name.
The chat channel now sends new messages by email to followers who subscribed with their email address.
It is now possible to mount the WSGI application under a prefix.
The RPC calls are now prefixed by /rpc/.
A generic REST API has been added as a user application.
It allows searching, retrieving, updating, and deleting any record of a ModelStorage, with the access rights enforced for the user, and launching any RPC action or report.
The ModelStorage.__json__ method defines the default fields to include in the response based on usage but the client can also explicitly request the fields (with dotted notation).
The context is passed as value of the X-Tryton-Context header encoded in JSON. The language is selected from the Accept-Language header of the client. And the search can be paginated using the Range header.
Naiad
Naiad is a new Python library to access Tryton’s REST API.
Accounting
We removed the default value for the invoice’s type. It must be set explicitly.
An origin invoices field has been added to the invoice.
The Stripe payment module uses now the version 2025-09-30.clover of the API.
E-document
The UBL template now filters the additional documents per MIME type.
Web Shop
Shopify
We replaced the unmaintained ShopifyAPI library with the new shopifyapp.
2 posts - 1 participant
Real Python
Leverage OpenAI's API in Your Python Projects
Python’s openai library provides the tools you need to integrate the ChatGPT API into your Python applications. With it, you can send text prompts to the API and receive AI-generated responses. You can also guide the AI’s behavior with developer role messages and handle both simple text generation and more complex code creation tasks.
After watching this video course, you’ll understand how examples like this work under the hood. You’ll learn the fundamentals of using the ChatGPT API from Python and have code examples you can adapt for your own projects.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Leverage OpenAI's API in Your Python Projects
In this quiz, you’ll test your understanding of Leverage OpenAI’s API in Your Python Projects.
By working through this quiz, you’ll revisit key concepts like setting up
authentication, sending prompts with the openai library, controlling AI
behavior with role-based messages, and structuring outputs with Pydantic models.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: uv vs pip: Python Packaging and Dependency Management
In this quiz, you’ll test your understanding of uv vs pip: Python Packaging and Dependency Management.
By working through this quiz, you’ll revisit key differences between uv and pip, including package installation speed, dependency management, reproducible environments, and governance.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
April 20, 2026
Real Python
Gemini CLI vs Claude Code: Which to Choose for Python Tasks
When comparing Gemini CLI vs Claude Code, the answer to “which one is better?” is usually it depends. Both tools boost productivity for Python developers, but they have different strengths. Choosing the right one depends on your budget, workflow, and what you value most in generated code.
Gemini CLI, for instance, is known for its generous free tier, while Claude Code is a paid tool known for its production-ready output.
In this tutorial, you’ll explore features such as user experience, performance, code quality, and usage cost to help make that decision easier. The AI coding assistance these tools provide right in your terminal generally makes writing Python code much more seamless, helping you save time and be more productive.
This table highlights the key differences at a glance:
| Use Case | Gemini CLI | Claude Code |
|---|---|---|
| You need generous free usage limits | ✅ | — |
| You need Google Cloud integration | ✅ | — |
| You need faster task completion | — | ✅ |
| You need code close to production quality | — | ✅ |
You can see that Gemini CLI is a promising choice if you’re looking for free usage limits and prefer Google Cloud integration. However, if you want to complete tasks faster, Claude Code has an edge. Both tools produce code of good quality, but Claude Code generates code that is closer to production quality. If you’d like a more thorough comparison, then read on.
Get Your Code: Click here to download the free sample code for the to-do app projects built with Gemini CLI and Claude Code in this tutorial.
Take the Quiz: Test your knowledge with our interactive “Gemini CLI vs Claude Code: Which to Choose for Python Tasks” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Gemini CLI vs Claude Code: Which to Choose for Python TasksCompare Gemini CLI and Claude Code across user experience, performance, code quality, and cost to find the right AI coding tool for you.
Metrics Comparison: Gemini CLI vs Claude Code
To ground the comparisons in hands-on data, both tools are tested using the same prompt throughout this tutorial:
Prompt
Build a CLI-based mini to-do application in Python. It should allow users to create tasks, mark tasks as completed, list tasks with filtering for completed and pending tasks, delete tasks, include error handling, persist tasks to a local JSON file, and include basic unit tests.
For a fair comparison, Gemini CLI is tested on its free tier using Gemini 3 Flash Preview, which is the default model the free tier provides access to. Claude Code is tested on the Pro plan using Claude Sonnet 4.6, which is the model Claude Code primarily uses for everyday interactions on that plan.
Each tool will run this prompt three times. Completion time, token usage, and the quality of the generated code are recorded from the runs and are referenced in the Performance, Code Quality, and Usage Cost sections of this tutorial.
Note: If you want to learn more about these tools so you can compare them yourself, Real Python has you covered. The How to Use Google’s Gemini CLI for AI Code Assistance tutorial covers installation, authentication, and hands-on usage, while the Getting Started With Claude Code video course walks you through setup and core features.
You should also be comfortable using your terminal, since both Gemini CLI and Claude Code are command-line tools.
The table below provides more detailed metrics to help with each comparison:
| Metric | Gemini CLI | Claude Code |
|---|---|---|
| User Experience | Intuitive, browser-based auth, terminal-native | Minimal setup, terminal-native, strong project awareness |
| Performance | Good performance, however slower generation speed | Good performance, code is generated generally faster |
| Code Quality | Solid, better for exploratory tasks | Strong, better for production-grade work |
| Usage Cost | Free tier available; paid plans for heavier use | Requires a paid subscription to get started |
The following sections explore each metric in detail, so you can decide which tool fits your workflow best.
User Experience
When writing Python programs, it helps to be able to comfortably use your tools without dealing with unintuitive interfaces. Both Gemini CLI and Claude Code prioritize a smooth terminal experience, but user experience goes beyond the interface itself—installation, setup, available models, and features offered are also part of it.
Installation and Setup
A few differences exist between Gemini CLI and Claude Code during installation. Gemini CLI requires a Google account for authentication. Claude Code doesn’t need a Google account. Instead, it requires an Anthropic subscription or API key.
Gemini CLI is first installed using npm:
$ npm install -g @google/gemini-cli
You can also install Gemini CLI with Anaconda, MacPorts, or Homebrew, which you can find in the Gemini CLI documentation.
When installing Claude Code, you run the following commands:
Read the full article at https://realpython.com/gemini-cli-vs-claude-code/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Mike Driscoll
Textual – Logging to File and to Textual Console
When you are developing a user interface, it can be valuable to have a log of what’s going on. Creating a log in Textual, a text-based user interface framework, is even easier than creating one for wxPython or Tkinter. Why? Because Textual includes a logger that is compatible with Python’s own logging module, so it’s almost plug-and-play to hook it all up!
You’ll learn how to do this in this short tutorial!
Logging to File and the Console
Textual includes TextualHandler, a built-in logging handler that you can use with Python’s own logging module. Python also has many built-in logging handlers that you can use to write to stdout, a file, or even an email address!
You can hook up multiple handlers to a logger object and write to all of them at once, which gives you a lot of flexibility.
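As a quick stdlib-only illustration of that fan-out (a sketch that doesn't need Textual at all; the logger name and file path are arbitrary), the same logger can feed a StreamHandler and a FileHandler at once, and both receive every record:

```python
import logging
import os
import sys
import tempfile

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)

log_path = os.path.join(tempfile.gettempdir(), "demo.log")

# One handler writes to stderr, another to a file; both get every record.
logger.addHandler(logging.StreamHandler(sys.stderr))
file_handler = logging.FileHandler(log_path, mode="w")
logger.addHandler(file_handler)

logger.info("hello from two handlers")
file_handler.flush()

with open(log_path) as f:
    print(f.read().strip())  # hello from two handlers
```

Textual's TextualHandler slots into exactly this pattern as just one more handler on the list.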
To see how this works in Textual, you will create a very simple application that contains only two buttons. Go ahead and open your favorite Python IDE or text editor and create a new file called log_to_file.py. Then enter the following code into it:
# log_to_file.py
import logging

from textual.app import App, ComposeResult
from textual.logging import TextualHandler
from textual.widgets import Button


class LogExample(App):
    def __init__(self) -> None:
        super().__init__()
        self.logger = logging.getLogger(name="log_example")
        self.logger.setLevel(logging.INFO)

        # Log to a file
        file_handler = logging.FileHandler("tui.log")
        formatter = logging.Formatter(
            "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
        )
        file_handler.setFormatter(formatter)
        self.logger.addHandler(file_handler)

        # Log to the Textual Console
        textual_handler = TextualHandler()
        self.logger.addHandler(textual_handler)

    def compose(self) -> ComposeResult:
        yield Button("Toggle Dark Mode", classes="dark mode")
        yield Button("Exit", id="exit")

    def on_button_pressed(self, event: Button.Pressed) -> None:
        if event.button.id == "exit":
            self.logger.info("User exited")
            self.exit()
        elif event.button.has_class("dark", "mode"):
            self.theme = (
                "textual-dark" if self.theme == "textual-light" else "textual-light"
            )
            self.logger.info(f"User toggled app theme to {self.theme}")


if __name__ == "__main__":
    app = LogExample()
    app.run()
As you can see, you have just two buttons for the user to interact with:
- Toggle Dark Mode – for toggling dark or light mode
- Exit – for exiting the application
No matter which button the user presses, the application will log something. By default, Textual logs to stdout, but you cannot see it because your application is on screen. If you want to see the logs, you will need to use the Textual Console application, which is part of Textual’s devtools. If you do not have the dev tools installed, you can do so by running this command:
pip install textual-dev
Now that you have the dev tools handy, open up a new terminal window or tab and run this command:
textual console
To get Textual to send the log messages to the console, you need to run your Textual application in developer mode, in a different terminal from the one running the Textual Console.
Here’s the special command:
textual run --dev log_to_file.py
You will see various events and other logged metadata appear in the Textual Console regardless of whether you explicitly log to it. And if you call self.log or use Python’s print() function, those messages will appear in the console too.
You will also see your log messages in your log file (tui.log), though it won’t include all the extra stuff that Textual Console displays. You only get what you log explicitly written into your log file.
Wrapping Up
And there you have it. You now know how to use Textual’s own built-in logging handler in conjunction with Python’s logging module. Remember, you can use Textual’s logging handler alongside one or more of Python’s own logging handlers, and you can format the output any way you want, too!
Learn More About Logging
If you want to learn more about logging in Python, you might find my book, Python Logging, helpful.
Purchase the book today on Gumroad, Leanpub or Amazon!
The post Textual – Logging to File and to Textual Console appeared first on Mouse Vs Python.
Real Python
Quiz: How to Conceptualize Python Fundamentals for Greater Mastery
In this quiz, you’ll test your understanding of How to Conceptualize Python Fundamentals for Greater Mastery.
By working through this quiz, you’ll revisit a framework for forming a clear mental picture of Python concepts, including defining ideas in your own words, finding real-world and software analogies, comparing similar concepts, and learning by teaching.
With this framework in hand, you’ll be better equipped to approach new Python topics with confidence.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python Bytes
#477 Lazy, Frozen, and 31% Lighter
<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://django-modern-rest.readthedocs.io/en/latest/?featured_on=pythonbytes">Django Modern Rest</a></strong></li> <li><strong>Already playing with Python 3.15</strong></li> <li><strong><a href="https://mkennedy.codes/posts/cutting-python-web-app-memory-over-31-percent/?featured_on=pythonbytes">Cutting Python Web App Memory Over 31%</a></strong></li> <li><strong><a href="https://tryke.dev?featured_on=pythonbytes">tryke - A Rust-based Python test runner with a Jest-style API</a></strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=WmJtmS5Fn7U' style='font-weight: bold;' data-umami-event="Livestream-Past" data-umami-event-episode="477">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a> <strong>Connect with the hosts</strong></li> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> (bsky) Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be 
part of the audience. Usually <strong>Monday</strong> at 11am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</li> </ul> <p><strong>Michael #1: <a href="https://django-modern-rest.readthedocs.io/en/latest/?featured_on=pythonbytes">Django Modern Rest</a></strong></p> <ul> <li>Modern REST framework for Django with types and async support</li> <li>Supports Pydantic, Attrs, and msgspec</li> <li>Has AI coding support with llms.txt</li> <li>See an example at the <a href="https://django-modern-rest.readthedocs.io/en/latest/pages/getting-started.html#showcase">“showcase” section</a></li> </ul> <p><strong>Brian #2: Already playing with Python 3.15</strong></p> <ul> <li><a href="https://blog.python.org/2026/04/python-3150a8-3144-31313/?featured_on=pythonbytes">3.15.0a8, 3.14.4 and 3.13.13 are out</a> <ul> <li>Hugo von Kemenade</li> </ul></li> <li>beta comes in May, RCs in Sept, and final planned for October</li> <li>But still, there’s awesome stuff here already, here’s what I’m looking forward to: <ul> <li><a href="https://docs.python.org/3.15/whatsnew/3.15.html#whatsnew315-lazy-imports"><strong>PEP 810</strong></a>: Explicit lazy imports</li> <li><a href="https://docs.python.org/3.15/whatsnew/3.15.html#whatsnew315-frozendict"><strong>PEP 814</strong></a>: <code>frozendict</code> built-in type</li> <li><a href="https://docs.python.org/3.15/whatsnew/3.15.html#whatsnew315-unpacking-in-comprehensions"><strong>PEP 798</strong></a>: Unpacking in comprehensions with <code>*</code> and <code>**</code></li> <li><a href="https://docs.python.org/3.15/whatsnew/3.15.html#whatsnew315-utf8-default"><strong>PEP 686</strong></a>: Python now uses UTF-8 as the default encoding</li> </ul></li> </ul> <p><strong>Michael #3: <a 
href="https://mkennedy.codes/posts/cutting-python-web-app-memory-over-31-percent/?featured_on=pythonbytes">Cutting Python Web App Memory Over 31%</a></strong></p> <ul> <li>I cut 3.2 GB of memory usage from our Python web apps using five techniques: <ul> <li>async workers</li> <li>import isolation</li> <li>the Raw+DC database pattern</li> <li>local imports for heavy libraries</li> <li>disk-based caching</li> </ul></li> <li><a href="https://mkennedy.codes/posts/cutting-python-web-app-memory-over-31-percent/?featured_on=pythonbytes">See the full article</a> for details.</li> </ul> <p><strong>Brian #4: <a href="https://tryke.dev?featured_on=pythonbytes">tryke - A Rust-based Python test runner with a Jest-style API</a></strong></p> <ul> <li>Justin Chapman</li> <li>Watch mode, Native async support, Fast test discovery, In-source testing, Support for doctests, Client/server mode for fast editor integrations, Pretty, per-assertion diagnostics, Filtering and marks, Changed mode (like pytest-picked), Concurrent tests, Soft assertions</li> <li>JSON, JUnit, Dot, and LLM reporters</li> <li>Honestly haven’t tried it yet, but you know, I’m kinda a fan of thinking outside the box with testing strategies so I welcome new ideas.</li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li><a href="https://aleyan.com/blog/2026-why-arent-we-uv-yet/?featured_on=pythonbytes">Why aren’t we uv yet?</a> <ul> <li>Interesting take on the “agents prefer pip” claim</li> <li>Problems with the analysis: <ul> <li>Many projects are libraries and don’t publish a uv.lock file</li> <li>Even with uv, it’s still often seen as a developer preference for non-libraries. You can still use uv with requirements.txt</li> </ul></li> </ul></li> <li><a href="https://us.pycon.org/2026/schedule/talks/?featured_on=pythonbytes">PyCon US 2026 talks schedule is up</a> <ul> <li>Interesting that there’s an AI track now. I won’t be attending, but I might have a bot watch the videos and summarize for me. 
:)</li> </ul></li> <li><a href="https://justinjackson.ca/tech-done-to-us?featured_on=pythonbytes">What has technology done to us?</a> <ul> <li>Justin Jackson</li> </ul></li> <li><a href="https://courses.pythontest.com/lean-tdd/?featured_on=pythonbytes">Lean TDD new cover</a> <ul> <li>Also, 0.6.1 is so ready for me to start f-ing reading the audio book and get on with shipping the actual f-ing book, and yes I realize I seem like I’m old because I use “f-ing” while typing.</li> </ul></li> </ul> <p>Michael:</p> <ul> <li><a href="https://docs.python.org/release/3.14.4/whatsnew/changelog.html?featured_on=pythonbytes">Python 3.14.4 is out</a></li> <li><a href="https://github.com/BeanieODM/beanie/releases/tag/2.1.0?featured_on=pythonbytes">Beanie 2.1 release</a></li> </ul> <p><strong>Joke: <a href="https://motherduck.com/humandb/?featured_on=pythonbytes">HumanDB</a> - Blazingly slow. Emotionally consistent.</strong></p>
April 19, 2026
Django Weblog
DSF member of the month - Rob Hudson
For April 2026, we welcome Rob Hudson as our DSF member of the month! ⭐

Rob is the creator of django-debug-toolbar (DDT), a tool used by more than 100,000 people around the world. He introduced Content-Security-Policy (CSP) support in Django and contributes to many open source packages. He has been a DSF member since February 2024.
You can learn more about Rob by visiting Rob's website and his GitHub Profile.
Let’s spend some time getting to know Rob better!
Can you tell us a little about yourself?
I'm a backend Python engineer based in Oregon, USA. I studied biochemistry in college, where software was just a curiosity and hobby on the side, but I'm grateful that my curiosity turned into a career in tech. My earliest memory of that curiosity was taking apart my Speak & Spell as a kid to see how it worked and never quite getting it back together again.
How did you start using Django?
I followed the path of the "P"s: Perl, then PHP, then Python. When Ruby on Rails arrived it was getting a lot of attention, but I was already enjoying Python, so when Django was announced I was immediately drawn to it. I started building small apps on my own, then eventually led a broader tech stack modernization at work, a health education company where we were building database-driven learning experiences with quizzes and a choose-your-own-adventure flow through health content. Django, Git, and GitHub all came together around that same time as part of that transition. Fun fact: my GitHub user ID is 1106.
What other framework do you know and if there is anything you would like to have in Django if you had magical powers?
I've been building a few projects with FastAPI lately and have really come to appreciate the type-based approach to validation via Pydantic. The way typing syntax influences the validation logic is something I'd love to see influence Django more over time.
Erlang has a feature called the crash dump: when something goes wrong, the runtime writes out the full state of every process to a file you can open and inspect after the fact. As someone who built a debug toolbar because I wanted to see what was going on under the hood, being handed a freeze frame of the exact moment things went wrong, full state intact and ready to inspect, sounds like magic.
The Rust-based tooling emerging in the Python ecosystem is fascinating to watch. Tools like uv, ruff, and efforts around template engines, JSON encoders, ASGI servers, etc. The potential for significant speed improvements without losing what makes Django Django is an interesting space.
What projects are you working on now?
I have a couple of personal fintech projects I'm playing with, one using FastAPI and one using Django. I've been enjoying exploring and wiring up django-bolt for the Django project. I'm impressed with the speed and developer friendliness.
On the django-debug-toolbar front, I recently contributed a cache storage backend and have a longer term idea to add an API layer and a TUI interface that I'd love to get back to working on someday.
Which Django libraries are your favorite (core or 3rd party)?
Django Debug Toolbar (I may be slightly biased). Beyond that: whitenoise and dj-database-url are great examples of libraries that do one thing well and get out of your way. I'd also add granian, a Rust-based ASGI server. And django-allauth, which I'm somehow only just trying for the first time. For settings management I've cycled through a few libraries over the years and am currently eyeing pydantic-settings for a 12-factor approach to configuration.
What are the top three things in Django that you like?
The community. I've been part of it for a long time and it has a quality that's hard to put into words. It feels close knit, genuinely welcoming to newcomers, and there's a rising tide lifts all boats mentality that I don't think you find everywhere. People care about helping each other succeed. The sprints and hallway track at DjangoCon have been a wonderful extension of that.
The ORM. Coming from writing a lot of raw SQL, I appreciate the syntax of Django's ORM which hits a sweet spot of simplicity and power for most use cases.
Stability, documentation, and the batteries included philosophy. I appreciate a framework that at its core doesn't chase trends, has a predictable release cycle, amazingly well written docs (which makes sense coming from its journalism background), and there's enough built in to get surprisingly far without reaching for third party packages.
You are the creator of Django Debug Toolbar, this tool is really popular! What made you create the tool and publish the package?
The inspiration came from Symfony, a PHP framework that had a debug toolbar built in. At the time, I was evaluating frameworks for a tech stack transition at work and thought, why doesn't Django have one of these? So I started hacking on a simple middleware that collected basic stats and SQL queries and injected the toolbar HTML into the rendered page. The first commit was August 2008.
The SQL piece was personally important. Coming from PHP where I wrote a lot of raw SQL by hand, I wanted to see what the ORM was actually generating.
The nudge to actually release it came at the first DjangoCon in 2008 at Google's headquarters. Cal Henderson gave a keynote called "Why I Hate Django" and showed a screenshot of Pownce's debug toolbar in the page header, then talked about internal tooling at Flickr similar to what the Django debug toolbar has currently. Seeing those motivated me to tweet out what I was working on that same day. Apparently I wasn't the only one who wanted to see what the ORM was doing.
It has been created in 2008, what are your reflections on it after so many years?
Mostly gratitude. I had a young family at the time and life got busy, so I stepped back from active maintenance earlier than I would have liked. Watching it flourish under the maintainers who stepped up has been really wonderful to see. They've improved it, kept up with releases, supported the community, and have done a better job of it than I was in a position to do at the time, so I'm grateful to all who carried the torch.
At this point I contribute to it like any other project, which might sound strange for something I created, but it's grown bigger than my early involvement and that feels right. I still follow along and it makes me happy to see it continuing to grow and evolve.
What I didn't anticipate was what it gave back. It helped launch my career as a Django backend developer and I'm fairly certain it played a role in landing me a job at Mozilla. All from a middleware I hacked together just to see what the ORM was doing.
Being a maintainer is not always easy despite the fact it can be really amazing. Do you have any advice for current and future maintainers in open source?
For what it's worth, what worked for me was building things for fun and to learn rather than setting out to build something popular. I also didn't worry too much about perfection or polish early on.
If life gets busy or your interests move on, I'd say trust the community. Have fun, and if it stops being fun, find some enthusiastic people who still think it's fun and hand it to them gracefully. That worked out better than I could have hoped in my case.
I'm genuinely curious about how AI changes open source. If simple utilities can be generated on the fly rather than installed as packages, what does that mean for small focused libraries? My hope is that the value of open source was never just the code anyway. The collaboration, the issue discussions, the relationships. AI can generate code but it can't replicate those things.
One thing I've noticed is newer developers using AI to generate patches they don't fully understand and submitting them as contributions. I get the impulse, but I'd encourage using AI as a tool for curiosity rather than a shortcut. Let it suggest a fix, then dig into why it works, ask it questions, iterate, which is something I often do myself.
You have introduced CSP support in Django core, congratulations and thank you for this addition! How did the process of creating this contribution go for you?
I picked up django-csp at Mozilla because it had become unmaintained and was blocking upgrades to newer Python and Django versions. What started as a simple maintenance task turned into a bit of a yak shave, but a good one. Getting up to speed on CSP led to ticket triage, which led to a refactor, which eventually led me to a 14 year old Django issue requesting CSP in core. Once the refactor was done I made the mistake of actually reading that 14 year old ticket and then felt personally responsible for it.
The more I worked in the space the clearer the ecosystem problem became. As a third party package, django-csp couldn't provide a standardized API that other packages could reliably depend on. If a third party library needed to add nonces to their own templates, they couldn't assume django-csp was installed. Seeing that friction play out in projects like debug toolbar and Wagtail convinced me that CSP support made sense in core.
Working with the Django fellows through the process was a genuine pleasure and I have enormous respect for what they do. They are patient, kind, and shaped what landed in core immensely. What surprised me most was how much they handle behind the scenes and how gracefully they manage the constant demands on their attention. Huge props to Natalia in particular for guiding a large and complex feature through to completion.
Do you remember your first contribution to open source?
Before Django I'd been tinkering on the web for years. I built tastybrew, an online homebrew recipe calculator and sharing site, partly to scratch my own itch and partly to get deeper with PHP and hosting my own projects. Back then open source collaboration wasn't what it is today. Before GitHub there was Freshmeat, SourceForge, emailed patches, maybe your own server with a tarball to download.
My first Django contribution was a small fix to the password reset view in 2006. Over the next several years there were around 40 or so contributions like docs corrections, admin improvements, email handling, security fixes. Contributing felt natural because the code was open and the community was welcoming.
I joined Mozilla in 2011 and shifted focus for a while. Mozilla was quietly contributing quite a bit back to the Django ecosystem during those years, with many 3rd party Django libraries, like django-csp. One of my favorite open source contributions was when I collaborated with a colleague on a Python DSL for Elasticsearch that eventually became the basis for Elastic's official Python client.
What are your hobbies or what do you do when you’re not working?
Reading, cooking, and getting outside when I can. I try to eat a whole food plant based diet and enjoy cooking in that style. Not sure it counts as a hobby but I enjoy wandering grocery stores, browsing what's new, reading ingredients, curious about flavors, thinking about what I could recreate at home.
Getting away from screens is important to me. Gardening, hiking, camping, long walks, travel when possible. Petrichor after rain. Puzzles while listening to audiobooks or podcasts. I brew oolong tea every day, a quiet ritual where the only notification is my tea timer.
Code has always felt more like curiosity than work to me, so I'm not sure where hobby ends and the rest begins.
Anything else you'd like to share?
If you have a Django codebase that needs some love, I'm available for contract work. I genuinely enjoy the unglamorous stuff: upgrading legacy codebases, adding CSP support, and refactoring for simplicity and long term maintainability. There's something satisfying about stepping back, seeing the bigger picture, and leaving things cleaner than you found them. You can find me on GitHub at robhudson.
Doing this interview was a nice way to reflect on my career. I can see that curiosity and adaptation have been pretty good companions. I'm grateful Django and its community have been a big part of that journey.
Thank you for doing the interview, Rob!
April 18, 2026
EuroPython
Humans of EuroPython: Nikoś (nikoshell)
EuroPython wouldn't exist without our dedicated volunteers who work tirelessly behind the scenes. They design our website, set up the call for proposals system, review hundreds of submissions, carefully select talks, coordinate speakers, and handle countless logistical details. Every aspect of the conference reflects their passion and expertise. Thank you for making EuroPython possible! 🎉
Below is our conversation with Nikoshell, who worked on the EuroPython 2025 website as well as a part of Communications & Design and Sponsorship teams.
We're grateful for your work on the conference, Nikoshell!
Nikoś aka Nikoshell, contributor and website developer at EuroPython 2025
EP: What was your primary role as a volunteer, and what did a typical day of contributing look like for you?
I quickly found a rhythm. Using a streamlined Linux setup with terminal-first tools, I focused on solving problems instead of fighting my tools. I’d catch the European team early, fix blockers like design assets or sponsor content, ship changes, and get feedback within hours. Morning performance fixes allowed richer assets by afternoon, and sponsor updates became social content automatically.
EP: Had you attended EuroPython before, or was volunteering your first experience with it?
First time organizing. Writing Python for 15+ years is one thing; seeing how a conference this size works is different. One code change could impact thousands. It was the most rewarding Python work I’ve done in years.
EP: What's one task you handled that attendees might not realize happens behind the scenes at EuroPython?
I automated sponsor data into social media graphics, saving hours of repetitive work.
EP: Was there a moment when you felt your contribution really made a difference?
Whenever a fix cleared busywork and let the team focus on creative work.
EP: Is there anything you took away from the experience that you still use today?
Collaboration patterns. Fast, trusting, distributed teams set a new bar. I still use those workflows and stay connected with the team.
EP: What would you say to someone considering volunteering at EuroPython but feeling hesitant?
Time matters less than impact. You gain skills, cleaner workflows, and strong connections. Just be willing to learn and ship.
EP: What connects Capture The Flag competitions (CTFs), AI automated solutions, and volunteering for EuroPython in your opinion?
Same mental model: find bottlenecks, remove friction, ship. I’ve competed in CTFs—Capture The Flag cybersecurity challenges—with my team justCatTheFish (ranked #1 in Poland, top 10 worldwide), contributed to pwndbg, and built security infrastructure. EuroPython felt like a CTF challenge solved with a high-speed, aligned team.
EP: Thank you for your contributions, Nikoshell!
Seth Michael Larson
More thoughts on Nintendo Switch 2 storage prices
Since my last post about Nintendo Switch 2 storage and prices three major things have happened affecting Switch 2 game prices:
- Nintendo published a new digital game pricing strategy where digital first-party games would be priced $10 USD less than physical games. This puts the American game market in line with the rest of the world. We'll see below why this change makes sense.
- microSD Express cards have increased drastically in price. The Lexar 1TB microSDXC card cost $200 USD in July 2025 and today is being sold for $335 USD from the same retailer. This means that the price-per-GB has increased by roughly $0.13/GB for the highest-capacity cards.
- Nintendo appears to be manufacturing Switch 2 game cartridges with capacities smaller than the typical 64GB. “MIO: Memories in Orbit” released on a physical cartridge with a $30 price tag. This will hopefully mean fewer games being published on “Game Key cards”, especially smaller or indie games.
I created a small Python script which produces tables comparing physical and digital prices across different microSD Express cards and their price-per-GB ratios for several Nintendo Switch 2 games.
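A minimal sketch of the core calculation: derive a card's price-per-GB from its retail price and capacity, then add the storage cost of a game's download size to the digital price. The Lexar 1TB price is the $335 figure quoted above; the SanDisk 256GB price is an illustrative assumption, and this is not the full script that produced the tables.

```python
# Sketch: digital game price + incremental storage cost.
# Lexar 1TB price is the $335 figure quoted above; the SanDisk 256GB
# price is an illustrative assumption.
CARDS = {  # name: (price_usd, capacity_gb)
    "Lexar 1TB": (335.00, 1024),
    "SanDisk 256GB": (99.99, 256),
}


def storage_cost(card: str, game_size_gb: float) -> float:
    """The card's price-per-GB times the game's download size."""
    price, capacity = CARDS[card]
    return price / capacity * game_size_gb


def total_digital_price(card: str, game_price: float, game_size_gb: float) -> float:
    """Digital sticker price plus the incremental storage cost."""
    return game_price + storage_cost(card, game_size_gb)


# Example: a $60 digital game with a 20.6 GB download
for card in CARDS:
    print(f"{card}: ${total_digital_price(card, 60.00, 20.6):.2f}")
```

With the $335 Lexar 1TB price this reproduces the $66.74 Yoshi figure in the tables below; the physical cartridge needs no such storage line item, which is why the tables list it with no storage price.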
Mario Kart World
This is the game people think of for the Switch 2, and the $80 USD price tag across both digital and physical provided some sticker shock for many. I did not understand how the $60 USD standard across all games hung on for as long as it did.
The table below includes both the price of the game and the incremental price of storage (depending on which storage device you purchase) to compare the cost of physical versus digital.
| Edition | Storage | Total Price | Game Price | Storage Price | Game Size |
|---|---|---|---|---|---|
| Physical | Cartridge | $80.00 | $80.00 | --- | --- |
| Digital | Lexar 1TB (Costco) | $83.87 | $80.00 | $3.87 ($0.18/GB) | 22 GB |
| Digital | Lexar 512GB | $86.45 | $80.00 | $6.45 ($0.29/GB) | 22 GB |
| Digital | Lexar 1TB | $87.20 | $80.00 | $7.20 ($0.33/GB) | 22 GB |
| Digital | Lexar 256GB | $87.73 | $80.00 | $7.73 ($0.35/GB) | 22 GB |
| Digital | SanDisk 512GB | $87.73 | $80.00 | $7.73 ($0.35/GB) | 22 GB |
| Digital | SanDisk 256GB | $88.59 | $80.00 | $8.59 ($0.39/GB) | 22 GB |
| Digital | SanDisk 128GB | $92.03 | $80.00 | $12.03 ($0.55/GB) | 22 GB |
Yoshi and the Mysterious Book
Now we look at the first game with the new pricing structure in the USA: “Yoshi and the Mysterious Book”. The game is priced at $70 USD physically and $60 USD digitally. Compared to Mario Kart World, where all digital editions were more expensive than physical once storage costs were factored in, almost all digital editions are cheaper for Yoshi!
| Edition | Storage | Total Price | Game Price | Storage Price | Game Size |
|---|---|---|---|---|---|
| Physical | Cartridge | $70.00 | $70.00 | --- | --- |
| Digital | Lexar 1TB (Costco) | $63.62 | $60.00 | $3.62 ($0.18/GB) | 20.6 GB |
| Digital | Lexar 512GB | $66.04 | $60.00 | $6.04 ($0.29/GB) | 20.6 GB |
| Digital | Lexar 1TB | $66.74 | $60.00 | $6.74 ($0.33/GB) | 20.6 GB |
| Digital | Lexar 256GB | $67.24 | $60.00 | $7.24 ($0.35/GB) | 20.6 GB |
| Digital | SanDisk 512GB | $67.24 | $60.00 | $7.24 ($0.35/GB) | 20.6 GB |
| Digital | SanDisk 256GB | $68.05 | $60.00 | $8.05 ($0.39/GB) | 20.6 GB |
| Digital | SanDisk 128GB | $71.27 | $60.00 | $11.27 ($0.55/GB) | 20.6 GB |
MIO: Memories in Orbit
MIO is the cheapest game to date that is published on a non-“Game Key card” cartridge for the Switch 2 at $30 USD physically and $20 USD digitally. The game being only 4GB means the digital edition is much cheaper than the physical edition.
| Edition | Storage | Total Price | Game Price | Storage Price | Game Size |
|---|---|---|---|---|---|
| Physical | Cartridge | $30.00 | $30.00 | --- | --- |
| Digital | Lexar 1TB (Costco) | $20.77 | $20.00 | $0.77 ($0.18/GB) | 4.4 GB |
| Digital | Lexar 512GB | $21.29 | $20.00 | $1.29 ($0.29/GB) | 4.4 GB |
| Digital | Lexar 1TB | $21.44 | $20.00 | $1.44 ($0.33/GB) | 4.4 GB |
| Digital | Lexar 256GB | $21.55 | $20.00 | $1.55 ($0.35/GB) | 4.4 GB |
| Digital | SanDisk 512GB | $21.55 | $20.00 | $1.55 ($0.35/GB) | 4.4 GB |
| Digital | SanDisk 256GB | $21.72 | $20.00 | $1.72 ($0.39/GB) | 4.4 GB |
| Digital | SanDisk 128GB | $22.41 | $20.00 | $2.41 ($0.55/GB) | 4.4 GB |
Final Fantasy VII Remake Intergrade
And finally, we look at FF7 Remake Intergrade, which according to its Nintendo page is planned to be over 90GB total. This massive game size makes the price to store the game a significant percentage of the total price of the game.
| Edition | Storage | Total Price | Game Price | Storage Price | Game Size |
|---|---|---|---|---|---|
| Digital | Lexar 1TB (Costco) | $55.89 | $40.00 | $15.89 ($0.18/GB) | 90.4 GB |
| Digital | Lexar 512GB | $66.48 | $40.00 | $26.48 ($0.29/GB) | 90.4 GB |
| Digital | Lexar 1TB | $69.57 | $40.00 | $29.57 ($0.33/GB) | 90.4 GB |
| Digital | Lexar 256GB | $71.78 | $40.00 | $31.78 ($0.35/GB) | 90.4 GB |
| Digital | SanDisk 512GB | $71.78 | $40.00 | $31.78 ($0.35/GB) | 90.4 GB |
| Digital | SanDisk 256GB | $75.31 | $40.00 | $35.31 ($0.39/GB) | 90.4 GB |
| Digital | SanDisk 128GB | $89.44 | $40.00 | $49.44 ($0.55/GB) | 90.4 GB |
It will be interesting to see how the availability of new cartridge types changes whether companies use Game Key cards for their games. I suspect the pressure to use Game Key cards will remain high as the cost of storage continues to increase for companies and those costs cut into margins.
None of these tables include the benefits and downsides of each medium. Many digital game buyers like not having to worry about games being lost or stolen in transit, or having to physically store the boxes and cartridges. Many players may not need to increase their Switch 2 storage if they only play a handful of games. And who knows, maybe the price of storage will decrease in the future?
I hope this information helps you make an informed choice when selecting digital or physical Nintendo Switch 2 games in the future. Happy gaming!
Thanks for keeping RSS alive! ♥
April 17, 2026
Mike Driscoll
Textual – An Intro to DOM Queries (Part I)
In this article, you will learn how to query the DOM in Textual. You will discover that the DOM keeps track of all the widgets in your application. By running queries against the DOM, you can find widgets quickly and update them, too.
You will be learning the following topics related to the DOM:
- The query_one() method
- Textual queries
You will learn more in the second part of this series next week!
You will soon see the value of working with DOM queries and the power that these queries give you. Let’s get started!
The Query One Method
You will find the query_one() method throughout the Textual documentation and many Textual applications on GitHub. You may use query_one() to retrieve a single widget that matches a CSS selector or a widget type.
You can pass in up to two parameters to query_one():
- The CSS selector
- The widget type
- Or both at the same time
If you pass both, pass the CSS selector first, with the widget type as the second parameter.
Try some of this out. Open up your Python editor and create a file named query_input.py. Then enter this code in it:
# query_input.py
from textual.app import App, ComposeResult
from textual.widgets import Button, Input


class QueryInput(App):
    def compose(self) -> ComposeResult:
        yield Input()
        yield Button("Update Input")

    def on_button_pressed(self) -> None:
        input_widget = self.query_one(Input)
        new_string = f"You entered: {input_widget.value}"
        input_widget.value = new_string


if __name__ == "__main__":
    app = QueryInput()
    app.run()
Your code creates an Input and a Button widget. Enter some text in the Input widget and press the button. Your on_button_pressed() method will get called. You call query_one() and pass it an Input widget. Then, you update the returned Input widget’s value with a new string.
Here is what the application might look like:

Now, you will try writing a new piece of code where you use query_one() with a CSS selector. Create a new file called query_one_same_ids.py and use this code:
# query_one_same_ids.py
from textual.app import App, ComposeResult
from textual.widgets import Button, Label


class QueryApp(App):
    def compose(self) -> ComposeResult:
        yield Label("Press a button", id="label")
        yield Button("Test", id="button")

    def on_button_pressed(self) -> None:
        widget = self.query_one("#label")
        widget.update("You pressed the button!")


if __name__ == "__main__":
    app = QueryApp()
    app.run()
In this example, you create two widgets with different IDs. Then you use query_one() to select the Label widget and update its text.
If you call query_one() and there are no matches, you will get a NoMatches exception. On the other hand, if there is more than one match, the method will return the first item that does match.
What will the following code do if you put it in your example above?
self.query_one("#label", Button)
If you guessed that Textual will raise an exception, you should congratulate yourself. You have good intuition! If the widget matches the CSS selector but not the widget type, then you will get a WrongType exception raised.
Textual Queries
Textual has more than one way to query the DOM. You may also use the query() method, which you can use to query or find multiple widgets. When you call query(), it will return a DOMQuery object, which behaves as a list-like container of widgets.
You can see how this works by writing some code. Create a new Python file named query_all.py and add this code to it:
# query_all.py
from textual.app import App, ComposeResult
from textual.widgets import Button, Label


class QueryApp(App):
    def compose(self) -> ComposeResult:
        yield Label("Press a button", id="label")
        yield Button("Test", id="button")

    def on_button_pressed(self) -> None:
        widgets = self.query()
        s = ""
        for widget in widgets:
            s += f"{widget}\n"
        label = self.query_one("#label")
        label.update(s)


if __name__ == "__main__":
    app = QueryApp()
    app.run()
The idea is to get all the widgets in your application and print them out. Of course, you can't print to stdout while a Textual application is controlling the terminal, so instead you build a string of widgets separated by newlines and update the Label widget with it.
Here is an example of what you might get if you run the code and press the button on your machine:
[Screenshot: the label lists all the widgets in the app]
You might be surprised by that output. Perhaps you thought you would only see a Label and a Button widget in that list? If so, you forgot that a Screen widget is always lurking in the background. But there are also two more: a ToastRack and a Tooltip widget. These come with all your applications. The ToastRack positions Toast widgets, which you use to display a notification message. A Tooltip is a message that appears when you hover your mouse over a widget.
You do not need to know more about those extra widgets now.
Also note that all query methods can be used on both the App and Widget subclasses, which is very handy.
You can use CSS selectors with query() in much the same way as you can with query_one(). The difference, of course, is that query() always returns an iterable DOMQuery.
Let’s pretend you want to get all the Button widgets in your application and iterate over them. Create a new Python script called query_buttons.py with this code:
# query_buttons.py
from textual.app import App, ComposeResult
from textual.widgets import Button, Label


class QueryApp(App):
    def compose(self) -> ComposeResult:
        yield Label("Press a button", id="label")
        yield Button("One", id="one")
        yield Button("Two", id="two")
        yield Button("Three")

    def on_button_pressed(self) -> None:
        s = ""
        for widget in self.query("Button"):
            s += f"{widget}\n"
        label = self.query_one("#label")
        label.update(s)


if __name__ == "__main__":
    app = QueryApp()
    app.run()
Here you are passing the string "Button" to query(). If you were using query_one(), you could pass the Button type directly instead. Regardless, when you run this code and press a button, you will see the following:
[Screenshot: the label lists the three Button widgets]
That worked great! This time, you queried the DOM and returned all the Button widgets in your application.
What if you wanted to find all the disabled buttons in your code? You can disable a widget by setting its disabled attribute, and you can mark disabled widgets with a disabled CSS class. To find the widgets with that class, you would update the query like this: widgets = self.query("Button.disabled").
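To see what a "Type.class" selector like "Button.disabled" is doing, here is a plain-Python sketch of the matching logic. Again, this is a toy model for intuition, not Textual's real selector engine:

```python
# Toy model of query() with a "Type.class" selector -- not Textual's real code.
class Widget:
    def __init__(self, classes=()):
        self.classes = set(classes)


class Label(Widget): ...
class Button(Widget): ...


def query(widgets, selector):
    # Split "Button.disabled" into a type name and required CSS classes.
    type_name, *classes = selector.split(".")
    return [
        w for w in widgets
        if (not type_name or type(w).__name__ == type_name)
        and set(classes) <= w.classes
    ]


widgets = [
    Button(classes={"disabled"}),
    Button(),
    Label(classes={"disabled"}),
]
print(len(query(widgets, "Button.disabled")))  # 1
```

Note that a bare ".disabled" selector (empty type name) would match both the Button and the Label carrying that class, while "Button.disabled" requires both the type and the class.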
Textual's query objects also provide a results() method, which offers an alternative way of iterating over the widgets. For example, you can use results() to rewrite the disabled-button query above like this:
widgets = self.query(".disabled").results(Button)
s = ""
for widget in widgets:
    s += f"{widget}\n"
This code combines the last example query with the last full code example. Although this latter version is more verbose, you might find it easier to read than the original query for disabled widgets.
Another benefit of using results() is that Python type checkers, such as Mypy, can use it to infer the widget type in the loop. Without results(), Mypy only knows that you are looping over Widget objects rather than Button objects.
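The reason type checkers can narrow the loop variable is that a results()-style method is generic over the expected type. Here is a hedged sketch of that pattern in plain Python (not Textual's implementation): because the function is annotated as returning an iterator of the type you pass in, Mypy infers Button for the loop variable instead of Widget.

```python
# Sketch of why a results()-style method helps type checkers.
from typing import Iterator, TypeVar

W = TypeVar("W", bound="Widget")


class Widget: ...
class Button(Widget): ...
class Label(Widget): ...


def results(widgets: list[Widget], expect_type: type[W]) -> Iterator[W]:
    # Annotating the return as Iterator[W] lets a type checker infer
    # the loop variable as Button rather than plain Widget.
    for w in widgets:
        if isinstance(w, expect_type):
            yield w


widgets: list[Widget] = [Button(), Label(), Button()]
buttons = list(results(widgets, Button))
print(len(buttons))  # 2
```

The isinstance() filter also means the function only yields widgets of the requested type at runtime, so the static and dynamic behavior agree.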
Wrapping Up
You learned the basics of using Textual’s DOM query methods in this article. You can use these query methods to access one or more widgets in your user interface.
Specifically, you learned about the following:
- The query_one() method
- Textual queries with query() and results()
Textual is a great way to create a user interface with Python. You should check it out today!
The post Textual – An Intro to DOM Queries (Part I) appeared first on Mouse Vs Python.
Real Python
The Real Python Podcast – Episode #291: Reassessing the LLM Landscape & Summoning Ghosts
What are the current techniques being employed to improve the performance of LLM-based systems? How is the industry shifting from post-training towards context engineering and multi-agent orchestration? This week on the show, Jodie Burchell, data scientist and Python Advocacy Team Lead at JetBrains, returns to discuss the current AI coding landscape.