Planet Python
Last update: May 17, 2025 04:42 PM UTC
May 16, 2025
Real Python
The Real Python Podcast – Episode #249: Going Beyond requirements.txt With pylock.toml and PEP 751
What is the best way to record the Python dependencies for the reproducibility of your projects? What advantages will lock files provide for those projects? This week on the show, we welcome back Python Core Developer Brett Cannon to discuss his journey to bring PEP 751 and the pylock.toml file format to the community.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Django Weblog
Our Google Summer of Code 2025 contributors
We’re excited to introduce our Google Summer of Code 2025 contributors!
These amazing folks will be working on impactful projects that will shape Django’s future. Meet the contributors 👇
A. Rafey Khan
Project: Django Admin – Add Keyboard Shortcuts & Command Palette. Mentors: Tom Carrick, Apoorv Garg
Rafey will work on making Django Admin faster and more accessible through keyboard-driven workflows. Excited to see this land!
Farhan Ali Raza
Project: Bring django-template-partials into core. Mentor: Carlton Gibson
Farhan will be enhancing Django’s template system by adding first-class support for partials—making componentized templates easier than ever.
Saurabh K
Project: Automate processes within Django’s contribution workflow. Mentor: Lily Foote
Saurabh will work on streamlining how contributors interact with the Django repo—automating repetitive tasks and improving the dev experience for all.
A huge shoutout to our mentors (and Org Admin Bhuvnesh Sharma) and the broader Django community for supporting these contributors! 💚
Let’s make this a summer of learning, building, and collaboration.
Daniel Roy Greenfeld
Farewell to Michael Ryabushkin
Michael Ryabushkin and I met around 2011-2012 through Python community work. I don't remember how we met; instead, I remember his presence suddenly being there, helping and aiding others.
Michael could be pushy. He was trying to help people reach their full potential. His energy and humor were relentless; I admired his tenacity and giving nature.
While our coding preferences usually clashed, sometimes they matched. Then we would rant together about some tiny detail. Those talks, plus the silly Tai Chi dance we did, are lovely memories I have of Michael.
In 2016 my wife Audrey had emergency surgery. For me that meant sleepless days taking care of her. Suddenly Michael's presence was there. He took shifts, ran errands (including buying a wheelchair), and forced me to sleep. I am forever grateful to Michael for what he did for us.
In early 2020 Audrey and I got last minute approval to use a large conference space to organize an event called PyBeach. Michael heard about it and as always, suddenly his presence was there. He was not just a volunteer at large, but leading the conference with us. Michael and I had our shared code rants, did our silly Tai Chi dance, and he met our baby daughter.
Between the pandemic and us moving from the Los Angeles area I didn't get the chance to see Michael again. I'll miss our rants, our silly Tai Chi dance, and his sudden appearances.
SoCal Python has created a memorial page in Michael's honor.
Brett Cannon
Unravelling t-strings
PEP 750 introduced t-strings for Python 3.14. In fact, they are so new that as of Python 3.14.0b1 there isn't any documentation for t-strings yet. 😅 As such, this blog post will hopefully help explain what exactly t-strings are and what you might use them for by unravelling the syntax and briefly talking about potential uses for t-strings.
What are they?
I like to think of t-strings as a syntactic way to expose the parser used for f-strings. I'll explain later what that might be useful for, but for now let's see exactly what t-strings unravel into.
Let's start with an example by trying to use t-strings to mostly replicate f-strings. We will define a function named f_yeah() which takes a t-string and returns what an equivalent f-string would have produced (e.g. f"{42}" == f_yeah(t"{42}")). Here is the example we will be working with and slowly refining:
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
return t_string
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
actual = f_yeah(expected)
assert actual == expected
As of right now, f_yeah() is just the identity function which takes the actual result of an f-string, which is pretty boring and useless. So let's parse what the t-string would be into its constituent parts:
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
return "".join(t_string)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = [
"Hello, ",
"world",
"! Conversions like ",
"&aposworld&apos",
" and format specs like ",
"world ",
" work!",
]
actual = f_yeah(parsed)
assert actual == expected
Here we have split the f-string output into a list of the string parts that make it up, joining it all together with "".join(). This is actually what the bytecode for f-strings does once it has converted everything in the replacement fields – i.e. what's in the curly braces – into strings.
But this is still not that interesting. We can definitely parse out more information.
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
return "".join(t_string)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = [
"Hello, ",
name,
"! Conversions like ",
repr(name),
" and format specs like ",
format(name, "<6"),
" work!",
]
actual = f_yeah(parsed)
assert actual == expected
Now we have substituted the string literals we had for the replacement fields with what Python does behind the scenes with conversions like !r and format specs like :<6. As you can see, there are effectively three parts to handling a replacement field:
- Evaluating the Python expression
- Applying any specified conversion (let's say the default is None)
- Applying any format spec (let's say the default is "")
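To make those three steps concrete, here is what a single replacement field desugars to in plain Python (standard f-string semantics, nothing t-string specific):

name = "world"
# 1. evaluate the expression, 2. apply the !r conversion (repr),
# 3. apply the <10 format spec via format()
assert f"{name!r:<10}" == format(repr(name), "<10")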
So let's get our "parser" to separate all of that out for us into a tuple of 3 items: value, conversion, and format spec. That way we can have our f_yeah() function handle the actual formatting of the replacement fields.
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
converters = {func.__name__[0]: func for func in (str, repr, ascii)}
converters[None] = str
parts = []
for part in t_string:
match part:
case (value, conversion, format_spec):
parts.append(format(converters[conversion](value), format_spec))
case str():
parts.append(part)
return "".join(parts)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = [
"Hello, ",
(name, None, ""),
"! Conversions like ",
(name, "r", ""),
" and format specs like ",
(name, None, "<6"),
" work!",
]
actual = f_yeah(parsed)
assert actual == expected
Now we have f_yeah() taking the value from the expression of the replacement field, applying the appropriate conversion, and then passing that on to format(). This gives us a more useful parsed representation! Since we have the string representation of the expression, we might as well keep that around even if we don't use it in our example (parsers typically don't like to throw information away).
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
converters = {func.__name__[0]: func for func in (str, repr, ascii)}
converters[None] = str
parts = []
for part in t_string:
match part:
case (value, _, conversion, format_spec):
parts.append(format(converters[conversion](value), format_spec))
case str():
parts.append(part)
return "".join(parts)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = [
"Hello, ",
(name, "name", None, ""),
"! Conversions like ",
(name, "name", "r", ""),
" and format specs like ",
(name, "name", None, "<6"),
" work!",
]
actual = f_yeah(parsed)
assert actual == expected
The next thing we want is for our parsed output to be a bit easier to work with. A 4-item tuple is a bit unwieldy, so let's define a class named Interpolation that will hold all the relevant details of the replacement field.
class Interpolation:
__match_args__ = ("value", "expression", "conversion", "format_spec")
def __init__(
self,
value,
expression,
conversion=None,
format_spec="",
):
self.value = value
self.expression = expression
self.conversion = conversion
self.format_spec = format_spec
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
converters = {func.__name__[0]: func for func in (str, repr, ascii)}
converters[None] = str
parts = []
for part in t_string:
match part:
case Interpolation(value, _, conversion, format_spec):
parts.append(format(converters[conversion](value), format_spec))
case str():
parts.append(part)
return "".join(parts)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = [
"Hello, ",
Interpolation(name, "name"),
"! Conversions like ",
Interpolation(name, "name", "r"),
" and format specs like ",
Interpolation(name, "name", format_spec="<6"),
" work!",
]
actual = f_yeah(parsed)
assert actual == expected
That's better! Now we have an object-oriented structure to our parsed replacement field, which is easier to work with than the 4-item tuple we had before. We can also extend this object-oriented organization to the list we have been using to hold all the parsed data.
class Interpolation:
__match_args__ = ("value", "expression", "conversion", "format_spec")
def __init__(
self,
value,
expression,
conversion=None,
format_spec="",
):
self.value = value
self.expression = expression
self.conversion = conversion
self.format_spec = format_spec
class Template:
def __init__(self, *args):
# There will always be N+1 strings for N interpolations;
# that may mean inserting an empty string at the start or end.
strings = []
interpolations = []
if args and isinstance(args[0], Interpolation):
strings.append("")
for arg in args:
match arg:
case str():
strings.append(arg)
case Interpolation():
interpolations.append(arg)
if args and isinstance(args[-1], Interpolation):
strings.append("")
self._iter = args
self.strings = tuple(strings)
self.interpolations = tuple(interpolations)
@property
def values(self):
return tuple(interpolation.value for interpolation in self.interpolations)
def __iter__(self):
return iter(self._iter)
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
converters = {func.__name__[0]: func for func in (str, repr, ascii)}
converters[None] = str
parts = []
for part in t_string:
match part:
case Interpolation(value, _, conversion, format_spec):
parts.append(format(converters[conversion](value), format_spec))
case str():
parts.append(part)
return "".join(parts)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = Template(
"Hello, ",
Interpolation(name, "name"),
"! Conversions like ",
Interpolation(name, "name", "r"),
" and format specs like ",
Interpolation(name, "name", format_spec="<6"),
" work!",
)
actual = f_yeah(parsed)
assert actual == expected
And that's t-strings! We parsed f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!" into Template("Hello, ", Interpolation(name, "name"), "! Conversions like ", Interpolation(name, "name", "r"), " and format specs like ", Interpolation(name, "name", format_spec="<6"), " work!"). We were then able to use our f_yeah() function to convert the t-string into what an equivalent f-string would have looked like. The code to test this in Python 3.14 with an actual t-string is the following (PEP 750 has its own version of converting a t-string to an f-string, which greatly inspired my example):
from string import templatelib
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
converters = {func.__name__[0]: func for func in (str, repr, ascii)}
converters[None] = str
parts = []
for part in t_string:
match part:
case templatelib.Interpolation(value, _, conversion, format_spec):
parts.append(format(converters[conversion](value), format_spec))
case str():
parts.append(part)
return "".join(parts)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = t"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
actual = f_yeah(parsed)
assert actual == expected
What are t-strings good for?
As I mentioned earlier, I view t-strings as a syntactic way to get access to the f-string parser. So, what do you usually use a parser with? The stereotypical thing is compiling something. Since we are dealing with strings here, what are some common strings you "compile"? The most common answers are things like SQL statements and HTML: things that require some processing of what you pass into a template to make sure something isn't going to go awry. That suggests that you could have a sql() function that takes a t-string and compiles a SQL statement that avoids SQL injection attacks. Same goes for HTML and JavaScript injection attacks.
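As a minimal sketch of that idea for HTML (assuming Python 3.14's string.templatelib; safe_html() is a hypothetical name, not an API from the PEP):

import html
from string import templatelib

def safe_html(template):
    """Render a t-string, HTML-escaping only the interpolated values."""
    parts = []
    for part in template:
        match part:
            case templatelib.Interpolation(value, _, _, _):
                # Dynamic values are escaped; literal template text is trusted.
                parts.append(html.escape(str(value)))
            case str():
                parts.append(part)
    return "".join(parts)

# Usage: user_input = "<script>alert('hi')</script>"
# safe_html(t"<p>{user_input}</p>") returns the markup escaped, not executed.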
Add in logging and you get the common examples. But I suspect that the community is going to come up with some interesting uses of t-strings and their parsed data (e.g. PEP 787 and using t-strings to create the arguments to subprocess.run())!
May 15, 2025
First Institute of Reliable Software
New Template Strings in Python 3.14
Template strings (t-strings) are a new syntax in Python 3.14 that defers interpolation. The post includes an explanation, examples, and how to mask secret data in output, plus how to install Python 3.14 to test the new functionality.
Django Weblog
Our new accessibility statement
Happy Global Accessibility Awareness Day! We thought this would be a fitting occasion to announce our brand new Django accessibility statement 🎉
Did you know that according to the WebAIM Million survey, 94.6% of sites have easily-detectable accessibility issues? We all need to work together to build a more inclusive web (also check out our diversity statement if you haven’t already!). There are accessibility gaps in Django itself too. This statement improves transparency, and clearly states our intentions. And we hope it encourages our community and the industry at large to more widely consider accessibility.
How to use this statement
Read it, share it with your friends, or in a procurement context!
- Use it to understand where there are gaps in Django that need to be addressed on projects.
- And opportunities to contribute to Django and related projects ❤️
- Factor it into legal compliance. For example with the European Accessibility Act. Starting June 2025, accessibility becomes a legal requirement for large swaths of the private sector in the European Union.
- Share it with venues for Django events to demonstrate the importance of accessibility for their competitiveness.
How you can help
Take a moment to provide any feedback you might have about the statement on the Django Forum. Let us know if you would prefer additional reporting like an ATAG audit, or VPAT, ACR, or any other acronym. Let us know if you’d like to contribute to the accessibility of the Django community! 🫶
Ned Batchelder
PyCon summer camp
I’m headed to PyCon today, and I’m reminded about how it feels like summer camp, in mostly good ways, but also in a tricky way.
You take some time off from your “real” life, you go somewhere else, you hang out with old friends and meet some new friends. You do different things than in your real life, some are playful, some take real work. These are all good ways it’s like summer camp.
Here’s the tricky thing to watch out for: like summer camp, you can make connections to people or projects that are intense and feel like they could last forever. You make friends at summer camp, or even have semi-romantic crushes on people. You promise to stay in touch, you think it’s the “real thing.” When you get home, you write an email or two, maybe a phone call, but it fades away. The excitement of the summer is overtaken by your autumnal real life again.
PyCon can be the same way, either with people or projects. Not a romance, but the exciting feeling that you want to keep doing the project you started at PyCon, or be a member of some community you hung out with for those days. You want to keep talking about that exciting thing with that person. These are great feelings, but it’s easy to emotionally over-commit to those efforts and then have it fade away once PyCon is over.
How do you know what projects are just crushes, and which are permanent relationships? Maybe it doesn’t matter, and we should just get excited about things.
I know I started at least one effort last year that I thought would be done in a few months, but has since stalled. Now I am headed back to PyCon. Will I become attached to yet more things this time? Is that bad? Should I temper my enthusiasm, or is it fine to light a few fires and accept that some will peter out?
Zato Blog
Using Oracle Database from Python and Zato Services
Using Oracle Database from Python and Zato Services
Overview
Oracle Database remains a cornerstone of enterprise IT, powering mission-critical applications around the world. Integrating Oracle with Python unlocks automation, reporting, and API-based workflows. In this article, you'll learn how to:
- Connect to Oracle Database from Python
- Use Oracle Database in Zato services
- Execute SQL queries and call stored procedures
- Understand the underlying SQL objects
All examples are based on real-world use cases and follow best practices for security and maintainability.
Why Use Oracle Database from Python?
Python is a popular language for automation, integration, and data processing. By connecting Python to Oracle Database, you can:
- Automate business processes
- Build APIs that interact with enterprise data
- Run analytics and reporting jobs
- Integrate with other systems using Zato
Using Oracle Database in Zato Services
SQL connections are configured in the Dashboard, and you can use them directly in your service code.
In all the services below, the logic is split into several dedicated services, each responsible for a specific operation. This separation improves clarity, reusability, and maintainability.
Setting Up: Oracle Database Objects
First, let's start with the basic SQL objects used in our examples:
-- Users table (schema assumed from the sample data below; not shown in the original post)
CREATE TABLE users (
    user_id  NUMBER PRIMARY KEY,
    username VARCHAR2(50)
);

-- Sample data
INSERT INTO users (user_id, username) VALUES (1, 'john_doe');
INSERT INTO users (user_id, username) VALUES (2, 'jane_smith');
INSERT INTO users (user_id, username) VALUES (3, 'bob_jones');
-- Stored procedure: process_data
CREATE OR REPLACE PROCEDURE process_data (
input_num IN NUMBER,
input_str IN VARCHAR2,
output_num OUT NUMBER,
output_str OUT VARCHAR2
)
AS
BEGIN
output_num := input_num * 2;
output_str := 'Input was: ' || input_str;
END process_data;
/
-- Stored procedure: get_users
CREATE OR REPLACE PROCEDURE get_users (
recordset OUT SYS_REFCURSOR
)
AS
BEGIN
OPEN recordset FOR
SELECT user_id, username
FROM users
ORDER BY user_id;
END get_users;
/
1. Querying All Users
This service retrieves all users from the users table.
# -*- coding: utf-8 -*-
from zato.server.service import Service
class GetAllUsers(Service):
""" Service to retrieve all users from the database.
"""
def handle(self):
# Obtain a reference to the configured Oracle Database connection
conn = self.out.sql['My Oracle DB']
# Define the SQL query to select all rows from the users table
query = 'select * from users'
# Execute the query; returns a list of dictionaries, one per row
response = conn.execute(query)
# Set the service response to the query result
self.response.payload = response
The response is a list of dictionaries, one per row, for example:
[
  {"user_id": 1, "username": "john_doe"},
  {"user_id": 2, "username": "jane_smith"},
  {"user_id": 3, "username": "bob_jones"}
]
Explanation:
- The service connects to Oracle using the configured connection.
- It executes a simple SQL query to fetch all user records.
- The result is returned as the service response payload.
2. Querying a Specific User by ID
This service fetches a user by their user_id using a parameterized query. There are multiple ways to retrieve results depending on whether you expect one or many rows.
# -*- coding: utf-8 -*-
from zato.server.service import Service
class GetUserById(Service):
""" Service to fetch a user by their user_id.
"""
def handle(self):
# Get the Oracle Database connection from the pool
conn = self.out.sql['My Oracle DB']
# Parameterized SQL to prevent injection
query = 'select * from users where user_id = :user_id'
# In a real service, this would be read from incoming JSON
params = {'user_id': 1}
# Execute the query with parameters; returns a list
response = conn.execute(query, params)
# Set the result as the service's response
self.response.payload = response
Explanation:
- The service expects user_id in the request payload.
- It uses a parameterized query to prevent SQL injection.
- The result is always a list, even if only one row matches.
3. Calling a Stored Procedure with Input and Output Parameters
This service demonstrates how to call an Oracle stored procedure that takes input values and returns output values.
# -*- coding: utf-8 -*-
# Zato
from zato.common.oracledb import NumberIn, NumberOut, StringIn, StringOut
from zato.server.service import Service
class CallProcessData(Service):
""" Service to call a stored procedure with input/output params.
"""
def handle(self):
# Obtain Oracle Database connection
conn = self.out.sql['My Oracle DB']
# Prepare input parameter for NUMBER
in_num = NumberIn(333)
# Prepare input parameter for VARCHAR2
in_str = StringIn('Hello')
# Prepare output parameter for NUMBER (will be written to by the procedure)
out_num = NumberOut()
# Prepare output parameter for VARCHAR2, optionally specifying the max buffer size
out_str = StringOut(size=200)
# Build the parameter list in the order expected by the procedure
params = [in_num, in_str, out_num, out_str]
# Call the stored procedure with the parameters
response = conn.callproc('process_data', params)
# Return the output values as a dictionary in the response
self.response.payload = {
'output_num': out_num.get(),
'output_str': out_str.get()
}
Explanation:
- The service prepares input and output parameters using helper classes.
- It calls the process_data procedure with both input and output arguments.
- The result includes both output values, returned as a dictionary.
- Note that you always need to provide the parameters for the procedure in the same order as they were declared in the procedure itself.
4. Calling a Procedure Returning Multiple Rows
This service calls a procedure that returns a set of rows (a cursor) and collects the results.
# -*- coding: utf-8 -*-
from zato.common.oracledb import RowsOut
from zato.server.service import Service
class CallGetUsers(Service):
""" Service to call a procedure returning a set of rows.
"""
def handle(self):
# Get Oracle Database connection
conn = self.out.sql['My Oracle DB']
# Prepare a RowsOut object to receive the result set
rows_out = RowsOut()
# Build parameter list for the procedure
params = [rows_out]
# Call the procedure, populating rows
conn.callproc('get_users', params)
# Convert the cursor results to a list of rows
rows = list(rows_out.get())
# Return the list as the service response
self.response.payload = rows
Explanation:
- The service prepares a RowsOut object to receive the rows into; that is, the procedure will write rows into this object.
- It calls the get_users procedure, which populates the rows.
- You call rows_out.get() to get the actual rows from the database.
- The rows are converted to a list and returned as the payload.
5. Returning a Single Object
When you know your query will return a single row, you can use conn.one or conn.one_or_none for more predictable results:
# -*- coding: utf-8 -*-
from zato.server.service import Service
class GetSingleUserById(Service):
""" # Service to fetch exactly one user or raise if not found/ambiguous.
"""
def handle(self):
# Get the Oracle Database connection
conn = self.out.sql['My Oracle DB']
# Parameterized SQL query
query = 'select * from users where user_id = :user_id'
# In a real service, this would be read from incoming JSON
params = {'user_id': 1}
# conn.one returns a dict if exactly one row, else raises (zero or multiple rows)
result = conn.one(query, params)
# Return the single user as the response
self.response.payload = result
class GetSingleUserOrNoneById(Service):
""" Service to fetch one user, None if not found, or raise an Exception if ambiguous.
"""
def handle(self):
# Get Oracle Database connection
conn = self.out.sql['My Oracle DB']
# SQL with named parameter
query = 'select * from users where user_id = :user_id'
# Extract user_id from payload
params = {'user_id': 1}
# conn.one_or_none returns a dict if one row, None if zero, raises if multiple rows
result = conn.one_or_none(query, params)
# Return dict or None
self.response.payload = result
Explanation:
- conn.one(query, params) returns a single row as a dictionary if exactly one row is found. If no rows or more than one row are returned, it raises an exception.
- conn.one_or_none(query, params) returns a single row as a dictionary if one row is found and None if no rows are found, but still raises an exception if more than one row is returned.
- Use these methods when you expect either exactly one or zero/one results and want to handle them cleanly.
Key Concepts Explained
- Connection Management: Zato handles connection pooling and configuration for you. Use self.out.sql['My Oracle DB'] to get a ready-to-use connection.
- Parameterized Queries: Always use parameters (e.g., :user_id) to avoid SQL injection and improve code clarity.
- Calling Procedures: Use helper classes (NumberIn, StringIn, NumberOut, StringOut, RowsOut) for input/output arguments and recordsets.
- Service Separation: Each service is focused on a single responsibility, making code easier to test and reuse.
Security and Best Practices
- Always use parameterized queries for user input.
- Manage credentials and connection strings securely (never hardcode them in source code).
- Handle exceptions and database errors gracefully in production code.
- Use connection pooling (Zato does this for you) for efficiency.
Summary
Integrating Oracle Database with Python and Zato services gives you powerful tools for building APIs, automating workflows, and connecting enterprise data sources.
Whether you need to run queries, call stored procedures, or expose Oracle data through REST APIs, Zato provides a robust and Pythonic way to do it.
More resources
➤ Python API integration tutorials
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
➤ Open-source iPaaS in Python
Erik Marsja
Pandas: Drop Columns By Name in DataFrames
This blog post will cover how to use Pandas to drop columns by name, both from a single DataFrame and from multiple DataFrames. This is a common task when working with large datasets in Python, especially when you want to clean your data or remove unnecessary information. We have previously looked at how to drop duplicated rows in a Pandas DataFrame, and now we will focus on dropping columns by name.
Table of Contents
- How to use Pandas to drop Columns by Name from a Single DataFrame
- Dropping Multiple Columns by Name in a Single DataFrame
- Dropping Columns from Multiple Pandas DataFrames
- Dropping Columns Conditionally from a Pandas DataFrame Based on Their Names
- Summary
- Resources
How to use Pandas to drop Columns by Name from a Single DataFrame
The simplest scenario is when we have a single DataFrame and want to drop one or more columns by their names. We can do this easily using the Pandas drop() function. Here is an example:
import pandas as pd
# Create a simple DataFrame
df = pd.DataFrame({
'A': [1, 2, 3],
'B': [4, 5, 6],
'C': [7, 8, 9]
})
# Drop column 'B' by name
df = df.drop(columns=['B'])
print(df)
In the code chunk above, we drop column ‘B’ from the DataFrame df using the drop() function. We specify the column to remove by name within the columns parameter. The operation returns a new DataFrame with the ‘B’ column removed, and the result is assigned back to df.

Compare it to the original dataframe before column ‘B’ was dropped:

Dropping Multiple Columns by Name in a Single DataFrame
If we must drop multiple columns simultaneously, we can pass a list of column names to the drop() function. Here is how we can remove multiple columns from a DataFrame:
# Drop columns 'A' and 'C'
df = df.drop(columns=['A', 'C'])
print(df)
In the code above, we removed both columns ‘A’ and ‘C’ from the DataFrame by specifying them in a list. The resulting DataFrame only contains the column ‘B’. Here is the result:

Dropping Columns from Multiple Pandas DataFrames
When working with multiple DataFrames, we might want to drop the same columns by name. We can achieve this by iterating over our DataFrames and applying the drop() function to each one.
# Create two DataFrames
df1 = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})
df2 = pd.DataFrame({'A': [10, 11, 12], 'B': [13, 14, 15], 'C': [16, 17, 18]})
# List of DataFrames
dfs = [df1, df2]
# Drop column 'B' from all DataFrames
dfs = [df.drop(columns=['B']) for df in dfs]
# Print the result
for df in dfs:
print(df)
In the code chunk above, we first added our two DataFrames, df1 and df2, to a list called dfs to efficiently perform operations on multiple DataFrames at once. Then, using a list comprehension, we drop column ‘B’ from each DataFrame in the list by applying the drop() function to each one. The result is a new list of DataFrames with the ‘B’ column removed from each.

Dropping Columns Conditionally from a Pandas DataFrame Based on Their Names
In some cases, we might not know in advance which columns we want to drop but wish to drop columns based on specific conditions. For instance, we might want to drop all columns that contain a particular string or pattern in their name.
# Drop columns whose names contain the letter 'A'
df = df.drop(columns=[col for col in df.columns if 'A' in col])
print(df)
In the code above, we used a list comprehension to identify columns whose names contain the letter ‘A’. We then dropped these columns from the DataFrame.
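Assuming we start again from the original three-column DataFrame, only the columns without an ‘A’ in their name remain:

   B  C
0  4  7
1  5  8
2  6  9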
Summary
In this post, we covered several ways to drop columns by name in Pandas, both in a single DataFrame and across multiple DataFrames. We demonstrated how to remove specific columns, drop multiple columns at once, and even apply conditions for column removal. These techniques are essential for data cleaning and preparation in Python, especially when working with large datasets. By mastering these methods, you can handle your data more efficiently and streamline your data manipulation tasks.
Feel free to share this post if you found it helpful, and leave a comment below if you would like me to cover other aspects of pandas or data manipulation in Python!
Resources
Here are some more Pandas-related tutorials:
- Pandas Tutorial: Renaming Columns in Pandas Dataframe
- A Basic Pandas Dataframe Tutorial for Beginners
- Six Ways to Reverse Pandas dataframe
The post Pandas: Drop Columns By Name in DataFrames appeared first on Erik Marsja.
Django Weblog
DjangoCon Europe and beyond
Credit: DjangoCon Europe 2025 organizers
We had a blast at DjangoCon Europe 2025, and hope you did too! Events like this are essential for our community, delighting both first-timers and seasoned Djangonauts with insights, good vibes, and all-around inspiration. This year’s conference brought together brilliant minds from all corners of the globe. And featured early celebrations of Django’s 20th birthday! ⭐️🎂🎉
After launching in 2005, Django turns 20 in 2025, and the conference was a great occasion for our community to celebrate this. And work on the sustainability of the project together.
We need more code reviews
Our Django Fellow Sarah Boyce kicked off the conference with a call for more contributions – of the reviewing kind. In her words,
Django needs your help. Every day, contributors submit pull requests and update existing PRs, but there aren't enough reviewers to keep up. Learn why Django needs more reviewers and how you can help get changes merged into core.
We need more fundraising
Our Vice President Sarah Abderemane got on stage to encourage more financial support of Django from attendees, showcasing how simple it is to donate to the project (get your boss to do it!). We have ambitious plans for 2025, which will require us to grow the Foundation’s budget accordingly.
Annual meeting of DSF Members
Our Board members Tom Carrick, Thibaud Colas, Sarah Abderemane, and Paolo Melchiorre were at the conference to organize a meeting of Members of the Django Software Foundation. This was a good occasion to discuss long-standing topics, and issues of the moment, such as:
- Diversity, equity and inclusion. Did you know we recently got awarded the CHAOSS DEI bronze badge? We need to keep the momentum in this area.
- Management of the Membership at the Foundation. With different visions on how much the membership is a recognition or a commitment (or both). There was interest in particular in sharing more calls to action with members.
- Content of the website. A long-standing area for improvement (which we’re working on!)
All in all this was a good opportunity for further transparency, and to find people who might be interested in contributing to those areas of our work in the future.
Birthday celebrations
There was a cake (well, three!). Candles to blow out. And all-around great vibes and smiles, with people taking pictures and enjoying specially-made Django stickers!
Up next
We have a lot more events coming up this year where the Foundation will be present, and bringing celebrations of Django’s 20th birthday!
PyCon US 2025
It’s on, now! And we’re present, with a booth. Come say hi! There will be Django stickers available.
PyCon Italia 2025
Some of the PyCon Italia team was there at DjangoCon Europe to hype up their event – and we’ll definitely be there in Bologna! They promised better coffee 👀, and this will have to be independently verified. Check out their Djangonauts at PyCon Italia event.
EuroPython 2025
We got to meet up with some of the EuroPython crew at DjangoCon Europe too, and we’ll definitely be at the conference, as one of their EuroPython community partners 💚. There may well be birthday cake there too, get your tickets!
Django events
And if you haven’t already, be sure to check out our next flagship Django events!
Thank you to everyone who joined us at DjangoCon Europe, and thank you to the team behind the conference in particular ❤️. DjangoCon Europe continues to show the strength and warmth of our community, proving that the best part of Django is truly the people. See you at the next one!
PS: if you’re in Europe and like organizing big events, do reach out to talk about organizing a DjangoCon Europe in your locale in the coming years.
May 14, 2025
Hugo van Kemenade
PEPs & Co.
PEPs #
Here’s Barry Warsaw on the origin of PEPs, or Python Enhancement Proposals (edited from PyBay 2017):
I like backronyms. For those who don’t know: a backronym is where you come up with the acronym first and then you come up with the thing that the acronym stands for. And I like funny sounding words, like FLUFL was one of those. When we were working for CNRI, they also ran the IETF conferences. The IETF is the Internet Engineering Task Force, and they’re the ones who come up with the RFCs. If you look at RFC 822, it defines what an email message looks like.
We got to a point, because we were at CNRI we were more intimately involved in the IETF and how they do standards and things, we observed at the time that there were so many interesting ideas coming in being proposed for Python that Guido really just didn’t have time to dive into the details of everything.
So I thought: well, we have this RFC process, let’s try to mirror some of that so that we can capture the essence of an idea in a document that would serve as a point of discussion, and that Guido could let people discuss and then come in and read the summary of the discussion.
And I was just kind of thinking: well, PEPs, that’s kind of peppy, it’s kind of a funny sounding word. I came up with the word and then I backronymed it into Python Enhancement Proposal. And then I wrote PEP 0 and PEP 1. PEP 0 was originally handwritten, and so I was the first PEP author because I came up with the name PEP.
But the really interesting thing is that you see the E.P. part used in a lot of other places, like Debian has DEPs now. There’s a lot of other communities that have these enhancement proposals so it’s kind of interesting. And then the format of the PEP was directly from that idea of the RFC’s standard.
& Co. #
Here’s a collection of enhancement proposals from different communities.
Are there more? Let me know!
Header photo: Grand Grocery Co., Lincoln, Nebraska, USA (1942) by The Library of Congress, with no known copyright restrictions.
Real Python
How to Get the Most Out of PyCon US
Congratulations! You’re going to PyCon US!
Whether this is your first time or you’re a regular attendee, going to a conference full of people who love the same thing as you is always a fun experience. There’s so much more to PyCon than just a bunch of people talking about the Python language—it’s a vibrant community event filled with talks, workshops, hallway conversations, and social gatherings. But for first-time attendees, it can also feel a little intimidating. This guide will help you navigate all there is to see and do at PyCon.
PyCon US is the biggest conference centered around Python. Originally launched in 2003, this conference has grown exponentially and has even spawned several other PyCons and workshops around the world.
Everyone who attends PyCon will have a different experience, and that’s what makes the conference truly unique. This guide is meant to help you, but you don’t need to follow it strictly.
By the end of this article, you’ll know:
- How PyCon consists of tutorials, conference, and sprints
- What to do before you go
- What to do during PyCon
- What to do after the event
- How to have a great PyCon
This guide contains links that are specific to PyCon 2025, but it should be useful for future PyCons as well.
Free Download: Get a sample chapter from Python Tricks: The Book that shows you Python’s best practices with simple examples you can apply instantly to write more beautiful + Pythonic code.
What PyCon Involves
Before considering how to get the most out of PyCon, it’s first important to understand what PyCon involves.
PyCon is divided into three stages:
-
Tutorials: PyCon starts with two days of three-hour workshops, during which you learn in depth with instructors. These sessions are worth attending because the class sizes are small, and you’ll have the chance to ask instructors questions directly. You should consider going to at least one of these if you can. They have an additional cost of $150 per tutorial.
-
Conference: Next, PyCon offers three days of talks. Each presentation runs for 30 to 45 minutes, and around five talks run concurrently, including a Spanish-language charlas track. But that’s not all: there are open spaces, sponsors, posters, lightning talks, dinners, and so much more.
-
Sprints: During this stage, you can take what you’ve learned and apply it! This is a four-day exercise where people group up to work on various open-source projects related to Python. If you’ve got the time, going to one or more sprint days is a great way to practice what you’ve learned, become associated with an open-source project, and network with other smart and talented people. If you’re still unconvinced, here’s what to expect at this year’s PyCon US sprints. Learn more about sprints from an earlier year in this blog post.
Since most PyCon attendees go to the conference part, that’ll be the focus of this article. However, don’t let that deter you from attending the tutorials or sprints if you can!
You may learn more technical skills by attending the tutorials rather than listening to the talks. The sprints are great for networking and applying the skills you already have, as well as learning new ones from the people you’ll be working with.
What to Do Before You Go
In general, the more prepared you are for something, the better your experience will be. The same applies to PyCon.
It’s really helpful to plan and prepare ahead of time, which you’re already doing just by reading this article!
Look through the talks schedule and see which talks sound most interesting. This doesn’t mean you need to plan out all of the talks you’ll see in every slot possible. But it helps to get an idea of which topics will be presented so that you can decide what you’re most interested in.
Getting the PyCon US mobile app will help you plan your schedule. This app lets you view the schedule for the talks and add reminders for those you want to attend. If you’re having a hard time picking which talks to attend, you can come prepared with a question or problem you need to solve. Doing this can help you focus on the topics that are important to you.
If you can, come a day early to check in and attend the opening reception. The line to check in on the first day is always long, so you’ll save time if you check in the day before. There’s also an opening reception that evening, where you can meet other attendees and speakers and check out the various sponsors and their booths.
If you’re new to PyCon, the Newcomer Orientation can help you learn about the conference and how you can participate.
Read the full article at https://realpython.com/pycon-guide/ »
Django Weblog
DSF member of the month - Simon Charette
For May 2025, we welcome Simon Charette as our DSF member of the month! ⭐
Simon Charette is a longtime Django contributor and community member. He served on the Django 5.x Steering Council and is part of the Security team and the Triage and Review team. He has been a DSF member since November 2014.
You can learn more about Simon by visiting Simon's GitHub Profile.
Let’s spend some time getting to know Simon better!
Can you tell us a little about yourself (hobbies, education, etc)
My name is Simon Charette and I'm based in Montréal. I've been contributing to Django for over a decade mainly to the ORM and I have a background in software engineering and mathematics. I work as a principal backend engineer at Zapier where we use Python and Django to power many of our backend services. Outside of Django and work I like to spend time cycling around the world, traveling with my partner, and playing ultimate frisbee.
Out of curiosity, your GitHub profile picture appears to be a Frisbee, is it correct? If so, have you been playing for a long time?
I've been playing ultimate frisbee since college which is around the time I started contributing to Django. It has been a huge part of my life since then as I made many friends and met my partner playing through the years. My commitment to ultimate frisbee can be reflected in my volume of contributions over the past decade as it requires more of my time during certain periods of the year. It also explains why I wasn't able to attend most DjangoCon in spring and fall as this is usually a pretty busy time for me. I took part in the world championships twice and I played in the UFA for about 5 years before retiring three years ago. Nowadays I still play but at a lower intensity level and I am focused on giving back to the community through coaching.
How did you start using Django?
Back in college I was working part time for a web agency that had an in-house PHP framework and was trying to determine which tech stack and framework they should migrate to in order to ease onboarding of their developers and reduce their maintenance costs. I was tasked, with another member of the team, to identify potential candidates, and despite my lack of familiarity with Python at the time we ended up choosing Django over PHP's Symfony mainly because of its spectacular documentation and third-party app ecosystem.
What other framework do you know and if there is anything you would like to have in Django if you had magical powers?
If I had magical powers I'd invent Python ergonomics to elegantly address the function coloring problem so it's easier for Django to be adapted to an async-ready world. I'm hopeful that the recent development on the GIL removal in Python 3.13+ will result in a renewed interest in the usage of threading, which Django is well equipped to take advantage of, over the systematic usage of an event loop to deal with web-serving workloads, as the async world comes with a lot of often overlooked drawbacks.
What projects are you working on now?
I have a few Django-related projects I'm working on, mainly relating to ORM improvements (deprecating extra, better usage of RETURNING when available), but the main one has been a tool to keep track of the SQL generated by the Django test suite over time to more easily identify unintended changes that still pass the test suite. My goal with this project is to have a CI-invokable command that would run the full Django test suite and provide a set of tests that generated different SQL compared to the target branch, so it's much easier to identify unintended side effects when making invasive changes to the ORM.
Which Django libraries are your favorite (core or 3rd party)?
- DRF
- django-filter
- django-seal (shameless plug)
What are the top three things in Django that you like?
- The people
- The ORM, unsurprisingly
- The many entry points the framework provides to allow very powerful third-party apps to be used together
You've contributed significantly to improving the Django ORM. What do you believe is the next big challenge for Django ORM, and how do you envision it evolving in the coming years?
The ORM's expression interface is already very powerful but there are effectively some remaining rough edges. I believe that adding generalized support for composite virtual fields (a field composed of other fields) could solve many problems we currently face with how relationships are expressed between models as we currently lack a way to describe an expression that can return tuples of values internally. If we had this building block, adding a way to express and compose table expressions (CTE, subquery pushdown, aggregation through subqueries) would be much easier to implement without denaturing the ORM by turning it into a low level query builder. Many of these things are possible today (e.g. django-cte) but they require a lot of SQL compilation and ORM knowledge and can hardly be composed together.
How did you start to contribute to the ORM? What would be the advice you have for someone interested to contribute to this field?
I started small by fixing a few issues that I cared about and by taking the time to read through Trac, mailing lists, and git-blame for changes in the area that were breaking tests as I attempted to make changes. One thing that greatly helps in onboarding on the ORM is to at least have some good SQL fundamentals. When I first started I had already written an MSSQL ORM in PHP, which helped me at least understand the idea behind the generation of SQL from a higher-level abstraction. Nowadays there are tons of resources out there to help you get started on understanding how things are organized, but I would suggest this particular video where I attempt to walk through the different phases of SQL generation.
Is there anything else you’d like to say?
It has been a pleasure to be able to be part of this community for so long and I'd like to personally thank Claude Paroz for initially getting me interested in contributing seriously to the project.
Thank you for doing the interview, Simon!
eGenix.com
eGenix Antispam Bot for Telegram 0.7.1 GA
Introduction
eGenix has long been running a local user group meeting in Düsseldorf called Python Meeting Düsseldorf and we are using a Telegram group for most of our communication.
In the early days, the group worked well and we only had a few spammers joining it, which we could easily handle manually.
More recently, this has changed dramatically. We are seeing between 2-5 spam signups per day, often at night. Furthermore, the signup accounts are not always easy to spot as spammers, since they often come with profile images, descriptions, etc.
With the bot, we now have a more flexible way of dealing with the problem.
Please see our project page for details and download links.
Features
- Low impact mode of operation: the bot tries to keep noise in the group to a minimum
- Several challenge mechanisms to choose from, more can be added as needed
- Flexible and easy to use configuration
- Only needs a few MB of RAM, so can easily be put into a container or run on a Raspberry Pi
- Can handle quite a bit of load due to the async implementation
- Works with Python 3.9+
- MIT open source licensed
News
The 0.7.1 release fixes a few bugs and adds more features:
- Added missing dependency on emoji package to setup (bug introduced in 0.7.0, fixed in 0.7.1)
- Added user name check for number of emojis, since these are being used a lot by spammers
- Added wheel as a requirement, since this is no longer included by default
- Updated copyright year
It has been battle-tested in production for several years already and is proving to be a really useful tool to help with Telegram group administration.
More Information
For more information on the eGenix.com Python products, licensing and download instructions, please write to sales@egenix.com.
Enjoy !
Marc-Andre Lemburg, eGenix.com
May 13, 2025
PyCoder’s Weekly
Issue #681: Loguru, GeoDjango, flexicache, and More (May 13, 2025)
#681 – MAY 13, 2025
View in Browser »
How to Use Loguru for Simpler Python Logging
In this tutorial, you’ll learn how to use Loguru to quickly implement better logging in your Python applications. You’ll spend less time wrestling with logging configuration and more time using logs effectively to debug issues.
REAL PYTHON
Maps With Django: GeoDjango, Pillow & GPS
A quick-start guide to create a web map with images, using the Python-based Django web framework, leveraging its GeoDjango module, and Pillow, the Python imaging library, to extract GPS information from images.
PAOLO MELCHIORRE
From try/except to Production Monitoring: Learn Python Error Handling the Right Way
This guide starts with the basics—errors vs. exceptions, how try/except works—and builds up to real-world advice on monitoring and debugging Python apps in production with Sentry. It’s everything you need to go from “I think it broke?” to “ai autofixed my python bug before it hit my users.” →
SENTRY sponsor
Exploring flexicache
flexicache is a cache decorator that comes with the fastcore library. This post describes how its arguments give you finer control over your caching.
DANIEL ROY GREENFELD
Python Jobs
Senior Software Engineer – Quant Investment Platform (LA or Dallas) (Los Angeles, CA, USA)
Causeway Capital Management LLC
Articles & Tutorials
Gen AI, Knowledge Graphs, Workflows, and Python
Are you looking for some projects where you can practice your Python skills? Would you like to experiment with building a generative AI app or an automated knowledge graph sentiment analysis tool? This week on the show, we speak with Raymond Camden about his journey into Python, his work in developer relations, and the Python projects featured on his blog.
REAL PYTHON podcast
Sets in Python
In this tutorial, you’ll learn how to work effectively with Python’s set data type. You’ll learn how to define set objects and discover the operations that they support. By the end of the tutorial, you’ll have a good feel for when a set is an appropriate choice in your programs.
REAL PYTHON
The Magic of Software
This article, subtitled “what makes a good engineer also makes a good engineering organization”, is all about how we chase the latest development trends set by the big corps, even when they have little bearing on your org’s success.
MOXIE MARLINSPIKE
Using the Python subprocess Module
In this video course, you’ll learn how to use Python’s subprocess module to run and control external programs from your scripts. You’ll start with launching basic processes and progress to interacting with them as they execute.
REAL PYTHON course
Q&A With the PyCon US 2025 Keynote Speakers
Want to learn more about the PyCon US keynote speakers? This interview asked each of them the same five questions, ranging from how they got into Python to their favorite open source project people don’t know enough about.
LOREN CRARY
Making PyPI’s Test Suite 81% Faster
Trail of Bits is a security research company that sometimes works with the folks at PyPI. Their most recent work reduced test execution time from 163 seconds down to 30. This post describes how they accomplished that.
ALEXIS CHALLANDE
pre-commit: Install With uv
pre-commit is Adam’s favourite Git-integrated “run things on commit” tool. It acts as a kind of package manager, installing tools as necessary from their Git repositories. This post explains how to use it with uv.
ADAM JOHNSON
5 Weirdly Useful Python Libraries
This post describes five different Python libraries that you’ve probably never heard of, but very well may love using. Topics include generating fake data and making your computer talk.
DEV
Developer Trends in 2025
Talk Python interviews Gina Häußge, Ines Montani, Richard Campbell, and Calvin Hendryx-Parker and they talk about the recent Stack Overflow Developer survey results.
KENNEDY ET AL podcast
The Future of Textualize
Will McGugan, founder of Textualize the company, has announced that it will be closing its doors. Textualize the open source project will remain.
WILL MCGUGAN
Asyncio Demystified: Rebuilding It One Yield at a Time
Get a better understanding of how asyncio works in Python, by building a lightweight version from scratch using generators and coroutines.
JEAN-BAPTISTE ROCHER • Shared by Jean-Baptiste Rocher
Projects & Code
Build Python GUI’s Using Drag and Drop
GITHUB.COM/PAULLEDEMON • Shared by Paul
Events
PyCon US 2025
May 14 to May 23, 2025
PYCON.ORG
PyData Bristol Meetup
May 15, 2025
MEETUP.COM
PyLadies Dublin
May 15, 2025
PYLADIES.COM
PyGrunn 2025
May 16 to May 17, 2025
PYGRUNN.ORG
Flask Con 2025
May 16 to May 17, 2025
FLASKCON.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #681.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
PyCharm
We’re excited to launch the second edition of our User Experience Survey for DataGrip and the Database Tools & SQL Plugin!
Your feedback from the previous survey helped us better understand your needs and prioritize the features and improvements that matter most to you.
Thanks to your input, we’ve already delivered a first set of enhancements focused on improving your experience:
- Faster introspection for MySQL and MariaDB (with more DBMS support coming soon!)
- New Quick Start Guide with sample database
- Non-modal Create and Modify dialogs
- AI-powered Fix and Explain SQL errors
- Better database context integration within the AI Assistant
- And much more: What’s New in DataGrip
Now, we’d love to hear from you again! Have these improvements made a difference for you? What should we focus on next to better meet your needs?
The survey takes approximately 10 minutes to complete.
As a thank you, everyone who provides meaningful feedback will be entered to win:
- A $100 Amazon Gift Card
- A 1-year JetBrains All Products Pack (individual license)
Thank you for helping us build the best database tools!
DataGrip and Database Tools UX Survey #2
Real Python
Working With Missing Data in Polars
Efficiently handling missing data in Polars is essential for keeping your datasets clean during analysis. Polars provides powerful tools to identify, replace, and remove null values, ensuring seamless data processing.
This video course covers practical techniques for managing missing data and highlights Polars’ capabilities to enhance your data analysis workflow. By following along, you’ll gain hands-on experience with these techniques and learn how to ensure your datasets are accurate and reliable.
By the end of this video course, you’ll understand that:
- Polars allows you to handle missing data using LazyFrames and DataFrames.
- You can check for null values in Polars using the .null_count() method.
- NaN represents non-numeric values, while null indicates missing data.
- You can replace NaN in Polars by converting them to nulls and using .fill_null().
- You can fix missing data by identifying, replacing, or removing null values.
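For instance, here is a minimal sketch of that workflow (the column name and fill value are only illustrative):

import polars as pl

df = pl.DataFrame({"score": [1.0, None, float("nan"), 4.0]})

# Nulls per column; note that NaN is a float value and is not counted as null.
print(df.null_count())

# Convert NaN to null first, then replace nulls with a default value.
cleaned = df.with_columns(pl.col("score").fill_nan(None).fill_null(0.0))
print(cleaned)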
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Daniel Roy Greenfeld
Exploring flexicache
An exploration of using flexicache for caching in Python.
Real Python
Quiz: Getting Started With Python IDLE
In this quiz, you’ll test your understanding of Python IDLE.
Python IDLE is an IDE included with Python installations, designed for basic editing, execution, and debugging of Python code. You can also customize IDLE to make it a useful tool for writing Python.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Luke Plant
Knowledge creates technical debt
The term technical debt, now used widely in software circles, was coined to describe a deliberate process: you write software quickly to gain knowledge, and then use the knowledge gained to improve your software.
This perspective is still helpful today when people speak of technical debt as only a negative, or only as a result of bad decisions. Martin Fowler’s Tech Debt Quadrant is a useful antidote to that.
A consequence of this perspective is that technical debt can appear at any time, apparently from nowhere, if you are unfortunate enough to gain some knowledge.
If you discover a better way to do things, the old way of doing it that is embedded in your code base is now “debt”:
you can either live with the debt, “paying interest” in the form of all the ways that it makes your code harder to work with;
or you can “pay down” the debt by fixing all the code in light of your new knowledge, which takes up front resources which could have been spent on something else, but hopefully will make sense in the long term.
This “better way” might be a different language, library, tool or pattern. In some cases, the better way has only recently been invented. It might be your own personal discovery, or something industry wide. It might be knowledge gained through the actual work of doing the current project (which was Ward Cunningham’s usage of the term), or from somewhere else. But the end result is the same – you know more than you did, and now you have a debt.
The problem is that this doesn’t sound like a good thing. You learn something, and now you have a problem you didn’t have before, and it’s difficult to put a good spin on “I discovered a debt”.
But from another angle, maybe this perspective gives us different language to use when communicating with others and explaining why we need to address technical debt. Rather than say “we have a liability”, the knowledge we have gained can be framed as an opportunity. Failure to take the opportunity is an opportunity cost.
The “pile of technical debt” is essentially a pile of knowledge – everything we now think is bad about the code represents what we’ve learned about how to do software better. The gap between what it is and what it should be is the gap between what we used to know and what we now know.
And fixing that code is not “a debt we have to pay off”, but an investment opportunity that will reap rewards. You can refuse to take that opportunity if you want, but it’s a tragic waste of your hard-earned knowledge – a waste of the investment you previously made in learning – and eventually you’ll be losing money, and losing out to competitors who will be making the most of their knowledge.
Finally, I think phrasing it in terms of knowledge can help tame some of our more rash instincts to call everything we don’t like “tech debt”. Can I really say “we now know” that the existing code is inferior? Is it true that fixing the code is “investing my knowledge”? If it’s just a hunch, or a personal preference, or the latest fashion, maybe I can both resist the urge for unnecessary rewrites, and feel happier about it at the same time.
Talk Python to Me
#505: t-strings in Python (PEP 750)
Python has many string formatting styles, added to the language over the years. Early Python used the % operator to inject formatted values into strings, and str.format() offers several powerful styles. Both were verbose and indirect, so f-strings were added in Python 3.6. But f-strings lack security features (think little Bobby Tables), and they manifest as fully formed strings to runtime code. Today we talk about the next evolution of Python string formatting for advanced use cases (SQL, HTML, DSLs, etc.): t-strings. We have Paul Everitt, David Peck, and Jim Baker on the show to introduce this upcoming new language feature.
Guests:
- Paul on X: https://x.com/paulweveritt
- Paul on Mastodon: https://fosstodon.org/@pauleveritt
- Dave Peck on GitHub: https://github.com/davepeck/
- Jim Baker: https://github.com/jimbaker
Links from the show:
- PEP 750 – Template Strings: https://peps.python.org/pep-0750/
- tdom, a placeholder for a future library on PyPI using PEP 750 t-strings: https://github.com/t-strings/tdom
- PEP 750: Tag Strings for Writing Domain-Specific Languages: https://discuss.python.org/t/pep-750-tag-strings-for-writing-domain-specific-languages/60408
- How to Teach This: https://peps.python.org/pep-0750/#how-to-teach-this
- PEP 501 – General Purpose Template Literal Strings: https://peps.python.org/pep-0501/
- Python’s new t-strings: https://davepeck.org/2025/04/11/pythons-new-t-strings/
- PyFormat: Using % and .format() for great good!: https://pyformat.info
- flynt, a tool to automatically convert old string literal formatting to f-strings: https://github.com/ikamensh/flynt
- Examples of using t-strings as defined in PEP 750: https://github.com/davepeck/pep750-examples/
- htm.py issue: https://github.com/jviide/htm.py/issues/11
- Exploits of a Mom: https://xkcd.com/327/
- pyparsing: https://github.com/pyparsing/pyparsing
- Watch this episode on YouTube: https://www.youtube.com/watch?v=WCWNeZ_rE68
- Episode transcripts: https://talkpython.fm/episodes/transcript/505/t-strings-in-python-pep-750
Quansight Labs Blog
The first year of free-threaded Python
A recap of the first year of work on enabling support for the free-threaded build of CPython in community packages.
May 12, 2025
Paolo Melchiorre
My DjangoCon Europe 2025
A summary of my experience at DjangoCon Europe 2025 told through the posts I published on Mastodon during the conference.
Real Python
Python's T-Strings Coming Soon and Other Python News for May 2025
Welcome to the May 2025 edition of the Python news roundup. Last month brought confirmation that Python will have the eagerly-awaited template strings, or t-strings, included in the next release. You’ll also read about other key developments in Python’s evolution from the past month, updates from the Django world, and exciting announcements from the community around upcoming conferences.
From new PEPs and alpha releases to major framework updates, here’s what’s been happening in the world of Python.
Join Now: Click here to join the Real Python Newsletter and you'll never miss another Python tutorial, course update, or post.
PEP 750: Template Strings Coming to Python
PEP 750 introduces template strings, a new standard mechanism for defining string templates as reusable, structured objects. Unlike f-strings or str.format(), which embed formatting directly in string literals, template strings separate the definition of the string structure from the data used to populate it:
>>> template = t"Howdy, {input('Enter your name: ')}!"
Enter your name: Stephen
>>> template
Template(
    strings=('Howdy, ', '!'),
    interpolations=(
        Interpolation('Stephen', "input('Enter your name: ')", None, ''),
    )
)
>>> for item in template:
...     print(item)
...
Howdy,
Interpolation('Stephen', "input('Enter your name: ')", None, '')
!
This new tool opens up new possibilities for dynamic formatting, localization, user-facing messages, and more. It also makes it easier to share and reuse format templates across an application. The addition of t-strings is already being described as a major enhancement to Python’s string-handling capabilities.
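To make this concrete, here is a minimal sketch of how such a template might be processed, assuming the string.templatelib types that PEP 750 describes (the render function is illustrative, not part of the standard library):

from string.templatelib import Template, Interpolation

def render(template: Template) -> str:
    # Interleave the template's literal strings with its evaluated values.
    parts = []
    for item in template:
        if isinstance(item, Interpolation):
            parts.append(str(item.value))  # .value is the evaluated result
        else:
            parts.append(item)  # a literal string segment
    return "".join(parts)

name = "Stephen"
print(render(t"Howdy, {name}!"))  # Howdy, Stephen!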
Other Python Language Developments
Python 3.14 continues to take shape, with a new alpha release and several PEPs being accepted or proposed. These updates give a sense of where the language is heading, especially in areas like debugging, dependency management, and type checking.
Python 3.14.0a7 Released
Python 3.14.0a7 was released in April, marking the final alpha in the Python 3.14 development cycle. This release includes several fixes and tweaks, with the focus now shifting to stabilization as the first beta approaches.
Read the full article at https://realpython.com/python-news-may-2025/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]