Planet Python
Last update: April 07, 2026 04:44 PM UTC
April 07, 2026
Python Engineering at Microsoft
Write SQL Your Way: Dual Parameter Style Benefits in mssql-python
Reviewed by: Sumit Sarabhai
If you’ve been writing SQL in Python, you already know the debate: positional parameters (?) or named parameters (%(name)s)? Some developers swear by the conciseness of positional. Others prefer the clarity of named. With mssql-python, you no longer need to choose – we support both.
We’ve added dual parameter style support to mssql-python, enabling both qmark and pyformat parameter styles in Python applications that interact with SQL Server and Azure SQL. This feature is especially useful if you’re building complex queries, dynamically assembling filters, or migrating existing code that already uses named parameters with other DBAPI drivers.
Try it here
You can install the driver with pip install mssql-python. Calling all Python + SQL developers! We invite the community to try out mssql-python and help us shape the future of high-performance SQL Server connectivity in Python.
What Are Parameter Styles?
The DB-API 2.0 specification (PEP 249) defines several ways to pass parameters to SQL queries. The two most popular are:
- qmark – Positional ? placeholders with a tuple/list of values.
- pyformat – Named %(name)s placeholders with a dictionary of values.
# qmark style
cursor.execute("SELECT * FROM users WHERE id = ? AND status = ?", (42, "active"))
# pyformat style
cursor.execute("SELECT * FROM users WHERE id = %(id)s AND status = %(status)s",
{"id": 42, "status": "active"})
Business Requirement
Previously, mssql-python only supported qmark. It works fine for simple queries, but as parameters multiply, tracking their order becomes error-prone:
# Which ? corresponds to which value?
cursor.execute(
"UPDATE users SET name=?, email=?, age=? WHERE id=? AND status=?",
(name, email, age, user_id, status)
)
Mix up the order and it’s easy to introduce subtle, hard-to-spot bugs.
Why Named Parameters?
- Self-documenting queries – No more guessing which ? maps to what:
# qmark — 6 parameters, which is which?
cursor.execute(
    """INSERT INTO employees (first_name, last_name, email, department, salary, hire_date)
    VALUES (?, ?, ?, ?, ?, ?)""",
    ("Jane", "Doe", "jane.doe@company.com", "Engineering", 95000, "2025-03-01")
)
# pyformat — every value is labeled
cursor.execute(
    """INSERT INTO employees (first_name, last_name, email, department, salary, hire_date)
    VALUES (%(first_name)s, %(last_name)s, %(email)s, %(dept)s, %(salary)s, %(hire_date)s)""",
    {"first_name": "Jane", "last_name": "Doe", "email": "jane.doe@company.com",
     "dept": "Engineering", "salary": 95000, "hire_date": "2025-03-01"}
)
- Parameter reuse – Use the same value multiple times without repeating it:
# Audit log: record who made the change and when
cursor.execute(
    """UPDATE orders
    SET status = %(new_status)s,
        modified_by = %(user)s, approved_by = %(user)s,
        modified_at = %(now)s, approved_at = %(now)s
    WHERE order_id = %(order_id)s""",
    {"new_status": "approved", "user": "admin@company.com",
     "now": datetime.now(), "order_id": 5042}
)
# 3 unique values, used 5 times — no duplication needed
- Dynamic query building – Add filters without tracking parameter positions:
def search_orders(customer=None, status=None, min_total=None, date_from=None):
    query_parts = ["SELECT * FROM orders WHERE 1=1"]
    params = {}
    if customer:
        query_parts.append("AND customer_id = %(customer)s")
        params["customer"] = customer
    if status:
        query_parts.append("AND status = %(status)s")
        params["status"] = status
    if min_total is not None:
        query_parts.append("AND total >= %(min_total)s")
        params["min_total"] = min_total
    if date_from:
        query_parts.append("AND order_date >= %(date_from)s")
        params["date_from"] = date_from
    query_parts.append("ORDER BY order_date DESC")
    cursor.execute(" ".join(query_parts), params)
    return cursor.fetchall()
# Callers use only the filters they need
recent_big_orders = search_orders(min_total=500, date_from="2025-01-01")
pending_for_alice = search_orders(customer=42, status="pending")
- Dictionary reuse across queries – The same parameter dictionary can drive multiple queries:
report_params = {"region": "West", "year": 2025, "status": "active"}
# Summary count
cursor.execute(
"""SELECT COUNT(*) FROM customers
WHERE region = %(region)s AND status = %(status)s""",
report_params
)
total = cursor.fetchone()[0]
# Revenue breakdown
cursor.execute(
"""SELECT department, SUM(revenue)
FROM sales
WHERE region = %(region)s AND fiscal_year = %(year)s
GROUP BY department
ORDER BY SUM(revenue) DESC""",
report_params
)
breakdown = cursor.fetchall()
# Top performers
cursor.execute(
"""SELECT name, revenue
FROM sales_reps
WHERE region = %(region)s AND fiscal_year = %(year)s AND status = %(status)s
ORDER BY revenue DESC""",
report_params
)
top_reps = cursor.fetchall()
# Same dict, three different queries — change the filters once, all queries update
The Solution: Automatic Detection
mssql-python now detects which style you’re using based on the parameter type:
- tuple/list → qmark (?)
- dict → pyformat (%(name)s)
No configuration needed. Existing qmark code requires zero changes.
from mssql_python import connect
# qmark - works exactly as before
cursor.execute("SELECT * FROM users WHERE id = ?", (42,))
# pyformat - just pass a dict!
cursor.execute("SELECT * FROM users WHERE id = %(id)s", {"id": 42})
How It Works
When you pass a dict to execute(), the driver:
- Scans the SQL for %(name)s placeholders (context-aware – skips string literals, comments, and bracketed identifiers).
- Validates that every placeholder has a matching key in the dict.
- Builds a positional tuple in placeholder order (duplicating values for reused parameters).
- Replaces each %(name)s with ? and sends the rewritten query to ODBC.
User code:
    cursor.execute(
        "WHERE status = %(status)s AND country = %(country)s",
        {"status": "active", "country": "USA"}
    )
ODBC layer:
    SQLBindParameter(1, "active")
    SQLBindParameter(2, "USA")
    SQLExecute("WHERE status = ? AND country = ?")
The ODBC layer always works with positional ? placeholders. The pyformat conversion is purely a developer-facing convenience with zero overhead to database communication.
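The four-step rewrite described above can be illustrated with a few lines of Python. This is a simplified sketch of the idea, not the driver's actual implementation; in particular, it uses a plain regex rather than the context-aware scan that skips string literals, comments, and bracketed identifiers:

```python
import re

def pyformat_to_qmark(sql, params):
    """Sketch of the pyformat-to-qmark rewrite (simplified)."""
    names = re.findall(r"%\((\w+)\)s", sql)               # 1. scan placeholders
    missing = sorted({n for n in names if n not in params})
    if missing:                                           # 2. validate keys
        raise KeyError(f"Missing required parameter(s): {missing}")
    values = tuple(params[n] for n in names)              # 3. positional tuple,
                                                          #    duplicating reused names
    return re.sub(r"%\(\w+\)s", "?", sql), values         # 4. rewrite to qmark

rewritten, values = pyformat_to_qmark(
    "WHERE status = %(status)s AND country = %(country)s",
    {"status": "active", "country": "USA"},
)
# rewritten == "WHERE status = ? AND country = ?"; values == ("active", "USA")
```

Because step 3 walks the placeholders in order, a name used twice in the SQL simply contributes its value twice to the tuple, which is how parameter reuse falls out for free.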
Clear Error Messages
Mismatched styles or missing parameters produce actionable errors – not cryptic database exceptions:
cursor.execute("WHERE id = %(id)s AND name = %(name)s", {"id": 42})
# KeyError: Missing required parameter(s): 'name'.
cursor.execute("WHERE id = ?", {"id": 42})
# TypeError: query uses positional placeholders (?), but dict was provided.
cursor.execute("WHERE id = %(id)s", (42,))
# TypeError: query uses named placeholders (%(name)s), but tuple was provided.
Real-World Examples
Example 1: Web Application
def add_user(name, email):
with connect(connection_string) as conn:
with conn.cursor() as cursor:
cursor.execute(
"INSERT INTO users (name, email) VALUES (%(name)s, %(email)s)",
{"name": name, "email": email}
)
Example 2: Batch Operations
cursor.executemany(
"INSERT INTO users (name, age) VALUES (%(name)s, %(age)s)",
[{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]
)
Example 3: Financial Transactions
def transfer_funds(from_acct, to_acct, amount):
with connect(connection_string) as conn:
with conn.cursor() as cursor:
cursor.execute(
"UPDATE accounts SET balance = balance - %(amount)s WHERE id = %(id)s",
{"amount": amount, "id": from_acct}
)
cursor.execute(
"UPDATE accounts SET balance = balance + %(amount)s WHERE id = %(id)s",
{"amount": amount, "id": to_acct}
)
# Automatic commit on success, rollback on failure
Things to Keep in Mind
- Don’t mix styles in one query. Use either ? or %(name)s, not both. The driver determines which style you’re using from the parameter type (tuple vs dict), not from the SQL text. If placeholders don’t match the parameter type, you’ll get a clear TypeError explaining the mismatch. If both placeholder types appear in the SQL, only one set gets substituted, leading to parameter count mismatches at execution time.
# Mixing styles - raises TypeError
cursor.execute(
    "SELECT * FROM users WHERE id = ? AND name = %(name)s",
    {"name": "Alice"}  # Driver finds %(name)s but also sees unmatched ?
)
# ODBC error: parameter count mismatch (2 placeholders, 1 value)

# Pick one style and use it consistently
cursor.execute(
    "SELECT * FROM users WHERE id = %(id)s AND name = %(name)s",
    {"id": 42, "name": "Alice"}
)
- Extra dict keys are OK. Unused parameters are silently ignored; this is by design, to enable parameter dictionary reuse across different queries.
- SQL injection safe. Both styles use ODBC parameter binding under the hood. Values are never interpolated into the SQL string; they are always safely bound by the driver.
- Literal % in SQL. Use %% to escape if you need a literal %(…)s pattern in your query text.
cursor.execute(
"SELECT * FROM users WHERE name LIKE %(pattern)s",
{"pattern": "%alice%"} # The % inside the VALUE is fine
)
# But if you need a literal %(...)s in SQL text itself, use %%
cursor.execute(
"SELECT '%%(example)s' AS literal WHERE id = %(id)s",
{"id": 42}
)
- mssql_python.paramstyle reports “pyformat”. The DB-API 2.0 spec only allows a single value for this module-level constant. We set it to pyformat because it’s the more expressive style and the one we recommend for new code. But qmark is fully supported at runtime; the driver accepts both styles transparently based on whether you pass a tuple or a dict. Think of paramstyle = “pyformat” as the advertised default, not a limitation.
Compatibility at a Glance
| Feature | qmark (?) | pyformat (%(name)s) |
| --- | --- | --- |
| cursor.execute() | ✅ | ✅ |
| cursor.executemany() | ✅ | ✅ |
| connection.execute() | ✅ | ✅ |
| Parameter reuse | Repeat the value | ✅ |
| Stored procedures | ✅ | ✅ |
| All SQL data types | ✅ | ✅ |
| Backward compatible with qmark paramstyle | ✅ | N/A (new) |
Takeaway
Use ? for quick, simple queries. Use %(name)s for complex, multi-parameter queries where clarity and reuse matter. You don’t have to pick a side – use whichever fits the situation. The driver handles the rest.
Whether you’re building dynamic queries or simply want more readable SQL, dual paramstyle support makes mssql-python work the way you already think.
Try It and Share Your Feedback!
We invite you to:
- Check out the mssql-python driver and integrate it into your projects.
- Share your thoughts: Open issues, suggest features, and contribute to the project.
- Join the conversation: GitHub Discussions | SQL Server Tech Community.
Use Python Driver with Free Azure SQL Database
You can use the Python Driver with the free version of Azure SQL Database!
Deploy Azure SQL Database for free
Deploy Azure SQL Managed Instance for free
Perfect for testing, development, or learning scenarios without incurring costs.
The post Write SQL Your Way: Dual Parameter Style Benefits in mssql-python appeared first on Microsoft for Python Developers Blog.
Django Weblog
Django security releases issued: 6.0.4, 5.2.13, and 4.2.30
In accordance with our security release policy, the Django team is issuing releases for Django 6.0.4, Django 5.2.13, and Django 4.2.30. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.
Django 4.2 has reached the end of extended support
Note that with this release, Django 4.2 has reached the end of extended support. All Django 4.2 users are encouraged to upgrade to Django 5.2 or later to continue receiving fixes for security issues.
See the downloads page for a table of supported versions and the future release schedule.
CVE-2026-3902: ASGI header spoofing via underscore/hyphen conflation
ASGIRequest normalizes header names following WSGI conventions, mapping hyphens to underscores. As a result, even in configurations where reverse proxies carefully strip security-sensitive headers named with hyphens, such a header could be spoofed by supplying a header named with underscores.
Under WSGI, it is the responsibility of the server or proxy to avoid ambiguous mappings. (Django's runserver was patched in CVE-2015-0219.) But under ASGI, there is not the same uniform expectation, even if many proxies protect against this under default configuration (including nginx via underscores_in_headers off;).
Headers containing underscores are now ignored by ASGIRequest, matching the behavior of Daphne, the reference server for ASGI.
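To see why the conflation is exploitable, consider how WSGI-style normalization maps header names to environ keys. This is an illustrative sketch, not Django's actual code; hyphens and underscores collapse to the same key, so a proxy that strips only the hyphenated spelling leaves the underscored spelling open:

```python
def cgi_style_key(header_name):
    """WSGI-style normalization: uppercase, hyphens become underscores."""
    return "HTTP_" + header_name.upper().replace("-", "_")

# Both spellings land on the same environ key, so stripping only
# "X-Secret-Header" at the proxy does not block "X_Secret_Header".
spoofable = cgi_style_key("X-Secret-Header") == cgi_style_key("X_Secret_Header")
```

Ignoring underscored header names entirely, as the fix does, removes the ambiguous second spelling.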
This issue has severity "low" according to the Django Security Policy.
Thanks to Tarek Nakkouch for the report.
CVE-2026-4277: Privilege abuse in GenericInlineModelAdmin
Add permissions on inline model instances were not validated on submission of forged POST data in GenericInlineModelAdmin.
This issue has severity "low" according to the Django Security Policy.
Thanks to N05ec@LZU-DSLab for the report.
CVE-2026-4292: Privilege abuse in ModelAdmin.list_editable
Admin changelist forms using ModelAdmin.list_editable incorrectly allowed new instances to be created via forged POST data.
This issue has severity "low" according to the Django Security Policy.
CVE-2026-33033: Potential denial-of-service vulnerability in MultiPartParser via base64-encoded file upload
When using django.http.multipartparser.MultiPartParser, multipart uploads with Content-Transfer-Encoding: base64 that include excessive whitespace may trigger repeated memory copying, potentially degrading performance.
This issue has severity "moderate" according to the Django Security Policy.
Thanks to Seokchan Yoon for the report.
CVE-2026-33034: Potential denial-of-service vulnerability in ASGI requests via memory upload limit bypass
ASGI requests with a missing or understated Content-Length header could bypass the DATA_UPLOAD_MAX_MEMORY_SIZE limit when reading HttpRequest.body, potentially loading an unbounded request body into memory and causing service degradation.
This issue has severity "low" according to the Django Security Policy.
Thanks to Superior for the report.
Affected supported versions
- Django main
- Django 6.0
- Django 5.2
- Django 4.2
Resolution
Patches to resolve the issue have been applied to Django's main, 6.0, 5.2, and 4.2 branches. The patches may be obtained from the following changesets.
CVE-2026-3902: ASGI header spoofing via underscore/hyphen conflation
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2026-4277: Privilege abuse in GenericInlineModelAdmin
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2026-4292: Privilege abuse in ModelAdmin.list_editable
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2026-33033: Potential denial-of-service vulnerability in MultiPartParser via base64-encoded file upload
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2026-33034: Potential denial-of-service vulnerability in ASGI requests via memory upload limit bypass
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
The following releases have been issued
- Django 6.0.4 (download Django 6.0.4 | 6.0.4 checksums)
- Django 5.2.13 (download Django 5.2.13 | 5.2.13 checksums)
- Django 4.2.30 (download Django 4.2.30 | 4.2.30 checksums)
The PGP key ID used for this release is Jacob Walls: 131403F4D16D8DC7
General notes regarding security reporting
As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.
Real Python
Using Loguru to Simplify Python Logging
Logging is a vital programming practice that helps you track, understand, and debug your application’s behavior. Loguru is a Python library that provides simpler, more intuitive logging compared to Python’s built-in logging module.
Good logging gives you insights into your program’s execution, helps you diagnose issues, and provides valuable information about your application’s health in production. Without proper logging, you risk missing critical errors, spending countless hours debugging blind spots, and potentially undermining your project’s overall stability.
By the end of this video course, you’ll understand that:
- Logging in Python can be simple and intuitive with the right tools.
- Using Loguru lets you start logging immediately without complex configuration.
- You can customize log formats and send logs to multiple destinations like files, the standard error stream, or external services.
- Loguru provides powerful debugging capabilities that make troubleshooting easier.
- Loguru supports structured logging with JSON formatting for modern applications.
After watching this course, you’ll be able to quickly implement better logging in your Python applications. You’ll spend less time wrestling with logging configuration and more time using logs effectively to debug issues. This will help you build production-ready applications that are easier to troubleshoot when problems occur.
To get the most from this course, you should be familiar with Python concepts like functions, decorators, and context managers. You might also find it helpful to have some experience with Python’s built-in logging module, though this isn’t required.
Don’t worry if you’re new to logging in Python. This course will guide you through everything you need to know to get started with Loguru and implement effective logging in your applications.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Django Weblog
Could you host DjangoCon Europe 2027? Call for organizers
We are looking for the next group of organizers to own and lead the 2027 DjangoCon Europe conference. Could your town's football stadium, theatre, cinema, city hall, circus tent or a private island host this wonderful community event?
DjangoCon Europe is a major pillar of the Django community, as people from across the world meet and share. Many qualities make it a unique event: Unconventional and conventional venues, creative happenings, a feast of talks and a dedication to inclusion and diversity.
Hosting a DjangoCon is an ambitious undertaking. It's hard work, but each year it has been successfully run by a team of community volunteers, not all of whom have had previous experience - more important is enthusiasm, organizational skills, the ability to plan and manage budgets, time and people - and plenty of time to invest in the project.
For 2027, rest assured that we will be there to answer questions and put you in touch with previous organizers through the brand new DSF Events Support Working Group (a reboot of the previous DjangoCon Europe Support Working Group).
Step 1: Submit your expression of interest
If you're considering organizing DjangoCon Europe (🙌 great!), fill in our DjangoCon Europe 2027 expression of interest form with your contact details. No need to fill in all the information at this stage if you don't have it all already; we'll reach out and help you figure it out.
Express your interest in organizing
Step 2: We're here to help!
We've set up a DjangoCon Europe support working group of previous organizers that you can reach out to with questions about organizing and running a DjangoCon Europe.
The group will be in touch with everyone submitting the expression of interest form, or you can reach out to them directly: events-support@djangoproject.com
We'd love to hear from you as soon as possible, so your proposal can be finalized and sent to the DSF board by June 1st 2026.
Step 3: Submitting the proposal
The more detailed and complete your final proposal is, the better. Basic details include:
- Organizing committee members: You probably won't have a full team yet; naming just some core team members is enough.
- The legal entity that is intended to run the conference: Even if the entity does not exist yet, please share how you are planning to set it up.
- Dates: See "What dates are possible in 2027?" below. We must avoid conflicts with major holidays, EuroPython, DjangoCon US, and PyCon US.
- Venue(s), including size, number of possible attendees, pictures, accessibility concerns, catering, etc.
- Transport links and accommodation: Can your venue be reached by international travelers?
- Budgets and ticket prices: Talk to the DjangoCon Europe Support group to get help with this, including information on past event budgets.
We also like to see:
- Timelines
- Pictures
- Plans for online participation, and other ways to make the event more inclusive and reduce its environmental footprint
- Draft agreements with providers
- Alternatives you have considered
Have a look at our proposed (draft, feedback welcome) DjangoCon Europe 2027 Licensing Agreement for the fine print on contractual requirements and involvement of the Django Software Foundation.
Submit your completed proposal by June 1st 2026 via our DjangoCon Europe 2027 expression of interest form, this time filling in as many fields as possible. We look forward to reviewing great proposals that continue the excellence the whole community associates with DjangoCon Europe.
Q&A
Can I organize a conference alone?
We strongly recommend that a team of people submit an application.
I/we don't have a legal entity yet, is that a problem?
Depending on your jurisdiction, this is usually not a problem. But please share your plans about the entity you will use or form in your application.
Do I/we need experience with organizing conferences?
The support group is here to help you succeed. From experience, we know that many core groups of 2-3 people have been able to run a DjangoCon with guidance from previous organizers and help from volunteers.
What is required in order to announce an event?
Ultimately, a contract with the venue confirming the dates is crucial, since announcing a conference makes people book calendars, holidays, buy transportation and accommodation etc. This, however, would only be relevant after the DSF board has concluded the application process. Naturally, the application itself cannot contain any guarantees, but it's good to check concrete dates with your venues to ensure they are actually open and currently available, before suggesting these dates in the application.
Do we have to do everything ourselves?
No. You will definitely be offered lots of help by the community. Typically, conference organizers will divide responsibilities into different teams, making it possible for more volunteers to join. Local organizers are free to choose which areas they want to invite the community to help out with, and a call will go out through a blog post announcement on djangoproject.com and social media.
What kind of support can we expect from the Django Software Foundation?
The DSF regularly provides grant funding to DjangoCon organizers, to the extent of $6,000 in recent editions. We also offer support via specific working groups:
- The dedicated DjangoCon Europe support working group.
- The social media working group can help you promote the event.
- The Code of Conduct working group works with all event organizers.
In addition, a lot of Individual Members of the DSF regularly volunteer at community events. If your team aren't Individual Members, we can reach out to them on your behalf to find volunteers.
What dates are possible in 2027?
For 2027, DjangoCon Europe should happen between January 4th and April 26th, or June 3rd and June 27th. This is to avoid the following community events' provisional dates:
- PyCon US 2027: May 2027
- EuroPython 2027: July 2027
- DjangoCon US 2027: September - October 2027
- DjangoCon Africa 2027: August - September 2027
We also want to avoid the following holidays:
- New Year's Day: Friday 1st January 2027
- Chinese New Year: Saturday 6th February 2027
- Eid Al-Fitr: Tuesday 9th March 2027
- Easter: Sunday 28th March 2027
- Passover: Wednesday 21st - Thursday 29th April 2027
- Eid Al-Adha: Monday 17th - Thursday 20th May 2027
- Rosh Hashanah: Saturday 2nd - Monday 4th October 2027
- Yom Kippur: Monday 11th - Tuesday 12th October 2027
What cities or countries are possible?
Any city in Europe. This can be a city or country where DjangoCon Europe has happened in the past (Athens, Vigo, Edinburgh, Porto, Copenhagen, Heidelberg, Florence, Budapest, Cardiff, Toulon, Warsaw, Zurich, Amsterdam, Berlin), or a new locale.
References
Past calls
- Interested in organizing DjangoCon Europe 2016? | Weblog | Django
- Could you host DjangoCon Europe 2017? | Weblog | Django
- DjangoCon Europe 2019 - where will it be? | Weblog | Django
- Could you host DjangoCon Europe 2023? | Weblog | Django
- Last Chance for a DjangoCon Europe 2023 | Weblog | Django
- Want to host DjangoCon Europe 2024? | Weblog | Django
- DjangoCon Europe 2025 Call for Proposals | Weblog | Django
- Last call for DjangoCon Europe 2025 organizers | Weblog | Django
- Could you host DjangoCon Europe 2026? Call for organizers | Weblog | Django
Real Python
Quiz: Building a Python GUI Application With Tkinter
In this quiz, you’ll test your understanding of Building a Python GUI Application With Tkinter.
Test your Tkinter knowledge by identifying core widgets, managing layouts, handling text with Entry and Text widgets, and connecting buttons to Python functions.
This quiz also covers event loops, widget sizing, and file dialogs, helping you solidify the essentials for building interactive, cross-platform Python GUI apps.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Using Loguru to Simplify Python Logging
In this quiz, you’ll test your understanding of Using Loguru to Simplify Python Logging.
By working through this quiz, you’ll revisit key concepts like the pre-configured logger, log levels, format placeholders, adding context with .bind() and .contextualize(), and saving logs to files.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
PyCharm
How to Train Your First TensorFlow Model in PyCharm
This is a guest post from Iulia Feroli, founder of the Back To Engineering community on YouTube.
TensorFlow is a powerful open-source framework for building machine learning and deep learning systems. At its core, it works with tensors (a.k.a. multi‑dimensional arrays) and provides high‑level libraries (like Keras) that make it easy to transform raw data into models you can train, evaluate, and deploy.
TensorFlow helps you handle the full pipeline: loading and preprocessing data, assembling models from layers and activations, training with optimizers and loss functions, and exporting for serving or even running on edge devices (including lightweight TensorFlow Lite models on Raspberry Pi and other microcontrollers).
If you want to build data-driven applications, prototype neural networks, or ship models to production or devices, learning TensorFlow gives you a consistent, well-supported toolkit to go from idea to deployment.
If you’re brand new to TensorFlow, start by watching the short overview video, where I explain tensors, neural networks, and layers, why TensorFlow is great for taking data → model → deployment, and how all of this can be illustrated with a LEGO-style piece-sorting example.
In this blog post, I’ll walk you through a first, stripped-down TensorFlow implementation notebook so we can get started with some practical experience. You can also watch the walkthrough video to follow along.
We’ll be exploring a very simple use case today: load the Fashion MNIST dataset, build two very simple Keras models, train and compare them, then dig into visualizations (predictions, confidence bars, confusion matrix). I kept the code minimal and readable so you can focus on the ideas – and you’ll see how PyCharm helps along the way.
Training TensorFlow models step by step
Getting started in PyCharm
We’ll be leveraging PyCharm’s native Notebook integration to build out our project. This way, we can inspect each step of the pipeline and use some supporting visualization along the way. We’ll create a new project and generate a virtual environment to manage our dependencies.
If you’re running the code from the attached repo, you can install directly from the requirements file. If you wish to expand this example with additional visualizations for further models, you can easily add more packages to your requirements as you go by using the PyCharm package manager helpers for installing and upgrading.
Load Fashion MNIST and inspect the data
Fashion MNIST is a great starter because the images are small (28×28 pixels), visually meaningful, and easy to interpret. They represent various garment types as pixelated black-and-white images, and provide the relevant labels for a well-contained classification task. We can first take a look at our data sample by printing some of these images with various matplotlib functions:
```
import matplotlib.pyplot as plt
from tensorflow.keras import datasets

# Load the dataset and define human-readable labels
(x_train, y_train), (x_test, y_test) = datasets.fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

fig, axes = plt.subplots(2, 5, figsize=(10, 4))
for i, ax in enumerate(axes.flat):
    ax.imshow(x_train[i], cmap='gray')
    ax.set_title(class_names[y_train[i]])
    ax.axis('off')
plt.show()
```
Two simple models (a quick experiment)
```
from tensorflow.keras import layers, models

model1 = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

model2 = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])
```
Compile and train your first model
From here, we can compile and train our first TensorFlow model(s). With PyCharm’s code completion features and documentation access, you can get instant suggestions for building out these simple code blocks.
For a first try at TensorFlow, this allows us to spin up a working model with just a few presses of Tab in our IDE. We’re using the recommended standard optimizer and loss function, and we’re tracking for accuracy. We can choose to build multiple models by playing around with the number or type of layers, along with the other parameters.
```
model1.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
model1.fit(x_train, y_train, epochs=10)

model2.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
model2.fit(x_train, y_train, epochs=15)
```
Evaluate and compare your TensorFlow model performance
```
loss1, accuracy1 = model1.evaluate(x_test, y_test)
print(f'Accuracy of model1: {accuracy1:.2f}')
loss2, accuracy2 = model2.evaluate(x_test, y_test)
print(f'Accuracy of model2: {accuracy2:.2f}')
```
Once the models are trained (and you can see the epochs progressing visually as each cell is run), we can immediately evaluate the performance of the models.
In my experiment, model1 sits around ~0.88 accuracy, and while model2 is a little higher than that, it took 50% longer to train. That’s the kind of trade‑off you should be thinking about: Is a tiny accuracy gain worth the additional compute and complexity?
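One way to put numbers on that trade-off is to time each training call. A minimal, framework-agnostic timing helper (a sketch; the model and data names in the commented usage are assumed from the earlier cells):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# With the models above (assumed):
# _, t1 = timed(model1.fit, x_train, y_train, epochs=10)
# _, t2 = timed(model2.fit, x_train, y_train, epochs=15)
# print(f"model2 took {t2 / t1:.1f}x the time of model1")
```

Recording wall-clock time alongside accuracy makes the "is the extra layer worth it?" question concrete rather than a gut feeling.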
We can dive further into the results of the model run by generating a DataFrame instance of our new prediction dataset. Here we can also leverage built-in functions like `describe` to quickly get some initial statistical impressions:
```
import pandas as pd

predictions = model1.predict(x_test)
df_pred = pd.DataFrame(predictions, columns=class_names)
df_pred.describe()
```
However, the most useful statistics will compare our model’s prediction with the ground truth “real” labels of our dataset. We can also break this down by item category:
```
from sklearn.metrics import confusion_matrix, classification_report
import matplotlib.pyplot as plt
import seaborn as sns

y_pred = model1.predict(x_test).argmax(axis=1)
cm = confusion_matrix(y_test, y_pred)
plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=class_names, yticklabels=class_names)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.title('Confusion Matrix')
plt.show()
print('Classification report:')
print(classification_report(y_test, y_pred, target_names=class_names))
```
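The classification report above already includes per-class recall; as a framework-free illustration of how that per-garment accuracy falls out of the confusion matrix, here is a small sketch (the helper name and toy numbers are mine, not the Fashion-MNIST results):

```python
import numpy as np

def per_class_accuracy(cm):
    """Per-class accuracy (recall): correct predictions on the diagonal
    divided by the number of true samples in each row."""
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

# Toy 3-class matrix: class 0 is often confused with class 2
cm = [[8, 0, 2],
      [0, 10, 0],
      [1, 0, 9]]
print(per_class_accuracy(cm))  # [0.8 1.  0.9]
```

Rows are true classes and columns are predictions, so each diagonal entry over its row sum gives that class’s accuracy.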
From here, we can notice that the accuracy differs quite a bit by type of garment. A possible interpretation of this is that trousers are quite a distinct type of clothing from, say, t-shirts and shirts, which can be more commonly confused.
This is, of course, the type of nuance that, as humans, we can pick up by looking at the images, but the model only has access to a matrix of pixel values. The data does seem, however, to confirm our intuition. We can further build a more comprehensive visualization to test this hypothesis.
```
import numpy as np
import matplotlib.pyplot as plt

# pick 8 wrong examples
y_pred = predictions.argmax(axis=1)
wrong_idx = np.where(y_pred != y_test)[0][:8]  # first 8 mistakes
n = len(wrong_idx)
fig, axes = plt.subplots(n, 2, figsize=(10, 2.2 * n), constrained_layout=True)
for row, idx in enumerate(wrong_idx):
    p = predictions[idx]
    pred = int(np.argmax(p))
    true = int(y_test[idx])
    axes[row, 0].imshow(x_test[idx], cmap="gray")
    axes[row, 0].axis("off")
    axes[row, 0].set_title(
        f"WRONG P:{class_names[pred]} ({p[pred]:.2f}) T:{class_names[true]}",
        color="red",
        fontsize=10,
    )
    bars = axes[row, 1].bar(range(len(class_names)), p, color="lightgray")
    bars[pred].set_color("red")
    axes[row, 1].set_ylim(0, 1)
    axes[row, 1].set_xticks(range(len(class_names)))
    axes[row, 1].set_xticklabels(class_names, rotation=90, fontsize=8)
    axes[row, 1].set_ylabel("conf", fontsize=9)
plt.show()
```
This visualization lets us explore how confident the model was in each prediction: by examining the weight given to each class, we can see where there was doubt (i.e. multiple classes with higher weights) versus where the model was certain (only one guess). These examples further confirm our intuition: top-type garments appear to be more commonly confused by the model.
Conclusion
And there we have it! We were able to set up and train our first model and already derive some data science insights from our data and model results. Using some of the PyCharm functionalities at this point can speed up the experimentation process by providing access to documentation and applying code completion directly in the cells. We can even use AI Assistant to help generate some of the graphs we’ll need to further evaluate the TensorFlow model performance and investigate our results.
You can try out this notebook yourself, or better yet, try to generate it with these same tools for a more hands-on learning experience.
Where to go next
This notebook is a minimal, teachable starting point. Here are some practical next steps to try afterwards:
- Replace the dense baseline with a small CNN (Conv2D → MaxPooling → Dense).
- Add dropout or batch normalization to reduce overfitting.
- Apply data augmentation (random shifts/rotations) to improve generalization.
- Use callbacks like `EarlyStopping` and `ModelCheckpoint` so training is efficient and you keep the best weights.
- Export a `SavedModel` for server use or convert to TensorFlow Lite for edge devices (Raspberry Pi, microcontrollers).
Frequently asked questions
When should I use TensorFlow?
TensorFlow is best used when building machine learning or deep learning models that need to scale, go into production, or run across different environments (cloud, mobile, edge devices).
TensorFlow is particularly well-suited for large-scale models and neural networks, including scenarios where you need strong deployment support (TensorFlow Serving, TensorFlow Lite). For research prototypes, TensorFlow is viable, but it’s more commonplace to use lightweight frameworks for easier experimentation.
Can TensorFlow run on a GPU?
Yes, TensorFlow can run on GPUs and TPUs, and using a GPU can significantly speed up training, especially for deep learning models with large datasets. Best of all, TensorFlow will automatically use an available GPU if it’s properly configured.
What is loss in TensorFlow?
Loss (computed by a loss function) measures how far a model’s predictions are from the actual target values; in TensorFlow it’s a numerical value representing that distance. A few examples include:
- MSE (mean squared error), used in regression tasks.
- Cross-entropy loss, often used in classification tasks.
How many epochs should I use?
There’s no set number of epochs to use, as it depends on your dataset and model. Typical approaches include:
- Starting with a conservative number (10–50 epochs).
- Monitoring validation loss/accuracy and adjusting based on the results you see.
- Using early stopping to halt training once improvements stall.
An epoch is one full pass through your training data. Not enough passes through leads to underfitting, and too many can cause overfitting. The sweet spot is where your model generalizes best to unseen data.
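The early-stopping idea mentioned above is simple enough to sketch framework-free. This helper (the name, defaults, and loss values are illustrative, not a TensorFlow API) signals a stop once validation loss hasn’t beaten the earlier best for `patience` epochs:

```python
def should_stop(val_losses, patience=3, min_delta=0.0):
    """Return True when the last `patience` epochs failed to improve
    on the best validation loss seen before them."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    recent = val_losses[-patience:]
    return all(loss >= best_before - min_delta for loss in recent)

# A loss curve that improves, then plateaus:
losses = [0.9, 0.7, 0.6, 0.61, 0.62, 0.63]
print(should_stop(losses, patience=3))  # True: no improvement over 0.6 for 3 epochs
```

In Keras, the `EarlyStopping` callback implements this same logic (with `patience` and `min_delta` parameters) without you having to write it yourself.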
PyCon
Stories from the PyCon US Hotels
Friendships, collaborations, and breakthroughs
The fun, the learning, and the inspiration don't stop when you walk out of the convention center. Some of the most memorable moments from PyCon US happen in the lobby at 10 pm, laughing with someone you only knew as a username until an hour ago; over breakfast, where a casual conversation turns into a collaboration that lasts years; and on the walks to and from the conference. PyCon US hotels have their own lore.
We asked people about their experiences and were overwhelmed; it turns out that everyone has a story!
"One story stands out to me beyond getting to know each other and sharing ideas. When I was getting ready to give my first PyCon talk in Montreal, Selena Deckelmann offered to help review my slides and listen to me practice. We spent a few hours on the floor of her hotel room prepping while her very young daughter crawled around on the floor and chewed on my PyCon badge since she was teething. It's still one of my favorite PyCon and PyLadies memories.” - Carol Willing, Willing Consulting
“The hotel lobby last year turned into a makeshift meetup after the PyLadies Auction. People were having a great time at the auction and kept the energy going in the lobby afterward. Everyone was there, even those who hadn't attended the auction. Luckily, the hotel also sold my favorite chocolate milk in the lobby, so I got to end my evening drinking milk and chatting with Python friends.” - Cheuk Ting Ho (the PyLady who loves the auction and karaoke)
"In Pittsburgh a couple of years ago I was having breakfast at the hotel, when a guy I didn't know spotted my Python T-shirt and introduced himself. It was his first PyCon and my 21st, and we ended up having breakfast together. I gave him a few tips on enjoying a PyCon, but it turned out he was also a guitarist, so we spent most of breakfast talking about music and playing guitar.” - Naomi Ceder, former board chair and loooong time PyCon goer
"I ran into Trey Hunner during my first PyCon US in the hotel lobby as a PSF employee. He was running a Cabo game. He immediately welcomed me and showed me how to play. (He’s a great teacher, so I won three rounds in a row!) I also met a bunch of lovely people who have been attending PyCon US for years and years, and I learned that there is almost always a Cabo game in the hotel lobby." - Deb Nicholson (PSF Executive Director & resident Cabo shark)
“One of my most memorable hotel lobby moments was a chance encounter with Thomas Wouters. We fell into a natural conversation about his work and his deep, genuine pride in the Python Software Foundation community. He spoke warmly about the people who make the community what it is and what it means to him to be part of it. What I had no idea at the time was that just three days later, he would be called up on stage and announced as a Distinguished Service Award recipient — one of the highest honors the Python Software Foundation gives.” - Abigail Dogbe, PSF Board Member
“Juggling in the hotel lobby turned into an unexpected highlight of the conference. We had started teaching each other — my fault entirely for bringing the juggling balls — when a teenager and his mom wandered through on their way to see Pearl Jam. The kid's eyes lit up the moment he saw us, so I waved them over and started teaching him. Turns out they'd booked that very hotel hoping to cross paths with the band. He was excited about everything, and she was right there with him, every bit as thrilled.” - Ned Batchelder, Python Core Team and Netflix, Software Engineer
And this year, instead of sitting in LA-to-Long Beach traffic, consider staying in the official conference hotel block because there's too much to miss if you're too far away.
Real Talk: Why booking a room via PyCon US matters
If you're planning to attend PyCon US, please consider booking your stay within the official conference hotel block.
When attendees reserve rooms through the block, it helps the conference meet its contractual commitments with the venue, which directly impacts the overall cost of hosting the event.
Strong participation in the hotel block helps PyCon US keep registration prices as low as possible while continuing to invest in programs that support our community, like travel grants, accessibility services, and community events.
When rooms go unfilled in the block, the conference incurs major financial penalties that ultimately make the event more expensive to run for everyone.
By booking in the hotel block, you are giving back and helping keep PyCon US sustainable and affordable for the entire Python community.
PSST! Exclusive swag when you book a room. We can't say more.
Attendees who book within the official hotel block this year will receive a special mystery swag item. We can't tell you what it is. That's why it's called mystery swag. But we can tell you the only way to get it is to book in the official PyCon US hotel block.
Where to stay: official PyCon US 2026 hotel block
All hotels are in Long Beach, within easy reach of the Long Beach Convention Center.
The Westin Long Beach: Spacious rooms and great amenities, and the block still has availability. Book here
Hyatt Regency Long Beach: The conference headquarters hotel, closest to the convention center (just about connected). Book here
Marriott Long Beach Downtown: A solid choice with easy access to the convention center and the waterfront. Book here
Courtyard by Marriott Long Beach Downtown: A comfortable, more affordable option still within the block. Book here
April 06, 2026
ListenData
How to Use Gemini API in Python
In this tutorial, you will learn how to use Google's Gemini AI model through its API in Python.
Follow the steps below to get access to the Gemini API and then use it in Python.
- Visit Google AI Studio website.
- Sign in using your Google account.
- Create an API key.
- Install the Google AI Python library for the Gemini API using the command below: pip install google-genai
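With the key in hand, a minimal request looks like the following sketch using the google-genai client. The model name, environment variable, and prompt here are illustrative; a real run needs your own API key and network access:

```python
import os
from google import genai

# Assumes the key from Google AI Studio is stored in GEMINI_API_KEY
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-flash",  # illustrative model name; any available model works
    contents="Explain what an API key is in one sentence.",
)
print(response.text)
```

Keeping the key in an environment variable rather than in the script avoids accidentally committing it to version control.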
PyCon
Python and the Future of AI: Agents, Inference, and Edge AI
Finding AI insights and education at PyCon US 2026
While AI content is sprinkled throughout the event (how could it not be?), PyCon US features a dedicated The Future of AI with Python track, new this year and programmed by Elaine Wong, PyCon US Chair, Jon Banafato, PyCon US Co-Chair, and Philip Gagnon, Program Committee Chair. According to JetBrains' State of Developer Ecosystem 2025 report, 85% of developers now regularly use AI tools for coding and development (which tells us that you are probably doing that), and 62% rely on at least one AI coding assistant, agent, or code editor. Looking ahead, nearly half of all developers (49%) plan to try AI coding agents in the coming year, and the eight sessions in this track map onto those priorities, covering everything from running LLMs on your laptop to building real-time voice agents. Take a look at the big themes and the sessions and tutorials you won't want to miss in our new track and throughout the event.
Let’s start with the newbies: if you or someone on your team is just getting started with ML, Corey Wade's Wednesday tutorial Your First Machine Learning Models: How to Build Them in Scikit-learn is the perfect entry point, a hands-on introduction to the building blocks that underpin so much of what's discussed in the talks.
LLMs Are Moving to the Edge
One of the most significant shifts in AI right now is the move toward running models locally on laptops, browsers, and devices, rather than in centralized cloud infrastructure. Want to know more? Check out: Running Large Language Models on Laptops: Practical Quantization Techniques in Python from Aayush Kumar JVS, a hands-on look at how quantization makes large models practical on consumer hardware. Fabio Pliger takes a look at the role of the browser with Distributing AI with Python in the Browser: Edge Inference and Flexibility Without Infrastructure, exploring how Python-powered inference can run client-side with no server required. If you've been watching the open-weights model explosion and wondering how to actually deploy these things, these two talks are for you.
Want to go deeper before the conference even starts? On Wednesday, May 13th, Isabel Michel's tutorial Implementing RAG in Python: Build a Retrieval-Augmented Generation System gives you hands-on experience building a retrieval-augmented generation pipeline from scratch, the practical foundation underneath a lot of modern LLM applications.
AI Agents and Async Python
Agentic AI, systems that take multi-step actions autonomously, is one of the defining developments of 2025 and continues to take the world by storm in 2026. But building agents that actually work in production requires getting async right. Aditya Mehra's Don't Block the Loop: Python Async Patterns for AI Agents digs into the concurrency pitfalls that trip up so many teams when they move from demo to deployment. This talk bridges a gap that many tutorials leave open: the gap between "I have a working agent" and "my agent works reliably at scale."
If you want a running start, Pamela Fox's Wednesday tutorial Build Your First MCP Server in Python is the perfect on-ramp, MCP (Model Context Protocol) is quickly becoming the standard way to give AI agents access to tools and data, and building one yourself is the fastest way to understand how agentic systems actually work under the hood.
AI and Open Source Sustainability
AI-Assisted Contributions and Maintainer Load by Paolo Melchiorre tackles a genuinely thorny question: as AI tools make it easier to generate pull requests, what happens to the maintainers on the receiving end? Drawing on real examples from projects like GNOME, OCaml, Python, and Django, Melchiorre examines how AI-generated contributions are shifting workload onto already time-constrained maintainers and what the open source community is doing about it.
High-Performance Inference in Python
Python performance engineering is no longer optional for AI workloads. Yineng Zhang's High-Performance LLM Inference in Pure Python with PyTorch Custom Ops walks through the techniques for squeezing real speed out of inference pipelines without leaving the Python ecosystem (This one isn’t in the track, but it is so on point, I had to add it). Paired with Santosh Appachu Devanira Poovaiah's What Python Developers Need to Know About Hardware: A Practical Guide to GPU Memory, Kernel Scheduling, and Execution Models, Friday's track offers a practical hardware-to-application view of the performance stack that's increasingly essential for anyone building production AI systems.
Also, Catherine Nelson and Robert Masson's Thursday tutorial Going from Notebooks to Production Code is a great complement, bridging the gap between exploratory AI work and the kind of reliable, maintainable code that actually makes it into production systems.
Explainability and Responsible AI
As AI systems make more consequential decisions, the demand for explainability is only growing from regulators, from users, and from the developers building these systems. Jyoti Yadav's Building AI That Explains Itself: Why Your Card Got Declined uses a familiar real-world example to demonstrate how Python developers can build transparency into AI-driven decisions. It's a topic at the heart of current conversations about AI trust, and one that every practitioner should be thinking about.
Two tutorials round this theme out nicely: Neha's Wednesday session Causal Inference with Python teaches you how to move beyond correlation and reason about cause and effect in your data, a foundational skill for anyone building AI systems that need to explain why they made a decision. And on Thursday, Juliana Ferreira Alves' When KPIs Go Weird: Anomaly Detection with Python gives you practical tools for catching when your AI-powered systems go off the rails before your users do.
Voice AI and Multimodal Interfaces
Real-time voice is one of the fastest-moving areas in applied AI, and Camila Hinojosa Añez and Elizabeth Fuentes close out the AI track Friday evening with How to Build Your First Real-Time Voice Agent in Python (Without Losing Your Mind). This practical session covers the building blocks of voice agents in Python, a skill set that's quickly becoming table stakes for developers building consumer-facing AI products.
AI and Education
Sonny Mupfuni's AI-Powered Python Education: Towards Adaptive and Inclusive Learning explores how Python can power learning that adapts to the student, and Gift Ojeabulu's Making African Languages Visible: A Python-Based Guide to Low-Resource Language ID takes on one of NLP's most persistent blind spots, the languages that dominant datasets routinely leave out. Don't skip these. These sessions represent Python's role not just in building AI products, but in democratizing access to AI's benefits.
Friday's AI track is a rare chance to hear from practitioners who are building real things in production, not just demoing prototypes. Whether you're a Python developer who's been watching the AI wave from the sidelines or a team already shipping AI features who wants to sharpen your craft, clear your schedule and pull up a chair. And a big THANK YOU to Anaconda and NVIDIA for sponsoring this track!
We'll see you in Long Beach.
PyCon US 2026 takes place May 13–19 in Long Beach, California. The Future of AI with Python talk track runs Friday, May 15th.
Trey Hunner
Using a ~/.pdbrc file to customize the Python Debugger
Did you know that you can customize the Python debugger (PDB) by creating custom aliases within a .pdbrc file in your home directory or Python’s current working directory?
I recently learned this and I’d like to share a few helpful aliases that I now have access to in my PDB sessions thanks to my new ~/.pdbrc file.
The aliases in my ~/.pdbrc file
Here’s my new ~/.pdbrc file:
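In outline, the file defines five aliases along these lines (an illustrative reconstruction; the exact definitions may vary):

```
alias dir pp dir(%1)
alias attrs pp {a: getattr(%1, a) for a in dir(%1) if not callable(getattr(%1, a))}
alias vars pp vars(%1)
alias src import inspect; print(inspect.getsource(%1)); del inspect
alias loc pp {k: v for k, v in locals().items() if not k.startswith("__")}
```

Each `alias` line maps a short command to a Python expression, with `%1` standing in for the first argument you type after the alias; `pp` is PDB's built-in pretty-print command.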
This allows me to use:
- dir, attrs, and vars to inspect Python objects
- src to see the source code of a particular function/class
- loc to see the local variables
You might wonder “Can’t I use dir(x) instead of dir x and vars(x) instead of vars x and locals() instead of loc?”
You can!… but those aliases print things out in a nicer format.
A demo of these 5 aliases
Let’s use python -m pdb -m calendar to launch Python’s calendar module from the command line while dropping into PDB immediately.
Then we’ll set a breakpoint after lots of stuff has been defined and continue to that breakpoint.
The string representation for that c variable doesn’t tell us much.
If we use the dir alias, we’ll see every attribute that’s accessible on c printed in a pretty friendly format.
If we use attrs we’ll see just the non-method attributes.
And vars will show us just the attributes that live as proper instance attributes in that object’s __dict__ dictionary.
The src alias can be used to see the source code for a given method.
And the loc alias will show us all the local variables defined in the current scope.
~/.pdbrc isn’t as powerful as PYTHONSTARTUP
I also have a custom PYTHONSTARTUP file, which is launched every time I start a new Python REPL (see Handy Python REPL modifications).
A PYTHONSTARTUP file is just Python code, which makes it easy to customize.
A ~/.pdbrc file is not Python code… it’s a very limited custom file format.
You may notice that every alias line defined in my ~/.pdbrc file is a bunch of code shoved all on one line.
That’s because there’s no way to define an alias over multiple lines.
Also any variables assigned in an alias will leak into the surrounding scope… so I have a del statement in a couple of those aliases to clean up a stray variable assignment (from an import).
See the documentation on alias and the top of the debugger commands for more on how ~/.pdbrc files work.
Real Python
D-Strings Could End Your textwrap.dedent() Days and Other Python News for April 2026
If you’ve ever wrapped a multiline string in textwrap.dedent() and wondered why Python can’t just handle that for you, then your PEP has arrived. PEP 822 proposes d-strings, a new d"""...""" prefix that automatically strips leading indentation. It’s one of those small quality-of-life ideas that make you wonder why it didn’t exist already. The PEP is currently a draft proposal.
March also delivered Python 3.15.0 alpha 7 with lazy imports you can finally test and security patches across three older branches. On the ecosystem side, GPT-5.4 landed with a tool search feature that changes agentic workflows. Meanwhile, the Python Insider blog migration moved 307 posts to a new home without breaking a single URL. It’s time to get into the biggest Python news from the past month.
Join Now: Click here to join the Real Python Newsletter and you’ll never miss another Python tutorial, course, or news update.
Python Releases and PEP Highlights
March brought the penultimate alpha of Python 3.15 with a long-awaited feature that finally lets Python developers defer imports cleanly. On top of that, security patches landed for three older branches, and a fresh PEP proposal showed up that could clean up your multiline strings for good.
Python 3.15.0 Alpha 7: Lazy Imports Land
Python 3.15.0a7 dropped on March 10, the second-to-last alpha before the beta freeze on May 5. The headline feature you can finally test is PEP 810, explicit lazy imports. The Steering Council accepted PEP 810 back in November, but this is the first alpha where the implementation is available to try.
The idea is straightforward: prefix any import statement with lazy, and the module won’t actually load until you first access an attribute on it:
```
lazy import json
lazy from datetime import timedelta

# The json module isn't loaded yet, so no startup cost.
# Later, when you actually use it:
data = json.loads(payload)  # Now it loads
```
The PEP authors note that 17 percent of standard library imports are already placed inside functions to defer loading. Tools like Django’s management commands, Click-based CLIs, and codebases heavy on type checking often spend hundreds of milliseconds on imports they might never use. Lazy imports make that optimization explicit and clean, without scattering imports deep inside function bodies.
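Today’s workaround, the function-body import used by that 17 percent of standard library imports, looks like this (the function and data here are illustrative):

```python
def to_json(data):
    # Deferred import: json is loaded on the first call,
    # not when this module is imported
    import json
    return json.dumps(data)

print(to_json({"x": 1}))  # prints {"x": 1}
```

The `lazy` keyword gives you the same deferred loading while keeping the import at the top of the module where readers and tools expect it.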
Note: Alpha 7 also continues to ship the JIT compiler improvements from earlier alphas, with 3–4 percent geometric mean gains on x86-64 Linux and 7–8 percent on AArch64 macOS. Alpha 8 is scheduled for April 7, with the beta phase starting May 5.
Security Releases: Python 3.12.13, 3.11.15, and 3.10.20
On March 3, Thomas Wouters released security-only patches across three older Python branches. The updates fix several CVEs, including two XML parsing vulnerabilities (CVE-2026-24515 and CVE-2026-25210), patched by upgrading the bundled libexpat to 2.7.4. Additional fixes cover an XML memory amplification bug and the rejection of control characters in HTTP headers and URL parsing.
If you’re still running Python 3.12 or older in production, applying these patches is highly recommended. Python 3.12 is now in security-fixes-only mode, so no binary installers are provided. You’ll need to build from source.
PEP 822: Dedented Multiline Strings (D-Strings)
PEP 822, authored by Inada Naoki, proposes a new d"""...""" string prefix that automatically strips leading indentation from multiline strings, using the same algorithm as textwrap.dedent().
Anyone who’s written a multiline SQL query or help text inside a function and battled with indentation knows the pain:
```
import textwrap

# Before: awkward indentation or a textwrap.dedent() wrapper
def get_query():
    return textwrap.dedent("""\
        SELECT name, email
        FROM users
        WHERE active = true
    """)

# With d-strings: clean and readable
def get_query():
    return d"""
        SELECT name, email
        FROM users
        WHERE active = true
        """
```
The d prefix combines with f, r, b, and even the upcoming t (template strings) prefixes. PEP 822 was submitted to the Steering Council on March 9 and targets Python 3.15, though a decision hasn’t landed yet. If you’ve ever wished Python strings would just handle indentation for you, this one’s worth keeping an eye on.
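In the meantime, textwrap.dedent() remains the standard tool; it strips only the whitespace prefix common to every non-blank line:

```python
import textwrap

raw = """
    SELECT name, email
    FROM users
"""
# The common four-space margin is removed from every line
print(textwrap.dedent(raw))
```

Note that dedent() operates on the string at runtime, while the proposed d-strings would do the stripping at compile time.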
Other PEPs in Progress
Read the full article at https://realpython.com/python-news-april-2026/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: For Loops in Python (Definite Iteration)
Test your understanding of For Loops in Python (Definite Iteration).
You’ll revisit Python loops, iterables, and how iterators behave. You’ll also explore set iteration order and the effects of the break and continue statements.
Python Bytes
#476 Common themes
Topics covered in this episode:
- [Migrating from mypy to ty: Lessons from FastAPI](https://pydevtools.com/blog/migrating-from-mypy-to-ty-lessons-from-fastapi/?featured_on=pythonbytes)
- [Oxyde ORM](https://oxyde.fatalyst.dev/latest/?featured_on=pythonbytes)
- [Typeshedded CPython docs](https://guoci.github.io/typeshedded_CPython_docs/library/functions.html?featured_on=pythonbytes)
- [Raw+DC Database Pattern: A Retrospective](https://mkennedy.codes/posts/raw-dc-a-retrospective/?featured_on=pythonbytes)
- Extras
- Joke

[Watch on YouTube](https://www.youtube.com/watch?v=tOM8fOhcNbI)

About the show

Sponsored by us! Support our work through:
- Our [courses at Talk Python Training](https://training.talkpython.fm/?featured_on=pythonbytes)
- [The Complete pytest Course](https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes)
- [Patreon Supporters](https://www.patreon.com/pythonbytes)

Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at [pythonbytes.fm/live](https://pythonbytes.fm/stream/live) to be part of the audience. Usually Monday at 11am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to [our friends of the show list](https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Brian #1: [Migrating from mypy to ty: Lessons from FastAPI](https://pydevtools.com/blog/migrating-from-mypy-to-ty-lessons-from-fastapi/?featured_on=pythonbytes)
- Tim Hopper
- I saw this post by Sebastián Ramírez about all of his projects [switching to ty](https://bsky.app/profile/tiangolo.com/post/3milnufxpcs2h?featured_on=pythonbytes): FastAPI, Typer, SQLModel, Asyncer, FastAPI CLI
- SQLModel is already ty only, mypy removed
- This signals that ty is ready to use
- Tim lists some steps to apply ty to your own projects:
  - Add ty alongside mypy
  - Set `error-on-warning = true`
  - Accept the double-ignore comments
  - Pick a smaller project to cut over first
  - Drop mypy when the noise exceeds the signal
- Related anecdote:
  - I had tried out ty with [pytest-check](https://github.com/okken/pytest-check?featured_on=pythonbytes) in the past with difficulty
  - Tried it again this morning; only a few areas where mypy was happy but ty reported issues
  - At least one ty warning was a potential problem for people running pre-releases of pytest
  - Not really related: [packaging.version.parse](https://packaging.pypa.io/en/latest/version.html?featured_on=pythonbytes) is awesome

Michael #2: [Oxyde ORM](https://oxyde.fatalyst.dev/latest/?featured_on=pythonbytes)
- Oxyde ORM is a type-safe, Pydantic-centric asynchronous ORM with a high-performance Rust core.
- Note: Oxyde is a young project under active development. The API may evolve between minor versions.
- No sync wrappers or thread pools; Oxyde is async from the ground up
- Includes [oxyde-admin](https://github.com/mr-fatalyst/oxyde-admin?featured_on=pythonbytes)
- Features:
  - Django-style API: familiar `Model.objects.filter()` syntax
  - Pydantic v2 models: full validation, type hints, serialization
  - Async-first: built for modern async Python with `asyncio`
  - Rust performance: SQL generation and execution in native Rust
  - Multi-database: PostgreSQL, SQLite, MySQL support
  - Transactions: `transaction.atomic()` context manager with savepoints
  - Migrations: Django-style `makemigrations` and `migrate` CLI

Brian #3: [Typeshedded CPython docs](https://guoci.github.io/typeshedded_CPython_docs/library/functions.html?featured_on=pythonbytes)
- [Thanks emmatyping for the suggestion](https://bsky.app/profile/emmatyping.dev/post/3mfhxrttu2s22?featured_on=pythonbytes)
- Documentation for Python with typeshed types
- Source: [typeshedding_cpython_docs](https://github.com/guoci/typeshedding_cpython_docs?featured_on=pythonbytes)

Michael #4: [Raw+DC Database Pattern: A Retrospective](https://mkennedy.codes/posts/raw-dc-a-retrospective/?featured_on=pythonbytes)
- A new design pattern I’m seeing gain traction in the software space: [Raw+DC: The ORM pattern of 2026](https://mkennedy.codes/posts/raw-dc-the-orm-pattern-of-2026/?featured_on=pythonbytes)
- I’ve had a chance to migrate three of my most important web apps.
- Thrilled to report that yes, the web app is much faster using Raw+DC
- Plus, this was part of the journey to move from 1.3 GB memory usage to 0.45 GB (more on this next week)

![Raw+DC vs. MongoEngine performance graph](https://cdn.mkennedy.codes/posts/raw-dc-a-retrospective/raw-dc-vs-mongoengine-graph.webp)

Extras

Brian:
- [Lean TDD 0.5 update](https://courses.pythontest.com/lean-tdd/?featured_on=pythonbytes): significant rewrite and focus

Michael:
- [pytest-just](https://github.com/databooth/pytest-just?featured_on=pythonbytes) (for [just command file](https://github.com/casey/just?featured_on=pythonbytes) testing), by Michael Booth
- Something going on with Encode:
  - httpx: [Anyone know what's up with HTTPX?](https://www.reddit.com/r/Python/comments/1rl5kuq/anyone_know_whats_up_with_httpx/?featured_on=pythonbytes) And [forked](https://tildeweb.nl/~michiel/httpxyz.html?featured_on=pythonbytes)
  - starlette and uvicorn: [Transfer of Uvicorn & Starlette](https://github.com/Kludex/starlette/discussions/2997?featured_on=pythonbytes)
  - mkdocs: [The Slow Collapse of MkDocs](https://fpgmaas.com/blog/collapse-of-mkdocs/?featured_on=pythonbytes)
  - django-rest-framework: [Move to django commons?](https://github.com/django-commons/membership/issues/188#issue-3070631761)
- [Certificates at Talk Python Training](https://talkpython.fm/blog/posts/announcing-course-completion-certificates/?featured_on=pythonbytes)
</ul> <p><strong>Joke:</strong> </p> <ul> <li><a href="https://x.com/PR0GRAMMERHUM0R/status/2021509552504525304?featured_on=pythonbytes"><strong>Neue Rich</strong></a></li> </ul>
April 05, 2026
EuroPython
Humans of EuroPython: George Zisopoulos
Behind every flawless talk, engaging workshop, and perfectly timed coffee break at EuroPython is a crew of unsung heroes—our volunteers! 🌟 Not just organizers, but dream enablers: printer ninjas, registration magicians, social butterflies, and even salsa instructors (yeah, that happened!)
We’re the quiet force turning chaos into community, one sprint at a time. 💻✨
Curious who really makes the magic happen? Today we’d like to introduce George Zisopoulos, member of the Operations team at EuroPython 2025.
George Zisopoulos, member of the Operations Team at EuroPython 2025
EP: What first inspired you to volunteer for EuroPython? And which edition of the conference was it?
I was inspired because I gave a presentation in 2020, and after that I wanted to experience the conference from the other side, as part of the volunteers. It was amazing to see how much work all these people had done for us as attendees, and I wanted to be a part of that.
So I applied and became an online volunteer in 2022 in Dublin, and the following year I joined EuroPython 2023 as an on-site volunteer. Once you start, you can’t stop doing it.
EP: Have you learned new skills while contributing to EuroPython? If so, which ones?
It’s less about learning new skills and more about discovering the ones you already have. With guidance and a supportive team, you feel confident using them and even pushing a bit past your comfort zone.
EP: What's your favorite memory from volunteering at the conference?
My favorite part is walking into the conference and unexpectedly running into someone you met at previous years’ editions. It’s like a little déjà vu. They hug you like you just saw them yesterday, even if it’s been a whole year.
EP: Did you make any lasting friendships or professional connections through volunteering?
Yes, I’ve made a few lasting friendships. We stay in touch all year, even though we live in different cities or countries. We visit each other, and often end up meeting in other countries while traveling.
EP: Any unexpected or funny experiences during the conference which you’d like to share?
I love coffee, so during the conference I’m usually wandering around with a cup in hand. Two years ago, thanks to some playful hits from friends, I ended up destroying three t-shirts with coffee during the conference! Now every year they wonder… How many shirts will I sacrifice this time?
EP: Would you volunteer again, and why?
I would say what I used to say last year: Summer without EuroPython just doesn’t really feel like a summer 😉 See you all there!
EP: Thank you for your contribution, George!
April 04, 2026
Marcos Dione
Correcting OpenStreetMap wrong tag values
As a hobbyist consumer of OSM data to render maps, I find wrong tags annoying. Bad values mean that the resulting map is wrong or incomplete, so less useful. I decided to attack the most egregious ones, which include typos, street names instead of type and some other errors. The idea is to attack the long tail first, so I'm not blocked because the next batch of errors (objects with exactly the same error) looks too big (yes, OCD).
So I hacked a small python script to help me find and edit them:
#! /usr/bin/env python3

import os

import psycopg2


def main():
    db = psycopg2.connect(dbname='europe')
    cursor = db.cursor()

    # find the rarest highway values first (the long tail)
    cursor.execute('''
        SELECT count(*) AS count, highway
        FROM planet_osm_line
        WHERE highway IS NOT NULL
        GROUP BY highway
        ORDER BY count ASC
    ''')
    data = cursor.fetchall()

    for count, highway in data:
        print(f"next {count}: {highway}")

        cursor.execute('''
            SELECT osm_id
            FROM planet_osm_line
            WHERE highway = %s
        ''', (highway, ))

        for (osm_id, ) in cursor.fetchall():
            if osm_id < 0:
                # in rendering DBs, a negative id means this is a relation
                os.system(f"librewolf -P default 'https://www.openstreetmap.org/edit?relation={-osm_id}'")
            else:
                os.system(f"librewolf -P default 'https://www.openstreetmap.org/edit?way={osm_id}'")


if __name__ == '__main__':
    main()
It is quite inefficient, but what I want is to edit the errors, not to write a script :) This requires a rendering database, which I already have locally :)
From here the workflow is:
- Analyze the type of error.
- This includes looking at the history of the object.
- Search the wiki.
- Correct the error or leave a note.
- Find the original changeset and leave a note if needed.
- Add any details to the changeset if needed.
On my machine, finding the long tail and finding each set of errors each take about a minute, so I was launching two at the same time. One thing to notice: if the object you try to edit no longer exists, you get an edit view of the whole planet.
Armin Ronacher
Absurd In Production
About five months ago I wrote about Absurd, a durable execution system we built for our own use at Earendil, sitting entirely on top of Postgres and Postgres alone. The pitch was simple: you don’t need a separate service, a compiler plugin, or an entire runtime to get durable workflows. You need a SQL file and a thin SDK.
Since then we’ve been running it in production, and I figured it’s worth sharing what the experience has been like. The short version: the design held up, the system has been a pleasure to work with, and other people seem to agree.
A Quick Refresher
Absurd is a durable execution system that lives entirely inside Postgres. The core is a single SQL file (absurd.sql) that defines stored procedures for task management, checkpoint storage, event handling, and claim-based scheduling. On top of that sit thin SDKs (currently TypeScript, Python and an experimental Go one) that make the system ergonomic in your language of choice.
The model is straightforward: you register tasks, decompose them into steps, and each step acts as a checkpoint. If anything fails, the task retries from the last completed step. Tasks can sleep, wait for external events, and suspend for days or weeks. All state lives in Postgres.
If you want the full introduction, the original blog post covers the fundamentals. What follows here is what we’ve learned since.
What Changed
The project got multiple releases over the last five months. Most of the changes are things you’d expect from a system that people actually started depending on: hardened claim handling, watchdogs that terminate broken workers, deadlock prevention, proper lease management, event race conditions, and all the edge cases that only show up when you’re running real workloads.
A few things worth calling out specifically.
Decomposed steps. The original design only had ctx.step(), where you pass in a function and get back its checkpointed result. That works well for many cases but not all. Sometimes you need to know whether a step already ran before deciding what to do next. So we added beginStep() / completeStep(), which give you a handle you can inspect before committing the result. This turned out to be very useful for modeling intentional failures and conditional logic. This in particular is necessary when working with “before call” and “after call” type hook APIs.
Task results. You can now spawn a task, go do other things, and later come back to fetch or await its result. This sounds obvious in hindsight, but the original system was purely fire-and-forget. Having proper result inspection made it possible to use Absurd for things like spawning child tasks from within a parent workflow and waiting for them to finish. This is particularly useful for debugging with agents too.
absurdctl. We built this out as a proper CLI tool. You can initialize schemas, run migrations, create queues, spawn tasks, emit events, and retry failures from the command line. It’s installable via uvx or as a standalone binary. This has been invaluable for debugging production issues. When something is stuck, being able to just absurdctl dump-task --task-id=<id> and see exactly where it stopped is a very different experience from digging through logs.
Habitat. A small Go application that serves up a web dashboard for monitoring tasks, runs, checkpoints, and events. It connects directly to Postgres and gives you a live view of what’s happening. It’s simple, but it’s the kind of thing that makes the system more enjoyable for humans.
Agent integration. Since Absurd was originally built for agent workloads, we added a bundled skill that coding agents can discover and use to debug workflow state via absurdctl. There’s also a documented pattern for making pi agent turns durable by logging each message as a checkpoint.
What Held Up
The thing I’m most pleased about is that the core design didn’t need to change all that much. The fundamental model of tasks, steps, checkpoints, events, and suspending is still exactly what it was initially. We added features around it, but nothing forced us to rethink the basic abstractions.
Putting the complexity in SQL and keeping the SDKs thin turned out to be a genuinely good call. The TypeScript SDK is about 1,400 lines. The Python SDK is about 1,900 lines, but most of that comes from the complexity of supporting colored functions. Compare that to Temporal’s Python SDK at around 170,000 lines. It means the SDKs are easy to understand, easy to debug, and easy to port. When something goes wrong, you can read the entire SDK in an afternoon and understand what it does.
The checkpoint-based replay model also aged well. Unlike systems that require deterministic replay of your entire workflow function, Absurd just loads the cached step results and skips over completed work. That means your code doesn’t need to be deterministic outside of steps. You can call Math.random() or datetime.now() in between steps and things still work, because only the step boundaries matter. In practice, this makes it much easier to reason about what’s safe and what isn’t.
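The replay model is easy to illustrate with a minimal, self-contained sketch. This is not the Absurd SDK, just the general idea: each step's result is cached under a name, and a re-run returns cached results instead of re-executing completed work.

```python
import random

# A toy checkpoint store; in Absurd this state lives in Postgres.
checkpoints = {}

def step(name, fn):
    """Run fn once; on replay, return the cached result instead."""
    if name not in checkpoints:
        checkpoints[name] = fn()
    return checkpoints[name]

def workflow():
    a = step("fetch", lambda: 40)
    # Non-deterministic code *between* steps is fine: only step
    # boundaries are checkpointed, so this value is never replayed.
    jitter = random.random()
    return step("transform", lambda: a + 2)

first = workflow()   # executes both steps
second = workflow()  # "replays": both results come from the cache
assert first == second == 42
```

A real system additionally persists the store and keys it by task run, but the skip-completed-steps logic is the whole trick.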
Pull-based scheduling was the right choice too. Workers pull tasks from Postgres as they have capacity. There’s no coordinator, no push mechanism, no HTTP callbacks. That makes it trivially self-hostable and means you don’t have to think about load management at the infrastructure level.
What Might Not Be Optimal
I had some discussions with folks about whether the right abstraction should have been a durable promise. It’s a very appealing idea, and in theory also a more powerful one, but it turns out to be much harder to implement in practice. I made some attempts to see what Absurd would look like if it were based on durable promises, but so far I haven't gotten anywhere with it. It’s an experiment I still think would be fun to try, though!
What We Use It For
The primary use case is still agent workflows. An agent is essentially a loop that calls an LLM, processes tool results, and repeats until it decides it’s done. Each iteration becomes a step, and each step’s result is checkpointed. If the process dies on iteration 7, it restarts and replays iterations 1 through 6 from the store, then continues from 7.
But we’ve found it useful for a lot of other things too. All our crons just dispatch distributed workflows with a pre-generated deduplication key from the invocation. We can have two cron processes running and they will only trigger one absurd task invocation. We also use it for background processing that needs to survive deploys. Basically anything where you’d otherwise build your own retry-and-resume logic on top of a queue.
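The cron trick boils down to deriving the deduplication key from the invocation itself, so two schedulers computing it independently arrive at the same value. A sketch of the idea (generic Python, not Absurd's API; the function name is illustrative):

```python
from datetime import datetime, timezone

def dedup_key(task_name: str, when: datetime, period: str = "daily") -> str:
    # Two cron processes firing around the same time derive the same
    # key, so only one task spawn wins; the duplicate is a no-op.
    if period == "daily":
        bucket = when.date().isoformat()
    else:  # hourly
        bucket = when.strftime("%Y-%m-%dT%H")
    return f"{task_name}-{bucket}"

# Two processes, 30 seconds apart, produce the same key:
k1 = dedup_key("send-report", datetime(2026, 4, 7, 9, 0, 0, tzinfo=timezone.utc))
k2 = dedup_key("send-report", datetime(2026, 4, 7, 9, 0, 30, tzinfo=timezone.utc))
assert k1 == k2 == "send-report-2026-04-07"
```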
What’s Still Missing
Absurd is deliberately minimal, but there are things I’d like to see.
There’s no built-in scheduler. If you want cron-like behavior, you run your own scheduler loop and use idempotency keys to deduplicate. That works, and we have a documented pattern for it, but it would be nice to have something more integrated.
There’s no push model. Everything is pull. If you need an HTTP endpoint to receive webhooks and wake up tasks, you build that yourself. I think that’s the right default as push systems are harder to operate and easier to overwhelm but there are cases where it would be convenient. In particular there are quite a few agentic systems where it would be super nice to have webhooks natively integrated (wake on incoming POST request). I definitely don’t want to have this in the core, but that sounds like the kind of problem that could be a nice adjacent library that builds on top of absurd.
The biggest omission is that it does not support partitioning yet. That’s unfortunate because it makes cleaning up data more expensive than it has to be. In theory supporting partitions would be pretty simple. You could have weekly partitions and then detach and delete them when they expire. The only thing that really stands in the way of that is that Postgres does not have a convenient way of actually doing that.
The hard part is not partitioning itself, it’s partition lifecycle management under real workloads. If a worker inserts a row whose expires_at lands in a month without a partition, the insert fails and the workflow crashes. So you need a separate maintenance loop that always creates future partitions far enough ahead for sleeps/retries, and does that for every queue.
On the delete side, the safe approach is DETACH PARTITION CONCURRENTLY, but getting that to run from pg_cron doesn’t work because it cannot be run within a transaction, and pg_cron runs everything in one.
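The create-ahead side of that maintenance loop is simple to make concrete. A generic sketch (not part of Absurd): given how far ahead a sleep or retry can land, enumerate the month partitions that must already exist.

```python
from datetime import date

def months_ahead(today: date, horizon_months: int):
    # A partition must exist for every month a future expires_at
    # could land in, so enumerate them ahead of time; the maintenance
    # loop would then CREATE TABLE ... PARTITION OF for each name.
    months = []
    year, month = today.year, today.month
    for _ in range(horizon_months + 1):  # include the current month
        months.append(f"{year:04d}_{month:02d}")
        month += 1
        if month > 12:
            month, year = 1, year + 1
    return months

# e.g. tasks may sleep up to 3 months, and the horizon crosses a year:
assert months_ahead(date(2026, 11, 15), 3) == ["2026_11", "2026_12", "2027_01", "2027_02"]
```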
I don’t think it’s an unsolvable problem, but it’s one I have not found a good solution for and I would love to get input on.
Does Open Source Still Matter?
This brings me a bit to a meta point on the whole thing which is what the point of Open Source libraries in the age of agentic engineering is. Durable Execution is now something that plenty of startups sell you. On the other hand it’s also something that an agent would build you and people might not even look for solutions any more. It’s kind of … weird?
I don’t think a durable execution library can support a company, I really don’t. On the other hand I think it’s just complex enough of a problem that it could be a good Open Source project void of commercial interests. You do need a bit of an ecosystem around it, particularly for UI and good DX for debugging, and that’s hard to get from a throwaway implementation.
I don’t think we have squared this yet, but it’s already much better to use than a few months ago.
If you’re using Absurd, thinking about it, or building adjacent ideas, I’d love your feedback. Bug reports, rough edges, design critiques, and contributions are all very welcome—this project has gotten better every time someone poked at it from a different angle.
April 03, 2026
PyCon
¡Haciendo Historia! Celebrating PyCon US’s First-Ever Spanish-Language Keynote
PyCon US has always been about more than just a programming language; it’s about the incredible, global community that builds, supports, and innovates with it. This year at PyCon US 2026 in Long Beach, we are thrilled to celebrate a remarkable milestone for our community: our very first Spanish-language keynote address, delivered by the brilliant Pablo Galindo Salgado!
If you’ve used Python recently, you’ve benefited from Pablo’s work. Pablo works on the Python team at Hudson River Trading and is a CPython core developer. He is currently serving his 6th term on the Python Steering Council and served as the release manager for Python 3.10 and 3.11. But his background isn't just in software engineering—Pablo is a Theoretical Physicist specializing in general relativity and black hole physics! (He also has a cat, though he assures us that his cat does not write any code). With his deep technical expertise and unique scientific perspective, Pablo is uniquely positioned to deliver an unforgettable keynote.
But what makes this moment even more special is the language in which it will be delivered. Currently, about 14% of the US population speaks Spanish. PyCon US has proudly hosted the PyCon Charlas, a dedicated track of talks presented entirely in Spanish, since 2018. The Charlas have been a phenomenal success, highlighting the deep well of talent within the Spanish-speaking Pythonista community across Latin America, Europe, the United States, Africa (looking at you, Equatorial Guinea), and beyond.
Elevating a Spanish-language presentation to the Keynote Stage is a significant step for PyCon US for several reasons:
Amplifying the Charlas Track: Moving from the Charlas track to the Keynote stage bridges the gap between our English and Spanish programming. It exposes the broader PyCon US audience to the vibrancy of our Spanish-speaking community, and we hope it encourages more cross-cultural collaboration and networking.
Encouraging the Spanish-speaking community: A few initiatives have been developed around the Python ecosystem supporting the Spanish-speaking community, like the official translation of the Python documentation. This initiative has been adding more and more people, so if you are interested, make sure to check out the contribution guide! We also encourage you to check out the Python en Español community hub, which has gathered many Spanish-speaking communities around the world. You can also come visit the Python en Español booth in the PyCon US Expo Hall!
Motivating other language-focused communities: If the Spanish-speaking community managed to get here, we have no doubt that in the future we could have even more languages in our beloved conference. Let us know which language you’d like to see follow these steps!
Whether you are a native Spanish speaker, you've been practicing your español on some apps, or you just want to experience a fantastic technical talk through the magic of live translation and community spirit, you will not want to miss this. So charge up your phone, pack a set of headphones so you can use the live translation service, and get ready to give a massive, warm welcome to Pablo Galindo Salgado. ¡Nos vemos en Long Beach! (See you in Long Beach!)
PyCon US siempre ha sido mucho más que un lenguaje de programación; se trata de la increíble comunidad global que lo construye, lo apoya y lo impulsa. Este año, en PyCon US 2026 en Long Beach, estamos emocionados de celebrar un hito extraordinario para nuestra comunidad: ¡nuestra primera keynote en español, presentada por el brillante Pablo Galindo Salgado! Si has usado Python recientemente, probablemente te has beneficiado del trabajo de Pablo. Actualmente forma parte del equipo de Python en Hudson River Trading y es desarrollador core de CPython. Además, está cumpliendo su sexto mandato en el Python Steering Council y fue release manager de Python 3.10 y 3.11.
Pero su trayectoria no se limita a la ingeniería de software: ¡Pablo también es físico teórico especializado en relatividad general y física de agujeros negros! (Y sí, también tiene un gato, aunque asegura que no programa). Con su sólida experiencia técnica y su perspectiva científica única, Pablo está más que preparado para ofrecer una keynote inolvidable.
Pero lo que hace este momento aún más especial es el idioma en el que será presentada. Actualmente, aproximadamente el 14% de la población de Estados Unidos habla español. Desde 2018, PyCon US ha sido anfitrión de PyCon Charlas, un track dedicado completamente a presentaciones en español. Este espacio ha sido un éxito rotundo, mostrando el enorme talento de la comunidad Python hispanohablante en América Latina, Europa, Estados Unidos, África (sí, Guinea Ecuatorial 👀) y más allá. Las Charlas han evolucionado desde un pequeño conjunto de presentaciones en un solo día hasta convertirse en un track oficial de ¡dos días completos! Pero creemos que podemos ir aún más lejos.
Llevar una presentación en español al escenario principal es un paso importante por varias razones:
- Reflejar nuestra comunidad global
- Amplificar el track de Charlas
- Impulsar a la comunidad hispanohablante
- Motivar a otras comunidades lingüísticas
Ya sea que hables español nativo, estés practicando con alguna app o simplemente quieras disfrutar de una excelente charla técnica con traducción en vivo y el espíritu de la comunidad, no querrás perderte este momento. Carga tu teléfono, lleva tus audífonos para usar el servicio de traducción en vivo y prepárate para darle una gran bienvenida a Pablo Galindo Salgado. ¡Nos vemos en Long Beach!
Para conocer más sobre Pablo y otros increíbles keynote speakers, visita la página de Keynote Speakers de PyCon US 2026.
Meet Your Keynote Speaker: Pablo Galindo Salgado
Why a Spanish Keynote Matters to PyCon US
The Charlas track has evolved from a single day with a handful of talks to an official track with two full days of talks! But we believe we can do more.
Reflecting our Global Community: Python is a truly global language, and its developer base is wonderfully diverse. By placing a Spanish-language talk on the main stage, PyCon US is reflecting the reality of our worldwide community. We want to remind folks that top-tier technical leadership and innovation happen in every language.
Join Us for This Historic Moment
Título: ¡Haciendo historia! Celebrando la primera keynote en español de PyCon US
Conoce a tu keynote speaker: Pablo Galindo Salgado
Por qué una keynote en español es importante para PyCon US
Python es un lenguaje verdaderamente global, con una comunidad diversa y vibrante. Al incluir una keynote en español, PyCon US refleja esta realidad y reconoce que la innovación y el liderazgo técnico existen en todos los idiomas.
Este paso conecta el track de Charlas con el escenario principal, acercando a toda la audiencia de PyCon US a la riqueza de la comunidad hispanohablante y fomentando más colaboración e intercambio cultural.
Existen iniciativas dentro del ecosistema Python que apoyan a la comunidad en español, como la traducción oficial de la documentación de Python. Este esfuerzo sigue creciendo, así que si te interesa contribuir, ¡no dudes en revisar la guía de contribución!
También te invitamos a conocer el hub de Python en Español, que reúne comunidades de todo el mundo, y a visitar su booth en el Expo Hall de PyCon US.
Si la comunidad hispanohablante logró este paso, estamos seguros de que en el futuro veremos más idiomas en nuestra conferencia. ¡Cuéntanos cuál te gustaría ver próximamente!
Acompáñanos en este momento histórico
Real Python
Quiz: How to Add Features to a Python Project With Codex CLI
In this quiz, you’ll test your understanding of How to Add Features to a Python Project With Codex CLI.
By working through this quiz, you’ll revisit how to install, configure, and use Codex CLI to implement and refine features in a Python project using natural language prompts.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Class Concepts: Object-Oriented Programming in Python
In this quiz, you’ll test your understanding of Class Concepts: Object-Oriented Programming in Python.
By working through this quiz, you’ll revisit how to define classes, use instance and class attributes, write different types of methods, and apply the descriptor protocol through properties.
You can also deepen your knowledge with the tutorial Python Classes: The Power of Object-Oriented Programming.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Rodrigo Girão Serrão
Indexable iterables
Learn how objects are automatically iterable if you implement integer indexing.
Introduction
An iterable in Python is any object you can traverse through with a for loop.
Iterables are typically containers and iterating over the iterable object allows you to access the elements of the container.
This article will show you how you can create your own iterable objects through the implementation of integer indexing.
Indexing with __getitem__
To make an object that can be indexed you need to implement the method __getitem__.
As an example, you'll implement a class ArithmeticSequence that represents an arithmetic sequence, like \(5, 8, 11, 14, 17, 20\).
An arithmetic sequence is defined by its first number (\(5\)), the step between numbers (\(3\)), and the total number of elements (\(6\)).
The sequence \(5, 8, 11, 14, 17, 20\) is seq = ArithmeticSequence(5, 3, 6) and seq[3] should be \(14\).
Using some arithmetic, you can implement indexing in __getitem__ directly:
class ArithmeticSequence:
    def __init__(self, start: int, step: int, total: int) -> None:
        self.start = start
        self.step = step
        self.total = total

    def __getitem__(self, index: int) -> int:
        if not 0 <= index < self.total:
            raise IndexError(f"Invalid index {index}.")
        return self.start + index * self.step

seq = ArithmeticSequence(5, 3, 6)
print(seq[3])  # 14
Turning an indexable object into an iterable
If your object accepts integer indices, then it is automatically an iterable.
In fact, you can already iterate over the sequence you created above by simply using it in a for loop:
for value in seq:
    print(value, end=", ")

# 5, 8, 11, 14, 17, 20,
How Python distinguishes iterables from non-iterables
You might ask yourself “how does Python inspect __getitem__ to see it uses numeric indices?”
It doesn't!
If your object implements __getitem__ and you try to use it as an iterable, Python will try to iterate over it.
It either works or it doesn't!
To illustrate this point, you can define a class DictWrapper that wraps a dictionary and implements __getitem__ by just grabbing the corresponding item out of a dictionary:
class DictWrapper:
    def __init__(self, values):
        self.values = values

    def __getitem__(self, index):
        return self.values[index]
Since DictWrapper implements __getitem__, if the wrapped dictionary just happens to have some integer keys (starting at 0), then you'll be able to iterate partially over the dictionary:
d1 = DictWrapper({0: "hey", 1: "bye", "key": "value"})
for value in d1:
    print(value)

hey
bye
Traceback (most recent call last):
  File "<python-input-25>", line 3, in <module>
    for value in d1:
                 ^^
  File "<python-input-18>", line 6, in __getitem__
    return self.values[index]
           ~~~~~~~~~~~^^^^^^^
KeyError: 2
What's interesting is that you can see explicitly that Python tried to index the object d1 with the key 2 and it didn't work.
In the ArithmeticSequence above, you didn't get an error because you raised IndexError when you reached the end and that's how Python understood the iteration was done.
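You can verify this termination behavior directly: any built-in that consumes an iterable drives the same protocol a for loop uses, calling seq[0], seq[1], ... until IndexError is raised. A self-contained version of the earlier class shows it:

```python
class ArithmeticSequence:
    def __init__(self, start, step, total):
        self.start = start
        self.step = step
        self.total = total

    def __getitem__(self, index):
        # Raising IndexError is the signal that tells Python the
        # (legacy, __getitem__-based) iteration is finished.
        if not 0 <= index < self.total:
            raise IndexError(f"Invalid index {index}.")
        return self.start + index * self.step

seq = ArithmeticSequence(5, 3, 6)
# list() indexes 0, 1, 2, ... and stops cleanly at the IndexError:
assert list(seq) == [5, 8, 11, 14, 17, 20]
```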
In this case, since you get a KeyError, Python doesn't understand what's going on and just...
Talk Python Blog
Announcing Course Completion Certificates
I’m very excited to share that you can now generate course completion certificates automatically at Talk Python Training. What’s even better is our certificates allow you to one-click add them as official licenses and certifications on LinkedIn.
Remember last week, I added some really nice features to your account page showing which courses are completed and which ones you’ve recently participated in. Just start there. Find a course you recently completed, click certificate, and there is a Share to LinkedIn UI right there. It’s nearly entirely automated.
ListenData
How to Build ChatGPT Clone in Python
In this article, we will see the steps involved in building a chat application and an answering bot in Python using the ChatGPT API and gradio.
Developing a chat application in Python provides more control and flexibility than the ChatGPT website. You can customize and extend the chat application as per your needs. It also helps you integrate with your existing systems and other APIs.
To read this article in full, please click here
How to Use Web Search in ChatGPT API
In this tutorial, we will explore how to use web search in OpenAI API.
Installation Step: Please make sure to install the openai library using the command pip install openai.
from openai import OpenAI
client = OpenAI(api_key="sk-xxxxxxxxx") # Replace with your actual API key
response = client.responses.create(
model="gpt-5.4",
tools=[{"type": "web_search_preview"}],
input="Apple (AAPL) most recent stock price"
)
print(response.output_text)
As of the latest available data (April 2, 2026), Apple Inc. (AAPL) stock is trading at $255.92 per share, reflecting an increase of $0.29 (approximately 0.11%) from the previous close.
In the latest OpenAI models, the search_context_size setting controls how much information the tool gathers from the web to answer your question. A higher setting gives better answers but is slower and costs more, while a lower setting is faster and cheaper but might not be as accurate. Possible values are high, medium, or low.
from openai import OpenAI
client = OpenAI(api_key="sk-xxxxxxxxx") # Replace with your actual API key
response = client.responses.create(
model="gpt-5.4",
tools=[{
"type": "web_search_preview",
"search_context_size": "high",
}],
input="Which team won the latest FIFA World Cup?"
)
print(response.output_text)
You can improve the relevance of search results by providing approximate geographic details such as country, city, region or timezone. For example, use a two-letter country code like GB for the United Kingdom or free-form text for cities and regions like London. You may also specify the user's timezone using IANA format such as Europe/London.
from openai import OpenAI
client = OpenAI(api_key="sk-xxxxxxxxx") # Use your actual API key
response = client.responses.create(
model="gpt-5.4",
tools=[{
"type": "web_search_preview",
"user_location": {
"type": "approximate",
"country": "GB", # ISO 2-letter country code
"city": "London", # Free text for city
"region": "London", # Free text for region/state
"timezone": "Europe/London" # IANA timezone (optional)
}
}],
input="What are the top-rated places to eat near Buckingham Palace?",
)
print(response.output_text)
You can use the following code to get the URL, title and location of the cited sources.
# Citations
response = client.responses.create(
model="gpt-5.4",
tools=[{"type": "web_search_preview"}],
input="most recent news from New York?"
)
annotations = response.output[1].content[0].annotations
print("Annotations:", annotations)
print("Annotations List:")
print("-" * 80)
for i, annotation in enumerate(annotations, 1):
print(f"Annotation {i}:")
print(f" Title: {annotation.title}")
print(f" URL: {annotation.url}")
print(f" Type: {annotation.type}")
print(f" Start Index: {annotation.start_index}")
print(f" End Index: {annotation.end_index}")
print("-" * 80)
Alternative method to use web search is by integrating Google's Custom Search API with ChatGPT.
By using Google's Custom Search API, we can get real-time search results. Refer to the steps below to learn how to get an API key from the Google Developers Console and create a custom search engine.
To read this article in full, please click here
4 Ways to Use ChatGPT API in Python
In this tutorial, we will explain how to use ChatGPT API in Python, along with examples.
Please follow the steps below to access the ChatGPT API.
- Visit the OpenAI Platform and sign up using your Google, Microsoft or Apple account.
- After creating your account, the next step is to generate a secret API key to access the API. The API key looks like this: sk-xxxxxxxxxxxxxxxxxxxx
- If your phone number has not been associated with any other OpenAI account previously, you may get free credits to test the API. Otherwise you have to add at least 5 dollars to your account, and charges will be based on usage and the type of model you use. Check out the pricing details on the OpenAI website.
- Now you can call the API using the code below.
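The article's code sample is not included in this excerpt, but the core of any chat client built on the API is the message list you send with each request. A minimal, SDK-independent sketch of maintaining that history (the role names follow the ChatGPT API convention; the helper functions are illustrative, not part of any library):

```python
def build_history():
    # The ChatGPT API expects a list of {"role", "content"} dicts;
    # "system" sets behavior, then "user"/"assistant" turns alternate.
    return [{"role": "system", "content": "You are a helpful assistant."}]

def add_turn(history, user_text, assistant_text):
    # Append both sides of an exchange so the model keeps the full
    # conversation context on the next request.
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

history = build_history()
add_turn(history, "What is the capital of France?", "Paris.")
add_turn(history, "And what currency is used there?", "The euro.")

roles = [m["role"] for m in history]
assert roles == ["system", "user", "assistant", "user", "assistant"]
```

In a real client, you would pass this list as the messages argument of a chat completion call and append the model's reply before the next turn.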



