
Planet Python

Last update: February 24, 2026 07:44 PM UTC

February 24, 2026


PyCoder’s Weekly

Issue #723: Chained Assignment, Great Tables, Docstrings, and More (Feb. 24, 2026)

#723 – FEBRUARY 24, 2026
View in Browser »



Chained Assignment in Python Bytecode

When doing chained assignment with mutables (e.g. a = b = []), all chained names get bound to a single mutable object. This article explains why this happens and what you can do instead.
ROHAN PRINJA
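A quick sketch of the gotcha the article covers (variable names are illustrative, not from the article):

```python
# Chained assignment binds every name to the SAME object.
a = b = []
a.append(1)
print(b)  # [1] -- b sees the change because a and b share one list
print(a is b)  # True

# For two independent lists, assign separately or unpack a tuple:
c, d = [], []
c.append(1)
print(d)  # [] -- d is unaffected
```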

Great Tables: Publication-Ready Tables From DataFrames

Learn how to create publication-ready tables from Pandas and Polars DataFrames using Great Tables. Format currencies, add sparklines, apply conditional styling, and export to PNG.
CODECUT.AI • Shared by Khuyen Tran

Replay: Where Developers Build Reliable AI


Replay is a practical conference for developers building real systems. The Python AI & versioning workshop covers durable AI agents, safe workflow evolution, and production-ready deployment techniques. Use code PYCODER75 for 75% off your ticket →
TEMPORAL sponsor

Write Python Docstrings Effectively

Learn to write clear, effective Python docstrings using best practices, common styles, and built-in conventions for your code.
REAL PYTHON course

Discussions

Use of PyPI as a Generic Storage Platform for Binaries

PYTHON.ORG

Python Jobs

Python + AI Content Specialist (Anywhere)

Real Python

Software Engineer (Python / Django) (South San Francisco, CA, USA)

Mirvie

More Python Jobs >>>

Articles & Tutorials

Join the Python Security Response Team!

The Python Security Response Team is a group of volunteers and PSF staff that coordinate and triage vulnerability reports and remediations. It is governed in a similar fashion to the core team. This article explains what the PSRT is and how you can join.
CPYTHON DEV BLOG

A CLI to Fight GitHub Spam

Hugo is a core Python maintainer and the CPython project gets lots of garbage PRs, not just AI slop but spam tickets as well. To help with this he has written a new GitHub CLI extension that makes it easier to apply a label to the PR and close it.
HUGO VAN KEMENADE

B2B MCP Auth Support


Your users are asking if they can connect their AI agent to your product, but you want to make sure they can do it safely and securely. PropelAuth makes that possible →
PROPELAUTH sponsor

Exploring MCP Apps & Adding Interactive UIs to Clients

How can you move your MCP tools beyond plain text? How do you add interactive UI components directly inside chat conversations? This week on the show, Den Delimarsky from Anthropic joins us to discuss MCP Apps and interactive UIs in MCP.
REAL PYTHON podcast

How to Use Overloaded Signatures in Python?

Sometimes a function takes multiple arguments of different types, and the return type depends on specific combinations of inputs. How do you tell the type checker? Use the @overload decorator from the typing module.
BORUTZKI
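A minimal sketch of the pattern the article describes (the double function here is a made-up example, not from the article): the @overload stubs exist only for the type checker, while the final, undecorated definition is the single runtime implementation.

```python
from typing import overload

@overload
def double(value: int) -> int: ...
@overload
def double(value: str) -> str: ...

def double(value):
    # One runtime implementation serves both overloads.
    return value * 2

print(double(21))    # 42
print(double("ab"))  # abab
```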

icu4py: Bindings to the Unicode ICU Library

The International Components for Unicode (ICU) is the official library for Unicode and globalization tools and is used by many major projects. icu4py is a first step at a Python binding to the C++ API.
ADAM JOHNSON

Evolving Git for the Next Decade

This article summarizes Patrick Steinhardt’s talk at FOSDEM 2026 that discusses the current shortcomings of git and how they’re being addressed, preparing your favorite repo tool for the next decade.
JOE BROCKMEIER

How to Install Python on Your System: A Guide

Learn how to install the latest Python version on Windows, macOS, and Linux. Check your version and choose the best installation method for your system.
REAL PYTHON

Quiz: How to Install Python on Your System: A Guide

REAL PYTHON

TinyDB: A Lightweight JSON Database for Small Projects

If you’re looking for a JSON document-oriented database that requires no configuration for your Python project, TinyDB could be exactly what you need.
REAL PYTHON

Quiz: TinyDB: A Lightweight JSON Database for Small Projects

REAL PYTHON

Django ORM Standalone: Querying an Existing Database

A practical step-by-step guide to using Django ORM in standalone mode to connect to and query an existing database using inspectdb.
PAOLO MELCHIORRE

Projects & Code

tallyman: CLI to Summarize Code Size by Language

GITHUB.COM/MIKECKENNEDY

cattrs: Class Converters for Attrs, Dataclasses and Friends

GITHUB.COM/PYTHON-ATTRS

toml-fmt: Format Python TOML Configurations

GITHUB.COM/TOX-DEV

movement: Analyse Animal Body Movements

GITHUB.COM/NEUROINFORMATICS-UNIT

django-hawkeye: Django Full-Text Search Using PostgreSQL

GITHUB.COM/FARHANALIRAZA

Events

Weekly Real Python Office Hours Q&A (Virtual)

February 25, 2026
REALPYTHON.COM

MLOps Open Source Sprint

February 27, 2026
MEETUP.COM

Melbourne Python Users Group, Australia

March 2, 2026
J.MP

PyBodensee Monthly Meetup

March 2, 2026
PYBODENSEE.COM

Python Unplugged on PyTV

March 4 to March 5, 2026
JETBRAINS.COM


Happy Pythoning!
This was PyCoder’s Weekly Issue #723.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

February 24, 2026 07:30 PM UTC


Real Python

Start Building With FastAPI

FastAPI is a web framework for building APIs with Python. It leverages standard Python type hints to provide automatic validation, serialization, and interactive documentation. When you’re deciding between Python web frameworks, FastAPI stands out for its speed, developer experience, and built-in features that reduce boilerplate code for API development:

Use Case                                    Pick FastAPI    Pick Flask or Django
You want to build an API-driven web app          ✓
You need a full-stack web framework                                  ✓
You value automatic API documentation            ✓

Whether you’re building a minimal REST API or a complex backend service, understanding core features of FastAPI will help you make an informed decision about adopting it for your projects. To get the most from this video course, you’ll benefit from having basic knowledge of Python functions, HTTP concepts, and JSON handling.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

February 24, 2026 02:00 PM UTC


The Python Coding Stack

“Python’s Plumbing” Is Not As Flashy as “Magic Methods” • But Is It Better?

You’ve heard this phrase before: “Everything is an object in Python”. But here’s another phrase that’s related but rather less catchy:

Everything goes through special methods (dunder methods) in Python

Special methods are everywhere. However, you don’t see them. In fact, you’re not meant to use them directly unless you’re defining them within a class. They’re out of sight. But they keep everything moving smoothly in Python.

Here’s a short essay exploring where these special methods fit within Python.

Why Should I Care About Special Methods?

“Everything goes through special methods in Python.” I should probably preface the phrase with “almost”, but the phrase is already unwieldy as it is.

You want to display an object on the screen? Python looks for guidance in .__str__() or .__repr__().

You want to use the object in a for loop? Is it even possible? Python looks for .__iter__() to check whether it can iterate over the object and how.

Do you want to fetch an individual item from the object? If this makes sense for this object, then it will have a .__getitem__() special method.

How should Python interpret an object’s truthiness? There’s .__bool__() for that. Or .__len__()!

Ah, you’d like to add an object to another object. Does your object have a .__add__() special method?

I could go on. I’ll post links to articles that cover some of these special methods in more detail below.
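The lookups above can be sketched in one small class. This is my own hypothetical example, not code from the article:

```python
class Playlist:
    def __init__(self, songs):
        self._songs = list(songs)

    def __repr__(self):            # consulted when displaying the object
        return f"Playlist({self._songs!r})"

    def __len__(self):             # consulted by len() -- and for truthiness
        return len(self._songs)

    def __getitem__(self, index):  # consulted by playlist[0]
        return self._songs[index]

    def __add__(self, other):      # consulted by playlist + playlist
        return Playlist(self._songs + other._songs)

mix = Playlist(["Song A"]) + Playlist(["Song B"])
print(len(mix))   # 2
print(mix[0])     # Song A
for song in mix:  # iteration falls back to .__getitem__() here
    print(song)
```

Note that the for loop works even without .__iter__(): when it’s absent, Python falls back to calling .__getitem__() with successive indices.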

I’m also running a two-hour workshop this Thursday, 26 February, called Python’s Plumbing • Dunder Methods and Python’s Hidden Interface

Book Workshops

Many operations you take for granted in your code are governed by special methods. Each class defines the special methods it needs, and Python knows how to handle instances of that class through those methods.

But let’s talk about different ways we refer to these special methods.

What’s In A Name?

These methods are called special methods. That’s their official name. These special methods have double underscores at the beginning and end of their names, such as .__init__(), .__str__(), and .__iter__(). This double-underscore notation led to the informal name “dunder methods”. And let’s face it, most people call them “dunder methods” instead of their actual name: “special methods.”

I often call them dunder methods, too. However, the term dunder merely describes the syntax, double underscore, so it doesn’t tell us much about what they do. The term special doesn’t tell us what they do, either, but it shows us they have a special role in Python.

There’s No Such Thing as Magic

Some people also call them magic methods. However, I avoid this term, and I discourage students from using it. It makes these methods look unnecessarily mysterious, perhaps difficult to understand because it’s all down to magic.

But there’s no such thing as magic (unless you’re Harry Potter). And the magic tricks we see from real-world “magicians” are just that – tricks. The magic dissolves away once you know how the trick works.

And if you’re learning Python, then you need to learn how to be a magician. You need to learn the “magic tricks.” Therefore, they’re no longer magic!

Python’s Plumbing (“Plumbing Methods”, Anyone?)

So, “special method” tells us that these methods are important, but it doesn’t tell us what they do. “Dunder method” describes the syntax. “Magic method” misleads us and doesn’t provide any useful insight.

How about “plumbing methods” then? Now, before anyone takes me too seriously, I’m saying this with my tongue firmly in my cheek. I’m not foolish enough to suggest a new term for the whole Python community to adopt. And it’s not as flashy as “magic methods” or as cool as “dunder methods”. But bear with me…

Let’s explore the analogy, even if the term won’t catch on.

Disclaimer: I know very little about plumbing. But I think that’s OK for this essay!

There are pipes, valves, and other stuff carrying water (clean or otherwise) around your house. You know they’re there. You need them there. But you don’t see them.

You don’t think about these pipes unless you’re building the house – or unless the pipes are blocked or leaking.

The house’s plumbing keeps things running smoothly. Yet, it’s out of sight, and you don’t normally think about it. You take it for granted.

You see where I’m going, right?

Python’s special methods perform the same role. You don’t normally see them when coding since they’re called implicitly, behind the scenes. You do need to define the special methods you need when you’re defining the class, just like you’ll need to lay the pipes when building or modifying your house.

And if something goes wrong in your code, you may need to dive into how these dunder methods behave, just as when you have a leak and need to explore which pipe is responsible.

Good plumbing is reliable, predictable. Bad plumbing is asking for trouble. The same applies to the infrastructure you create through the special methods you define in classes.

So, there you go, they’re “plumbing methods”. This name tells us what they do!


I’m running a two-hour live workshop this Thursday called Python’s Plumbing • Dunder Methods and Python’s Hidden Interface. This is your last chance to join and I may not run this workshop again for a while.

This workshop is the first of three in the Python Behind the Scenes series. The other two workshops in the series are:

Join all three, or pick and choose:

The Python Coding Stack
When "It Works" Is Not Good Enough • Live Workshops
You’ve been reading articles here on The Python Coding Stack. How about live workshops…
Read more

Book Workshops

Image by Pete Linforth from Pixabay


For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!

Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.

And you can find out more about me at stephengruppetta.com

Further reading related to this article’s topic:

February 24, 2026 01:56 PM UTC


PyBites

Why do we insist on struggling alone?

A realisation about my son’s basketball team reminded me that we should never be ashamed to ask for help, or better yet, seek formal coaching/support.

Last year, for two straight seasons, my son’s team got absolutely smashed on the court. 

They had the energy and the determination, but they were effectively running in circles without any real guidance (and I’m definitely not a basketball player!).

Sound familiar? It should!

It’s exactly what it feels like when you can’t figure out where you’re going wrong, stuck in tutorial hell or banging your head against a wall trying to architect an application by yourself.

You end up spending hours debugging something a senior developer could easily point out in a 5-minute PR review. To put it another way – you’re wasting the most valuable resource you have: your time.

Everything changed for my son’s team when we brought in an experienced coach to help. He analysed their play style, identified each of their strengths and gaps, then gave them proven, concrete steps to correct their form. They just finished their latest season in first place.

This is the exact strategy behind the Pybites Developer Mindset (PDM) program. PDM is a 12-week personalised 1:1 coaching program focused on project-based development, designed to bridge specific gaps in an individual’s knowledge. This is bespoke coaching to help you where you need it.

We provide detailed, actionable PR reviews to speed up the learning process. You’ll learn to navigate real-world constraints that go beyond your code, like hosting, deployment and stakeholder needs.

Stop guessing and start building with a mentor who will reinforce professional developer skills and validate your technical direction.

You can use the links below to chat with us about how we can best help. There really is no need to go it alone!

Julian

This was originally sent to our email list. Join here.

And if you do decide to go alone…


February 24, 2026 03:36 AM UTC


Seth Michael Larson

Respecting maintainer time should be in security policies

Generative AI tools becoming more common means that vulnerability reports these days are loooong. If you're an open source maintainer, you unfortunately know what I'm talking about. Markdown-formatted, more than five headings, similar in length to a blog post, and characterized as a vulnerability worthy of its own domain name.

This makes triaging vulnerabilities by often under-resourced maintainers more difficult, time-consuming, and stressful. Whether a report is a genuine vulnerability or not, it now takes more maintainer time than necessary to make a determination. I've heard from multiple maintainers that report length specifically weighs on maintainer time, whether the reports are “slop vulnerability reports” or simply come from overly-thorough reporters.

David Lord, the maintainer of Flask and Pallets, captures this problem concisely, that the best security reports respect maintainer time:

Post by @davidism@mas.to

Here's my proposal: require that security reports respect maintainer time in your security policy. This especially applies to “initial” security reports, where the reporter is most likely to send disproportionately more information than is needed to make an initial determination.

The best part about this framing is you don't have to mention the elephant in the room. Here are a few example security policy requirements to include:

Notice I didn't even have to mention LLMs or generative AI, so there's no ambiguity about whether a given report follows the policy or not. While you have reporters reading your security policy, you might also add suggestions that help maintainers remediate vulnerabilities faster:

After this is added to a project security policy (preferably under its own linkable heading) then any security report that doesn't respect maintainer time can be punted back to the reporter with a canned response:

Your report doesn't meet our security policy: https://... Please amend your report to meet our policy so we may efficiently make a determination and remediation. Thank you!

Now your expectations have been made clear to reporters and valuable maintainer time is saved. From here the security report can evolve more like a dialogue which requires maintainer time and energy proportionate to the value that the report represents to the project.

I understand this runs counter to how many vulnerability teams work today. Many reporters opt to provide as much context and detail up front in a single report, likely to reduce back-and-forth or push-back from projects. If teams reporting vulnerabilities to open source projects want to be the most effective, they should meet the pace and style that best suits the project they are reporting to.

Please note that many vulnerability reporters are acting in good faith and aren't trying to burden maintainers. If you believe your peer is acting in good faith, maybe give them a pass if the report doesn't strictly meet the requirements and save the canned response for the bigger offenders.

Have any thoughts about this topic? Have you seen this in any open source security policies already? Let me know! Thanks to OSTIF founder Derek Zimmer for reviewing this blog post prior to publication.



Thanks for keeping RSS alive! ♥

February 24, 2026 12:00 AM UTC


Israel Fruchter

Coodie: A 4-year-old idea brought to life by AI (and some coffee)

So, let’s rewind a bit. About four years ago, I was looking at Beanie — this really nifty, Pydantic-based ODM for MongoDB. And as someone who spends an unhealthy amount of time deep in the trenches of ScyllaDB and Cassandra, I had a thought: “Where is my Beanie? I want a hoodie, but for Cassandra.” And thus, the name was born: coodie = cassandra + beanie (hoodie). Catchy, right?

Like any respectable developer, I rushed to GitHub, created the repository fruch/coodie, maybe wrote a highly ambitious README.md, set up a pyproject.toml, and then… absolutely nothing. Crickets. Life happened. I had CI pipelines to fix, Scylla drivers to maintain, conferences like PyConIL and EuroPython to attend, and live-tweeting to do. coodie just sat there, gathering digital dust, a glorious monument to my good intentions and severe lack of free time.

Fast forward to today. The AI era.

We are living in a weird timeline where LLMs are everywhere, threatening to take our jobs while simultaneously failing to center a div. People are talking about AI coding assistants non-stop. I myself was quite happy with my usual workflow, but I figured — why not take my 4-year-old fever dream for a walk in the park and feed it to the machine?

I threw the basic concept at an AI. I gave it some ground rules: it has to use Pydantic v2, it needs to be backed by scylla-driver, and it needs to feel as magical as Beanie.

And holy crap. It actually worked.

I spent four years procrastinating on this, and a glorified autocomplete bot basically bootstrapped the whole thing while I was sipping my morning coffee. I’m not sure if I should be proud of my newfound status as a “prompt engineer,” or slightly terrified that my open-source street cred is now partially owned by a matrix of weights and biases.

The sweet setup we have now

Despite the AI doing the heavy lifting, the architecture actually makes a lot of sense. Here is what coodie brings to the table:

Here is a quick taste of what the AI and I managed to cook up:

from typing import Annotated
from uuid import UUID, uuid4
from pydantic import Field
from coodie import Document, init_coodie, PrimaryKey

class Product(Document):
    id: Annotated[UUID, PrimaryKey()] = Field(default_factory=uuid4)
    name: str
    brand: str = ""  # queried in the example below
    price: float = 0.0

    class Settings:
        keyspace = "my_ks"

And then querying it is a breeze:

# async
products = (
    await Product.find()
    .filter(brand="Acme")
    .order_by("price")
    .limit(20)
    .all()
)

Wrapping up

The AI works out of the box with almost zero friction. It’s very impressive, and honestly, a bit surreal to see an idea you abandoned years ago suddenly have a passing CI suite, complete with tests and GitHub Actions (which are still very tidy to look at, by the way).

You can check out the code, star it, or fork it over at github.com/fruch/coodie.

PRs are welcome. Preferably written by humans, but honestly… I guess I can’t really enforce that anymore, can I? יאללה, let’s see how it goes.

February 24, 2026 12:00 AM UTC

February 23, 2026


Anarcat

PSA: North America changes time forward soon, Europe next

This is a copy of an email I used to send internally at work and now made public. I'm not sure I'll make a habit of posting it here, especially not twice a year, unless people really like it. Right now, it's mostly here to keep with my current writing spree going.

This is your bi-yearly reminder that time is changing soon!

What's happening?

For people not on tor-internal, you should know that I've been sending semi-regular announcements when daylight saving changes occur. Starting now, I'm making those announcements public so they can be shared with the wider community because, after all, this affects everyone (kind of).

For those of you lucky enough to have no idea what I'm talking about, you should know that some places in the world implement what is called Daylight saving time or DST.

Normally, you shouldn't have to do anything: computers automatically change time following local rules, assuming they are correctly configured, provided recent updates have been applied in the case of a recent change in said rules (because yes, this happens).

Appliances, of course, will likely not change time and will need to be adjusted unless they are so-called "smart" (also known as "part of a bot net").

If your clock is flashing "0:00" or "12:00", you have no action to take, congratulations on having the right time once or twice a day.

If you haven't changed those clocks in six months, congratulations, they will be accurate again!

In any case, you should still consider DST because it might affect some of your meeting schedules, particularly if you set up a new meeting schedule in the last 6 months and forgot to consider this change.

If your location does not have DST

Properly scheduled meetings affecting multiple time zones are set in UTC time, which does not change. So if your location does not observe time changes, your (local!) meeting time will not change.

But be aware that some other folks attending your meeting might have the DST bug and their meeting times will change. They might miss entire meetings or arrive late as you frantically ping them over IRC, Matrix, Signal, SMS, Ricochet, Mattermost, SimpleX, Whatsapp, Discord, Slack, Wechat, Snapchat, Telegram, XMPP, Briar, Zulip, RocketChat, DeltaChat, talk(1), write(1), actual telegrams, Meshtastic, Meshcore, Reticulum, APRS, snail mail, and, finally, flying a remote presence drone to their house, asking what's going on.

(Sorry if I forgot your preferred messaging client here, I tried my best.)

Be kind; those poor folks might be more sleep deprived as DST steals one hour of sleep from them on the night that implements the change.

If you do observe DST

If you are affected by the DST bug, your local meeting times will change across the board. Normally, you can trust that your meetings are scheduled to take this change into account and the new time should still be reasonable.

Trust, but verify; make sure the new times are adequate and there are no scheduling conflicts.

Do this now: take a look at your calendar in two weeks and in April. See if any meetings need to be rescheduled because of an impossible or conflicting time.

When does time change, how and where?

Notice how I mentioned "North America" in the subject? That's a lie. ("The doctor lies", as they say on the BBC.) Other places, including Europe, also change times, just not all at once (and not all of North America).

We'll get into "where" soon, but first let's look at the "how". As you might already know, the trick is:

Spring forward, fall backwards.

This northern-centric (sorry!) proverb says that clocks will move forward by an hour this "spring", after moving backwards last "fall". This is why we lose an hour of work, sorry, sleep. It sucks, to put it bluntly. I want it to stop and will keep writing those advisories until it does.

To see where and when, we, unfortunately, still need to go into politics.

USA and Canada

First, we start with "North America" which, really, is just some parts of USA[1] and Canada[2]. As usual, on the Second Sunday in March (the 8th) at 02:00 local (not UTC!), the clocks will move forward.

This means that properly set clocks will flip from 1:59 to 3:00, coldly depriving us of an hour of sleep that was perniciously granted 6 months ago and making calendar software stupidly hard to write.

Practically, set your wrist watch and alarm clocks[3] back one hour before going to bed and go to bed early.

[1] except Arizona (except the Navajo nation), US territories, and Hawaii

[2] except Yukon, most of Saskatchewan, and parts of British Columbia (northeast), one island in Nunavut (Southampton Island), one town in Ontario (Atikokan) and small parts of Quebec (Le Golfe-du-Saint-Laurent), a list which I keep recopying because I find it just so amazing how chaotic it is. When your clock has its own Wikipedia page, you know something is wrong.

[3] hopefully not managed by a botnet, otherwise kindly ask your bot net operator to apply proper software upgrades in a timely manner

Europe

Next we look at our dear Europe, which will change time on the last Sunday in March (the 29th) at 01:00 UTC (not local!). I think it means that, Amsterdam-time, the clocks will flip from 1:59 to 3:00 AM local on that night.

(Every time I write this, I have doubts. I would welcome independent confirmation from night owls that observe that funky behavior experimentally.)

Just like your poor fellows out west, fix your old-school clocks before going to bed, and go to sleep early; it's good for you.

Rest of the world with DST

Renewed and recurring apologies again to the people of Cuba, Mexico, Moldova, Israel, Lebanon, Palestine, Egypt, Chile (except Magallanes Region), parts of Australia, and New Zealand which all have their own individual DST rules, omitted here for brevity.

In general, changes also happen in March, but at different times or on different days, except in the southern hemisphere, where they happen in April.

Rest of the world without DST

All of you other folks without DST, rejoice! Thank you for reminding us how to manage calendars and clocks normally. Sometimes, doing nothing is precisely the right thing to do. You're an inspiration for us all.

Changes since last time

There were, again, no changes since last year on daylight savings that I'm aware of. It seems the US Congress is debating switching to a "half-daylight" time zone, which is a half-baked idea that I should have expected from current USA politics.

The plan is to, say, switch from "Eastern is UTC-4 in the summer" to "Eastern is UTC-4.5". The bill also proposes to do this 90 days after enactment, which is dangerously optimistic about our capacity for deploying any significant change in human society.

In general, I rely on the Wikipedia time nerds for this, and on Paul Eggert, who seems to single-handedly keep everything in order for all of us on the tz-announce mailing list.

This time, I've also looked at the tz mailing list which is where I learned about the congress bill.

If your country has changed time and no one above noticed, now would be an extremely late time to do something about this, typically writing to the above list. (Incredibly, I need to write to the list because of this post.)

One thing that did change since last year is that I've implemented what I hope to be a robust calendar for this, which was surprisingly tricky.

If you have access to our Nextcloud, it should be visible under the heading "Daylight saving times". If you don't, you can access it using this direct link.

The procedures around how this calendar was created, how this email was written, and curses found along the way, are also documented in this wiki page, if someone ever needs to pick up the Time Lord duty.

February 23, 2026 07:31 PM UTC


Real Python

Python for Loops: The Pythonic Way

Python’s for loop allows you to iterate over the items in a collection, such as lists, tuples, strings, and dictionaries. The for loop syntax declares a loop variable that takes each item from the collection in each iteration. This loop is ideal for repeatedly executing a block of code on each item in the collection. You can also tweak for loops further with features like break, continue, and else.

By the end of this tutorial, you’ll understand that:

  • Python’s for loop iterates over items in a data collection, allowing you to execute code for each item.
  • To iterate from 0 to 10, you use the for index in range(11): construct.
  • To repeat code a number of times without processing the data of an iterable, use the for _ in range(times): construct.
  • To do index-based iteration, you can use for index, value in enumerate(iterable): to access both index and item.
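The constructs from the takeaways above, side by side (a minimal sketch; the values are mine):

```python
for index in range(3):   # iterates over 0, 1, 2
    print(index)

for _ in range(2):       # repeats twice, ignoring the counter
    print("hello")

for index, value in enumerate(["red", "green"]):
    print(index, value)  # 0 red, then 1 green
```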

In this tutorial, you’ll gain practical knowledge of using for loops to traverse various collections and learn Pythonic looping techniques. You’ll also learn how to handle exceptions and use asynchronous iterations to make your Python code more robust and efficient.

Get Your Code: Click here to download the free sample code that shows you how to use for loops in Python.

Take the Quiz: Test your knowledge with our interactive “Python for Loops: The Pythonic Way” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

Python for Loops: The Pythonic Way

In this quiz, you'll test your understanding of Python's for loop. You'll revisit how to iterate over items in a data collection, how to use range() for a predefined number of iterations, and how to use enumerate() for index-based iteration.

Getting Started With the Python for Loop

In programming, loops are control flow statements that allow you to repeat a given set of operations a number of times. In practice, you’ll find two main types of loops:

  1. for loops are mostly used to iterate a known number of times, which is common when you’re processing data collections with a specific number of data items.
  2. while loops are commonly used to iterate an unknown number of times, which is useful when the number of iterations depends on a given condition.

Python has both of these loops and in this tutorial, you’ll learn about for loops. In Python, you’ll generally use for loops when you need to iterate over the items in a data collection. This type of loop lets you traverse different data collections and run a specific group of statements on or with each item in the input collection.

In Python, for loops are compound statements with a header and a code block that runs a predefined number of times. The basic syntax of a for loop is shown below:

Python Syntax
for variable in iterable:
    <body>

In this syntax, variable is the loop variable. In each iteration, this variable takes the value of the current item in iterable, which represents the data collection you need to iterate over. The loop body can consist of one or more statements that must be indented properly.

Here’s a more detailed breakdown of this syntax:

  • for is the keyword that initiates the loop header.
  • variable is a variable that holds the current item in the input iterable.
  • in is a keyword that connects the loop variable with the iterable.
  • iterable is a data collection that can be iterated over.
  • <body> consists of one or more statements to execute in each iteration.

Here’s a quick example of how you can use a for loop to iterate over a list:

Python
>>> colors = ["red", "green", "blue", "yellow"]

>>> for color in colors:
...     print(color)
...
red
green
blue
yellow

In this example, color is the loop variable, while the colors list is the target collection. Each time through the loop, color takes on a successive item from colors. The loop body consists of a call to print() that displays the value on the screen. This loop runs once for each item in the target iterable, and this style of loop is the Pythonic way to iterate over a collection.

However, what’s an iterable anyway? In Python, an iterable is an object—often a data collection—that can be iterated over. Common examples of iterables in Python include lists, tuples, strings, dictionaries, and sets, which are all built-in data types. You can also have custom classes that support iteration.

Note: Python has both iterables and iterators. Iterables support the iterable protocol consisting of the .__iter__() special method. Similarly, iterators support the iterator protocol that’s based on the .__iter__() and .__next__() special methods.

Both iterables and iterators can be iterated over. All iterators are iterables, but not all iterables are iterators. Python iterators play a fundamental role in for loops because they drive the iteration process.

A deeper discussion on iterables and iterators is beyond the scope of this tutorial. However, to learn more about them, check out the Iterators and Iterables in Python: Run Efficient Iterations tutorial.
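To see how iterators drive a for loop, you can perform the iteration manually with the built-in iter() and next() functions:

```python
colors = ["red", "green", "blue"]

# A for loop first calls iter() to get an iterator from the iterable...
iterator = iter(colors)

# ...then calls next() repeatedly to fetch each item in turn.
print(next(iterator))  # red
print(next(iterator))  # green
print(next(iterator))  # blue
```

Once the iterator is exhausted, the next call to next() raises StopIteration, which is the signal that tells a for loop to terminate.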

You can also have a loop with multiple loop variables:

Python
>>> points = [(1, 4), (3, 6), (7, 3)]

>>> for x, y in points:
...     print(f"{x = } and {y = }")
...
x = 1 and y = 4
x = 3 and y = 6
x = 7 and y = 3

In this loop, you have two loop variables, x and y. Note that to use this syntax, you just need to provide a tuple of loop variables. Also, you can have as many loop variables as you need as long as you have the correct number of items to unpack into them. You’ll also find this pattern useful when iterating over dictionary items or when you need to do parallel iteration.
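Here's a quick sketch of both of those patterns, iterating over dictionary items and doing parallel iteration with zip():

```python
# Each dictionary item is a (key, value) tuple, so two loop
# variables unpack it neatly.
inventory = {"apples": 3, "pears": 5}
for fruit, count in inventory.items():
    print(f"{fruit}: {count}")

# zip() pairs up items from two sequences for parallel iteration.
names = ["Alice", "Bob"]
scores = [90, 85]
for name, score in zip(names, scores):
    print(f"{name} scored {score}")
```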

Sometimes, the input iterable may be empty. In that case, the loop will run its header once but won’t execute its body:

Python
>>> for item in []:
...     print(item)
...

Read the full article at https://realpython.com/python-for-loop/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

February 23, 2026 02:00 PM UTC

Quiz: Build a Hash Table in Python With TDD

In this quiz, you’ll review hash functions, collision resolution strategies, hash function distribution, the avalanche effect, and key principles of Test-Driven Development.

For more practice and context, explore Build a Hash Table in Python With TDD.



February 23, 2026 12:00 PM UTC


Python Bytes

#470 A Jolting Episode

Topics covered in this episode:

  • Better Python tests with inline-snapshot
  • jolt: Battery intelligence for your laptop
  • Markdown code formatting with ruff
  • act: run your GitHub Actions locally
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=MT2zsZ-lGzg

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training: https://training.talkpython.fm/
  • The Complete pytest Course: https://courses.pythontest.com/p/the-complete-pytest-course
  • Patreon supporters: https://www.patreon.com/pythonbytes

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
  • Brian: @brianokken@fosstodon.org / @brianokken.bsky.social (bsky)
  • Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Brian #1: Better Python tests with inline-snapshot (https://pydantic.dev/articles/inline-snapshot)

  • Alex Hall, on the Pydantic blog
  • Great for testing complex data structures
  • Allows you to write a test like this:

    Python
    from inline_snapshot import snapshot

    def test_user_creation():
        user = create_user(id=123, name="test_user")
        assert user.dict() == snapshot({})

  • Then run pytest --inline-snapshot=fix
  • And the library updates the test source code to look like this:

    Python
    def test_user_creation():
        user = create_user(id=123, name="test_user")
        assert user.dict() == snapshot({
            "id": 123,
            "name": "test_user",
            "status": "active",
        })

  • Now, when you run the code without "fix", the collected data is used for comparison
  • Awesome to be able to visually inspect the test data right there in the test code.
  • Projects mentioned:
      • inline-snapshot: https://15r10nk.github.io/inline-snapshot/latest/
      • pytest-examples: https://github.com/pydantic/pytest-examples
      • syrupy: https://github.com/syrupy-project/syrupy
      • dirty-equals: https://github.com/samuelcolvin/dirty-equals
      • executing: https://github.com/alexmojaki/executing

Michael #2: jolt, battery intelligence for your laptop (https://getjolt.sh/)

  • Support for both macOS and Linux
  • Battery Status: charge percentage, time remaining, health, and cycle count
  • Power Monitoring: system power draw with CPU/GPU breakdown
  • Process Tracking: processes sorted by energy impact with color-coded severity
  • Historical Graphs: track battery and power trends over time
  • Themes: 10+ built-in themes with dark/light auto-detection
  • Background Daemon: collect historical data even when the TUI isn't running
  • Process Management: kill energy-hungry processes directly

Brian #3: Markdown code formatting with ruff (https://docs.astral.sh/ruff/formatter/#markdown-code-formatting)

  • Suggested by Matthias Schoettle
  • ruff can now format code within Markdown files
  • Will format valid Python code in code blocks marked with python, py, python3, or py3.
  • Also recognizes pyi as Python type stub files.
  • Includes the ability to turn off formatting with special comment blocks.
  • Requires preview mode:

    [tool.ruff.lint]
    preview = true

Michael #4: act, run your GitHub Actions locally (https://github.com/nektos/act)

  • Run your GitHub Actions locally! Why would you want to do this? Two reasons:
      • Fast Feedback: rather than having to commit/push every time you want to test out the changes you are making to your .github/workflows/ files (or for any changes to embedded GitHub Actions), you can use act to run the actions locally. The environment variables and filesystem are all configured to match what GitHub provides.
      • Local Task Runner: I love make. However, I also hate repeating myself. With act, you can use the GitHub Actions defined in your .github/workflows/ to replace your Makefile!
  • When you run act, it reads in your GitHub Actions from .github/workflows/ and determines the set of actions that need to be run.
      • Uses the Docker API to either pull or build the necessary images, as defined in your workflow files, and finally determines the execution path based on the dependencies that were defined.
      • Once it has the execution path, it then uses the Docker API to run containers for each action based on the images prepared earlier.
      • The environment variables and filesystem are all configured to match what GitHub provides.

Extras

Michael:

  • Winter is coming: Frozendict accepted (https://www.linkedin.com/feed/update/urn:li:activity:7427589361048948736/)
  • Django ORM stand-alone (https://mastodon.social/@webology/116103649163718377)
  • Command Book app announcement post (https://mkennedy.codes/posts/your-terminal-tabs-are-fragile-i-built-something-better/)

Joke: Plug 'n Paste (https://x.com/pr0grammerhum0r/status/2017704478267314514)

February 23, 2026 08:00 AM UTC


Tibo Beijen

Introducing the Zen of DevOps

Introduction Over the past ten years or so, my role has gradually shifted from software to platforms. More towards the ‘ops’ side of things, but coming from a background that values APIs, automation, artifacts and guardrails in the form of automated tests. And I found out that a lot of best practices from software engineering can be adapted and applied to modern ops practices as well. DevOps in a nutshell really: Bridging the gap between Dev and Ops.

February 23, 2026 04:00 AM UTC

February 21, 2026


Brett Cannon

CLI subcommands with lazy imports

In case you didn&apost hear, PEP 810 got accepted which means Python 3.15 is going to support lazy imports! One of the selling points of lazy imports is with code that has a CLI so that you only import code as necessary, making the app a bit more snappy at startup. A common example given is when you run --help you probably don&apost need all modules imported to make that work. But another use case for CLIs and lazy imports is subcommands where each subcommand very likely only needs a subset of modules to work.

How to make subcommands work with argparse

There's two ways to typically do subcommands in argparse. The old-fashioned way is with a dict that dispatches to a function based on the subcommand name that was specified on the terminal. An example of that which I wrote can be found in the stdlib:

cpython/Platforms/WASI/__main__.py at 2be2dd5fc219a5c252d72f351c85db14314bfca5 · python/cpython

The other approach is covered in the argparse docs for subcommands which involves setting a default value for a subparser for the subcommand.

Regardless of which approach you use, the key detail is that something stores the callable you want to use based on the chosen subcommand.

Why these approaches don't work with lazy imports

Subcommands in a CLI are a great use of lazy imports. By making the imports that aren't universally needed lazy, you can avoid paying any extra cost for imports you don't care about for any specific subcommand. This is even easier to do when you put a subcommand's code in a separate module, as you can just lazily import the main function to call in your __main__.py, e.g. lazy from spam import main as spam_main, and then use spam_main as the function to call when the spam subcommand is called.

The problem is that reification (the act of making the lazy import do the actual import and become the object it's meant to be) is triggered by assignment. That means using the lazy import object as a value in a dict or passing it as an argument to anything in argparse triggers the import. And since you need to do all your wiring upfront for argparse to do its thing, all of the lazy imports you assign using either of the approaches listed above will get triggered. But I think lazy imports for subcommands are worth the hassle of trying to find a solution.

Some solutions to this problem

To make this all play nicely with lazy imports, you need to either avoid touching the lazy import objects when you don't need to use them, or add a level of indirection. To avoid touching the objects, you can do something like turning the dict approach into a match statement:

match context.subcommand:
    case "spam":
        spam_main(context)

Another way to do it is to wrap the lazy import object in a lambda so there's a layer of indirection between the object and the assignment.

parser_foo.set_defaults(func=lambda args: foo(args))

Both approaches work and keep the lazy import object from being accidentally reified.
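The lambda indirection works with today's argparse even before the lazy keyword lands in Python 3.15. The sketch below uses hypothetical spam_main/eggs_main functions standing in for lazily imported subcommand entry points; with PEP 810 you would import them lazily instead of defining them inline:

```python
import argparse

# Hypothetical subcommand implementations; in a real CLI these would
# live in separate modules and be imported lazily.
def spam_main(args):
    return f"spam: {args.count}"

def eggs_main(args):
    return f"eggs: {args.style}"

parser = argparse.ArgumentParser(prog="demo")
subparsers = parser.add_subparsers(dest="subcommand", required=True)

spam_parser = subparsers.add_parser("spam")
spam_parser.add_argument("--count", type=int, default=1)
# The lambda defers touching spam_main until the subcommand actually
# runs, so a lazy import object would not be reified during wiring.
spam_parser.set_defaults(func=lambda args: spam_main(args))

eggs_parser = subparsers.add_parser("eggs")
eggs_parser.add_argument("--style", default="scrambled")
eggs_parser.set_defaults(func=lambda args: eggs_main(args))

args = parser.parse_args(["spam", "--count", "3"])
print(args.func(args))  # spam: 3
```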

February 21, 2026 10:37 PM UTC


Talk Python to Me

#537: Datastar: Modern web dev, simplified

You love building web apps with Python, and HTMX got you excited about the hypermedia approach: let the server drive the HTML, skip the JavaScript build step, keep things simple. But then you hit that last 10%: you need Alpine.js for interactivity, your state gets out of sync, and suddenly you're juggling two unrelated libraries that weren't designed to work together.

What if there was a single 11-kilobyte framework that gave you everything HTMX and Alpine do, and more, with real-time updates, multiplayer collaboration out of the box, and performance so fast you're actually bottlenecked by the monitor's refresh rate? That's Datastar.

On this episode, I sit down with its creator Delaney Gillilan, core maintainer Ben Croker, and Datastar convert Chris May to explore how this backend-driven, server-sent-events-first framework is changing the way full-stack developers think about the modern web.

Episode sponsors

  • Sentry Error Monitoring, code talkpython26: https://talkpython.fm/sentry
  • Command Book: https://talkpython.fm/commandbookapp
  • Talk Python Courses: https://talkpython.fm/training

Links from the show

Guests:
  • Delaney Gillilan: https://www.linkedin.com/in/delaney-gillilan-338734a8/
  • Ben Croker: https://x.com/ben_pylo
  • Chris May: https://everydaysuperpowers.dev

  • Datastar: https://data-star.dev
  • HTMX: https://htmx.org
  • AlpineJS: https://alpinejs.dev
  • Core Attribute Tour: https://data-star.dev/guide/getting_started#data-*
  • Datastar examples: https://data-star.dev/examples/
  • datastar-python: https://github.com/starfederation/datastar-python
  • VS Code extension: https://marketplace.visualstudio.com/items?itemName=starfederation.datastar-vscode
  • OpenVSX: https://open-vsx.org/extension/starfederation/datastar-vscode
  • PyCharm/IntelliJ plugin: https://plugins.jetbrains.com/plugin/26072-datastar-support
  • Datastar Pro: https://data-star.dev/datastar_pro
  • Discord: https://discord.gg/bnRNgZjgPh
  • HTML-ivating your Django web app's experience with HTMX, AlpineJS, and streaming HTML (Chris May): https://www.youtube.com/watch?v=kYV8K71pY64&t=548s
  • Senior Engineer tries Vibe Coding: https://www.youtube.com/watch?v=_2C2CNmK7dQ
  • 1 Billion Checkboxes: https://checkboxes.andersmurphy.com
  • Game of Life example: https://example.andersmurphy.com

  • Watch this episode on YouTube: https://www.youtube.com/watch?v=SFc74eFhKBY
  • Episode #537 deep-dive: https://talkpython.fm/episodes/show/537/datastar-modern-web-dev-simplified#takeaways-anchor
  • Episode transcripts: https://talkpython.fm/episodes/transcript/537/datastar-modern-web-dev-simplified

February 21, 2026 08:36 PM UTC


Django Weblog

DSF member of the month - Baptiste Mispelon

For February 2026, we welcome Baptiste Mispelon as our DSF member of the month! ⭐

Baptiste is standing in front of the camera, smiling, with his computer in his hand. He wears a yellow and green plaid shirt, a green T-shirt with DjangoCon visible, and green pants. His computer is covered with Django-related stickers. Photo by Bartek Pawlik - bartpawlik.format.com

Baptiste is a long-time Django and Python contributor who co-created the Django Under the Hood conference series and serves on the Ops team maintaining its infrastructure. He has been a DSF member since November 2014. You can learn more about Baptiste by visiting Baptiste's website and his GitHub Profile.

Let’s spend some time getting to know Baptiste better!

Can you tell us a little about yourself? (hobbies, education, etc)

I'm a French immigrant living in Norway. In the daytime I work as a software engineer at Torchbox building Django and Wagtail sites. Education-wise I'm a "self-taught" (whatever that means) developer and started working when I was very young. In terms of hobbies, I'm a big language nerd and I'm always up for a good etymology fact. I also enjoy the outdoors, whether it's on a mountain bike or on foot (still not convinced by this skiing thing they do in Norway, but I'm trying).

How did you start using Django?

I was working in a startup where I had built an unmaintainable pile of custom framework-less PHP code. I'd heard of this cool Python framework and thought it would help me bring some structure to our codebase. So I started rewriting our services bit-by-bit and eventually switched everything to Django after about a year.

In 2012, I bought a ticket to DjangoCon Europe in Zurich and went there not knowing anyone. It was one of the best decisions of my life: the Django community welcomed me and has given me so much over the years.

What other framework do you know and if there is anything you would like to have in Django if you had magical powers?

I've been making websites for more than two decades now, so I've used my fair share of various technologies and frameworks, but Django is still my "daily driver" and the one I like the best. I like writing plain CSS, and when I need some extra bit of JS I like to use Alpine JS and/or HTMX: I find they work really well together with Django.

If I had magical powers and could change anything, I would remove the word "patch" from existence (and especially from the Django documentation).

What projects are you working on now?

I don't have any big projects active at the moment, I'm mostly working on client projects at work.

Which Django libraries are your favorite (core or 3rd party)?

My favorite Django library of all time is possibly django-admin-dracula. It's the perfect combination of professional and whimsical for me.

Other than that I'm also a big fan of the Wagtail CMS. I've been learning more and more about it in the past year and I've really been liking it. The code feels very Django-y and the community around it is lovely as well.

What are the top three things in Django that you like?

1) First of course is the people. I know it's a cliche but the community is what makes Django so special.

2) In terms of the framework, what brought me to it in the first place was its opinionated structure and settings. When I started working with Django I didn't really know much about web development, but Django's standard project structure and excellent defaults meant that I could just use things out of the box knowing I was building something solid. And more than that, as my skills and knowledge grew I was able to swap out those defaults with some more custom things that worked better for me. There's room to grow and the transition has always felt very smooth for me.

3) And if I had to pick a single feature, then I'd go for one that I think is underrated: assertQuerySetEqual(). I think more people should be using it!

What is it like to be in the Ops team?

It's both very exciting and very boring 😅

Most of the tasks we do are very mundane: create DNS records, update a server, deploy a fix. But because we have access and control over a big part of the infrastructure that powers the Django community, it's also a big responsibility which we don't take lightly.

I know you were one of the first members of the Django Girls Foundation board of directors. That's amazing! How did that start for you?

By 2014 I'd become good friends with Ola & Ola, and in July they asked me to be a coach at the very first Django Girls workshop at EuroPython in Berlin. The energy at that event was amazing and unlike any other event I'd been a part of, so I got hooked.

I went on to coach at many other workshops after that. When Ola & Ola had the idea to start an official entity for Django Girls, they needed a token white guy and I gladly accepted the role!

You co-created Django Under the Hood series which, from what I've heard, was very successful at the time. Can you tell us a little more about this conference and its beginnings?

I'm still really proud of having been on that team and of what we achieved with this conference. So many stories to tell!

I believe it all started at the Django Village conference where Marc Tamlin and I were looking for ideas for how to bring the Django core team together.

We thought that having a conference would be a good way to give an excuse (and raise funds) for people to travel all to the same place and work on Django. Somehow we decided that Amsterdam was the perfect place for that.

Then we were extremely lucky that a bunch of talented folks actually turned that idea into a reality: Sasha, Ola, Tomek, Ola, Remco, Kasia (and many others) 💖.

As a former conference organizer and volunteer, do you have any recommendations for those who want to contribute or organize a conference?

I think our industry (and even the world in general) is in a very different place today than a decade ago when I was actively organizing conferences. Honestly I'm not sure it would be as easy today to do the things we've done.

My recommendation is to do it if you can. I've forged some real friendships in my time as an organizer, and as exhausting and stressful as it can be, it's also immensely rewarding in its own way.

The hard lesson I'd also give is that you should pay attention to who gets to come to your events, and more importantly who doesn't. Organizing a conference is essentially making a million decisions, most of which are really boring. But every decision you make has an effect when it's combined with all the others. The food you serve or don't serve, the time of year your event takes place, its location. Whether you spend your budget on fun t-shirts, or on travel grants.

All of it makes a difference somehow.

Do you remember your first contribution in Django?

I do! It was commit ac8eb82abb23f7ae50ab85100619f13257b03526: a one character typo fix in an error message 😂

Is there anything else you’d like to say?

Open source is made of people, not code. You'll never go wrong by investing in your community. Claude will never love you back.


Thank you for doing the interview, Baptiste!

February 21, 2026 09:11 AM UTC


PyBites

3 Questions to Go From Thinking Like a Scrappy Dev to a Senior Dev

How do you know if you’re actually growing as a dev?

Last week I was chatting with a developer who’d hit a wall. (I talk to a lot of devs now that I think about it!)

Like him, you might consider yourself a scrappy coder. You’re an all-rounder, can generally figure things out and write some sort of scrappy script to solve a problem.

It’s actually a badge of pride! I myself have a bunch of scrappy scripts running in my home lab to do various things.

The thing is, sometimes the confidence we feel being scrappy may actually just be comfort.

This dev was comfortable fixing bugs, but the idea of staring at a blank file and architecting an end-to-end solution from scratch scared the pants off him.

It was this form of “engineering” that he just couldn’t get down.

The chat inspired me to come up with three questions you can ask yourself today to push out of the scrappy zone and into the realm of senior development/engineering.

Have a look at a function or codebase you’ve written and ask:

  1. “Is this deliberate?” Did I write it this way because it’s the best way, or because it’s the first way I found on Google that didn’t crash? 
    Senior developers are intentional in the code they write.
  2. “How do I prove this works?” If I had to write a test for this right now, could I? If you’re not writing tests for your code, ask yourself why not. Your code should be straightforward enough that you can write tests for it.
  3. “What happens if I leave?” If I disappeared tomorrow, would the next developer thank me or curse my soul? 
    This forces you to think about readability, context managers, and structure (like using uv or proper config files) rather than hard-coding values just to get it done.
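To make question 2 concrete, here's a small illustrative function (hypothetical, not from the post) that's scrappy in origin but straightforward enough to test:

```python
def parse_port(value: str, default: int = 8080) -> int:
    """Return value as a port number, falling back to default."""
    try:
        port = int(value)
    except ValueError:
        # Non-numeric input falls back to the default.
        return default
    # Ports outside the valid range fall back to the default too.
    return port if 0 < port < 65536 else default

# Straightforward code makes the tests almost write themselves.
assert parse_port("8000") == 8000
assert parse_port("not-a-port") == 8080
assert parse_port("70000") == 8080
```

If writing assertions like these feels hard for a function you wrote, that's usually a sign the function is doing too much at once.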

These are simple questions that have real impact when you start considering them as you write code. You’ll be surprised how much more robust your applications become!

Try asking these three questions today. If it’s uncomfortable, good! That means you’re growing!

And when you’re ready to take the next step on your Python journey, (shameless plug incoming), click the link below to chat with us about 1:1 Coaching. We’ll help you get there!

Julian

This was originally sent to our email list. Join here.

February 21, 2026 09:00 AM UTC

February 20, 2026


Graham Dumpleton

Teaching an AI about Educates

The way we direct AI coding agents has changed significantly over the past couple of years. Early on, the interaction was purely conversational. You'd open a chat, explain what you wanted, provide whatever context seemed relevant, and hope the model could work with it. If it got something wrong or went down the wrong path, you'd correct it and try again. It worked, but it was ad hoc. Every session started from scratch. Every conversation required re-establishing context.

What's happened since then is a steady progression toward giving agents more structured, persistent knowledge to work with. Each step in that progression has made agents meaningfully more capable, to the point where they can now handle tasks that would have been unrealistic even a year ago. I've been putting these capabilities to work on a specific challenge: getting an AI to author interactive workshops for the Educates training platform. In my previous posts I talked about why workshop content is actually a good fit for AI generation. Here I want to explain how I've been making that work in practice.

How agent steering has evolved

The first real step beyond raw prompting was agent steering files. These are files you place in a project directory that give the agent standing instructions whenever it works in that context. Think of it as a persistent briefing document. You describe the project structure, the conventions to follow, the tools to use, and the agent picks that up automatically each time you interact with it. No need to re-explain the basics of your codebase every session. This was a genuine improvement, but the instructions are necessarily general-purpose. They tell the agent about the project, not about any particular domain of expertise.

The next step was giving agents access to external tools and data sources through protocols like the Model Context Protocol (MCP). Instead of the agent only being able to read and write files, it could now make API calls, query databases, fetch documentation, and interact with external services. The agent went from being a conversationalist that could edit code to something that could actually do things in the world. That opened up a lot of possibilities, but the agent still needed you to explain what to do and how to approach it.

Planning modes added another layer. Rather than the agent diving straight into implementation, it could first think through the approach, break a complex task into steps, and present a plan for review before acting. This was especially valuable for larger tasks where getting the overall approach right matters more than any individual step. The agent became more deliberate and less likely to charge off in the wrong direction.

Skills represent where things stand now, and they're the piece that ties the rest together. A skill is a self-contained package of domain knowledge, workflow guidance, and reference material that an agent can invoke when working on a specific type of task. Rather than the agent relying solely on what it learned during training, a skill gives it authoritative, up-to-date, structured knowledge about a particular domain. The agent knows when to use the skill, what workflow to follow, and which reference material to consult for specific questions.

With the advances in what LLMs are capable of combined with these structured ways of steering them, agents are genuinely reaching a point where their usefulness is growing in ways that matter for real work.

Why model knowledge isn't enough

Large language models know something about most topics. If you ask an AI about Educates, it will probably have some general awareness of the project. But general awareness is not the same as the detailed, precise knowledge you need to produce correct output for a specialised platform.

Educates workshops have specific YAML structures for their configuration files. The interactive instructions use a system of clickable actions with particular syntax for each action type. There are conventions around how learners interact with terminals and editors, how dashboard tabs are managed, how Kubernetes resources are configured, and how data variables are used for parameterisation. Getting any of these wrong doesn't just produce suboptimal content, it produces content that simply won't work when someone tries to use it.

I covered the clickable actions system in detail in my last post. There are eight categories of actions covering terminal execution, file viewing and editing, YAML-aware modifications, validation, and more. Each has its own syntax and conventions. An AI that generates workshop content needs to use all of these correctly, not approximately, not most of the time, but reliably.

This is where skills make the difference. Rather than hoping the model has absorbed enough Educates documentation during its training to get these details right, you give it the specific knowledge it needs. The skill becomes the agent's reference manual for the domain, structured in a way that supports the workflow rather than dumping everything into context at once.

The Educates workshop authoring skill

The obvious approach would be to take the full Educates documentation and load it into the agent's context. But AI agents work within a finite context window, and that window is shared between the knowledge you give the agent and the working space it needs for the actual task. Generating a workshop involves reasoning about structure, producing instruction pages, writing clickable action syntax, and keeping track of what's been created so far. If you consume most of the context with raw documentation, there's not enough room left for the agent to do its real work. You have to be strategic about what goes in.

The skill I built for Educates workshop authoring is a deliberate distillation. At its core is a main skill definition of around 25 kilobytes that captures the essential workflow an agent follows when creating a workshop. It covers gathering requirements from the user, creating the directory structure, generating the workshop configuration file, writing instruction pages with clickable actions, and running through verification checklists at the end. This isn't a copy of the documentation. It's the key knowledge extracted and organised to drive the workflow correctly.

Supporting that are 20+ reference files totalling around 300 kilobytes. These cover specific aspects of the platform in the detail needed to get things right: the complete clickable actions system across all eight action categories, Kubernetes access patterns and namespace isolation, data variables for parameterising workshop content, language-specific references for Python and Java workshops, dashboard configuration and tab management, workshop image selection, setup scripts, and more.

The skill is organised around the workflow rather than being a flat dump of information. The main definition tells the agent what to do at each step, and the reference files are there for it to consult when it needs detail on a particular topic. If it's generating a terminal action, it knows to check the terminal actions reference for the correct syntax. If it's setting up Kubernetes access, it consults the Kubernetes reference for namespace configuration patterns. The agent pulls in the knowledge it needs when it needs it, keeping the active context focused on the task at hand.
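As a rough sketch of that organisation (the file names here are illustrative, not the actual skill's), a skill package along these lines pairs a main workflow definition with topic-specific reference files:

```
educates-workshop-authoring/
├── SKILL.md                 # main workflow definition (~25 KB)
└── references/
    ├── terminal-actions.md  # consulted when generating terminal actions
    ├── editor-actions.md
    ├── kubernetes-access.md
    ├── data-variables.md
    └── ...                  # 20+ files, ~300 KB in total
```

The point of the split is context economy: the agent always loads the workflow, but only pulls in a reference file when the current step calls for it.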

There's also a companion skill for course design that handles the higher-level task of planning multi-workshop courses, breaking topics into individual workshops, and creating detailed plans for each one. But the workshop authoring skill is where the actual content generation happens, and it's the one I want to demonstrate.

Putting it to the test with Air

To show what the skill can do, I decided to use it to generate a workshop for the Air web framework. Air is a Python web framework written by friends in the Python community. It's built on FastAPI, Starlette, and HTMX, with a focus on simplicity and minimal JavaScript. What caught my attention about it as a test case is the claim on their website: "The first web framework designed for AI to write. Every framework claims AI compatibility. Air was architected for it." That's a bold statement, and using Air as the subject for this exercise is partly a way to see how that claim holds up in practice, not just for writing applications with the framework but for creating training material about it.

There's another reason Air makes for a good test. I haven't used the framework myself. I know the people behind it, but I haven't built anything with it. That means I can't fall back on my own knowledge to fill in gaps. The AI needs to research the framework and understand it well enough to teach it to someone, while the skill provides all the Educates platform knowledge needed to structure that understanding into a proper interactive workshop. It's a genuine test of both the skill and the model working together.

The process starts simply enough. You tell the agent what you want: "Create me a workshop for the Educates training platform introducing the Air web framework for Python developers." The phrasing matters here. The agent needs enough context in the request to recognise that a relevant skill exists and should be applied. Mentioning Educates in the prompt is what triggers the connection to the workshop authoring skill. Some agents also support invoking a skill directly through a slash command, which removes the ambiguity entirely. Either way, once the skill is activated, its workflow kicks in. It asks clarifying questions about the workshop requirements. Does it need an editor? (Yes, learners will be writing code.) Kubernetes access? (No, this is a web framework workshop, not a Kubernetes one.) What's the target difficulty and duration?

I'd recommend using the agent's planning mode for this initial step if it supports one. Rather than having the agent jump straight into generating files, planning mode lets it first describe what it intends to put in the workshop: the topics it will cover, the page structure, and the learning progression. You can review that plan and steer it before any files are created. It's a much better starting point than generating everything and then discovering the agent went in a direction you didn't want.

From those answers and the approved plan, it builds up the workshop configuration and starts generating content.

lab-python-air-intro/
├── CLAUDE.md
├── README.md
├── exercises/
│   ├── README.md
│   ├── pyproject.toml
│   └── app.py
├── resources/
│   └── workshop.yaml
└── workshop/
    ├── setup.d/
    │   └── 01-install-packages.sh
    ├── profile
    └── content/
        ├── 00-workshop-overview.md
        ├── 01-first-air-app.md
        ├── 02-air-tags.md
        ├── 03-adding-routes.md
        └── 99-workshop-summary.md

The generated workshop pages cover a natural learning progression:

  1. Overview, introducing Air and its key features
  2. Your First Air App, opening the starter app.py, running it, and viewing it in the dashboard
  3. Building with Air Tags, replacing the simple page with styled headings, lists, and a horizontal rule to demonstrate tag nesting, attributes, and composition
  4. Adding Routes, creating an about page with @app.page, a dynamic greeting page with path parameters, and navigation links between pages
  5. Summary, recapping concepts and pointing to further learning

What the skill produced is a complete workshop with properly structured instruction pages that follow the guided experience philosophy. Learners progress through the material entirely through clickable actions. Terminal commands are executed by clicking. Files are opened, created, and modified through editor actions. The workshop configuration includes the correct YAML structure, the right session applications are enabled, and data variables are used where content needs to be parameterised for each learner's environment.
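As a sketch of what that configuration involves (field names here are based on my reading of the Educates docs, not the actual generated file), a workshop definition enabling the terminal and editor looks something like:

```yaml
apiVersion: training.educates.dev/v1beta1
kind: Workshop
metadata:
  name: lab-python-air-intro
spec:
  title: Introduction to the Air Web Framework
  description: A beginner workshop on building web apps with Air.
  session:
    applications:
      terminal:
        enabled: true
      editor:
        enabled: true
```

Getting this structure right is exactly the kind of detail the skill's reference files are there to enforce.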

Workshop dashboard showing the Air framework workshop with instructions and clickable actions

The generated content covers the progression you'd want in an introductory workshop, starting from the basics and building up to more complete applications. At each step, the explanations provide context for what the learner is about to do before the clickable actions guide them through doing it. That rhythm of explain, show, do, observe, the pattern I described in my earlier posts, is maintained consistently throughout.

Is the generated workshop perfect and ready to publish as-is? Realistically, no. Although the AI can generate some pretty amazing content, it doesn't always get things exactly right. In this case three changes were needed before the workshop would run correctly.

The first was removing some unnecessary configuration from the pyproject.toml. The generated file included settings that attempted to turn the application into an installable package, which wasn't needed for a simple workshop exercise. This isn't a surprise. AI agents often struggle to generate correct configuration for uv because the tooling has changed over time and there's plenty of outdated documentation out there that leads models astray.

The second was that the AI generated the sample application as app.py rather than main.py, which meant the air run command in the workshop instructions had to be updated to specify the application name explicitly. A small thing, but the kind of inconsistency that would trip up a learner following the steps.

The third was an unnecessary clickable action. The generated instructions included an action for the learner to click to open the editor on the app.py file, but the editor would already have been displayed by a previous action. This one turned out to be a gap in the skill itself. When using clickable actions to manipulate files in the editor, the editor tab is always brought to the foreground as a side effect. The skill didn't make that clear enough, so the AI added a redundant step to explicitly show the editor tab.

That last issue is a good example of why even small details matter when creating a skill, and also why skills have an advantage over relying purely on model training. Because the skill can be updated at any time, fixing that kind of gap is straightforward. You edit the reference material, and every future workshop generation benefits immediately. You aren't dependent on waiting for some future LLM model release that happens to have seen more up-to-date documentation.

You can browse the generated files in the sample repository on GitHub. If you check the commit history you'll see how little had to be changed from what was originally generated.

Even with those fixes, the changes were minor. The overall structure was correct, the clickable actions worked, and the content provided a coherent learning path. What would have taken hours of manual authoring (writing correct clickable action syntax, getting YAML configuration right, maintaining consistent pacing across instruction pages) the skill handles automatically. A domain expert would still want to review the content, verify the technical accuracy of the explanations, and adjust the pacing or emphasis based on what they think matters most for learners. But the job shifts from writing everything from scratch to reviewing and refining what was generated.

What this means

Skills are a way of packaging expertise so that it can be reused. The knowledge I've accumulated about how to author effective Educates workshops over years of building the platform is now encoded in a form that an AI agent can apply. Someone who has never created an Educates workshop before could use this skill and produce content that follows the platform's conventions correctly. They bring the subject matter knowledge (or the AI researches it), and the skill provides the platform expertise.

That's what makes this different from just asking an AI to "write a workshop." The skill encodes not just facts about the platform but the workflow, the design principles, and the detailed reference material that turn general knowledge into correct, structured output. It's the difference between an AI that knows roughly what a workshop is and one that knows exactly how to build one for this specific platform.

Both the workshop authoring skill and the course design skill are available now, and I'm continuing to refine them as I use them. If the idea of guided, interactive workshops appeals to you, the Educates documentation is the place to start. And if you're interested in exploring the use of AI to generate workshops for Educates, do reach out to me.

February 20, 2026 09:39 PM UTC

Clickable actions in workshops

The idea of guided instruction in tutorials isn't new. Most online tutorials these days provide a click-to-copy icon next to commands and code snippets. It's a useful convenience. You see the command you need to run, you click the icon, and it lands in your clipboard ready to paste. Better than selecting text by hand and hoping you got the right boundaries.

But this convenience only goes so far. The instructions still assume you have a suitable environment set up on your own machine. The commands might reference tools you haven't installed, paths that don't exist in your setup, or configuration that differs from what the tutorial expects. The copy button solves the mechanics of getting text into your clipboard, but the real friction is in the gap between the tutorial and your environment. You end up spending more time troubleshooting your local setup than actually learning the thing the tutorial was supposed to teach you.

Hosted environments and the copy/paste problem

Online training platforms like Instruqt and Strigo improved on this by providing VM-based environments that are pre-configured and ready to go. You don't need to install anything locally. The environment matches what the instructions expect, so commands and paths should work as written. That eliminates the entire class of problems around "works on the tutorial author's machine but not on mine."

The interaction model, though, is still copy and paste. You read instructions in one panel, find the command you need, copy it, switch to the terminal panel, paste it, and run it. For code changes, you copy a snippet from the instructions and paste it into a file in the editor. It works, but it's a manual process that requires constant context switching between panels. Every copy and paste is a small interruption, and over the course of a full workshop those interruptions add up. Learners end up spending mental energy on the mechanics of following instructions rather than on the material itself.

When commands became clickable

Katacoda, before it was shut down by O'Reilly in 2022, included an improvement to this model. Commands embedded in the workshop instructions were clickable. Click on a command and it would automatically execute in the terminal session provided alongside the instructions. No copying, no pasting, no switching between panels. The learner reads the explanation, clicks the command, and watches the result appear in the terminal. The flow from reading to doing became much more seamless.

This was a meaningful step forward for terminal interactions specifically. But it only covered one part of the workflow. For code changes, editing configuration files, or any interaction that involved working with files in an editor, you were still back to the copy and paste model. The guided experience had a gap. Commands were frictionless, but everything else still required manual effort.

Educates and the fully guided experience

Educates takes the idea of clickable actions and extends it across the entire workshop interaction. The workshop dashboard provides instructions alongside live terminals and an embedded VS Code editor. Throughout the instructions, learners encounter clickable actions that cover not just running commands, but the full range of things you'd normally do in a hands-on technical workshop.

Terminal actions work much as they did in Katacoda: click on a command in the instructions and it runs in the terminal. But Educates goes further by providing a full set of editor actions as well. Clickable actions can open a file in the embedded editor, create a new file with specified content, select and highlight specific text within a file, and then replace that selected text with new content. You can append lines to a file, insert content at a specific location, or delete a range of lines. All of it driven by clicking on actions in the instructions rather than manually editing files.
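For instance, an append action can look something like this (a sketch based on the `editor:append-lines-to-file` action type documented for Educates; treat the exact parameters as indicative):

```editor:append-lines-to-file
file: ~/exercises/notes.txt
text: |
    Remember to restart the deployment after the change.
```

Clicking it adds the lines to the end of the file and brings the editor tab to the foreground so the learner sees the result.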

Educates also includes YAML-aware editor actions, which is significant because YAML editing is notoriously error-prone when done by hand. A misplaced indent or a missing space after a colon can break an entire configuration file, and debugging YAML syntax issues is not what anyone signs up for in a workshop about Kubernetes or application deployment. The YAML actions let you reference property paths like spec.replicas or spec.template.spec.containers[name=nginx] and set values, add items to sequences, or replace entries, all while preserving existing comments and formatting in the file.
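A sketch of what a YAML-aware action can look like, assuming the `editor:insert-value-into-yaml` action type from the Educates docs (the deployment file and values here are illustrative):

```editor:insert-value-into-yaml
file: ~/exercises/deployment.yaml
path: spec.template.spec.containers[name=nginx]
value:
  resources:
    limits:
      memory: 256Mi
```

Because the action addresses the property path rather than a line number, it keeps working even if surrounding comments or formatting shift.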

Beyond editing, Educates provides examiner actions that run validation scripts to check whether the learner has completed a step correctly. In effect, the workshop can grade the learner's work and provide immediate feedback. If they missed a step or made an error, they find out right away rather than discovering it three steps later when something else breaks. There are also collapsible section actions for hiding optional content or hints until the learner needs them, and file transfer actions for downloading files from the workshop environment to the learner's machine or uploading files into it.
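An examiner action embedded in the instructions can look roughly like this (assuming the `examiner:execute-test` action type from the Educates docs; the test name refers to a validation script the workshop author supplies alongside the content):

```examiner:execute-test
name: check-deployment-replicas
title: Verify the deployment has three replicas
args:
- "3"
```

When clicked, the named script runs in the workshop environment and the action reports pass or fail inline, so the learner gets immediate feedback on that step.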

The end result is that learners can progress through an entire workshop without ever manually typing a command, editing a file by hand, or wondering whether they've completed a step correctly. They focus on understanding the concepts being taught while the clickable actions handle the mechanics. That changes the experience fundamentally. Instead of the workshop being something you push through, it becomes something that carries you forward.

The dashboard in action

To get a sense for what this looks like in practice, here are a couple of screenshots from an Educates workshop.

Workshop instructions with a clickable terminal command and the result displayed in the terminal panel

The instructions panel on the left contains a clickable action for running a command. When the learner clicks it, the command executes in the terminal panel and the output appears immediately. No copying, no pasting, no typing.

The embedded editor showing text that has been selected and replaced through clickable actions in the instructions

Here the embedded editor shows the result of a select-and-replace flow. The instructions guided the learner through highlighting specific text in a file and then replacing it with updated content, all through clickable actions. The learner sees exactly what changed and why, without needing to manually locate the right line and make the edit themselves.

How it works in the instructions

Workshop instructions in Educates are written in markdown. Clickable actions are embedded as specially annotated fenced code blocks where the language identifier specifies the action type and the body contains YAML configuration that controls what the action does.

For example, to guide a learner through updating an image reference in a Kubernetes deployment file, you might include two actions in sequence. The first selects the text that needs to change:

```editor:select-matching-text
file: ~/exercises/deployment.yaml
text: "image: nginx:1.19"
```

The second replaces the selected text with the new value:

```editor:replace-text-selection
file: ~/exercises/deployment.yaml
text: "image: nginx:latest"
```

When the learner clicks the first action, the matching text is highlighted in the editor so they can see exactly what will change. When they click the second, the replacement is applied. They understand the change being made because they see both the before and after states, but they don't need to manually find the right line, select the text, and type the replacement. The instructions guide them through it.

For terminal commands, the syntax is even simpler:

```terminal:execute
command: |-
  echo "Hello from terminal:execute"
```

The YAML within each code block controls everything about the action: which file to operate on, what text to match or replace, which terminal session to use, and so on. The format is consistent across all action types. Once you understand the pattern of action type as the language identifier and YAML configuration as the body, authoring with actions is straightforward.

The value of removing friction

The progression from copy/paste tutorials to hosted environments to clickable commands to a fully guided experience like Educates is ultimately a progression toward removing every point where a learner might disengage. Each improvement eliminates another source of friction, another moment where someone might lose focus because they're fighting the tools instead of learning the material. When the mechanics of following instructions become invisible, learners stay engaged longer and absorb more of what the workshop is trying to teach.

In my previous post I discussed how this interactive format, combined with thoughtful use of AI for content generation, can produce workshop content that maintains consistent quality throughout. The clickable actions I've described here are what make that format possible. They're the mechanism that turns static instructions into a guided, interactive experience where the learner's attention stays on the concepts rather than the process.

In future posts I plan to write about how I'm using AI agent skills to automate the creation of Educates workshops, including the generation of all the clickable actions that drive the guided process along with the commentary and explanations the workshop instructions include. The goal is that the generated workshop runs out of the box, with the only remaining step being for the domain expert to validate the content and tweak where necessary. That has the potential to save a huge amount of time in creating workshops, making it practical to build high-quality guided learning experiences for topics that would otherwise never get the investment.

February 20, 2026 09:39 PM UTC


Real Python

The Real Python Podcast – Episode #285: Exploring MCP Apps & Adding Interactive UIs to Clients

How can you move your MCP tools beyond plain text? How do you add interactive UI components directly inside chat conversations? This week on the show, Den Delimarsky from Anthropic joins us to discuss MCP Apps and interactive UIs in MCP.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

February 20, 2026 12:00 PM UTC


Graham Dumpleton

When AI content isn't slop

In my last post I talked about the forces reshaping developer advocacy. One theme that kept coming up was content saturation. AI has made it trivially easy to produce content, and the result is a flood of generic, shallow material that exists to fill space rather than help anyone. People have started calling this "AI slop," and the term captures something real. Recycled tutorials, SEO-bait blog posts, content that says nothing you couldn't get by asking a chatbot directly. There's a lot of it, and it's getting worse.

The backlash against AI slop is entirely justified. But I've been wondering whether it has started to go too far.

The backlash is justified

To be clear, the problem is real. You can see it every time you search for something technical. The same generic "getting started" guide, rewritten by dozens of different sites (or quite possibly the same AI), each adding nothing original. Shallow tutorials that walk through the basics without any insight from someone who has actually used the technology in practice. Content that was clearly produced to fill a content calendar rather than to answer a question anyone was actually asking.

Developers have become good at spotting this. Most can tell within a few seconds whether something was written by a person with genuine experience or generated to tick a box. That's a healthy instinct. The bar for content worth reading has gone up, and honestly, that's probably a good thing. There was plenty of low-effort content being produced by humans long before AI entered the picture.

But healthy skepticism can tip over into reflexive dismissal. "AI-generated" has become a label that gets applied broadly, and once it sticks, people stop evaluating the content on its merits. The assumption becomes that if AI was involved, the content can't be worth reading. That misses some important distinctions.

Not all AI content serves the same purpose

There are two very different ways to use AI for content. One is to mass-produce generic articles to flood search results or pad out a blog. The goal is volume, not value. Nobody designed the output with a particular audience in mind or thought carefully about what the content needed to achieve. That's slop, and the label fits.

The other is to use AI as a tool within a system you've designed, where the output has a specific structure, a specific audience, and a specific purpose. The human provides the intent and the domain knowledge. The AI helps execute within those constraints.

The problem with AI slop is not that AI generated it. The problem is that nobody designed it with care or purpose. There was no thought behind the structure, no domain expertise informing the content, no consideration for who would read it or what they'd take away from it. If you bring all of those things to the table, the output is a different thing entirely.

Workshop instructions aren't blog posts

I've been thinking about this because of my own project. Educates is an interactive training platform I've been working on for over five years (I mentioned it briefly in my earlier post when I started writing here again). It's designed for hands-on technical workshops where people learn by doing, not just by reading.

Anyone who has run a traditional workshop knows the problem. You give people a set of instructions, and half of them get stuck before they've finished the first exercise. Not because the concepts are hard, but because the mechanics are. They're copying long commands from a document, mistyping a path, missing a flag, getting an error that has nothing to do with what they're supposed to be learning. The experience becomes laborious. People switch off. They stop engaging with the material and start just trying to get through it.

Educates takes a different approach. Workshop instructions are displayed alongside live terminals and an embedded code editor. The instructions include things that learners can click on that perform actions for them. Click to run a command in the terminal. Click to open a file in the editor. Click to apply a code change. Click to run a test. The aim is to make the experience as frictionless as possible so that learners stay engaged throughout.

This creates a rhythm. You see code in context. You read an explanation of what it does and what needs to change. You click to apply the change. You click to run it and observe the result. At every step, learners are actively progressing through a guided flow rather than passively reading a wall of text. Their attention stays on the concepts being taught, not on the mechanics of following instructions. People learn more effectively because nothing about the process gives them a reason to disengage.

Where AI fits into this

Writing good workshop content by hand is hard. Not just because of the volume of writing, but because maintaining that engaging, well-paced flow across a full workshop takes sustained focus. It's one thing to write a good explanation for one section. It's another to keep that quality consistent across dozens of sections covering an entire topic. Humans get tired. Explanations become terse halfway through. Steps that should guide the learner smoothly start to feel rushed or incomplete. The very quality that makes workshops effective, keeping learners engaged from start to finish, is the hardest thing to sustain when you're writing it all by hand.

This is where AI, with the right guidance and steering, can actually do well. When you provide the content conventions for the platform, the structure of the workshop, and clear direction about the learning flow you want, AI can generate content that maintains consistent quality and pacing throughout. It doesn't get fatigued halfway through and start cutting corners on explanations. It follows the same pattern of explaining, showing, applying, and observing as carefully in section twenty as it did in section one.

That said, this only works because the content has a defined structure, a specific format, and a clear purpose. The human still provides the design and the domain expertise. The AI operates within those constraints. With review and iteration, the result can actually be superior to what most people would produce by hand for this kind of structured content. Not because AI is inherently better at explaining things, but because maintaining that engaging flow consistently across a full workshop is something humans genuinely struggle with.

Slop is a design problem, not a tool problem

The backlash against AI slop is well-founded. Content generated without intent, without structure, and without domain expertise behind it deserves to be dismissed. But the line should be drawn at intent and design, not at whether AI was involved in the process. Content that was designed with a clear purpose, structured for a specific use case, and reviewed by someone who understands the domain is not slop, regardless of how it was produced. Content that was generated to fill space with no particular audience in mind is slop, regardless of whether a human wrote it.

I plan to write more about Educates in future posts, including what makes the interactive workshop format effective and how it changes the way people learn. For now, the point is simpler. Before dismissing AI-generated content out of hand, it's worth asking what it was designed to do and whether it does that well.

And yes, this post was itself written with the help of AI, guided by the kind of intent, experience, and hands-on steering I've been talking about. The same approach I'm applying to generating workshop content. If the argument holds, it should hold here too.

February 20, 2026 12:00 AM UTC

February 19, 2026


Paolo Melchiorre

Django ORM Standalone⁽¹⁾: Querying an existing database

A practical step-by-step guide to using Django ORM in standalone mode to connect to and query an existing database using inspectdb.

February 19, 2026 11:00 PM UTC


PyBites

How Even Senior Developers Mess Up Their Git Workflow

There are few things in software engineering that induce panic quite like a massive git merge conflict.

You pull down the latest code, open your editor, and suddenly your screen is bleeding with <<<<<<< HEAD markers. Your logic is tangled with someone else’s, the CSS is conflicting, and you realise you just wasted hours building on top of outdated architecture.

It is easy to think this only happens to juniors, but it happens to us all. Case in point – this week it was the two of us butting… HEADs (get it?).

When you code in isolation, you get comfortable. You stop checking for open pull requests, you ignore issue trackers and you just start writing code. This is the trap I fell into.

And that is exactly how you break your application. It’s exactly how I broke our application!

If you want to avoid spending your weekend untangling a broken repository (ahem… like we did), you need to enforce these three non-negotiable git habits.

1. Stop Coding in a Vacuum and Use Issue Trackers

Don’t go rogue and start redesigning a codebase without talking to your team. It doesn’t matter if it’s a massive enterprise app or a two-person side project.

If two developers are working on the same views and templates without dedicated issue tickets, a collision is inevitable. You need to break generic ideas like “redesign the UI” into highly specific, granular issues (e.g., “fix this menu,” “change the nav bar colour”).

Communication is your first line of defence against code conflicts.

2. Check for Stale Pull Requests Before You Branch

Pulling the latest code from main is the baseline, but as I was painfully reminded, it isn’t enough.

Before you write a single line of code, you have to check for open pull requests. Your teammate might have a massive architectural change sitting in review that hasn’t hit production yet. If you branch off an old version of main while ignoring a pending PR, you are guaranteed to hit merge conflicts when you finally try to integrate your work.

Once your branch is merged, leave it alone. Don’t keep committing to a stale branch. Go ahead and create a brand new one for your next feature.

3. Master the Bailout Commands

Even with the best practices in place, mistakes happen. You might accidentally code a new feature directly on the main branch, or tangle your logic with a bug fix.

When things go wrong, you need to know how to safely extract your work. This is where advanced git commands become lifesavers. You need to know how to use git stash to temporarily park your changes, create a clean branch, and reapply them. You should also understand how to use git cherry-pick to pull specific historical commits out of a messy branch and into a clean one.

These tools give you the comfort to manipulate code without the fear of destroying the repository.
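The stash-and-branch bailout can be sketched end to end in a throwaway repository (all names and file contents here are illustrative, not from the episode):

```shell
set -e
repo=$(mktemp -d)                 # throwaway repo just for the demo
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo"

echo "base" > app.py
git add app.py
git commit -qm "initial commit"

# Oops: new feature work started directly on main...
echo "new feature" >> app.py

git stash                         # park the uncommitted changes
git checkout -q -b feature/demo   # create a clean branch for them
git stash pop                     # reapply the parked changes there

git add app.py
git commit -qm "add feature"
```

`git cherry-pick <sha>` covers the other case: when the commit already landed on the wrong branch, create a clean branch from main and cherry-pick only the commits you actually want.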


Bob and I got into a deep discussion about this exact issue after we, as I alluded to, broke every single one of these rules over the weekend.

We were working on our privacy-first book tracking app, Pybites Books. Because we hadn’t coded deeply together on the same codebase in a while, I was rusty and complacent. We didn’t use hyper-specific issues, I ignored an open pull request that was three weeks old, and we both changed the colour scheme independently.

It resulted in a massive merge conflict that required a lot of manual reconciliation, stashing, and cherry-picking to fix.

If you want to hear the full breakdown of our git mess, what went wrong, and how we saved the app, listen using the following links!

Listen to the Episode

– Julian

P.S. Check out the app that caused all of this drama! If you want a privacy-first way to track your reading without being farmed for data, head over to Pybites Books. We just shipped a massive new statistics dashboard (that survived the merge conflict!)

February 19, 2026 10:39 PM UTC


The Python Coding Stack

The Journey From LBYL to EAFP • [Club]

LBYL came more naturally to me in my early years of programming. It seemed to have fewer obstacles in those early stages, fewer tricky concepts.

And in my 10+ years of teaching Python, I also preferred teaching LBYL to beginners and delaying EAFP until later.

But over the years, as I came to understand Python’s psyche better, I gradually shifted my programming style—and then, my teaching style, too.

So, what are LBYL and EAFP? And which one is more suited to Python?

I’m running a series of three live workshops starting this week.
Each workshop is 2 hours long, so plenty of time to explore core Python topics:

#1 • Python’s Plumbing: Dunder Methods and Python’s Hidden Interface
#2 • Pythonic Iteration: Iterables, Iterators,
itertools
#3 • To Inherit or Not? Inheritance, Composition, Abstract Base Classes, and Protocols

Read more and book your place here:
https://www.thepythoncodingstack.com/p/when-it-works-is-not-good-enough

Book Workshops

Look Both Sides Before Crossing the Road

You should definitely look before you leap across a busy road…or any road, really. And programming also has a Look Before You Leap concept—that’s LBYL—when handling potential failure points in your code.

Let’s start by considering this basic example. You define a function that accepts a value and a list. The function adds the value to the list if the value is above a user-supplied threshold:

def add_value_above_threshold(value, threshold, data):
    if value >= threshold:
        data.append(value)

You can confirm this short code works as intended:

# ...
prices = []
add_value_above_threshold(12, 5, prices)
add_value_above_threshold(3, 5, prices)
add_value_above_threshold(9, 5, prices)
print(prices)

This code outputs the list with the two prices above the threshold:

[12, 9]

However, you want to ensure this can’t happen:

# ...
products = {}
add_value_above_threshold(12, 5, products)

Now, products is a dictionary, but add_value_above_threshold() was designed to work with lists and not dictionaries:

Traceback (most recent call last):
  ...
    add_value_above_threshold(12, 5, products)
    ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
  ...
    data.append(value)
    ^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'append'

One option is the look before you leap (LBYL):

def add_value_above_threshold(value, threshold, data):
    if not isinstance(data, list):
        print("Invalid format. 'data' must be a list")
        return
    if value >= threshold:
        data.append(value)

Now, the function prints a warning when you pass a dictionary, and it doesn’t crash the program!

But this is too restrictive.

Let’s assume you decide to use a deque instead of a list:

from collections import deque

# ...

prices = deque()
add_value_above_threshold(12, 5, prices)
add_value_above_threshold(3, 5, prices)
add_value_above_threshold(9, 5, prices)
print(prices)

This code still complains that it wants a list and doesn’t play ball:

Invalid format. 'data' must be a list
Invalid format. 'data' must be a list
Invalid format. 'data' must be a list
deque([])

But there’s no reason why this code shouldn’t work since deque also has an .append() method.

You could change the call to isinstance() to include the deque data type—isinstance(data, list | deque)—but then there may be other data structures that are valid and can be used in this function. You don’t want to have to write them all.

If you’re well-versed with the categories of data structures—perhaps because you devoured The Python Data Structure Categories Series—then you might conclude you need to check whether the object is a MutableSequence since all mutable sequences have an .append() method. You can import MutableSequence from collections.abc and use isinstance(data, MutableSequence). Now you’re fine to use lists, deques, or any other mutable sequence.
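Spelled out, that MutableSequence check might look like this (a sketch of the same function):

```python
from collections import deque
from collections.abc import MutableSequence

def add_value_above_threshold(value, threshold, data):
    # Accept any mutable sequence (list, deque, ...), not just a list
    if not isinstance(data, MutableSequence):
        print("Invalid format. 'data' must be a mutable sequence")
        return
    if value >= threshold:
        data.append(value)

prices = deque()
add_value_above_threshold(12, 5, prices)
add_value_above_threshold(3, 5, prices)
print(prices)  # deque([12])
```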

This version fits better with Python’s duck-typing philosophy. It doesn’t restrict the function to a limited number of data types but to a category of data types. This category is defined by what the data types can do. In duck typing, you care about what an object can do rather than what it is. You can read more about duck typing in Python in this post: When a Duck Calls Out • On Duck Typing and Callables in Python

However, you could still have other data types that have an .append() method but may not fully fit into the MutableSequence category. There’s no reason you should exclude those data types from working with your function.

Perhaps, you could use Python’s built-in hasattr() to check whether the object you pass has an .append() attribute. You’re now checking whether the object has the required attribute rather than what the object is.
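A hasattr()-based sketch of the same function:

```python
from collections import deque

def add_value_above_threshold(value, threshold, data):
    # Check for the capability we need, not for a specific type
    if not hasattr(data, "append"):
        print("Provided data structure does not support appending values.")
        return
    if value >= threshold:
        data.append(value)

add_value_above_threshold(12, 5, deque())  # works: deque has an .append() method
add_value_above_threshold(12, 5, {})       # prints the warning instead of crashing
```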

But if you’re going through all this trouble, you can go a step further.

Just Go For It and See What Happens

Why not just run the line of code that includes data.append() and see what happens? Ah, but you don’t want the code to fail if you use the wrong data type—you only want to print a warning, say.

That’s where the try...except construct comes in:

def add_value_above_threshold(value, threshold, data):
    if value < threshold:  # inequality flipped to avoid nesting
        return
    try:
        data.append(value)
    except AttributeError:
        print(
            "Provided data structure does not support appending values."
        )

This is the Easier to Ask for Forgiveness than Permission (EAFP) philosophy. Just try the code. If it doesn’t work, you can then deal with it in the except block. Now, this fits even more nicely with Python’s duck typing philosophy. You’re asking the program whether data can append a value. It doesn’t matter what data is: can it append a value?

You don’t have to think about all the valid data types or which category they fall into. And rather than checking whether the data type has the .append() attribute first, you just try to run the code and deal with the consequences later. That’s why it’s easier to ask for forgiveness than permission.
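Here’s the EAFP version in action with a few different data types (the function is repeated to keep the example self-contained):

```python
from collections import deque

def add_value_above_threshold(value, threshold, data):
    if value < threshold:  # inequality flipped to avoid nesting
        return
    try:
        data.append(value)
    except AttributeError:
        print("Provided data structure does not support appending values.")

prices = deque()
add_value_above_threshold(12, 5, prices)
add_value_above_threshold(3, 5, prices)
print(prices)  # deque([12])

add_value_above_threshold(12, 5, {})  # warns instead of crashing
```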

But don’t use this philosophy when crossing a busy road. Stick with “look before you leap” there!

Another Example Comparing LBYL and EAFP

Read more

February 19, 2026 10:26 PM UTC


Django Weblog

Plan to Adopt Contributor Covenant 3 as Django’s New Code of Conduct

Last month we announced our plan to adopt Contributor Covenant 3 as Django's new Code of Conduct through a multi-step process. Today we're excited to share that we've completed the first step of that journey!

What We've Done

We've merged new documentation that outlines how any member of the Django community can propose changes to our Code of Conduct and related policies. This creates a transparent, community-driven process for keeping our policies current and relevant.

The new process includes:

How You Can Get Involved

We welcome and encourage participation from everyone in the Django community! Here's how you can engage with this process:

What's Next

We're moving forward with the remaining steps of our plan:

Each step will have its own pull request where the community can review and provide feedback before we merge. We're committed to taking the time needed to incorporate your input thoughtfully.

Thank you for being part of this important work to make Django a more welcoming and inclusive community for everyone!

February 19, 2026 03:51 PM UTC


Real Python

Quiz: Python's tuple Data Type: A Deep Dive With Examples

In this quiz, you’ll test your understanding of Python tuples.

By working through this quiz, you’ll revisit various ways to interact with Python tuples. You’ll also practice recognizing common features and gotchas.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

February 19, 2026 12:00 PM UTC


PyCharm

LangChain Python Tutorial: 2026’s Complete Guide


If you’ve read the blog post How to Build Chatbots With LangChain, you may want to know more about LangChain. This blog post will dive deeper into what LangChain offers and guide you through a few more real-world use cases. And even if you haven’t read the first post, you might still find the info in this one helpful for building your next AI agent.

LangChain fundamentals

Let’s have a look at what LangChain is. LangChain provides a standard framework for building AI agents powered by LLMs, like the ones offered by OpenAI, Anthropic, Google, etc., and is therefore the easiest way to get started. LangChain supports most of the commonly used LLMs on the market today.

LangChain is a high-level tool built on LangGraph, which provides a low-level framework for orchestrating the agent and runtime and is suitable for more advanced users. Beginners and those who only need a simple agent build are definitely better off with LangChain.

We’ll start by taking a look at several important components in a LangChain agent build.

Agents

Agents are what we are building. They combine LLMs with tools to create systems that can reason about tasks, decide which tools to use for which steps, analyze intermediate results, and work towards solutions iteratively.

Creating an agent is as simple as using the `create_agent` function with a few parameters:

from langchain.agents import create_agent

agent = create_agent(
    "gpt-5",
    tools=tools,
)

In this example, the LLM used is GPT-5 by OpenAI. In most cases, the provider of the LLM can be inferred. To see a list of all supported providers, head over here.

LangChain Models: Static and Dynamic

There are two types of agent models that you can build: static and dynamic. Static models, as the name suggests, are straightforward and more common. The agent is configured in advance during creation and remains unchanged during execution.

import os

from langchain.chat_models import init_chat_model

os.environ["OPENAI_API_KEY"] = "sk-..."

model = init_chat_model("gpt-5")
print(model.invoke("What is PyCharm?"))



Dynamic models allow you to build an agent that can switch models during runtime based on customized logic. Different models can then be picked based on the current state and context. For example, we can use ModelFallbackMiddleware (described in the Middleware section below) to have a backup model in case the default one fails.

from langchain.agents import create_agent
from langchain.agents.middleware import ModelFallbackMiddleware

agent = create_agent(
    model="gpt-4o",
    tools=[],
    middleware=[
        ModelFallbackMiddleware(
            "gpt-4o-mini",
            "claude-3-5-sonnet-20241022",
        ),
    ],
)

Tools

Tools are important parts of AI agents. They make AI agents effective at carrying out tasks that involve more than just text as output, which is a fundamental difference between an agent and an LLM. Tools allow agents to interact with external systems – such as APIs, databases, or file systems. Without tools, agents would only be able to provide text output, with no way of performing actions or iteratively working their way toward a result.

LangChain provides decorators for systematically creating tools for your agent, making the whole process more organized and easier to maintain. Here are a couple of examples:

Basic tool

from langchain.tools import tool

@tool
def search_db(query: str, limit: int = 10) -> str:
    """Search the customer database for records matching the query."""
    ...
    return f"Found {limit} results for '{query}'"

Tool with a custom name

@tool("pycharm_docs_search", return_direct=False)
def pycharm_docs_search(q: str) -> str:
    """Search the local FAISS index of JetBrains PyCharm documentation and return relevant passages."""
    ...
    docs = retriever.get_relevant_documents(q)
    return format_docs(docs)

Middleware

Middleware provides ways to define the logic of your agent and customize its behavior. For example, there is middleware that can monitor the agent during runtime, assist with prompting and selecting tools, or even help with advanced use cases like guardrails, etc.

Here are a few examples of built-in middleware. For the full list, please refer to the LangChain middleware documentation.

Summarization: Automatically summarizes the conversation history when approaching token limits.
Human-in-the-loop: Pauses execution for human approval of tool calls.
Context editing: Manages conversation context by trimming or clearing tool uses.
PII detection: Detects and handles personally identifiable information (PII).

Real-world LangChain use cases

LangChain use cases cover a wide range of fields. Common examples include:

  1. AI-powered chatbots
  2. Document question answering systems
  3. Content generation tools

AI-powered chatbots

When we think of AI agents, we often think of chatbots first. If you’ve read the How to Build Chatbots With LangChain blog post, then you’re already up to speed about this use case. If not, I highly recommend checking it out.

Document question answering systems

Another real-world use case for LangChain is a document question answering system. For example, companies often have internal documents and manuals that are rather long and unwieldy. A document question answering system provides a quick way for employees to find the info they need within the documents, without having to manually read through each one.

To demonstrate, we’ll create a script to index the PyCharm documentation. Then we’ll create an AI agent that can answer questions based on the documents we indexed. First let’s take a look at our tool:

@tool("pycharm_docs_search")
def pycharm_docs_search(q: str) -> str:
    """Search the local FAISS index of JetBrains PyCharm documentation and return relevant passages."""
    # Load vector store and create retriever
    embeddings = OpenAIEmbeddings(
        model=settings.openai_embedding_model, api_key=settings.openai_api_key
    )
    vector_store = FAISS.load_local(
        settings.index_dir, embeddings, allow_dangerous_deserialization=True
    )
    k = 4
    retriever = vector_store.as_retriever(
        search_type="mmr", search_kwargs={"k": k, "fetch_k": max(k * 3, 12)}
    )
    docs = retriever.invoke(q)
We are using a vector store to perform a similarity search with embeddings provided by OpenAI. Documents are embedded so the doc search tool can perform similarity searches to fetch the relevant documents when called. 

def main():
    parser = argparse.ArgumentParser(
        description="Ask PyCharm docs via an Agent (FAISS + GPT-5)"
    )
    parser.add_argument("question", type=str, nargs="+", help="Your question")
    parser.add_argument(
        "--k", type=int, default=6, help="Number of documents to retrieve"
    )
    args = parser.parse_args()
    question = " ".join(args.question)

    system_prompt = """You are a helpful assistant that answers questions about JetBrains PyCharm using the provided tools.
    Always consult the 'pycharm_docs_search' tool to find relevant documentation before answering.
    Cite sources by including the 'Source:' lines from the tool output when useful. If information isn't found, say you don't know."""

    agent = create_agent(
        model=settings.openai_chat_model,
        tools=[pycharm_docs_search],
        system_prompt=system_prompt,
        response_format=ToolStrategy(ResponseFormat),
    )

    result = agent.invoke({"messages": [{"role": "user", "content": question}]})
    print(result["structured_response"].content)

System prompts are provided to the LLM together with the user’s input prompt. We are using OpenAI as the LLM provider in this example, and we’ll need an API key from them. Head to this page to check out OpenAI’s integration documentation. When creating an agent, we’ll have to configure the settings for `llm`, `tools`, and `prompt`.

For the full scripts and project, see here.

Content generation tools

Another example is an agent that generates text based on content fetched from other sources. For instance, we might use this when we want to generate marketing content with info taken from documentation. In this example, we’ll pretend we’re doing marketing for Python and creating a newsletter for the latest Python release.

In tools.py, a tool is set up to fetch the relevant information, parse it into a structured format, and extract the necessary information.

@tool("fetch_python_whatsnew", return_direct=False)
def fetch_python_whatsnew() -> str:
    """
    Fetch the latest "What's New in Python" article and return a concise, cleaned
    text payload including the URL and extracted section highlights.
    The tool ignores the input argument.
    """
    index_html = _fetch(BASE_URL)
    latest = _find_latest_entry(index_html)
    if not latest:
        return "Could not determine latest What's New entry from the index page."
    article_html = _fetch(latest.url)
    highlights = _extract_highlights(article_html)
    return f"URL: {latest.url}\nVERSION: {latest.version}\n\n{highlights}"

As for the agent, in agent.py:

SYSTEM_PROMPT = (
    "You are a senior Product Marketing Manager at the Python Software Foundation. "
    "Task: Draft a clear, engaging release marketing newsletter for end users and developers, "
    "highlighting the most compelling new features, performance improvements, and quality-of-life "
    "changes in the latest Python release.\n\n"
    "Process: Use the tool to fetch the latest 'What's New in Python' page. Read the highlights and craft "
    "a concise newsletter with: (1) an attention-grabbing subject line, (2) a short intro paragraph, "
    "(3) 4–8 bullet points of key features with user benefits, (4) short code snippets only if they add clarity, "
    "(5) a 'How to upgrade' section, and (6) links to official docs/changelog. Keep it accurate and avoid speculation."
)

...

def run_newsletter() -> str:
    load_dotenv()
    agent = create_agent(
        model=os.getenv("OPENAI_MODEL", "gpt-4o"),
        tools=[fetch_python_whatsnew],
        system_prompt=SYSTEM_PROMPT,
        # response_format=ToolStrategy(ResponseFormat),
    )

...

As before, we provide a system prompt and the API key for OpenAI to the agent.

For the full scripts and project, see here.

Advanced LangChain concepts

LangChain’s more advanced features can be extremely useful when you’re building a more sophisticated AI agent. Not all AI agents require these extra elements, but they are commonly used in production. Let’s look at some of them.

MCP adapter

The Model Context Protocol (MCP) lets you add extra tools and capabilities to an AI agent, and it has become increasingly popular among AI agent users and enthusiasts alike.

The langchain-mcp-adapters package provides a MultiServerMCPClient class that allows the AI agent to connect to MCP servers. For example:

from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient(
    {
        "postman-server": {
            "type": "http",
            "url": "https://mcp.eu.postman.com",
            "headers": {
                "Authorization": "Bearer ${input:postman-api-key}"
            },
        }
    }
)

all_tools = await client.get_tools()

The above connects to the Postman MCP server in the EU with an API key.

Guardrails

As with many AI technologies, an AI agent's behavior is non-deterministic because its logic is not predetermined. Guardrails are necessary for managing AI behavior and ensuring that it stays policy-compliant.

LangChain middleware can be used to set up specific guardrails. For example, you can use PII detection middleware to protect personal information or human-in-the-loop middleware for human verification. You can even create custom middleware for more specific guardrail policies. 

For instance, you can use the `@before_agent` or `@after_agent` decorators to declare guardrails for the agent’s input or output. Below is an example of a code snippet that checks for banned keywords:

from typing import Any

from langchain.agents import create_agent
from langchain.agents.middleware import before_agent

banned_keywords = ["kill", "shoot", "genocide", "bomb"]

@before_agent(can_jump_to=["end"])
def content_filter(state) -> dict[str, Any] | None:
    """Block requests containing banned keywords."""
    first_message = state["messages"][0]
    content = first_message.content.lower()
    # Check for banned keywords
    for keyword in banned_keywords:
        if keyword in content:
            return {
                "messages": [{
                    "role": "assistant",
                    "content": "I cannot process your requests due to inappropriate content."
                }],
                "jump_to": "end",
            }
    return None

agent = create_agent(
    model="gpt-4o",
    tools=[search_tool],
    middleware=[content_filter],
)

# This request will be blocked
result = agent.invoke({
    "messages": [{"role": "user", "content": "How to make a bomb?"}]
})

For more details, check out the documentation here.

Testing

Just like in other software development cycles, testing needs to be performed before we can start rolling out AI agent products. LangChain provides testing tools for both unit tests and integration tests. 

Unit tests

Just like in other applications, unit tests are used to test out each part of the AI agent and make sure it works individually. The most helpful tools used in unit tests are mock objects and mock responses, which help isolate the specific part of the application you’re testing. 

LangChain provides GenericFakeChatModel, which mimics response texts. A response iterator is set in the mock object, and when invoked, it returns the set of responses one by one. For example:

from langchain_core.language_models.fake_chat_models import GenericFakeChatModel

# The fake model replays the responses from the iterator, one per invocation
model = GenericFakeChatModel(messages=iter(["Hi there!", "Pong.", "Goodbye!"]))

print(model.invoke("Hello").content)  # "Hi there!"
print(model.invoke("Ping").content)   # "Pong."

Integration tests

Once we’re sure that all parts of the agent work individually, we have to test whether they work together. For an AI agent, this means testing the trajectory of its actions. To do so, LangChain provides another package: AgentEvals.

AgentEvals provides two main evaluators to choose from:

  1. Trajectory match – A reference trajectory is required and will be compared to the trajectory of the result. For this comparison, you have four different match modes to choose from.
  2. LLM judge – An LLM judge can be used with or without a reference trajectory. An LLM judge evaluates whether the resulting trajectory is on the right path.

LangChain support in PyCharm

With LangChain, you can develop an AI agent that suits your needs in no time. However, to be able to effectively use LangChain in your application, you need an effective debugger. In PyCharm, we have the AI Agents Debugger plugin, which allows you to power up your experience with LangChain.

If you don’t yet have PyCharm, you can download it here.

Using the AI Agents Debugger is very straightforward. Once you install the plugin, it will appear as an icon on the right-hand side of the IDE.

When you click on this icon, a side window will open with text saying that no extra code is needed – just run your agent and traces will be shown automatically.

As an example, we will run the content generation agent that we built above. If you need a custom run configuration, you will have to set it up now by following this guide on custom run configurations in PyCharm.

Once it is done, you can review all the input prompts and output responses at a glance. To inspect the LangGraph, click on the Graph button in the top-right corner.

The LangGraph view is especially useful if you have an agent that has complicated steps or a customized workflow.

Summing up

LangChain is a powerful tool for building AI agents that work for many use cases and scenarios. It’s built on LangGraph, which provides low-level orchestration and runtime customization, as well as compatibility with a vast variety of LLMs on the market. Together, LangChain and LangGraph set a new industry standard for developing AI agents.

February 19, 2026 10:40 AM UTC