
Planet Python

Last update: April 03, 2026 09:43 PM UTC

April 03, 2026


Real Python

Quiz: How to Add Features to a Python Project With Codex CLI

In this quiz, you’ll test your understanding of How to Add Features to a Python Project With Codex CLI.

By working through this quiz, you’ll revisit how to install, configure, and use Codex CLI to implement and refine features in a Python project using natural language prompts.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 03, 2026 12:00 PM UTC

Quiz: Class Concepts: Object-Oriented Programming in Python

In this quiz, you’ll test your understanding of Class Concepts: Object-Oriented Programming in Python.

By working through this quiz, you’ll revisit how to define classes, use instance and class attributes, write different types of methods, and apply the descriptor protocol through properties.

You can also deepen your knowledge with the tutorial Python Classes: The Power of Object-Oriented Programming.
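As a warm-up for the descriptor-protocol topic the quiz mentions, here's a minimal property sketch (the `Circle` class is an illustrative example, not taken from the quiz itself):

```python
class Circle:
    def __init__(self, radius):
        self.radius = radius  # goes through the property setter below

    @property
    def radius(self):
        """Properties are the most common way the descriptor protocol appears."""
        return self._radius

    @radius.setter
    def radius(self, value):
        if value < 0:
            raise ValueError("radius must be non-negative")
        self._radius = value

c = Circle(2)
c.radius = 3
print(c.radius)  # 3
```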



April 03, 2026 12:00 PM UTC


Rodrigo Girão Serrão

Indexable iterables

Learn how objects are automatically iterable if you implement integer indexing.

Introduction

An iterable in Python is any object you can traverse with a for loop. Iterables are typically containers, and iterating over one gives you access to the elements it holds.

This article will show you how you can create your own iterable objects through the implementation of integer indexing.

Indexing with __getitem__

To make an object that can be indexed you need to implement the method __getitem__.

As an example, you'll implement a class ArithmeticSequence that represents an arithmetic sequence, like \(5, 8, 11, 14, 17, 20\). An arithmetic sequence is defined by its first number (\(5\)), the step between numbers (\(3\)), and the total number of elements (\(6\)). The sequence \(5, 8, 11, 14, 17, 20\) is seq = ArithmeticSequence(5, 3, 6) and seq[3] should be \(14\). Using some arithmetic, you can implement indexing in __getitem__ directly:

class ArithmeticSequence:
    def __init__(self, start: int, step: int, total: int) -> None:
        self.start = start
        self.step = step
        self.total = total

    def __getitem__(self, index: int) -> int:
        if not 0 <= index < self.total:
            raise IndexError(f"Invalid index {index}.")

        return self.start + index * self.step

seq = ArithmeticSequence(5, 3, 6)
print(seq[3])  # 14

Turning an indexable object into an iterable

If your object accepts integer indices, then it is automatically an iterable. In fact, you can already iterate over the sequence you created above by simply using it in a for loop:

for value in seq:
    print(value, end=", ")
# 5, 8, 11, 14, 17, 20,
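You can also see the fallback directly: even though ArithmeticSequence defines no __iter__, calling iter() on it still succeeds, because Python builds an iterator on top of __getitem__. A self-contained sketch (repeating the class from above):

```python
class ArithmeticSequence:
    def __init__(self, start: int, step: int, total: int) -> None:
        self.start, self.step, self.total = start, step, total

    def __getitem__(self, index: int) -> int:
        if not 0 <= index < self.total:
            raise IndexError(f"Invalid index {index}.")
        return self.start + index * self.step

seq = ArithmeticSequence(5, 3, 6)
it = iter(seq)  # no __iter__ defined; Python falls back to __getitem__
print(next(it), next(it))  # 5 8
print(list(seq))  # [5, 8, 11, 14, 17, 20]
```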

How Python distinguishes iterables from non-iterables

You might ask yourself: “how does Python inspect __getitem__ to see whether it uses integer indices?” It doesn't! If your object implements __getitem__ and you try to use it as an iterable, Python simply tries to iterate over it. It either works or it doesn't!

To illustrate this point, you can define a class DictWrapper that wraps a dictionary and implements __getitem__ by just grabbing the corresponding item out of a dictionary:

class DictWrapper:
    def __init__(self, values):
        self.values = values

    def __getitem__(self, index):
        return self.values[index]

Since DictWrapper implements __getitem__, if the wrapped dictionary happens to have some integer keys (starting at 0), then you'll be able to iterate partially over it:

d1 = DictWrapper({0: "hey", 1: "bye", "key": "value"})

for value in d1:
    print(value)
hey
bye
Traceback (most recent call last):
  File "<python-input-25>", line 3, in <module>
    for value in d1:
                 ^^
  File "<python-input-18>", line 6, in __getitem__
    return self.values[index]
           ~~~~~~~~~~~^^^^^^^
KeyError: 2

What's interesting is that you can see explicitly that Python tried to index the object d1 with the key 2 and it didn't work. In the ArithmeticSequence above, you didn't get an error because you raised IndexError when you reached the end, and that's how Python understood the iteration was done. In this case, since you get a KeyError, Python doesn't understand what's going on and just...
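One way to make such a wrapper cooperate with the legacy iteration protocol — shown here as a sketch of my own, not something from the article — is to translate KeyError into IndexError, the one signal the protocol recognizes as "we're done":

```python
class SafeDictWrapper:
    """Hypothetical variant of DictWrapper: translate KeyError into
    IndexError so legacy iteration stops cleanly at the first missing key."""

    def __init__(self, values):
        self.values = values

    def __getitem__(self, index):
        try:
            return self.values[index]
        except KeyError:
            raise IndexError(index) from None

d = SafeDictWrapper({0: "hey", 1: "bye", "key": "value"})
print(list(d))  # ['hey', 'bye']
```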

April 03, 2026 11:41 AM UTC


Talk Python Blog

Announcing Course Completion Certificates

I’m very excited to share that you can now generate course completion certificates automatically at Talk Python Training. What’s even better is our certificates allow you to one-click add them as official licenses and certifications on LinkedIn.

Remember, last week I added some really nice features to your account page showing which courses are completed and which ones you’ve recently participated in. Just start there. Find a course you recently completed, click certificate, and there is a Share to LinkedIn UI right there. It’s nearly entirely automated.

April 03, 2026 02:39 AM UTC


ListenData

How to Build ChatGPT Clone in Python

In this article, we will see the steps involved in building a chat application and an answering bot in Python using the ChatGPT API and gradio.

Developing a chat application in Python gives you more control and flexibility than the ChatGPT website. You can customize and extend the chat application as per your needs. It also helps you integrate with your existing systems and other APIs.

Steps to build ChatGPT Clone in Python
To read this article in full, please click here

April 03, 2026 12:42 AM UTC

How to Use Gemini API in Python

Integrating Gemini API with Python

In this tutorial, you will learn how to use Google's Gemini AI model through its API in Python.

Updated (April 3, 2026): This tutorial has been updated for the latest Gemini models, including Gemini 3.1 Flash and Gemini 3.1 Pro. It now supports real-time search, multimodal generation, and the latest Flash/Pro model aliases such as gemini-flash-latest and gemini-pro-latest.
Steps to Access Gemini API

Follow the steps below to access the Gemini API and then use it in Python.

  1. Visit Google AI Studio website.
  2. Sign in using your Google account.
  3. Create an API key.
  4. Install the Google AI Python library for the Gemini API using the command below:
    pip install google-genai
To read this article in full, please click here

April 03, 2026 12:40 AM UTC

How to Use Web Search in ChatGPT API

In this tutorial, we will explore how to use web search in OpenAI API.

Installation Step: Please make sure to install the openai library using the command pip install openai.

Python Code
from openai import OpenAI
client = OpenAI(api_key="sk-xxxxxxxxx") # Replace with your actual API key

response = client.responses.create(
    model="gpt-5.4",
    tools=[{"type": "web_search_preview"}],
    input="Apple (AAPL) most recent stock price"
)

print(response.output_text)
Output

As of the latest available data (April 2, 2026), Apple Inc. (AAPL) stock is trading at $255.92 per share, reflecting an increase of $0.29 (approximately 0.11%) from the previous close.

Search Detail Level

In the latest OpenAI models, the search_context_size setting controls how much information the tool gathers from the web to answer your question. A higher setting gives better answers but is slower and costs more, while a lower setting is faster and cheaper but might not be as accurate. Possible values are high, medium, or low.

Python Code
from openai import OpenAI
client = OpenAI(api_key="sk-xxxxxxxxx") # Replace with your actual API key

response = client.responses.create(
    model="gpt-5.4",
    tools=[{
        "type": "web_search_preview",
        "search_context_size": "high",
    }],
    input="Which team won the latest FIFA World Cup?"
)

print(response.output_text)
Filter Search Results by Location

You can improve the relevance of search results by providing approximate geographic details such as country, city, region or timezone. For example, use a two-letter country code like GB for the United Kingdom or free-form text for cities and regions like London. You may also specify the user's timezone using IANA format such as Europe/London.

from openai import OpenAI
client = OpenAI(api_key="sk-xxxxxxxxx")  # Use your actual API key

response = client.responses.create(
    model="gpt-5.4",
    tools=[{
        "type": "web_search_preview",
        "user_location": {
            "type": "approximate",
            "country": "GB",        # ISO 2-letter country code
            "city": "London",       # Free text for city
            "region": "London",     # Free text for region/state
            "timezone": "Europe/London"  # IANA timezone (optional)
        }
    }],
    input="What are the top-rated places to eat near Buckingham Palace?",
)

print(response.output_text)
Citations

You can use the following code to get the URL, title and location of the cited sources.

Python Code
# Citations
response = client.responses.create(
    model="gpt-5.4",
    tools=[{"type": "web_search_preview"}],
    input="most recent news from New York?"
)

annotations = response.output[1].content[0].annotations
print("Annotations:", annotations)
print("Annotations List:")
print("-" * 80)
for i, annotation in enumerate(annotations, 1):
    print(f"Annotation {i}:")
    print(f"  Title: {annotation.title}")
    print(f"  URL: {annotation.url}")
    print(f"  Type: {annotation.type}")
    print(f"  Start Index: {annotation.start_index}")
    print(f"  End Index: {annotation.end_index}")
    print("-" * 80)
Google's Custom Search API with ChatGPT

An alternative way to use web search is to integrate Google's Custom Search API with ChatGPT.

By using Google's Custom Search API, we can get real-time search results. Follow the steps below to get an API key from the Google Developers Console and create a custom search engine.
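As a sketch of what that integration might look like, the Custom Search JSON API is a plain HTTPS GET. The key and engine ID below are placeholders, and the fetch itself is left as a comment so the example stays self-contained:

```python
from urllib.parse import urlencode

# Hypothetical placeholders -- substitute your own credentials.
GOOGLE_API_KEY = "your-google-api-key"
SEARCH_ENGINE_ID = "your-cx-id"

def build_search_url(query: str) -> str:
    """Build a request URL for the Google Custom Search JSON API."""
    params = urlencode({
        "key": GOOGLE_API_KEY,
        "cx": SEARCH_ENGINE_ID,
        "q": query,
    })
    return f"https://www.googleapis.com/customsearch/v1?{params}"

url = build_search_url("latest Python release")
print(url)
# Fetch with requests.get(url).json(), then pass the top results
# into a ChatGPT prompt as extra context.
```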

To read this article in full, please click here

April 03, 2026 12:38 AM UTC

4 Ways to Use ChatGPT API in Python

In this tutorial, we will explain how to use ChatGPT API in Python, along with examples.

Steps to Access ChatGPT API

Please follow the steps below to access the ChatGPT API.

  1. Visit the OpenAI Platform and sign up using your Google, Microsoft or Apple account.
  2. After creating your account, the next step is to generate a secret API key to access the API. The API key looks like this: sk-xxxxxxxxxxxxxxxxxxxx
  3. If your phone number has not been associated with any other OpenAI account previously, you may get free credits to test the API. Otherwise you have to add at least $5 to your account, and charges are based on usage and the type of model you use. Check out the pricing details on the OpenAI website.
  4. Now you can call the API using the code below.
To read this article in full, please click here

April 03, 2026 12:25 AM UTC

April 02, 2026


death and gravity

reader 3.22 released – new web app

Hi there!

I'm happy to announce version 3.22 of reader, a Python feed reader library.

What's new? #

Here are the highlights since reader 3.20.

New feed reader web app #

The new web application is done! Features include:

In the next releases, I'll be adding back features already present in the legacy web app, stuff like full-text search, tags, read time, MP3 tag fixing, and more.

This is building towards a hosted version of reader, which should take the pain out of self-hosting, while still leaving it as an option; more to follow soonℱ. (Meanwhile, if this sounds like something you'd like to use, get in touch.)

For now, here are some screenshots:

main page (dark mode) main page (dark mode)
more filters (light mode) more filters (light mode)
feed page (dark mode) feed page (dark mode)
feeds page (dark mode) feeds page (dark mode)
article view (light mode) article view (light mode)
article view (dark mode) article view (dark mode)

Config and plugin loading #

As part of the hosted reader work, I've unified how configuration and plugins are loaded across make_reader(), the command-line interface, and the web app, by using Click to parse and validate configuration (it's not as wrong as it sounds, I promise).

As a consequence, the config file format changed from YAML to TOML and follows the shape of the CLI, and a few commands and environment variables were renamed; no other breaking changes are expected in the foreseeable future.

Scheduled updates by default #

Both update_feeds() and the CLI now limit how often feeds get updated by default; while this is a minor compatibility break, the previous behavior was arguably a bug – doing the right thing should not be opt-in.

Also, reader now honors the Cache-Control max-age and Expires HTTP headers when updating feeds, in addition to Retry-After.

AI contributions #

Finally, reader now has an AI contributions policy; tl;dr: they are banned, for now.

The reasoning is two-fold. First, after a few low-effort contributions, I decided I don't have time for this. Second, there are various issues surrounding LLMs I don't want to bother with; for more details, see the Servo, CPython, and LLVM policies.

I am open to revisiting this later (I'll do so on my own, though, thank you).


That's it for now. For more details, see the full changelog.

Want to contribute? Check out the docs and the roadmap.

Learned something new today? Share it with others, it really helps!

What is reader? #

reader takes care of the core functionality required by a feed reader, so you can focus on what makes yours different.

reader in action reader allows you to:

...all these with:

To find out more, check out the GitHub repo and the docs, or give the tutorial a try.

Why use a feed reader library? #

Have you been unhappy with existing feed readers and wanted to make your own, but:

Are you already working with feedparser, but:

... while still supporting all the feed types feedparser does?

If you answered yes to any of the above, reader can help.

The reader philosophy #

April 02, 2026 04:44 PM UTC


Mike Driscoll

Python Pop Quiz – Number Explosion

You will sometimes come across examples of code that use one or two asterisks. Depending on how the asterisks are used, they can mean different things to Python.

Check your understanding of what a single asterisk means in the following quiz!

The Quiz

What will be the output if you run this code?

numbers = range(3)
output = {*numbers}
print(output)

A) {range}

B) (range)

C) [0, 1, 2]

D) (0, 1, 2)

E) {0, 1, 2}

Hint

“Unpacking generalizations” is the term to look up if you get stuck.

The Python Quiz Book

The Answer

E) {0, 1, 2}

Explanation

A single asterisk before an iterable, such as a list or a range, is known as the unpacking operator. In this example, you tell Python to unpack three integers (0–2) into a set.

Here is the example running in a REPL:

>>> numbers = range(3)
>>> output = {*numbers}
>>> print(output)
{0, 1, 2}
>>> print(type(output))
<class 'set'>

The code output shows that you have created a set!

You can also use a single asterisk to unpack a dictionary’s keys:

>>> my_dict = {1: "one", 2: "two", 3: "three"}
>>> print({*my_dict})
{1, 2, 3}

If you want to take your knowledge of unpacking further, it can help to see Python functions use asterisks:

>>> def my_func(*args):
...     print(args)
... 
>>> my_func(1)
(1,)
>>> numbers = range(3)
>>> output = {*numbers}
>>> my_func(output)
({0, 1, 2},)
>>> my_func(*output)
(0, 1, 2)

When you see a single asterisk in a function definition, the asterisk means that the function can take any number of positional arguments. In the second example, you pass in the set as a single argument, while in the last example, you use a single asterisk to unpack the numbers and pass them in as three separate arguments.

For more information, see PEP 448 – Additional Unpacking Generalizations, which has many more examples!
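The double asterisk plays the analogous role for mappings, and PEP 448 is exactly where the dict-display form was added. A quick sketch of both sides, definition and call (the function name is made up for illustration):

```python
def describe(**kwargs):
    # ** in a definition collects keyword arguments into a dict
    return kwargs

defaults = {"color": "red", "size": 10}
overrides = {"size": 12}

# ** in a dict literal (or a call) unpacks key/value pairs;
# later entries win on duplicate keys.
merged = {**defaults, **overrides}
print(merged)              # {'color': 'red', 'size': 12}
print(describe(**merged))  # {'color': 'red', 'size': 12}
```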

Get the Book

Want to try out over one HUNDRED more quizzes? Check out the book!

Purchase at Gumroad or Leanpub or Amazon

The post Python Pop Quiz – Number Explosion appeared first on Mouse Vs Python.

April 02, 2026 12:29 PM UTC


Real Python

Quiz: Python's Counter: The Pythonic Way to Count Objects

In this quiz, you’ll test your understanding of Python’s Counter: The Pythonic Way to Count Objects.

By working through this quiz, you’ll revisit how to create Counter objects, update counts, find most common elements, and use counters as multisets with arithmetic operations.

This quiz covers practical Counter tasks such as constructing counters from different data types, accessing counts, and working with multiset operations. If you want a deeper walkthrough, review the tutorial linked above.
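As a quick refresher on the operations the quiz covers, here's a minimal Counter sketch (the strings and fruit names are arbitrary examples):

```python
from collections import Counter

# Construct a counter from an iterable and inspect counts.
c = Counter("mississippi")
print(c["s"])            # 4
print(c.most_common(1))  # [('i', 4)]

# Counters also behave as multisets under arithmetic.
a = Counter(apples=3, pears=1)
b = Counter(apples=1, pears=2)
print(a + b)  # Counter({'apples': 4, 'pears': 3})
print(a - b)  # Counter({'apples': 2})  -- negative counts are dropped
```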



April 02, 2026 12:00 PM UTC


Graham Dumpleton

Free Python decorator workshops

I've been working on a set of interactive workshops on Python decorators and they are now available for free on the labs page of this site. There are 22 workshops in total, covering everything from the fundamentals of how decorators work through to advanced topics like the descriptor protocol, async decorators and metaclasses. The workshops are hosted on the Educates training platform and accessed through the browser, so there is nothing to install.

An experiment in learning

We are well into the age of AI at this point. Need to know how to write a decorator in Python? Just ask ChatGPT or Claude and you will get an answer in seconds. Want to refactor some code to use decorators? Let an AI agent do it for you. The tools are genuinely impressive and I use them myself every day.

That said, as someone who spent years as a developer advocate helping people learn, the question that keeps coming up is whether there is still an appetite for actually learning how things work. Not just getting an answer, but understanding why the answer is what it is. Understanding the mechanics well enough that when the AI gives you something subtly wrong (and it will), you can spot it and fix it yourself.

These workshops are my experiment in finding out. If people are still interested in sitting down with a guided, hands-on environment and working through a topic step by step, then this format has a future. If not, it will at least help me work out whether developer advocacy even matters anymore, and whether the years I have put into building the Educates platform have been worth the effort.

What the workshops cover

The Python Decorators course starts from the ground up. The early workshops cover functions as first-class objects, closures, and the basic mechanics of how decorators work. From there the core path builds through decorator arguments, functools.wraps, stacking decorators, and class-based decorators.
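As a taste of what that core path covers, here is a minimal sketch of the basic pattern, including functools.wraps (the `logged` decorator is my own illustrative example, not one of the workshop exercises):

```python
import functools

def logged(func):
    # The basic mechanics: a closure wrapping the original function,
    # with functools.wraps preserving its name and docstring.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@logged
def add(a, b):
    return a + b

print(add(2, 3))     # prints "calling add", then 5
print(add.__name__)  # add (not "wrapper", thanks to functools.wraps)
```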

Branching off from the core path are a set of elective workshops covering practical applications: input validation, caching and memoisation, access control, registration and plugin patterns, exception handling, deprecation warnings, and profiling. These are the kinds of things you would actually use decorators for in real projects.

The later workshops go deeper into territory that most tutorials skip over entirely. The descriptor protocol, how decorators interact with classes and inheritance, async decorators, class decoration, and metaclasses.

The aim here is to go well beyond the basics. There are plenty of introductory decorator tutorials out there already. What I wanted to create was something for people who want to genuinely broaden their Python knowledge and understand how the language works at a deeper level. If you have not completely given in to letting AI do all the thinking for you and still want to build real expertise, these workshops are for you.

If you have never experienced interactive online workshops like this before, I would recommend doing the Educates Walkthrough workshop first. It will get you familiar with how the platform works, how to navigate the workshop instructions, and how to use the integrated terminal and editor, before you jump into the Python content.

Keeping the workshops running

The system hosting the workshops has limited resources and capacity. I am running this on modest infrastructure and I am honestly not sure how long I will be able to keep it going. It depends in part on how much interest there is, and in part on whether I can sustain any hosting costs.

The system is also engineered to only let a limited number of people in at a time, so if you get a message about being at capacity, try again later. I will be monitoring how things hold up and if possible will increase the limits.

If you try the workshops and find them useful, it would mean a lot if you considered helping out through my GitHub Sponsors page. Even small contributions add up and would help keep the workshops available. If there is enough interest and support, I could move to a more capable hosting environment and handle more concurrent users. Right now capacity is limited and I am just seeing how things go.

What comes next

If things go well and there is demand for more, I have plans for additional workshop courses. The natural successor to the Python decorators course would be one built around my wrapt library, covering both its decorator utilities and its monkey patching features. After that I would look at WSGI and Python web hosting, which is another area where I have deep experience from years of working on mod_wsgi.

The common thread is topics that go into real depth. There is no shortage of surface-level content on the internet (and AI can generate more of it on demand). What I think is harder to find is material that takes a topic seriously, works through the edge cases, and gives you a genuine understanding of what is happening under the hood. That is what I am trying to provide.

A note on how these were made

I should be transparent about this: the workshops were created with the assistance of AI. I wrote about the process in some detail back in February, covering the approach to using AI for content, how I taught an AI about the Educates platform, and how I used AI to review the workshops.

I realise there is a certain irony in using AI to create workshops that are partly motivated by the belief that people should still learn things themselves. But I see a difference between using AI as a tool to help produce quality educational content and blindly publishing whatever an AI generates. Every workshop has been reviewed, tested, and refined based on my own knowledge and experience with Python decorators going back over a decade. Hopefully the result is something genuinely useful rather than AI slop. And hopefully the use of the Educates platform itself contributes to the experience, letting people focus on actually learning rather than spending time trying to get their own computer environment set up correctly. I hope people will judge the workshops on their quality rather than dismissing them because AI was involved in making them.

Head over to the labs page to get started. I would love to hear what you think.

April 02, 2026 04:38 AM UTC


Python Engineering at Microsoft

Python in Visual Studio Code – March 2026 Release

We’re excited to announce that the March 2026 release of the Python extension for Visual Studio Code is now available!

This release includes the following announcements:

If you’re interested, you can check the full list of improvements in our changelogs for the Python, and Pylance extensions.

Search Python Symbols in Installed Packages

When working in a new codebase or exploring an unfamiliar library, one of the most common needs is quickly locating where a function or class is defined — even if it lives outside your workspace. With this release, Pylance can now include symbols from packages installed in your active virtual environment in Workspace Symbol search (Cmd/Ctrl+T).

This is controlled by a new setting:

Python › Analysis: Include Venv In Workspace Symbols


When enabled:

Because indexing installed packages can affect performance, this feature is opt-in by design. You can fine-tune the depth of indexing per-package using Python › Analysis: Package Index Depths, which controls how deeply Pylance searches into sub-modules.

This gives you richer code exploration when you need it, without changing the default experience for everyone else.

To try it:

  1. Open Settings (Cmd+, / Ctrl+,)
  2. Search for “Include Venv In Workspace Symbols”
  3. Check the box under Python › Analysis
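If you prefer settings.json, the per-package depth tuning mentioned above looks roughly like this (a hypothetical fragment; the package name is just an example — adjust it to your project):

```json
{
  "python.analysis.packageIndexDepths": [
    { "name": "requests", "depth": 2 }
  ]
}
```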

Experimental: Rust-Based Parallel Indexer

We’re shipping an experimental setting that switches Pylance’s indexer — the engine behind completions, auto-imports, and workspace symbol search — to a new Rust-based parallel implementation that runs out-of-process.

In our testing, this indexer is on average 10× faster on large Python projects, which means faster completions after workspace open and a more responsive IntelliSense experience overall.

Python › Analysis: Enable Parallel Indexing

This is intentionally experimental. We want to validate the performance gains and reliability across the wide variety of project setups and environments our users have before making it the default.

To try it:

  1. Open Settings (Cmd+, / Ctrl+,)
  2. Search for “Parallel Indexing”
  3. Check Enable Parallel Indexing (Experimental) under Python › Analysis

Or add this to your settings.json:

"python.analysis.enableParallelIndexing": true

After enabling, reload VS Code (Cmd/Ctrl+Shift+P → Reload Window) to ensure the new indexer starts cleanly. This setting has the most impact on larger projects — small projects may see little difference.

We want your feedback. If you try it and notice faster completions, slower behavior, or anything unexpected, please let us know by filing an issue on the Pylance GitHub repo. Your real-world reports are what will help us get this to stable.

This is an experimental feature. If you run into issues, you can disable it at any time by unchecking the setting.

Python Environments extension

Try out these new improvements by downloading the Python extension and the Pylance extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code – March 2026 Release appeared first on Microsoft for Python Developers Blog.

April 02, 2026 12:27 AM UTC

April 01, 2026


Talk Python to Me

#543: Deep Agents: LangChain's SDK for Agents That Plan and Delegate

When you type a question into ChatGPT, the model only has what you typed to work with. But tools like Claude Code can plan, iterate, test, and recover from mistakes. They work more like we do. The difference is the agent harness: planning tools, file system access, sub-agents, and carefully crafted system prompts that turn a raw LLM into something genuinely capable.

Sydney Runkle is back on Talk Python representing LangChain and their new open source library, Deep Agents: a framework for building your own deep agents with plain Python functions, middleware hooks, and MCP support. This is how the magic works under the hood.

Episode sponsors

Sentry Error Monitoring (code talkpython26)
Temporal
Talk Python Courses

Links from the show

Guest, Sydney Runkle: github.com
Claude Code uses: x.com
Deep Research: openai.com
Manus: manus.im
Blog post announcement: blog.langchain.com
Claude's system prompt: github.com
Sub-agents: docs.anthropic.com
The quick start: docs.langchain.com
CLIs: github.com
Talk Python's CLI: talkpython.fm
Custom tools: docs.langchain.com
DeepAgents examples: github.com
Custom middleware: docs.langchain.com
Built-in middleware: docs.langchain.com
Improving Deep Agents with harness engineering: blog.langchain.com
Prebuilt middleware: docs.langchain.com

Watch this episode on YouTube: youtube.com
Episode #543 deep-dive: talkpython.fm/543
Episode transcripts: talkpython.fm

April 01, 2026 05:20 PM UTC


"Michael Kennedy's Thoughts on Technology"

Cutting Python Web App Memory Over 31%

tl;dr: I cut 3.2 GB of memory usage from our Python web apps using five techniques: async workers, import isolation, the Raw+DC database pattern, local imports for heavy libraries, and disk-based caching. Here are the exact before-and-after numbers for each optimization.


Over the past few weeks, I’ve been ruthlessly focused on reducing memory usage on my web apps, APIs, and daemons. I’ve been following the one big server pattern for deploying all the Talk Python web apps, APIs, background services, and supporting infrastructure.

There are a ridiculous number of containers running to make everything go around here at Talk Python (23 apps, APIs, and database servers in total).

Even with that many apps running, the actual server CPU load is quite low. But memory usage is creeping up. The server was running at 65% memory usage on a 16GB server. While that may be fine - the server’s not that expensive - I decided to take some time and see if there were some code level optimizations available.

What I learned was interesting and much of it was a surprise to me. So, I thought I’d share it here with you. I was able to drop the memory usage by 3.2GB basically for free just by changing some settings, changing how I import packages in Python, and proper use of offloading some caching to disk.

How much memory were the Python apps using before optimization?

For this blog post, I’m going to focus on just two applications. However, I applied this to most of the apps that we own the source code for (as opposed to Umami, etc). Take these as concrete examples more than the entire use case.

Here are the initial stats we’ll be improving on along the way.

Application Starting Memory
Talk Python Training 1,280 MB
Training Search Indexer Daemon 708 MB
Total 1,988 MB

How async workers and Quart cut Python web app memory in half

I knew that starting with a core architectural change in how we run our apps and access our database would have huge implications. You see, we’re running our web apps as a web garden, one orchestrator, multiple worker processes via the lovely Granian.

I’ve wanted to migrate our remaining web applications to a fully asynchronous application framework. See Talk Python rewritten in Quart (async Flask) for a detailed discussion on this topic. If we have a truly async-capable application server (Granian) and a truly async web framework (Quart), then we can change our deployment style to one worker running fully asynchronous code. With far less blocking code, a single worker stays responsive, so we can run just one worker instance.

This one change alone would cut the memory usage nearly in half. To facilitate this, we needed two actions:

Action 1: Rewrite Talk Python Training in Quart

The first thing I had to do was rewrite Talk Python Training, the app I was mostly focused on at the time, in Quart. This was a lot of work. You might not know it from the outside, but Talk Python Training is a significant application.

178,000 lines of code! Rewriting this from the older framework, Pyramid, to async Flask (aka Quart), was a lot of work, but I pulled it off last week.

Action 2: Rewrite data access to raw + dc design pattern

Data access was based on MongoEngine, a barely maintained, older database ODM for talking to MongoDB, which does not support async code and never will. Even though we have Quart as a runtime option, we can hardly do anything async without an async data access layer.

So I spent some time removing MongoEngine and implementing the Raw + DC design pattern. That saved us a ton of memory, facilitated writing async queries, and almost doubled our requests per second.

I actually wrote this up in isolation here with some nice graphs: Raw+DC Database Pattern: A Retrospective. Switching from a formalized ODM to raw database queries along with data classes with slots saved us 100 MB per worker process, or in this case, 200 MB of working memory. Given that it also sped up the app significantly, that’s a serious win.

Change Memory Saved Bonus
Rewrite to Quart (async Flask) Enabled single-worker mode Async capable
Raw + DC database pattern 200 MB (100 MB per worker) Almost 2x requests/sec

How switching to a single async Granian worker saved 542 MB

Now that our web app runs asynchronously and our database queries fully support it, we could trim our web garden down to a single, fully asynchronous worker process using Granian. When every request runs in blocking mode, one worker is not ideal. But now the requests all interleave using Python concurrency.
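To see why one async worker can keep up, here is a stdlib-only illustration: while one request awaits I/O, the event loop makes progress on all the others.

```python
import asyncio

async def handle_request(request_id: int) -> str:
    # The await yields control while "I/O" (e.g. a database query)
    # is pending, letting the single worker serve other requests.
    await asyncio.sleep(0.01)
    return f"response {request_id}"

async def main() -> list[str]:
    # Ten concurrent "requests" interleaved on one event loop.
    return await asyncio.gather(*(handle_request(i) for i in range(10)))

responses = asyncio.run(main())
```

All ten requests complete in roughly one sleep interval rather than ten, which is the same property that lets a single Granian worker replace a multi-process web garden for I/O-bound apps.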

This brought things down to a whopping 536 MB in total (a savings of 542 MB!). I could have stopped there, and things would have been excellent compared to where we were before, but I wanted to see what else was possible.

Metric Value
Before (multi-worker) 1,280 MB
After (single async worker and raw+dc) 536 MB
Savings 542 MB

How isolating Python imports in a subprocess cut memory from 708 MB to 22 MB

The next biggest problem was the Talk Python Training search indexer. It reads literally everything from the multi-gigabyte database backing Talk Python Training, indexes it, and stores it in a custom data structure that we use for our ultra-fast search. It was running at 708 MB in its own container.

Surely, this could be more efficient.

And boy, was it. There were two main takeaways here. I noticed first that even if no indexing ran, just at startup, this process was using almost 200 megabytes of memory. Why? Import chains.

The short version: it was importing almost all of the files of Talk Python Training and their third-party dependencies, because that was just the easiest way to write the code and because of PEP 8. When the app starts, it imports a few utilities from Talk Python Training. That, in turn, pulls in the entire mega application plus all of the dependencies that the application itself is using, bloating the memory way, way up.

All this little daemon needs to do is every few hours re-index the site. It sits there, does nothing in particular related to our app, loops around, waits for exit commands from Docker, and if enough time has elapsed, then it runs the search process with our code.

We could move all of that search indexing code into a subprocess. And only that subprocess’s code actually imports anything of significance. When the search index has to run, that process kicks off for maybe 30 seconds, builds the index, uses a bunch of memory, but once the indexing is done, it shuts down and even the imports are unloaded.
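A minimal sketch of the pattern (the real indexer is obviously more involved): the parent daemon stays tiny, and only the short-lived child process pays the import cost.

```python
import subprocess
import sys

def run_isolated(code: str) -> str:
    """Run a snippet in a fresh interpreter; its imports (and their
    memory) live and die with the child process."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

# The child can import anything heavy; the parent only sees the output.
output = run_isolated("import json; print(json.dumps({'indexed': 42}))")
```

In the daemon's case, the snippet would be something like `python -m the_indexer_module` run on a timer; the exact module name is whatever your project uses.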

The result? Amazing. The search indexer went from 708 MB to just 22 MB! All we had to do was isolate the imports into their own separate file and then run that separately using a Python subprocess. That’s it: 32x less memory used.

Metric Value
Before (monolithic process) 708 MB
After (subprocess isolation) 22 MB
Reduction 32x

How much memory do Python imports like boto3, pandas, and matplotlib use?

When we write simple code such as import boto3, it looks like no big deal. You’re just telling Python you need to use this library. But as I hinted at above, what it actually does is load that library in its entirety: any static or singleton-style data gets created, along with the library’s transitive dependencies.
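You can get a rough sense of a library's import cost yourself with only the standard library. A hedged caveat: tracemalloc only sees Python-level allocations, so memory held by C extensions (NumPy buffers, compiled protobufs, etc.) is undercounted.

```python
import importlib
import tracemalloc

def import_cost_kib(module_name: str) -> float:
    """Measure the Python-level allocations caused by importing a module."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    importlib.import_module(module_name)
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return (after - before) / 1024

# Try it on a stdlib module; substitute "boto3" or "pandas" if installed.
print(f"import decimal: {import_cost_kib('decimal'):.0f} KiB")
```

Note that a module already imported elsewhere in the process will show a near-zero cost, since Python caches modules in sys.modules.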

Unbeknownst to me, boto3 takes a ton of memory.

Import Statement Memory Cost (3.14)
import boto3 25 MB
import matplotlib 17 MB
import pandas 44 MB

Yet for our application, these are very rarely used. Maybe we need to upload a file to blob storage using boto3, or use matplotlib and pandas to generate some report that we rarely run.

By moving these to be local imports, we are able to save a ton of memory. What do I mean by that? Simply don’t follow PEP 8 here - instead of putting these at the top of your file, put them inside of the functions that use them, and they will only be imported if those functions are called.

def generate_usage_report():
    # These heavy imports only run the first time this function is called
    import matplotlib
    import pandas

    # Write code with these libs...

Now eventually, this generate_usage_report function probably will get called, and the memory will come back, but that’s where you go back to DevOps. We can simply set a time-to-live on the worker process: Granian will gracefully shut down the worker process and start a new one every six hours, once a day, or whatever you choose.

PEP 810 – Explicit lazy imports

This makes me very excited for Python 3.15. That’s where the lazy imports feature will land. That should make this behavior entirely automatic without the need to jump through hoops.

How moving Python caches to diskcache reduced memory usage

Finally, I addressed our caches. This was probably the smallest of the improvements, but still relevant. We had quite a few small to medium-sized caches being kept in memory. For example, the site repeatedly renders the same markdown fragments; instead of regenerating them every time, we would stash the rendered result and just return that from cache.

We moved most of this caching to diskcache. If you want to hear me and Vincent nerd out on how powerful this little library is, listen to the Talk Python episode diskcache: Your secret Python perf weapon.
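diskcache gives you this out of the box (an SQLite-backed cache with eviction and expiry). The underlying idea can be sketched with just the standard library; `render_markdown` here is a hypothetical stand-in for the real, expensive rendering step:

```python
import functools
import hashlib
import json
import tempfile
from pathlib import Path

CACHE_DIR = Path(tempfile.gettempdir()) / "render_cache"
CACHE_DIR.mkdir(exist_ok=True)

def disk_cached(fn):
    """Memoize fn's string results on disk instead of in process memory."""
    @functools.wraps(fn)
    def wrapper(text: str) -> str:
        key = hashlib.sha256(text.encode()).hexdigest()
        path = CACHE_DIR / f"{key}.json"
        if path.exists():
            # Cache hit: read from disk, costing no resident memory.
            return json.loads(path.read_text())
        result = fn(text)
        path.write_text(json.dumps(result))
        return result
    return wrapper

@disk_cached
def render_markdown(text: str) -> str:
    # Stand-in for the real rendering logic.
    return f"<p>{text}</p>"
```

The trade: each cache hit costs a small disk read instead of holding every rendered fragment resident in RAM for the life of the process.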

Total memory savings: from 1,988 MB to 472 MB

So where are things today after applying these optimizations?

Application Before After Savings
Talk Python Training 1,280 MB 450 MB 1.8x
Training Search Indexer Daemon 708 MB 22 MB 32x
Total 1,988 MB 472 MB 3.2x

Applying these techniques and more to all of our web apps reduced our server load by 3.2 GB of memory. Memory is often the most expensive and scarce resource in production servers. This is a huge win for us.

April 01, 2026 03:52 PM UTC


Real Python

Python Classes: The Power of Object-Oriented Programming

Python classes are blueprints for creating objects that bundle data and behavior together. Using the class keyword, you define attributes to store state and methods to implement behavior, then create as many instances as you need. Classes are the foundation of object-oriented programming (OOP) in Python and help you write organized, reusable, and maintainable code.

By the end of this tutorial, you’ll understand that:

  • A Python class is a reusable blueprint that defines object attributes and methods.
  • Instance attributes hold data unique to each object, while class attributes are shared across all instances.
  • Python classes support single and multiple inheritance, enabling code reuse through class hierarchies.
  • Abstract base classes (ABCs) define formal interfaces that subclasses must implement.
  • Classes enable polymorphism, allowing you to use different object types interchangeably through shared interfaces.

To get the most out of this tutorial, you should be familiar with Python variables, data types, and functions. Some experience with object-oriented programming (OOP) is a plus, but you’ll cover all the key concepts you need here.

Get Your Code: Click here to download your free sample code that shows you how to build powerful object blueprints with classes in Python.

Take the Quiz: Test your knowledge with our interactive “Python Classes - The Power of Object-Oriented Programming” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

Python Classes - The Power of Object-Oriented Programming

In this quiz, you'll test your understanding of Python classes, including attributes, methods, inheritance, and object-oriented programming concepts.

Getting Started With Python Classes

Python is a multiparadigm programming language that supports object-oriented programming (OOP) through classes that you can define with the class keyword. You can think of a class as a piece of code that specifies the data and behavior that represent and model a particular type of object.

What is a class in Python? A common analogy is that a class is like the blueprint for a house. You can use the blueprint to create several houses and even a complete neighborhood. Each concrete house is an object or instance that’s derived from the blueprint.

Each instance can have its own properties, such as color, owner, and interior design. These properties carry what’s commonly known as the object’s state. Instances can also have different behaviors, such as locking the doors and windows, opening the garage door, turning the lights on and off, watering the garden, and more.

In OOP, you commonly use the term attributes to refer to the properties or data associated with a specific object of a given class. In Python, attributes are variables defined inside a class with the purpose of storing all the required data for the class to work.

Similarly, you’ll use the term methods to refer to the different behaviors that objects will show. Methods are functions that you define within a class. These functions typically operate on or with the attributes of the underlying instance or class. Attributes and methods are collectively referred to as members of a class or object.

You can write classes to model the real world. These classes will help you better organize your code and solve complex programming problems.

For example, you can use classes to create objects that emulate people, animals, vehicles, books, buildings, cars, or other objects. You can also model virtual objects, such as a web server, directory tree, chatbot, file manager, and more.

Finally, you can use classes to build class hierarchies. This way, you’ll promote code reuse and remove repetition throughout your codebase.

In this tutorial, you’ll learn a lot about classes and all the cool things that you can do with them. To kick things off, you’ll start by defining your first class in Python. Then you’ll dive into other topics related to instances, attributes, and methods.

Defining a Class in Python

To define a class, you need to use the class keyword followed by the class name and a colon, just like you’d do for other compound statements in Python. Then you must define the class body, which will start at the next indentation level:

Python Syntax
class ClassName:
    <body>

In a class’s body, you can define attributes and methods as needed. As you already learned, attributes are variables that hold the class data, while methods are functions that provide behavior and typically act on the class data.

Note: In Python, the body of a given class works as a namespace where attributes and methods live. You can only access those attributes and methods through the class or its objects.

As an example of how to define attributes and methods, say that you need a Circle class to model different circles in a drawing application. Initially, your class will have a single attribute to hold the radius. It’ll also have a method to calculate the circle’s area:

Python circle.py
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def calculate_area(self):
        return math.pi * self.radius ** 2

In this code snippet, you define Circle using the class keyword. Inside the class, you write two methods. The .__init__() method has a special meaning in Python classes. This method is known as the object initializer because it defines and sets the initial values for the object’s attributes. You’ll learn more about this method in the Instance Attributes section.

The second method of Circle is conveniently named .calculate_area() and will compute the area of a specific circle by using its radius. In this example, you’ve used the math module to access the pi constant as it’s defined in that module.
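To make this concrete, here is the class again with two instances created from the same blueprint, each carrying its own .radius:

```python
import math

class Circle:
    def __init__(self, radius):
        # Instance attribute: each Circle object gets its own .radius
        self.radius = radius

    def calculate_area(self):
        return math.pi * self.radius ** 2

# Two independent instances, each with its own state
small = Circle(radius=2)
big = Circle(radius=10)

print(round(small.calculate_area(), 2))  # 12.57
print(round(big.calculate_area(), 2))    # 314.16
```

Calling .calculate_area() on each instance uses that instance's own .radius, which is exactly the data-plus-behavior bundling the tutorial describes.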

Read the full article at https://realpython.com/python-classes/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 01, 2026 02:00 PM UTC


Peter Bengtsson

pytest "import file mismatch"

Make sure your test files in a pytest tested project sit in directories with a __init__.py

April 01, 2026 01:20 PM UTC


Real Python

Quiz: Exploring Keywords in Python

In this quiz, you’ll test your understanding of Exploring Keywords in Python.

By working through this quiz, you’ll revisit how to identify Python keywords, understand the difference between regular and soft keywords, categorize keywords by purpose, and avoid common pitfalls with deprecated keywords.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 01, 2026 12:00 PM UTC


Tryton News

Tryton News April 2026

During the last month we focused on fixing bugs, improving the behaviour of things, speeding-up performance issues - building on the changes from our last release. We also added some new features which we would like to introduce to you in this newsletter.

For an in depth overview of the Tryton issues please take a look at our issue tracker or see the issues and merge requests filtered by label.

Changes for the User

Sales, Purchases and Projects

Now we add the support for the pick-up delivery method from Shopify.

We now add a number field to the project work efforts, based on a default sequence.

Now we use the party lang functionality to set the language for the taxes and chats.

We now add support for gift card products on Shopify.

We now add a field that indicates if an invoice is sent via Peppol to help on filtering.

Now we unpublish products from the web-shops when they are deactivated.

We now allow the Peppol admin access group to create incoming documents.

Now we improve the address schema for web-shop usage and add attn to Party Addresses.

We now allow to copy the resources from sale rental to its invoices.

Now we move the Purchase Lines in its own tab on the Production form.

We now allow the manual method on sale.

Now the sale is always confirmed when a Shopify order has payment terms.

Accounting, Invoicing and Payments

Now we set the related-to field of an account statement line to invoice when the statement line overpays an invoice.

We now filter the MIME-type of the additional documents in UBL to the ones which are allowed by the specification.

Now we forbid a zero amount in the draft and processing states for manual payments.

We now accrue and allocate rounding errors when using multiple taxes to calculate correct total amounts.

Now we allow cash-rounding with the opposite method.

We now add UNECE codes to Belgian taxes.

Now we add the VAT exemption code on tax for electronic invoices in the EU.

We now add payment means to invoices to be used in UBL and UNCEFACT.

Now we add an optional account to the list of payable/receivable lines to be able to reconcile lines with the same accounts.

Stock, Production and Shipments

Now we implemented the management of ethanol in stock.

We now set the default locations when adding manually stock moves to production.

Now we add a reference field on quality inspections.

We now add a wizard to pack shipments.

Now we add the Stock Cancellation access group that allows to cancel shipments and productions in the states running or done.

User Interface

As a first step to add labels inside the widgets, we now add a generic management for the label style in form widgets.

Now we strip the user field from white-space characters in the login window.

We now add a new right drop-down menu in SAO, including Logout and Help menu items.

Now we add a visual hint on widgets of modified fields.

New Modules

We now support the common European requirements for excise products.

Now we add the sale_project_task module which creates tasks for services sold.

Now we add the account_payment_check module which manages and prints checks.

New Releases

We released bug fixes for the currently maintained long term support series 7.0 and 6.0, and for the penultimate series 7.8 and 7.6.

Security

Please update your systems to take care of a security related bug we found last month.

Changes for the System Administrator

We now use cookies to store authentication information in Sao.

Now we order the cron logs by descending ID to show the last cron-runs first.

Changes for Implementers and Developers

Now we allow to specify subdirectories in the tryton.cfg file and include them when activating a module.

We now replace the ShopifyAPI library by shopifyapp.

Now we add a contextual _log key to force the logging of events.

We now add notify_user to ModelStorage to store them in an efficient way at the end of the transaction.

Now we add a deprecation warning when a new API version for Stripe is available.

We now show missing modules when running tests.

Now we remove the obsolete methods dump_values and load_values in ir.ModelData.

We now also check the button states when testing access.

Now we introduced the option model.fields.Binary.queue_for_removal which makes it possible to remove a file from the filestore on setting a binary field to None.

We now make the retrieval of metadata via documentation build very quiet.

Now we remove the default import of wizards in our cookiecutter module template.

Now we upgrade to Psycopg 3.

We now upgrade our setup to pyproject using hatchling as build-system.

Now we replace C3.js by its fork billboard.js as it seems better maintained.

We now remove the Sao dependency to bower.

Now we remove the internal name from the record name of the trytond.model.fields.

We now implement a recursive search for unused XML views in our tests.

Changes for Translators

Now the translation mechanism in Tryton does find the translations of type view.

2 posts - 2 participants

Read full topic

April 01, 2026 06:00 AM UTC


Python⇒Speed

Timesliced reservoir sampling: a new(?) algorithm for profilers

Imagine you are processing a stream of events, of unknown length. It could end in 3 seconds, it could run for 3 months; you simply don’t know. As a result, storing the whole stream in memory or even on disk is not acceptable, but you still need to extract relevant information.

Depending on what information you need, choosing a random sample of the stream will give you almost as good information as storing all the data. For example, consider a performance profiler, used to find which parts of your running code are slowest. Many profilers record a program’s callstack every few microseconds, resulting in a stream of unlimited size: you don’t know how long the program will run. For this use case, a random sample of callstacks, say 2000 of them, can usually give you sufficient information to do performance optimization.
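A fixed-size uniform sample like this is typically maintained with reservoir sampling; here is a minimal sketch of the classic variant (Algorithm R), not the article's timesliced algorithm:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Keep a uniform random sample of k items from a stream of
    unknown length, using only O(k) memory (Algorithm R)."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            sample.append(item)
        else:
            # Keep item i with probability k / (i + 1).
            j = rng.randrange(i + 1)
            if j < k:
                sample[j] = item
    return sample
```

Every item seen so far ends up in the sample with equal probability, no matter when the stream stops, which is why a profiler can use it on a stream of callstacks of unknown duration.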

Why does this work?

When you need to extract a random sample from a stream of unknown length, a common solution is the family of algorithms known as reservoir sampling. In this article you will learn:

Read more...

April 01, 2026 12:00 AM UTC

March 31, 2026


PyCoder’s Weekly

Issue #728: Django With Alpine, Friendly Classes, SQLAlchemy, and More (March 31, 2026)

#728 – MARCH 31, 2026
View in Browser »

The PyCoder’s Weekly Logo


Django Apps With Alpine AJAX, Revisited

The author has been modifying his approach to Django projects with Alpine AJAX over the last nine months. This post describes what he’s changed and how his process has improved.
LOOPWERK

Making Friendly Classes

What’s a friendly class? One that accepts sensible arguments, has a nice string representation, and supports equality checks. Read on to learn how to write them.
TREY HUNNER

Right-Size Your Celery & RQ Workers

alt

CPU doesn’t tell you if tasks are piling up. Queue latency does. Autoscale your workers based on the metric that matters →
JUDOSCALE sponsor

Understanding CRUD Operations in SQL

Learn how CRUD operations work in SQL by writing raw SQL queries with SQLite and using SQLAlchemy as an ORM in Python.
REAL PYTHON course

Starlette 1.0 Released

MARCELOTRYLE.COM

PyCon Austria April 19-20, Registrations Open

PYCON.AT ‱ Shared by Horst JENS

PyOhio 2026 Call for Proposals Now Open!

PRETALX.COM ‱ Shared by Anurag Saxena

Articles & Tutorials

When Vectorized Arrays Aren’t Enough

This is a deep dive post about vectorized arrays in NumPy and how some optimizations work and some do not. There is also a follow-up: Vectorized Hardware Instructions Rule Everything Around Me.
NRPOSNER

Zensical: A Modern Static Site Generator

Talk Python interviews Martin Donath, a contributor to MkDocs and recent creator of the new Zensical package. They talk about why he has built something new and what lessons he’s applied to the new project.
TALK PYTHON podcast

Dignified Python: 10 Rules to Improve your LLM Agents

alt

At Dagster, we created “Dignified Python” to improve LLM-generated code by embedding clear coding principles into prompts. Instead of messy, pattern-based output, our agents produce code that reflects intent, consistency, and team standards. Here are the 10 rules from our Claude prompt →
DAGSTER LABS sponsor

Smello for HTTP Requests

Roman built Smello, an open-source tool that captures outgoing HTTP requests from your Python code and displays them in a local web dashboard. Learn why he did it and how he uses it to debug API access.
ROMAN IMANKULOV

Gotchas With SQLite in Production

What you need to know before putting a Django project that uses SQLite in production. This is part 5 of a series that includes information on write-ahead logging, locking errors, performance, and more.
ANĆœE

Comparing Portable DataFrame Tools in Python

This article explores three tools for DataFrame portability in Python: Ibis, Narwhals, and Fugue. Learn when to use each to write code that runs across multiple backends.
CODECUT.AI ‱ Shared by Khuyen Tran

Lessons From Pyre That Shaped Pyrefly

Pyrefly is a Python type checker from the same team that developed pyre. This article discusses lessons from developing Pyre that influenced how they designed Pyrefly.
PYREFLY

Connecting MongoDB to Python

This tutorial is a hands-on introduction to connecting MongoDB with Python using PyMongo, guiding readers through the essential first steps in just 10 minutes.
ANAIYA RAISINGHANI ‱ Shared by Tony Kim

How Do Large Companies Manage CI/CD at Scale?

What changes for CI/CD when your company grows to hundreds of developers, dozens of services, and thousands of daily builds?
PETE MILORAVAC

Apply to Join the PSF Meetup Pro Network

The PSF helps support approved Python Meetup groups and the process to become one has recently been re-opened.
PYTHON SOFTWARE FOUNDATION

Inspect a Lazy Import in Python 3.15

This quick “things I learned” post shows you how to inspect a lazy import object in Python 3.15.
MATHSPP.COM

Projects & Code

syrupy: The Sweeter pytest Snapshot Plugin

GITHUB.COM/SYRUPY-PROJECT

pendulum: Python Datetimes Made Easy

GITHUB.COM/PYTHON-PENDULUM

validatedata: An Easier Way to Validate Data in Python

GITHUB.COM/EDWARD-K1

awesome-marimo: Curated List of Awesome Marimo Things

GITHUB.COM/MARIMO-TEAM

dj-urls-panel: Visualize URL Routes in the Django Admin

GITHUB.COM/YASSI

Events

Weekly Real Python Office Hours Q&A (Virtual)

April 1, 2026
REALPYTHON.COM

Canberra Python Meetup

April 2, 2026
MEETUP.COM

Sydney Python User Group (SyPy)

April 2, 2026
SYPY.ORG

Python Leiden User Group

April 2, 2026
PYTHONLEIDEN.NL

PyDelhi User Group Meetup

April 4, 2026
MEETUP.COM

Melbourne Python Users Group, Australia

April 6, 2026
J.MP

PyBodensee Monthly Meetup

April 6, 2026
PYBODENSEE.COM

PyCon Lithuania 2026

April 8 to April 11, 2026
PYCON.LT


Happy Pythoning!
This was PyCoder’s Weekly Issue #728.
View in Browser »

alt

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

March 31, 2026 07:30 PM UTC


Real Python

Adding Python to PATH

You may need to add Python to PATH if you’ve installed Python, but typing python on the command line doesn’t seem to work. You might see a message saying that python isn’t recognized, or you might end up running the wrong version of Python.

A common fix for these problems is adding Python to the PATH environment variable. In this video course, you’ll learn how to add Python to PATH. You’ll also learn what PATH is and why it’s essential for tools like the command line to be able to find your Python installation.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

March 31, 2026 02:00 PM UTC

Quiz: Test-Driven Development With pytest

In this quiz, you’ll test your understanding of Test-Driven Development With pytest.

By working through this quiz, you’ll revisit creating and executing Python unit tests with pytest, practicing test-driven development, finding bugs before users, and checking code coverage.

Use this quiz to confirm what you learned and spot gaps to review. Return to the video course for hands-on examples and guidance.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

March 31, 2026 12:00 PM UTC


PyCon

Introducing the 8 Companies on Startup Row at PyCon US 2026

Each year at PyCon US, Startup Row highlights a select group of early-stage companies building ambitious products with Python at their core. The 2026 cohort reflects a rapidly evolving landscape, where advances in AI, data infrastructure, and developer tooling are reshaping how software is built, deployed, and secured.

This year’s companies aim to solve an evolving set of problems facing independent developers and large-scale organizations alike: securing AI-driven applications, managing multimodal data, orchestrating autonomous agents, automating complex workflows, and extracting insight from increasingly unstructured information. Across these domains, Python continues to serve as a unifying layer: encouraging experimentation, enabling systems built to scale, and connecting open-source innovation with real-world impact.

Startup Row brings these emerging teams into direct conversation with the Python community at PyCon US. Throughout the conference, attendees can meet founders, explore new tools, and see firsthand how these companies are applying Python to solve meaningful problems. For the startups in attendance, it’s an opportunity to share their work, connect with users and collaborators, and contribute back to the ecosystem that helped shape them. Register now to experience Startup Row and much more at PyCon US 2026.

Supporting Startups at PyCon US

There are many ways to support Startup Row companies, during PyCon US and long after the conference wraps:
  • Stop by Startup Row: Spend a few minutes with each team, ask what they’re building, and see their products in action. 
  • Try their tools: Whether it’s an open-source library or a hosted service, hands-on usage (alongside constructive feedback) is one of the most valuable forms of support. If a startup seems compelling, consider a pilot project and become a design partner.
  • Share feedback: Early-stage teams benefit enormously from thoughtful questions, real-world use cases, and honest perspectives from the community.
  ‱ Contribute to their open source projects: Many Startup Row companies are deeply rooted in open source and welcome bug reports, documentation improvements, and pull requests. Contributions and constructive feedback are always appreciated.
  • Help spread the word: If you find something interesting, tell a friend, post about it, or share it with your team. (And if you're posting to social media, consider using tags like #PyConUS and #StartupRow to share the love.)
  • Explore opportunities to work together: Many of these companies are hiring, looking for design partners, or open to collaborations; don’t hesitate to ask.
  • But, most importantly, be supportive. Building a startup is hard, and every team is learning in real time. Curiosity, patience, and encouragement make a meaningful difference. 
Without further ado, let's...

Meet Startup Row at PyCon US 2026

We’re excited to introduce the companies selected for Startup Row at PyCon US 2026.

Arcjet

Embedding security directly into application code is fast becoming as indispensable as logging, especially as AI services open new attack surfaces. Arcjet offers a developer‑first platform that lets teams add bot detection, rate limiting and data‑privacy checks right where the request is processed.

The service ships open‑source JavaScript and Python SDKs that run a WebAssembly module locally before calling Arcjet’s low‑latency decision API, ensuring full application context informs every security verdict. Both SDKs are released under a permissive open‑source license, letting developers integrate the primitives without vendor lock‑in while scaling usage through Arcjet’s SaaS tiered pricing.

The JavaScript SDK alone has earned ≈1.7k GitHub stars, and the combined libraries have attracted over 1,000 developers protecting more than 500 production applications. Arcjet offers a free tier and usage‑based paid plans, mirroring Cloudflare’s model to serve startups and enterprises alike.

Arcjet is rolling out additional security tools and deepening integrations with popular frameworks such as FastAPI and Flask, aiming to broaden adoption across AI‑enabled services. In short, Arcjet aims to be the security‑as‑code layer every modern app ships with.

CapiscIO

As multi‑agent AI systems become the backbone of emerging digital workflows, developers lack a reliable way to verify agent identities and enforce governance. CapiscIO steps into that gap, offering an open‑core trust layer built for the nascent agent economy.

CapiscIO offers cryptographic Trust Badges, policy enforcement, and tamper‑evident chain‑of‑custody wrapped in a Python SDK. Released under Apache 2.0, it ships a CLI, LangChain integration, and an MCP SDK that let agents prove identity without overhauling existing infrastructure.

The capiscio‑core repository on GitHub hosts the open‑source core and SDKs under Apache 2.0, drawing early contributors building agentic pipelines.

Beon de Nood, Founder & CEO, brings two decades of enterprise development experience and a prior successful startup to the table. “AI governance should be practical, not bureaucratic. Organizations need visibility into what they have, confidence in what they deploy, and control over how agents behave in production,” he says.

CapiscIO is continuously adding new extensions, expanding its LangChain and MCP SDKs, and preparing a managed agent‑identity registry for enterprises. In short, CapiscIO aims to be the passport office of the agent economy, handing each autonomous component an unspoofable ID and clear permissions.

Chonkie

The explosion of retrieval‑augmented generation (RAG) is unlocking AI’s ability to reason over ever‑larger knowledge bases. Yet the first step of splitting massive texts into meaningful pieces still lags behind.

Chonkie offers an open‑core suite centered on Memchunk, a Python library with Cython acceleration that delivers up to 160 GB/s throughput and ten chunking strategies under a permissive license. It also ships Catsu, a unified embeddings client for nine providers, and a lightweight ingestion layer; the commercial Chonkie Labs service combines them into a SaaS that monitors the web and synthesizes insights.

Co‑founder and CEO Shreyash Nigam, who grew up in India and met his business partner in eighth grade, reflects the team’s open‑source ethos, saying “It’s fun to put a project on GitHub and see a community of developers crowd around it.” That enthusiasm underpins Chonkie’s decision to release its core tooling openly while building a commercial deep‑research service.

Backed by Y Combinator’s Summer 2025 batch, Chonkie plans to grow from four to six engineers and launch the next version of Chonkie Labs later this year, adding real‑time web crawling and multi‑modal summarization. In short, Chonkie aims to be the Google of corporate intelligence.

Pixeltable

Multimodal generative AI is turning simple datasets into sprawling collections of video, images, audio and text, forcing engineers to stitch together ad‑hoc pipelines just to keep data flowing. That complexity has created a new bottleneck for teams trying to move from prototype to production.

The open‑source Python library from Pixeltable offers a declarative table API that lets developers store, query and version multimodal assets side by side while embedding custom Python functions. Built with incremental update capabilities, combined lineage and schema tracking, and a development‑to‑production mirror, the platform also provides orchestration capabilities that keep pipelines reproducible without rewriting code.

The project has earned ≈1.6k GitHub stars and a growing contributor base, closed a $5.5 million seed round in December 2024, and is already used by early adopters such as Obvio and Variata to streamline computer‑vision workflows.

Co‑founder and CTO Marcel Kornacker, who previously founded Apache Impala and co-founded Apache Parquet, says “Just as relational databases revolutionized web development, Pixeltable is transforming AI application development.”

The company's roadmap centers on launching Pixeltable Cloud, a serverless managed service that will extend the open core with collaborative editing, auto‑scaling storage and built‑in monitoring. In short, Pixeltable aims to be the relational database of multimodal AI data.

Skyvern

Manual browser work remains a hidden bottleneck for many teams, turning simple data‑entry tasks into fragile scripts that break on the slightest UI change. Skyvern’s open‑source agent is one of the tools reshaping how developers and non‑technical users automate the web.

The Skyvern library lets anyone build a no‑code browser agent that combines computer‑vision models with a large language model to see, plan, act, and validate each step of a web workflow. Its planner–actor–validator loop compiles successful runs into deterministic code, while the free open‑source core can be run locally or via Skyvern Cloud on a per‑automation pricing model.
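The planner–actor–validator loop described above is a general agent-control pattern. The sketch below is a toy illustration of that control flow, not Skyvern’s actual API; every name in it is invented for illustration:

```python
def run_agent(plan, act, validate, max_steps=10):
    """Generic planner-actor-validator loop: plan a step, execute it,
    validate the result, and record successful steps as a replayable trace."""
    trace = []
    for _ in range(max_steps):
        step = plan(trace)            # planner: decide the next action
        if step is None:              # planner signals the goal is reached
            break
        result = act(step)            # actor: perform it (e.g. in a browser)
        if validate(step, result):    # validator: confirm it worked
            trace.append(step)        # successful steps become the "script"
    return trace


# Toy workflow: visit three pages in order.
pages = ["login", "fill_form", "submit"]
plan = lambda trace: pages[len(trace)] if len(trace) < len(pages) else None
act = lambda step: f"done:{step}"
validate = lambda step, result: result == f"done:{step}"
print(run_agent(plan, act, validate))  # → ['login', 'fill_form', 'submit']
```

In Skyvern's case, the trace of validated steps is what gets compiled into deterministic code for later reruns.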

The GitHub repository has attracted ≈20k stars, drawing an active community of contributors who extend the framework and share evaluation datasets. The company monetizes through Skyvern Cloud, letting teams run agents without managing infrastructure.

Skyvern is preparing a release that tightens vision‑model integration, adds support for additional LLM providers, and launches a self‑serve dashboard aimed at non‑technical teams. In short, Skyvern aspires to be the Django of browser‑automation, pairing developer friendliness with production reliability.

SubImage

The sheer complexity of modern multi‑cloud environments turns security visibility into a labyrinth, and SubImage offers a graph‑first view that cuts through the noise.

It builds an infrastructure graph using the open‑source Cartography library (Apache‑2.0, Python), then highlights exploit chains as attack paths and applies AI models to prioritize findings based on ownership and contextual risk.

Cartography, originally developed at Lyft and now a Cloud Native Computing Foundation Sandbox project, has ≈3.7k GitHub stars and is used by over 70 organizations. SubImage’s managed service already protects security teams at Veriff and Neo4j, and the company closed a $4.2 million seed round in November 2025.

Co‑founder Alex Chantavy, an offensive‑security engineer, says “The most important tool was our internal cloud knowledge graph because it showed us a map of the easiest attack paths. One of the most effective ways to defend an environment is to see it the same way an attacker would.”

The startup is focusing on scaling its managed service and deepening AI integration as it targets larger enterprise customers. In short, SubImage aims to be the map of the cloud for defenders.

Tetrix

Private‑market data pipelines still rely on manual downloads and spreadsheet gymnastics, leaving analysts chasing yesterday’s numbers. Tetrix’s AI investment intelligence platform is part of a wave that brings automation to this lagging workflow.

Built primarily in Python, Tetrix automates document collection from fund portals and other sources, extracts structured data from PDFs and other unstructured sources using tool-using language models, then presents exposures, cash flows, and benchmarks through an interactive dashboard that also accepts natural‑language queries.

The company is growing quickly, doubling revenue quarter over quarter and, at least so far, maintains an impressive record of zero customer churn. In the coming year or so, Tetrix plans to triple its headcount from fifteen to forty‑five employees.

TimeCopilot

Time‑series forecasting has long been a tangled mix of scripts, dashboards, and domain expertise, and the recent surge in autonomous agents is finally giving it a unified voice. Enter TimeCopilot, an open‑source framework that brings agentic reasoning to the heart of forecasting.

The platform, built in Python under a permissive open‑source license, lets users request forecasts in plain English. It automatically orchestrates more than thirty models from seven families, including Chronos and TimesFM, while weaving large language model reasoning into each prediction. Its declarative API was born from co‑founder Azul Garza‑Ramírez’s economics background and her earlier work on TimeGPT for Nixtla (featured SR'23), evolving from a weekend experiment started nearly seven years ago.

The TimeCopilot/timecopilot repository has amassed roughly 420 stars on GitHub, with the release of OpenClaw marking a notable spike in community interest.

Upcoming plans include a managed SaaS offering with enterprise‑grade scaling and support, the rollout of a benchmarking suite to measure agentic forecast quality, and targeted use cases such as predicting cloud‑compute expenses for AI workloads.

Thank You's and Acknowledgements

Startup Row is a volunteer-driven program, co-led by Jason D. Rowley and Shea Tate-Di Donna (SR'15; Zana, acquired Startups.com), in collaboration with the PyCon US organizing team. Thanks to everyone who makes PyCon US possible.

We also extend a gracious thank-you to all startup founders who submitted applications to Startup Row at PyCon US this year. Thanks again for taking the time to share what you're building. We hope to help out in whatever way we can.

Good luck to everyone, and see you in Long Beach, CA!

March 31, 2026 09:00 AM UTC

March 30, 2026


"Michael Kennedy's Thoughts on Technology"

Raw+DC Database Pattern: A Retrospective

TL;DR: After migrating three production Python web apps from MongoEngine to the Raw+DC database pattern, I measured nearly 2x the requests per second, 18% less memory, and gained native async support. Raw+DC delivered real-world performance gains, not just synthetic benchmarks.


About a month ago, I wrote about a new design pattern I’m seeing gain traction in the software space: Raw+DC: The ORM pattern of 2026. This article generated a lot of interest and a lot of debate. The short version: instead of using an ORM or ODM, you write raw database queries paired with Python dataclasses for type safety. This gives AI coding assistants a much larger training base to work from, reduces dependency risk, and delivers comparable or better performance.
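The pattern itself fits in a few lines. Here is a minimal sketch using the standard-library sqlite3 module; note that the apps discussed in this post used MongoDB, and the table and field names below are invented purely for illustration:

```python
import sqlite3
from dataclasses import dataclass


@dataclass
class Episode:
    """Typed view over a raw query result (illustrative schema)."""
    id: int
    title: str
    downloads: int


def top_episodes(conn: sqlite3.Connection, limit: int) -> list[Episode]:
    # Raw SQL instead of an ORM query builder...
    rows = conn.execute(
        "SELECT id, title, downloads FROM episodes "
        "ORDER BY downloads DESC LIMIT ?",
        (limit,),
    ).fetchall()
    # ...paired with dataclasses for type safety on the way out.
    return [Episode(*row) for row in rows]


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE episodes (id INTEGER, title TEXT, downloads INTEGER)")
conn.executemany(
    "INSERT INTO episodes VALUES (?, ?, ?)",
    [(1, "Intro", 100), (2, "Async Python", 250)],
)
print(top_episodes(conn, 1)[0].title)  # → Async Python
```

The query text is plain SQL (or a plain MongoDB filter document in the ODM case), so an AI assistant sees the same syntax it was trained on everywhere, and the dataclass gives the editor and type checker something to hold on to.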

Putting Raw+DC into practice

Now that some time has passed and I’ve thought about it more, I’ve had a chance to migrate three of my most important web apps to Raw+DC: Talk Python the podcast, Talk Python Courses, and Python Bytes.

So how did it go? From a pure functionality perspective, it went great. There were maybe one to three problems per web app. That might not sound great, and I didn’t love it, but with thousands and thousands of lines of code per app, it’s a tiny percentage of issues compared with how much went right.

More importantly, I was able to remove a dependency on two faltering database libraries. MongoEngine, the one I’ll pull numbers from for Talk Python Training below, has not had a meaningful release in years. It was one of the two core blockers that prevented me from using async programming patterns on the website at all.

How much faster is Raw+DC than MongoEngine?

I said I imagined we would save on memory and CPU costs, but did it actually pan out in a practical application? After all, synthetic benchmarks showed Robyn, the web framework, to be 25 times faster than Flask, yet in practice the two were almost a dead-even heat.

I’m thrilled to report that yes, the web app is much faster using Raw+DC.

Below is an apples-to-apples comparison for Talk Python Training using MongoEngine and the Raw+DC pattern.

Metric        | MongoEngine (ODM) | Raw+DC      | Improvement
Requests/sec  | baseline          | ~1.75x      | 1.75x faster
Response time | baseline          | ~50% less   | ~50% faster
Memory usage  | baseline          | 200 MB less | 18% less

[Figure: Raw+DC vs ODM/ORM requests per second]

The memory story is really great as well. After letting the web app run for over 24 hours for each mode, we saw a 200 MB memory usage decrease using Raw+DC.

[Figure: Raw+DC vs ODM/ORM memory usage]

That amount of memory might still look high to you. This Raw+DC transformation actually facilitates future work that will cut it roughly in half again, down to about 500 MB for the full app running in production at equilibrium.

Is Raw+DC worth migrating to?

To me, this seems 100% worth it. I’ve gained four important things with Raw+DC.

  1. 1.75x the requests per second on the exact same hardware and codebase (apart from the data-layer swap)
  2. 18% less memory usage with much more savings on the horizon
  3. New data layer natively supports async/await
  4. Removal of problematic, core data access library

All of these benefits, and none of that even touches on whether this new programming model is better for AI (it is).

March 30, 2026 03:31 PM UTC