
Planet Python

Last update: April 03, 2026 01:43 AM UTC

April 02, 2026


death and gravity

reader 3.22 released – new web app

April 02, 2026 04:44 PM UTC


Mike Driscoll

Python Pop Quiz – Number Explosion

You will sometimes come across examples of code that use one or two asterisks. Depending on how the asterisks are used, they can mean different things to Python. Check your understanding of what a single asterisk means in the following quiz! The Quiz What will be the output if you run this code? numbers = […]
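The quiz's own snippet is truncated above, but as a quick refresher on what a single asterisk can mean (my example, not the quiz's):

```python
def total(*args):
    """A single asterisk in a def collects positional arguments into a tuple."""
    return sum(args)

numbers = [1, 2, 3]
# A single asterisk at the call site unpacks an iterable into arguments.
print(total(*numbers))  # prints 6
```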

The post Python Pop Quiz – Number Explosion appeared first on Mouse Vs Python.

April 02, 2026 12:29 PM UTC


Real Python

Quiz: Python's Counter: The Pythonic Way to Count Objects

Test your understanding of Python's Counter class from the collections module, including construction, counting, and multiset operations.

April 02, 2026 12:00 PM UTC


Graham Dumpleton

Free Python decorator workshops

I've been working on a set of interactive workshops on Python decorators and they are now available for free on the labs page of this site. There are 22 workshops in total, covering everything from the fundamentals of how decorators work through to advanced topics like the descriptor protocol, async decorators and metaclasses. The workshops are hosted on the Educates training platform and accessed through the browser, so there is nothing to install.

An experiment in learning

We are well into the age of AI at this point. Need to know how to write a decorator in Python? Just ask ChatGPT or Claude and you will get an answer in seconds. Want to refactor some code to use decorators? Let an AI agent do it for you. The tools are genuinely impressive and I use them myself every day.

That said, as someone who spent years as a developer advocate helping people learn, the question that keeps coming up is whether there is still an appetite for actually learning how things work. Not just getting an answer, but understanding why the answer is what it is. Understanding the mechanics well enough that when the AI gives you something subtly wrong (and it will), you can spot it and fix it yourself.

These workshops are my experiment in finding out. If people are still interested in sitting down with a guided, hands-on environment and working through a topic step by step, then this format has a future. If not, it will at least help me work out whether developer advocacy even matters anymore, and whether the years I have put into building the Educates platform have been worth the effort.

What the workshops cover

The Python Decorators course starts from the ground up. The early workshops cover functions as first-class objects, closures, and the basic mechanics of how decorators work. From there the core path builds through decorator arguments, functools.wraps, stacking decorators, and class-based decorators.

Branching off from the core path are a set of elective workshops covering practical applications: input validation, caching and memoisation, access control, registration and plugin patterns, exception handling, deprecation warnings, and profiling. These are the kinds of things you would actually use decorators for in real projects.

The later workshops go deeper into territory that most tutorials skip over entirely. The descriptor protocol, how decorators interact with classes and inheritance, async decorators, class decoration, and metaclasses.

The aim here is to go well beyond the basics. There are plenty of introductory decorator tutorials out there already. What I wanted to create was something for people who want to genuinely broaden their Python knowledge and understand how the language works at a deeper level. If you have not completely given in to letting AI do all the thinking for you and still want to build real expertise, these workshops are for you.

If you have never experienced interactive online workshops like this before, I would recommend doing the Educates Walkthrough workshop first. It will get you familiar with how the platform works, how to navigate the workshop instructions, and how to use the integrated terminal and editor, before you jump into the Python content.

Keeping the workshops running

The system hosting the workshops has limited resources and capacity. I am running this on modest infrastructure and I am honestly not sure how long I will be able to keep it going. It depends in part on how much interest there is, and in part on whether I can sustain any hosting costs.

The system is also engineered to only let a limited number of people in at a time, so if you get a message about being at capacity, try again later. I will be monitoring how things hold up and if possible will increase the limits.

If you try the workshops and find them useful, it would mean a lot if you considered helping out through my GitHub Sponsors page. Even small contributions add up and would help keep the workshops available. If there is enough interest and support, I could move to a more capable hosting environment and handle more concurrent users. Right now capacity is limited and I am just seeing how things go.

What comes next

If things go well and there is demand for more, I have plans for additional workshop courses. The natural successor to the Python decorators course would be one built around my wrapt library, covering both its decorator utilities and its monkey patching features. After that I would look at WSGI and Python web hosting, which is another area where I have deep experience from years of working on mod_wsgi.

The common thread is topics that go into real depth. There is no shortage of surface-level content on the internet (and AI can generate more of it on demand). What I think is harder to find is material that takes a topic seriously, works through the edge cases, and gives you a genuine understanding of what is happening under the hood. That is what I am trying to provide.

A note on how these were made

I should be transparent about this: the workshops were created with the assistance of AI. I wrote about the process in some detail back in February, covering the approach to using AI for content, how I taught an AI about the Educates platform, and how I used AI to review the workshops.

I realise there is a certain irony in using AI to create workshops that are partly motivated by the belief that people should still learn things themselves. But I see a difference between using AI as a tool to help produce quality educational content and blindly publishing whatever an AI generates. Every workshop has been reviewed, tested, and refined based on my own knowledge and experience with Python decorators going back over a decade. Hopefully the result is something genuinely useful rather than AI slop. And hopefully the use of the Educates platform itself contributes to the experience, letting people focus on actually learning rather than spending time trying to get their own computer environment set up correctly. I hope people will judge the workshops on their quality rather than dismissing them because AI was involved in making them.

Head over to the labs page to get started. I would love to hear what you think.

April 02, 2026 04:38 AM UTC


Python Engineering at Microsoft

Python in Visual Studio Code – March 2026 Release

The March 2026 release of the Python and Jupyter extensions for Visual Studio Code is now available. Keep on reading to learn more!

The post Python in Visual Studio Code – March 2026 Release appeared first on Microsoft for Python Developers Blog.

April 02, 2026 12:27 AM UTC

April 01, 2026


Talk Python to Me

#543: Deep Agents: LangChain's SDK for Agents That Plan and Delegate

When you type a question into ChatGPT, the model only has what you typed to work with. But tools like Claude Code can plan, iterate, test, and recover from mistakes. They work more like we do. The difference is the agent harness: Planning tools, file system access, sub-agents, and carefully crafted system prompts that turn a raw LLM into something genuinely capable. Sydney Runkle is back on Talk Python representing LangChain and their new open source library, Deep Agents: A framework for building your own deep agents with plain Python functions, middleware hooks, and MCP support. This is how the magic works under the hood.

April 01, 2026 05:20 PM UTC


"Michael Kennedy's Thoughts on Technology"

Cutting Python Web App Memory Over 31%

tl;dr; I cut 3.2 GB of memory usage from our Python web apps using five techniques: async workers, import isolation, the Raw+DC database pattern, local imports for heavy libraries, and disk-based caching. Here are the exact before-and-after numbers for each optimization.


Over the past few weeks, I’ve been ruthlessly focused on reducing memory usage on my web apps, APIs, and daemons. I’ve been following the one big server pattern for deploying all the Talk Python web apps, APIs, background services, and supporting infrastructure.

There are a ridiculous number of containers running to make everything go around here at Talk Python (23 apps, APIs, and database servers in total).

Even with that many apps running, the actual server CPU load is quite low. But memory usage is creeping up. The server was running at 65% memory usage on a 16GB server. While that may be fine - the server’s not that expensive - I decided to take some time and see if there were some code level optimizations available.

What I learned was interesting and much of it surprised me. So, I thought I’d share it here with you. I was able to drop the memory usage by 3.2GB basically for free, just by changing some settings, changing how I import packages in Python, and offloading some caching to disk.

How much memory were the Python apps using before optimization?

For this blog post, I’m going to focus on just two applications. However, I applied this to most of the apps that we own the source code for (as opposed to Umami, etc). Take these as concrete examples more than the entire use case.

Here are the initial stats we’ll be improving on along the way.

Application Starting Memory
Talk Python Training 1,280 MB
Training Search Indexer Daemon 708 MB
Total 1,988 MB

How async workers and Quart cut Python web app memory in half

I knew that starting with a core architectural change in how we run our apps and access our database would have huge implications. You see, we’re running our web apps as a web garden: one orchestrator, multiple worker processes, via the lovely Granian.

I’ve wanted to migrate our remaining web applications to a fully asynchronous application framework. See Talk Python rewritten in Quart (async Flask) for a detailed discussion on this topic. If we have a truly async-capable application server (Granian) and a truly async web framework (Quart), then we can change our deployment style to one worker running fully asynchronous code. With far less blocking code, a single worker stays responsive, so a single worker instance is all we need.

This one change alone would cut the memory usage nearly in half. To facilitate this, we needed two actions:

Action 1: Rewrite Talk Python Training in Quart

The first thing I had to do was rewrite Talk Python Training, the app I was mostly focused on at the time, in Quart. This was a lot of work. You might not know it from the outside, but Talk Python Training is a significant application.

178,000 lines of code! Rewriting this from the older framework, Pyramid, to async Flask (aka Quart), was a lot of work, but I pulled it off last week.

Action 2: Rewrite data access to raw + dc design pattern

Data access was based on MongoEngine, a barely maintained older ODM for talking to MongoDB, which does not support async code and never will. Even with Quart as a runtime option, we could hardly do anything async without an async data access layer.

So I spent some time removing MongoEngine and implementing the Raw + DC design pattern. That saved us a ton of memory, facilitated writing async queries, and almost doubled our requests per second.

I actually wrote this up in isolation here with some nice graphs: Raw+DC Database Pattern: A Retrospective. Switching from a formalized ODM to raw database queries along with data classes with slots saved us 100 MB per worker process, or in this case, 200 MB of working memory. Given that it also sped up the app significantly, that’s a serious win.

Change Memory Saved Bonus
Rewrite to Quart (async Flask) Enabled single-worker mode Async capable
Raw + DC database pattern 200 MB (100 MB per worker) Almost 2x requests/sec

How switching to a single async Granian worker saved 542 MB

Now that our web app runs asynchronously and our database queries fully support it, we could trim our web garden down to a single, fully asynchronous worker process using Granian. When every request runs in blocking mode, one worker is not ideal. But now the requests all interleave using Python concurrency.

This brought things down to a whopping 536 MB in total (a savings of 542 MB)! I could have stopped there, and things would have been excellent compared to where we were before, but I wanted to see what else was possible.

Metric Value
Before (multi-worker) 1,280 MB
After (single async worker and raw+dc) 536 MB
Savings 542 MB

How isolating Python imports in a subprocess cut memory from 708 MB to 22 MB

The next biggest problem was the Talk Python Training search indexer. It reads literally everything from the multi-gigabyte database backing Talk Python Training, indexes it, and stores it into a custom data structure that we use for our ultra-fast search. It was running at 708 MB in its own container.

Surely, this could be more efficient.

And boy, was it. There were two main takeaways here. I noticed first that even if no indexing ran, just at startup, this process was using almost 200 megabytes of memory. Why? Import chains.

The short version is that it was importing almost all of the files of Talk Python Training and their third-party dependencies, because that was just the easiest way to write the code and because of PEP 8. When the app starts, it imports a few utilities from Talk Python Training. That, in turn, pulls in the entire mega application plus all of the dependencies the application itself is using, bloating memory way, way up.

All this little daemon needs to do is re-index the site every few hours. It sits there, does nothing in particular related to our app, loops around, waits for exit commands from Docker, and if enough time has elapsed, runs the search process with our code.

We could move all of that search indexing code into a subprocess. And only that subprocess’s code actually imports anything of significance. When the search index has to run, that process kicks off for maybe 30 seconds, builds the index, uses a bunch of memory, but once the indexing is done, it shuts down and even the imports are unloaded.

And the result? Amazing. The search indexer went from 708 MB to just 22 MB! All we had to do was isolate the imports into their own separate file and then run that file with a Python subprocess. That’s it: 32x less memory used.
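The shape of that change looks roughly like this (a sketch; run_index_pass is my name, and the inline script is a stand-in for the real indexer module):

```python
import subprocess
import sys

def run_index_pass(script: str) -> int:
    """Run the heavy indexing code in a fresh interpreter.

    All of its imports and memory are reclaimed when the subprocess exits,
    so the long-lived daemon process stays tiny.
    """
    return subprocess.run([sys.executable, "-c", script], check=True).returncode

# In the real daemon this would be something like
# subprocess.run([sys.executable, "build_index.py"]); inline stand-in here:
rc = run_index_pass("import json; print('index built')")
print("exit code:", rc)
```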

Metric Value
Before (monolithic process) 708 MB
After (subprocess isolation) 22 MB
Reduction 32x

How much memory do Python imports like boto3, pandas, and matplotlib use?

When we write simple code such as import boto3, it looks like no big deal. You’re just telling Python you need to use this library. But as I hinted at above, what it actually does is load the entire library, create any static or singleton-style data, and pull in all of that library’s transitive dependencies.

Unbeknownst to me, boto3 takes a ton of memory.

Import Statement Memory Cost (3.14)
import boto3 25 MB
import matplotlib 17 MB
import pandas 44 MB
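You can get a rough feel for an import’s cost yourself with tracemalloc (this measures Python-level allocations only, so the numbers won’t match the table above exactly):

```python
import importlib
import tracemalloc

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()

# Import a module that is not yet loaded; swap in "boto3" or "pandas"
# to reproduce the measurements above (a stdlib module is used here so
# the snippet runs anywhere).
importlib.import_module("email.mime.multipart")

after, _ = tracemalloc.get_traced_memory()
print(f"import cost: {(after - before) / 1024:.1f} KiB")
```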

Yet for our application, these are very rarely used. Maybe we need to upload a file to blob storage using boto3, or use matplotlib and pandas to generate some report that we rarely run.

By moving these to be local imports, we are able to save a ton of memory. What do I mean by that? Simply don’t follow PEP 8 here - instead of putting these at the top of your file, put them inside of the functions that use them, and they will only be imported if those functions are called.

def generate_usage_report():
    # Heavy libraries are imported only when the report actually runs,
    # not at application startup.
    import matplotlib
    import pandas

    # Write code with these libs...

Now eventually, this generate_usage_report function probably will get called, but that’s where DevOps comes back in. We can simply set a time-to-live on the worker process: Granian will gracefully shut down the worker and start a new one every six hours, once a day, or however often you choose.

PEP 810 – Explicit lazy imports

This makes me very excited for Python 3.15. That’s where the lazy imports feature will land. That should make this behavior entirely automatic without the need to jump through hoops.

How moving Python caches to diskcache reduced memory usage

Finally I addressed our caches. This was probably the smallest of the improvements, but still relevant. We had quite a few things that were small to medium-sized caches being kept in memory. For example, the site takes a fragment of markdown which is repeatedly used, and instead of regenerating it every time, we would stash the generated markdown and just return that from cache.

We moved most of this caching to diskcache. If you want to hear me and Vincent nerd out on how powerful this little library is, listen to the Talk Python episode diskcache: Your secret Python perf weapon.
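The pattern, sketched here with the stdlib shelve module so it runs anywhere (diskcache provides the same idea plus eviction, size limits, and cross-process safety; render_markdown is a stand-in for a real renderer):

```python
import hashlib
import os
import shelve
import tempfile

def render_markdown(md: str) -> str:
    # Stand-in for a real (expensive) markdown renderer.
    return f"<p>{md}</p>"

def cached_render(md: str, cache_dir: str) -> str:
    """Keep rendered fragments on disk instead of holding them in RAM."""
    key = hashlib.sha256(md.encode()).hexdigest()
    with shelve.open(os.path.join(cache_dir, "render_cache")) as cache:
        if key not in cache:
            cache[key] = render_markdown(md)  # stored on disk, not in memory
        return cache[key]

with tempfile.TemporaryDirectory() as tmp:
    print(cached_render("hello", tmp))  # first call renders and stores
    print(cached_render("hello", tmp))  # second call reads from disk
```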

Total memory savings: from 1,988 MB to 472 MB

So where are things today after applying these optimizations?

Application Before After Reduction
Talk Python Training 1,280 MB 450 MB 1.8x
Training Search Indexer Daemon 708 MB 22 MB 32x
Total 1,988 MB 472 MB 3.2x

Applying these techniques and more to all of our web apps reduced our server load by 3.2 GB of memory. Memory is often the most expensive and scarce resource in production servers. This is a huge win for us.

April 01, 2026 03:52 PM UTC


Real Python

Python Classes: The Power of Object-Oriented Programming

Learn how to define and use Python classes to implement object-oriented programming. Dive into attributes, methods, inheritance, and more.

April 01, 2026 02:00 PM UTC


Peter Bengtsson

pytest "import file mismatch"

Make sure your test files in a pytest tested project sit in directories with a __init__.py

April 01, 2026 01:20 PM UTC


Real Python

Quiz: Exploring Keywords in Python

Test your understanding of Python keywords, including the difference between regular and soft keywords, keyword categories, and common pitfalls.

April 01, 2026 12:00 PM UTC


Tryton News

Tryton News April 2026

During the last month we focused on fixing bugs, improving the behaviour of things, speeding-up performance issues - building on the changes from our last release. We also added some new features which we would like to introduce to you in this newsletter.

For an in depth overview of the Tryton issues please take a look at our issue tracker or see the issues and merge requests filtered by label.

Changes for the User

Sales, Purchases and Projects

Now we add the support for the pick-up delivery method from Shopify.

We now add a number field to the project work efforts, based on a default sequence.

Now we use the party lang functionality to set the language for the taxes and chats.

We now add support for gift card products on Shopify.

We now add a field that indicates if an invoice is sent via Peppol to help on filtering.

Now we unpublish products from the web-shops when they are deactivated.

We now allow the Peppol admin access group to create incoming documents.

Now we improve the address schema for web-shop usage and add attn to Party Addresses.

We now allow to copy the resources from sale rental to its invoices.

Now we move the Purchase Lines in its own tab on the Production form.

We now allow the manual method on sale.

Now the sale is always confirmed when a Shopify order has payment terms.

Accounting, Invoicing and Payments

Now we set the related-to field of an account statement line to invoice when the statement line overpays an invoice.

We now filter the MIME-type of the additional documents in UBL to the ones which are allowed by the specification.

Now we forbid a zero amount in draft state and also processing state for manual payments.

We now accrue and allocate rounding errors when using multiple taxes to calculate correct total amounts.

Now we allow cash-rounding with the opposite method.

We now add UNECE codes to Belgian taxes.

Now we add the VAT exemption code on tax for electronic invoices in the EU.

We now add payment means to invoices to be used in UBL and UNCEFACT.

Now we add an optional account to the list of payable/receivable lines to be able to reconcile lines with the same accounts.

Stock, Production and Shipments

Now we implement the management of ethanol in stock.

We now set the default locations when adding manually stock moves to production.

Now we add a reference field on quality inspections.

We now add a wizard to pack shipments.

Now we add the Stock Cancellation access group that allows to cancel shipments and productions in the states running or done.

User Interface

As a first step to add labels inside the widgets, we now add a generic management for the label style in form widgets.

Now we strip the user field from white-space characters in the login window.

We now add a new right drop-down menu in SAO, including Logout and Help menu items.

Now we add a visual hint on widgets of modified fields.

New Modules

We now support the common European requirements for excise products.

Now we add the sale_project_task module which creates tasks for services sold.

Now we add the account_payment_check module which manages and prints checks.

New Releases

We released bug fixes for the currently maintained long term support series 7.0 and 6.0, and for the penultimate series 7.8 and 7.6.

Security

Please update your systems to take care of a security related bug we found last month.

Changes for the System Administrator

We now use cookies to store authentication information in Sao.

Now we order the cron logs by descending ID to show the last cron-runs first.

Changes for Implementers and Developers

Now we allow to specify subdirectories in the tryton.cfg file and include them when activating a module.

We now replace the ShopifyAPI library by shopifyapp.

Now we add a contextual _log key to force the logging of events.

We now add notify_user to ModelStorage to store notifications in an efficient way at the end of the transaction.

Now we add a deprecation warning when a new API version for Stripe is available.

We now show missing modules when running tests.

Now we remove the obsolete methods dump_values and load_values in ir.ModelData.

We now also check the button states when testing access.

Now we introduced the option model.fields.Binary.queue_for_removal which makes it possible to remove a file from the filestore on setting a binary field to None.

We now make the retrieval of metadata via documentation build very quiet.

Now we remove the default import of wizards in our cookiecutter module template.

Now we upgrade to Psycopg 3.

We now upgrade our setup to pyproject using hatchling as build-system.

Now we replace C3.js by its fork billboard.js as it seems better maintained.

We now remove the Sao dependency to bower.

Now we remove the internal name from the record name of the trytond.model.fields.

We now implement a recursive search for unused XML views in our tests.

Changes for Translators

Now the translation mechanism in Tryton finds the translations of type view.

2 posts - 2 participants

Read full topic

April 01, 2026 06:00 AM UTC


Python⇒Speed

Timesliced reservoir sampling: a new(?) algorithm for profilers

April 01, 2026 12:00 AM UTC

March 31, 2026


PyCoder’s Weekly

Issue #728: Django With Alpine, Friendly Classes, SQLAlchemy, and More (March 31, 2026)

March 31, 2026 07:30 PM UTC


Real Python

Adding Python to PATH

Learn how to add Python to your PATH environment variable on Windows, macOS, and Linux so you can run Python from the command line.

March 31, 2026 02:00 PM UTC

Quiz: Test-Driven Development With pytest

Test your TDD skills with pytest. Practice writing unit tests, following pytest conventions, and measuring code coverage.

March 31, 2026 12:00 PM UTC


PyCon

Introducing the 8 Companies on Startup Row at PyCon US 2026

March 31, 2026 09:00 AM UTC

March 30, 2026


"Michael Kennedy's Thoughts on Technology"

Raw+DC Database Pattern: A Retrospective

TL;DR; After migrating three production Python web apps from MongoEngine to the Raw+DC database pattern, I measured nearly 2x the requests per second, 18% less memory, and gained native async support. Raw+DC delivered real-world performance gains, not just synthetic benchmarks.


About a month ago, I wrote about a new design pattern I’m seeing gain traction in the software space: Raw+DC: The ORM pattern of 2026. This article generated a lot of interest and a lot of debate. The short version: instead of using an ORM or ODM, you write raw database queries paired with Python dataclasses for type safety. This gives AI coding assistants a much larger training base to work from, reduces dependency risk, and delivers comparable or better performance.

Putting Raw+DC into practice

Now that some time has passed and I’ve thought about it more, I’ve had a chance to migrate three of my most important web apps to Raw+DC: Talk Python the podcast, Talk Python Courses, and Python Bytes.

So how did it go? From a pure functionality perspective, it went great. There were maybe one to three problems per web app. This might not sound great, and I didn’t love it, but given this is thousands and thousands of lines of code per app, that’s a small percentage of issues, given how many things went right.

More importantly, I was able to remove a dependency on two faltering database libraries. MongoEngine, the one that I’m going to pull numbers from for Talk Python Training below, has not had a meaningful release in years. It was one of the two core blockers that prevented me from using async programming patterns on the website entirely.

How much faster is Raw+DC than MongoEngine?

I said I imagined that we would save on memory and CPU costs, but did it actually pan out in a practical application? Synthetic benchmarks don’t always translate: after all, we saw that Robyn, the web framework, benchmarks 25 times faster than Flask, yet in practice it was almost a dead-even heat.

I’m thrilled to report that yes, the web app is much faster using Raw+DC.

Below is an apples-to-apples comparison for Talk Python Training using MongoEngine and the Raw+DC pattern.

Metric MongoEngine (ODM) Raw+DC Improvement
Requests/sec baseline ~1.75x 1.75x faster
Response time baseline ~50% less ~50% faster
Memory usage baseline 200 MB less 18% less

Raw+DC vs ODM/ORM requests per second graph

The memory story is really great as well. After letting the web app run for over 24 hours for each mode, we saw a 200 MB memory usage decrease using Raw+DC.

Raw+DC vs ODM/ORM memory usage

That amount of memory might still look high to you. This Raw+DC transformation actually facilitates future work that will cut it in half again, down to about 500 MB for the full app, up and running in production at equilibrium.

Is Raw+DC worth migrating to?

To me, this seems 100% worth it. I’ve gained four important things with Raw+DC.

  1. 1.75x the requests per second on the exact same hardware and codebase (sans data layer swap)
  2. 18% less memory usage with much more savings on the horizon
  3. New data layer natively supports async/await
  4. Removal of problematic, core data access library

All of these benefits and none of that even touches on whether or not this new programming model is better for AI (it is).

March 30, 2026 03:31 PM UTC


PyCharm

What’s New in PyCharm 2026.1

Welcome to PyCharm 2026.1. This release doesn’t just add features – it rethinks how you build, debug, and scale Python projects. From a brand-new debugging engine powered by debugpy to first-class uv support on remote targets and expanded JavaScript support in the free tier, this version is all about removing friction and letting you focus […]

March 30, 2026 03:31 PM UTC


Real Python

How to Use Ollama to Run Large Language Models Locally

Learn how to use Ollama to run large language models locally. Install it, pull models, and start chatting from your terminal without needing API keys.

March 30, 2026 02:00 PM UTC


PyCon

Support PyLadies: Donate to the PyLadies Auction at PyCon US 2026!

March 30, 2026 12:53 PM UTC


Mike Driscoll

Vibe Coding Pong with Python and pygame

Pong is one of the first computer games ever created, way back in 1972. If you have never heard of Pong, you can think of it as a kind of “tennis” game. There are two paddles, on each side of the screen. They move up and down. The goal is to bounce a ball between […]

The post Vibe Coding Pong with Python and pygame appeared first on Mouse Vs Python.

March 30, 2026 12:29 PM UTC


Real Python

Quiz: Using Jupyter Notebooks

Test your Jupyter Notebook skills: cells, modes, shortcuts, Markdown, server tools, and exporting notebooks to HTML.

March 30, 2026 12:00 PM UTC


Python Bytes

#475 Haunted warehouses

Topics include Lock the Ghost, Fence for Sandboxing, MALUS: Liberate Open Source, and Harden your GitHub Actions Workflows with zizmor, dependency pinning, and dependency cooldowns.

March 30, 2026 08:00 AM UTC

March 29, 2026


"Michael Kennedy's Thoughts on Technology"

Fire and Forget at Textual

If you read my Fire and Forget (or Never) about Python and asynchronous programming, you could think it’s a super odd edge case. But a reader/listener, Richard, pointed me at Will McGugan’s article The Heisenbug lurking in your async code. This is basically the same article, but in Will-style.

Will does say “This behavior is well documented, as you can see from this excerpt.” True, but the documentation only got this emphasis and warning in Python 3.12, whereas create_task was added back in the Python 3.5/3.6 timeframe. So it’s not just a matter of whether we read the docs carefully. It’s a matter of whether we reread the docs carefully, years later.

Luckily Will added some nice concrete numbers I didn’t have:

https://github.com/search?q=%22asyncio.create_task%28%22&type=code

This appears in over 0.5M separate code files on GitHub. To be clear, not every search result for create_task uses the fire-and-forget pattern, but just on the first page of results there are 5 instances.

If the design pattern to fix this is to:

  1. Create a global set
  2. When a task is added to the event loop, add it to the set
  3. Remove it from the set when it’s done
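Those three steps are exactly the pattern the asyncio docs now recommend; sketched out (fire_and_forget is my name for the helper):

```python
import asyncio

# Step 1: a global set holding strong references, so the event loop's
# weak reference is never the only thing keeping a task alive.
background_tasks: set = set()

def fire_and_forget(coro) -> asyncio.Task:
    task = asyncio.create_task(coro)
    background_tasks.add(task)                        # step 2: hold a reference
    task.add_done_callback(background_tasks.discard)  # step 3: drop it when done
    return task

async def main():
    async def work():
        await asyncio.sleep(0)
        return 42

    t = fire_and_forget(work())
    await asyncio.sleep(0.01)  # keep doing other work; the task completes
    print(t.result())

asyncio.run(main())  # prints 42
```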

Wouldn’t it have been better for the Python team to add this to the event loop internally once and solve this problem for everyone globally across the entire Python ecosystem?

It doesn’t look like that’s going to happen. So make sure you double check your code for create_task. And don’t let the Heisenbugs bite.

And yes, I know about task groups. Several people told me that we could use task groups to hang on to the task. Yes, that’s true. But task groups are incongruent with the fire-and-forget design pattern. Why? Because you create the group in a context manager and then you wait for all the tasks in the group to be finished. That doesn’t allow you to fire off a task and then continue working. So task groups may or may not have fixed Will’s problem, but they don’t solve the one I was originally talking about.

March 29, 2026 04:37 PM UTC

March 28, 2026


EuroPython

Humans of EuroPython: Jodie Burchell

What does it take to run Europe’s largest Python conference? 🐍 Not budgets or venues—it’s people.

EuroPython isn’t powered by code alone, but by a vibrant network of volunteers who shape every session and welcome every attendee. From ensuring talks run seamlessly

March 28, 2026 06:20 PM UTC