
Planet Python

Last update: January 20, 2026 04:44 AM UTC

January 20, 2026


PyBites

“I’m worried about layoffs”

I’ve had some challenging conversations this week.

Lately, my calendar has been filled with calls from developers reaching out for advice because layoffs were just announced at their company.

Having been in their shoes myself, I could really empathise with their anxiety.

The thing is though, when we’d dig into why there was such anxiety, a common confession surfaced. It often boiled down to something like this:

“I got comfortable. I stopped learning. I haven’t touched a new framework or built anything serious in two years because things were okay.”

They were enjoying “Peace Time.”

I like to think of life in two modes: War Mode and Peace Time.

The deadly mistake most developers make is waiting for War Mode before they start training.

They wait until the severance package arrives to finally decide, “Okay, time to really learn Python/FastAPI/Cloud.”

It’s a recipe for disaster. Trying to learn complex engineering skills when you’re terrified about paying the mortgage is almost impossible. You’re just too stressed. You can’t focus, which means you can’t do the deep, hands-on building that learning requires.

You absolutely have to train and skill up during Peace Time.

When things are boring and stable, that’s the exact moment you should be aggressive about your growth.

That’s when you have the mental bandwidth to struggle through a hard coding problem without the threat of redundancy hanging over your head. It’s the perfect time to sharpen the saw.

If you’re currently in a stable job, you’re in Peace Time. Don’t waste it.

Here’s what you need to do: 

Does this resonate with you? Are you guilty of coasting during Peace Time?

I know I’ve been there! (I often think back and wonder where I’d be now had I not spent so much time coasting through my life’s peaceful periods!)

Let’s get you back on track. Fill out the Pybites Portfolio Assessment Tool, a form we’ve created to help you formulate your goals and ideas. We read every submission.

Julian

This note was originally sent to our email list. Join here: https://pybit.es/newsletter

January 20, 2026 12:15 AM UTC

January 19, 2026


Kevin Renskers

Django 6.0 Tasks: a framework without a worker

Background tasks have always existed in Django projects. They just never existed in Django itself.

For a long time, Django focused almost exclusively on the request/response cycle. Anything that happened outside that flow, such as sending emails, running cleanups, or processing uploads, was treated as an external concern. The community filled that gap with tools like Celery, RQ, and cron-based setups.

That approach worked but it was never ideal. Background tasks are not an edge case. They are a fundamental part of almost every non-trivial web application. Leaving this unavoidable slice entirely to third-party tooling meant that every serious Django project had to make its own choices, each with its own trade-offs, infrastructure requirements, and failure modes. It’s one more thing that makes Django complex to deploy.

Django 6.0 is the first release that acknowledges this problem at the framework level by introducing a built-in tasks framework. That alone makes it a significant release. But my question is whether it actually went far enough.

What Django 6.0 adds

Django 6.0 introduces a brand new tasks framework. It’s not a queue, not a worker system, and not a scheduler. It only defines background work in a first-party, Django-native way, and provides hooks for someone else to execute that work.
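
In practice, using the new framework looks something like this. This is a minimal sketch based on DEP 14 and the Django 6.0 release notes, so check the official docs for the exact module paths:

# settings.py -- the built-in ImmediateBackend runs tasks synchronously,
# in-process; there is also a DummyBackend that only records them.
TASKS = {
    "default": {
        "BACKEND": "django.tasks.backends.immediate.ImmediateBackend",
    }
}

# tasks.py
from django.tasks import task

@task()
def send_welcome_email(user_id):
    ...  # look up the user and send the email

# somewhere in a view:
result = send_welcome_email.enqueue(user_id=42)

Note that enqueue() only hands the work to whatever backend is configured; with the bundled backends, nothing actually runs in the background.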

As an abstraction, this is clean and sensible. It gives Django a shared language for background execution and removes a long-standing blind spot in the framework. But it also stops there.

Django’s task system only supports one-off execution. There is no notion of scheduling, recurrence, retries, persistence, or guarantees. There is no worker process and no production-ready backend. That limitation would be easier to accept if one-off tasks were the primary use case for background work, but they are not. In real applications, background work is usually time-based, repeatable, and failure-prone. Tasks need to run later, run again, or keep retrying until they succeed.

A missed opportunity

What makes this particularly frustrating is that Django had a clear opportunity to do more.

DEP 14 explicitly talks about a database backend, deferring tasks to run at a specific time in the future, and a new email backend that offloads work to the background. None of that has made it into Django itself yet. Why wasn’t the database worker from django-tasks, or something equivalent, at least added to Django? This would have covered a large percentage of real-world use cases with minimal operational complexity.

Instead, we got an abstraction without an implementation.

I understand that building features takes time. What I struggle to understand is why shipping such a limited framework was preferred over waiting longer and delivering a more complete story. You only get to introduce a feature once, and in its current form the tasks framework feels more confusing than helpful for newcomers. The official documentation even acknowledges this incompleteness, yet offers little guidance beyond a link to the Community Ecosystem page. Developers are left guessing whether they are missing an intended setup or whether the feature is simply unfinished.

What Django should focus on next

Currently, with Django 6.0, serious background processing still requires third-party tools for scheduling, retries, delayed execution, monitoring, and scaling workers. That was true before, and it remains true now. Even if one-off fire-and-forget tasks are all you need, you still need to install a third party package to get a database backend and worker.
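
For completeness, here’s roughly what that setup looks like today with the third-party django-tasks package. This is a sketch based on its README, so check the project’s docs for the exact names:

# settings.py
INSTALLED_APPS = [
    # ...
    "django_tasks",
    "django_tasks.backends.database",
]

TASKS = {
    "default": {
        "BACKEND": "django_tasks.backends.database.DatabaseBackend",
    }
}

After running migrations, you start a separate worker process with ./manage.py db_worker to actually execute enqueued tasks.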

DEP 14 also explicitly states that the intention is not to build a replacement for Celery or RQ, because “that is a complex and nuanced undertaking”. I think this is a mistake. The vast majority of Django applications need a robust task framework. A database-backed worker that handles delays, retries, and basic scheduling would cover most real-world needs without any of Celery’s operational complexity. Django positions itself as a batteries-included framework, and background tasks are not an advanced feature. They are basic application infrastructure.

Otherwise, what is the point of Django’s Task framework? Let’s assume that it’ll get a production-ready backend and worker soon. What then? It can still only run one-off tasks. As soon as you need to schedule tasks, you still need to reach for a third-party solution. I think it should have a first-party answer for the most common cases, even if it’s complex.

Conclusion

Django 6.0’s task system is an important acknowledgement of a long-standing gap in the framework. It introduces a clean abstraction and finally gives background work a place in Django itself. This is good! But by limiting that abstraction to one-off tasks and leaving execution entirely undefined, Django delivers the least interesting part of the solution.

If I sound disappointed, it’s because I am. I just don’t understand the point of adding such a bare-bones Task framework when the reality is that most real-world projects still need to use third-party packages. But the foundation is there now. I hope that Django builds something on top that can replace django-apscheduler, django-rq, and django-celery. I believe that it can, and that it should.

January 19, 2026 08:00 PM UTC


Talk Python Blog

Announcing Talk Python AI Integrations

We’ve just added two new and exciting features to the Talk Python To Me website to allow deeper and richer integration with AI and LLMs.

  1. A full MCP server at talkpython.fm/api/mcp/docs
  2. An llms.txt summary to guide non-MCP use cases: talkpython.fm/llms.txt

The MCP Server

New to the idea of an MCP server? MCP (Model Context Protocol) servers are lightweight services that expose data and functionality to AI assistants through a standardized interface, allowing models like Claude to query external systems and access real-time information beyond their training data. The Talk Python To Me MCP server acts as a bridge between AI conversations and the podcast’s extensive catalog. This enables you to search episodes, look up guest appearances, retrieve transcripts, and explore course content directly within your AI workflow, making research and content discovery seamless.
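
If you want to explore it from Python, the official mcp client SDK can connect and list the server’s tools. Here’s a minimal sketch; the endpoint URL is my assumption based on the docs URL above, so check talkpython.fm/api/mcp/docs for the real one:

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    # Assumed endpoint; see talkpython.fm/api/mcp/docs for the actual URL.
    async with streamablehttp_client("https://talkpython.fm/api/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())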

January 19, 2026 05:49 PM UTC


Mike Driscoll

New Book: Vibe Coding Video Games with Python

My latest book, Vibe Coding Video Games with Python, is now available as an eBook. The paperback will be coming soon, hopefully by mid-February at the latest. The book is around 183 pages in length and is 6×9” in size.

Vibe Coding Video Games with Python

In this book, you will learn how to use artificial intelligence to create mini-games. You will attempt to recreate the look and feel of various classic video games. The intention is not to violate copyright or anything of the sort, but instead to learn the limitations and the power of AI.

You will simply be learning whether or not you can use AI to help you create video games. Can you do it with no previous knowledge, as the AI proponents say? Is it really possible to create something just by writing out questions to the ether?

You will use various large language models (LLMs), such as Google Gemini, Grok, Mistral, and CoPilot, to create these games. You will discover the differences and similarities between these tools. You may be surprised to find that some tools give much more context than others.

AI is certainly not a cure-all and is far from perfect. You will quickly discover AI’s limitations and learn some strategies for solving those kinds of issues.

What You’ll Learn

You’ll be creating “clones” of some popular games. However, these games will only be the first level and may or may not be fully functional.

Where to Purchase

You can get Vibe Coding Video Games with Python at the following websites:

The post New Book: Vibe Coding Video Games with Python appeared first on Mouse Vs Python.

January 19, 2026 02:25 PM UTC


Real Python

How to Integrate ChatGPT's API With Python Projects

Python’s openai library provides the tools you need to integrate the ChatGPT API into your Python applications. With it, you can send text prompts to the API and receive AI-generated responses. You can also guide the AI’s behavior with developer role messages and handle both simple text generation and more complex code creation tasks. Here’s an example:

[Image: Python script output from a ChatGPT API call using the openai package]
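
Here’s a minimal sketch of the kind of call the tutorial covers (not its exact script; the model name is illustrative):

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use any current chat model
    messages=[
        # The developer-role message steers the model's behavior.
        {"role": "developer", "content": "You are a concise Python tutor."},
        {"role": "user", "content": "Explain list comprehensions in one sentence."},
    ],
)
print(completion.choices[0].message.content)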

After reading this tutorial, you’ll understand how examples like this work under the hood. You’ll learn the fundamentals of using the ChatGPT API from Python and have code examples you can adapt for your own projects.

Get Your Code: Click here to download the free sample code that you’ll use to integrate ChatGPT’s API with Python projects.

Take the Quiz: Test your knowledge with our interactive “How to Integrate ChatGPT's API With Python Projects” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

How to Integrate ChatGPT's API With Python Projects

Test your knowledge of the ChatGPT API in Python. Practice sending prompts with openai and handling text and code responses in this quick quiz.

Prerequisites

To follow along with this tutorial, you’ll need the following:

Don’t worry if you’re new to working with APIs. This tutorial will guide you through everything you need to know to get started with the ChatGPT API and implement AI features in your applications.

Step 1: Obtain Your API Key and Install the OpenAI Package

Before you can start making calls to the ChatGPT Python API, you need to obtain an API key and install the OpenAI Python library. You’ll start by getting your API key from the OpenAI platform, then install the required package and verify that everything works.

Obtain Your API Key

You can obtain an API key from the OpenAI platform by following these steps:

  1. Navigate to platform.openai.com and sign in to your account or create a new one if you don’t have an account yet.
  2. Click on the settings icon in the top-right corner and select API keys from the left-hand menu.
  3. Click the Create new secret key button to generate a new API key.
  4. In the dialog that appears, give your key a descriptive name like “Python Tutorial Key” to help you identify it later.
  5. For the Project field, select your preferred project.
  6. Under Permissions, select All to give your key full access to the API for development purposes.
  7. Click Create secret key to generate your API key.
  8. Copy the generated key immediately, as you won’t be able to see it again after closing the dialog.

Now that you have your API key, you need to store it securely.

Warning: Never hard-code your API key directly in your Python scripts or commit it to version control. Always use environment variables or secure key management services to keep your credentials safe.

The OpenAI Python library automatically looks for an environment variable named OPENAI_API_KEY when creating a client connection. By setting this variable in your terminal session, you’ll authenticate your API requests without exposing your key in your code.

Set the OPENAI_API_KEY environment variable in your terminal session:

Windows PowerShell
PS> $env:OPENAI_API_KEY="your-api-key-here"
Shell
$ export OPENAI_API_KEY="your-api-key-here"

Replace your-api-key-here with the actual API key you copied from the OpenAI platform.

Install the OpenAI Package

With your API key configured, you can now install the OpenAI Python library. The openai package is available on the Python Package Index (PyPI), and you can install it with pip.

Open a terminal or command prompt, create a new virtual environment, and then install the library:
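
On macOS or Linux, that looks something like the following (on Windows, activate the environment with venv\Scripts\activate instead):

Shell
$ python -m venv venv
$ source venv/bin/activate
(venv) $ python -m pip install openai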

Read the full article at https://realpython.com/chatgpt-api-python/ »



January 19, 2026 02:00 PM UTC

Quiz: How to Integrate ChatGPT's API With Python Projects

In this quiz, you’ll test your understanding of How to Integrate ChatGPT’s API With Python Projects.

By working through this quiz, you’ll revisit how to send prompts with the openai library, guide behavior with developer role messages, and handle text and code outputs. You’ll also see how to integrate AI responses into your Python scripts for practical tasks.



January 19, 2026 12:00 PM UTC


Python Bytes

#466 PSF Lands $1.5 million

Topics covered in this episode:
  • Better Django management commands with django-click and django-typer
  • PSF Lands a $1.5 million sponsorship from Anthropic: https://pyfound.blogspot.com
  • How uv got so fast: https://nesbitt.io/2025/12/26/how-uv-got-so-fast.html
  • PyView Web Framework: https://pyview.rocks
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=3jaIv4VvmgY

About the show

Sponsored by us! Support our work through:
  • Our courses at Talk Python Training: https://training.talkpython.fm/
  • The Complete pytest Course: https://courses.pythontest.com/p/the-complete-pytest-course
  • Patreon Supporters: https://www.patreon.com/pythonbytes

Connect with the hosts
  • Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
  • Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
  • Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we’ll never share it.

Brian #1: Better Django management commands with django-click and django-typer
  • Lacey Henschel
  • Extend Django manage.py commands for your own project, for things like data operations, API integrations, complex data transformations, and development and debugging.
  • Extending is built into Django, but it looks easier, less code, and more fun with either django-click (https://github.com/django-commons/django-click) or django-typer (https://github.com/django-commons/django-typer), two projects supported through Django Commons.

Michael #2: PSF Lands a $1.5 million sponsorship from Anthropic (https://pyfound.blogspot.com)
  • Anthropic is partnering with the Python Software Foundation in a landmark funding commitment to support both security initiatives and the PSF’s core work.
  • The funds will enable new automated tools for proactively reviewing all packages uploaded to PyPI, moving beyond the current reactive-only review process.
  • The PSF plans to build a new dataset of known malware for capability analysis.
  • The investment will sustain programs like the Developer in Residence initiative, community grants, and infrastructure like PyPI.

Brian #3: How uv got so fast (https://nesbitt.io/2025/12/26/how-uv-got-so-fast.html)
  • Andrew Nesbitt
  • It’s not just because “it’s written in Rust”.
  • Recent-ish standards, PEPs 518 (2016), 517 (2017), 621 (2020), and 658 (2022), made many uv design decisions possible.
  • And uv drops many backwards-compatibility behaviors kept by pip. Dropping functionality speeds things up: “Speed comes from elimination. Every code path you don’t have is a code path you don’t wait for.”
  • Some of what uv does could be implemented in pip. Some cannot. Andrew discusses the different speedups, and why each could or could not be done in Python.
  • I read this article out of interest, but it gives me lots of ideas for tools that could be written faster just with Python, by making design and support decisions that eliminate whole workflows.

Michael #4: PyView Web Framework (https://pyview.rocks)
  • PyView brings the Phoenix LiveView paradigm (https://github.com/phoenixframework/phoenix_live_view) to Python.
  • Recently interviewed Larry on Talk Python: https://www.youtube.com/watch?v=g0RDxN71azs
  • Build dynamic, real-time web applications using server-rendered HTML.
  • Check out the examples (https://examples.pyview.rocks); see the Maps demo for some real magic.
  • How does this possibly work? See the LiveView Lifecycle: https://pyview.rocks/core-concepts/liveview-lifecycle/

Extras

Brian:
  • Upgrade Django (https://upgradedjango.com) has a great discussion of how to upgrade version by version and why you might want to do that instead of just jumping ahead to the latest version, and also who might want to save time by leapfrogging. It also has all the versions and dates of release and end of support.
  • The Lean TDD book’s first draft is done. It’s now available through both pythontest (https://courses.pythontest.com/lean-tdd/) and LeanPub (https://leanpub.com/lean-tdd). I set it at 80% done because of future drafts planned. I’m working through a few submitted suggestions; not much feedback, so the second pass might be fast and mostly my own modifications. I’m re-reading it myself and am already disappointed with page 1 of the introduction. I’ve got to make it pop more. I’ll work on that. I’m also trying to decide how many suggestions around using AI to include; it’s not mentioned in the book yet, but I think I need to incorporate some discussion around it.

Michael:
  • Python: What’s Coming in 2026: https://thenewstack.io/python-whats-coming-in-2026/
  • Python Bytes rewritten in Quart + async (very similar to Talk Python’s journey: https://talkpython.fm/blog/posts/talk-python-rewritten-in-quart-async-flask/)
  • Added a proper MCP server at Talk Python To Me (https://talkpython.fm/api/mcp/docs); you don’t need a formal MCP framework, by the way.
    Example one: https://blobs.pythonbytes.fm/latest-episodes-mcp.png
    Example two: https://blobs.pythonbytes.fm/which-episodes-mcp.webp
  • Implemented /llms.txt (https://llmstxt.org) for Talk Python To Me; see talkpython.fm/llms.txt

Joke: Reverse Superman (https://www.linkedin.com/feed/update/urn:li:activity:7351943843409248256/)

January 19, 2026 08:00 AM UTC

January 18, 2026


EuroPython

Humans of EuroPython: Doreen Peace Nangira Wanyama

EuroPython thrives thanks to dedicated volunteers who invest hundreds of hours into each conference. From speaker coordination and fundraising to workshop preparation, their commitment ensures every year surpasses the last.

Below is our latest interview with Doreen Peace Nangira Wanyama. Doreen wore many hats at EuroPython 2025, including being the lead organizer of the Django Girls workshop during the Beginners’ Day, helping in the Financial Aid Team, as well as volunteering on-site.

Thank you for contributing to the conference, Doreen!

[Image: Doreen Peace Nangira Wanyama, Django Girls Organizer at EuroPython 2025]

EP: What first inspired you to volunteer for EuroPython? 

What inspired me was the diversity and inclusivity of the EuroPython community. I had been following the community since 2024, and what stood out for me was how inclusive it was. It was open not only to people from the EU but worldwide. I saw people from Africa getting the stage to speak, and even the opportunity grants were there for everyone. I told myself: wow! I should be part of this community. All I can say is I will still choose EuroPython over and over.

EP: What was your primary role as a volunteer, and what did a typical day look like for you?

I had the opportunity to play two main roles: I was the Django Girls organizer and also part of the Financial Aid organizing team. For Django Girls, I was in charge of putting out the call for coaches and Django Girls mentees. I ensured proper logistics were in place for all attendees and worked with the communications team to ensure enough social media posts were made about the event. I also worked with coaches to set up the PCs for mentees for the workshop, i.e. Django installation.

In the Financial Aid Team, I worked with fellow teammates by putting out the call for finaid grants, reviewing applications, and sending out acknowledgement emails. We prepared visa letters for accepted grant recipients to help with their visa applications. We issued conference tickets to both accepted online and onsite attendees. After the conference, we did reimbursements for each grant recipient and followed up with emails to ensure everyone had been reimbursed.

EP: Did you make any lasting friendships or professional connections through contributing to the conference?

Yes. Contributing to this conference earned me new friends and professional connections. I got to meet and talk to people I would hardly have met otherwise. First of all, when I attended the conference I thought I would be the only database administrator there; well, EuroPython had a surprise for me. I met a fellow DBA from Germany and we would not stop talking about the importance of Python in our field. I got the opportunity to meet the DSF president Thibaud Colas for the first time, someone who is down to earth and loves giving back to the community.

I also got to meet Daria Linhart, a loving soul, someone who is always ready to help. I remember getting stuck in Czechia when I was looking for my accommodation. Daria used her Czech language skills to speak with my host and voila!

EP: How has volunteering at EuroPython impacted your own career or learning journey?

Volunteering at EuroPython made me realize that people can make you go far. Doing it all alone is possible but doing it as a team makes a big difference. Working with different people during this conference and attending talks made me realize the different areas I need to improve on.  

EP: What’s your favorite memory from contributing at EuroPython?

My favourite memory is the daily social events after the conference. Wow! EuroPython made me explore the Czech Republic to the fullest. From the speakers’ dinner on the first day to the Django birthday cake we cut, I really had great moments. I also can’t forget the variety of food we were offered. I enjoyed the whole cuisine and can’t wait to experience this again in the next EuroPython.

EP: If you were to invite someone else, what do you think are the top 3 reasons to join the EuroPython organizing team?

A. Freedom of expression — EuroPython is a free and open space. Everyone is allowed to express their views without bias.

B. Learning opportunities — Whether you are a first timer or a seasoned conference organizer, there is always something to learn here. You will learn new ways of doing things.

C. Loving and welcoming community — Want a place that feels like home? The EuroPython community is the place.

EP: Thank you, Doreen!

January 18, 2026 05:07 PM UTC


Eli Bendersky

Compiling Scheme to WebAssembly

One of my oldest open-source projects - Bob - celebrated its 15th birthday a couple of months ago. Bob is a suite of implementations of the Scheme programming language in Python, including an interpreter, a compiler and a VM. Back then I was doing some hacking on CPython internals and was very curious about how CPython-like bytecode VMs work; Bob was an experiment to find out, by implementing one from scratch for R5RS Scheme.

Several months later I added a C++ VM to Bob, as an exercise to learn how such VMs are implemented in a low-level language without all the runtime support Python provides; most importantly, without the built-in GC. The C++ VM in Bob implements its own mark-and-sweep GC.

After many quiet years (with just a sprinkling of cosmetic changes, porting to GitHub, updates to Python 3, etc), I felt the itch to work on Bob again just before the holidays. Specifically, I decided to add another compiler to the suite - this one from Scheme directly to WebAssembly.

The goals of this effort were two-fold:

  1. Experiment with lowering a real, high-level language like Scheme to WebAssembly. Experiments like the recent Let's Build a Compiler compile toy languages that are at the C level (no runtime). Scheme has built-in data structures, lexical closures, garbage collection, etc. It's much more challenging.
  2. Get some hands-on experience with the WASM GC extension [1]. I have several samples of using WASM GC in the wasm-wat-samples repository, but I really wanted to try it for something "real".

Well, it's done now; here's an updated schematic of the Bob project:

Bob project diagram with all the components it includes

The new part is the rightmost vertical path. A WasmCompiler class lowers parsed Scheme expressions all the way down to WebAssembly text, which can then be compiled to a binary and executed using standard WASM tools [2].

Highlights

The most interesting aspect of this project was working with WASM GC to represent Scheme objects. As long as we properly box/wrap all values in refs, the underlying WASM execution environment will take care of the memory management.

For Bob, here's how some key Scheme objects are represented:

;; PAIR holds the car and cdr of a cons cell.
(type $PAIR (struct (field (mut (ref null eq))) (field (mut (ref null eq)))))

;; BOOL represents a Scheme boolean. zero -> false, nonzero -> true.
(type $BOOL (struct (field i32)))

;; SYMBOL represents a Scheme symbol. It holds an offset in linear memory
;; and the length of the symbol name.
(type $SYMBOL (struct (field i32) (field i32)))

$PAIR is of particular interest, as it may contain arbitrary objects in its fields; (ref null eq) means "a nullable reference to something that has identity". ref.test can be used to check - for a given reference - the run-time type of the value it refers to.

You may wonder - what about numeric values? Here WASM has a trick - the i31 type can be used to represent a reference to an integer, but without actually boxing it (one bit is used to distinguish such an object from a real reference). So we don't need a separate type to hold references to numbers.

Also, the $SYMBOL type looks unusual - how is a symbol represented with two numbers? The key to the mystery is that WASM has no built-in support for strings; they have to be implemented manually using offsets into linear memory. The Bob WASM compiler emits the string values of all symbols encountered into linear memory, keeping track of the offset and length of each one; these are the two numbers placed in $SYMBOL. This also makes it fairly easy to implement Scheme's symbol interning: multiple instances of the same symbol are only allocated once.
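
As an illustration (my own sketch, not Bob's actual code), the bookkeeping for this amounts to something like the following:

SYMBOLS_BASE = 2048  # arbitrary start offset of the symbol section

class SymbolTable:
    """Interns symbol names into linear memory: one (offset, length) per name."""

    def __init__(self):
        self.slots = {}  # name -> (offset, length)
        self.next_offset = SYMBOLS_BASE

    def intern(self, name):
        if name not in self.slots:
            data = name.encode("utf-8")
            self.slots[name] = (self.next_offset, len(data))
            self.next_offset += len(data)
        return self.slots[name]

table = SymbolTable()
print(table.intern("foo"))  # (2048, 3)
print(table.intern("bar"))  # (2051, 3)
print(table.intern("foo"))  # (2048, 3) again: the same slot is reused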

Consider this trivial Scheme snippet:

(write '(10 20 foo bar))

The compiler emits the symbols "foo" and "bar" into linear memory as follows [3]:

(data (i32.const 2048) "foo")
(data (i32.const 2051) "bar")

And looking for one of these addresses in the rest of the emitted code, we'll find:

(struct.new $SYMBOL (i32.const 2051) (i32.const 3))

This appears as part of the code constructing the constant cons list that is the argument to write: address 2051, length 3 - the symbol bar.

Speaking of write, implementing this builtin was quite interesting. For compatibility with the other Bob implementations in my repository, write needs to be able to print recursive representations of arbitrary Scheme values, including lists, symbols, etc.

Initially I was reluctant to implement all of this functionality by hand in WASM text, but all alternatives ran into challenges:

  1. Deferring this to the host is difficult because the host environment has no access to WASM GC references - they are completely opaque.
  2. Implementing it in another language (maybe C?) and lowering to WASM is also challenging for a similar reason - the other language is unlikely to have a good representation of WASM GC objects.

So I bit the bullet and - with some AI help for the tedious parts - just wrote an implementation of write directly in WASM text; it wasn't really that bad. I import only two functions from the host:

(import "env" "write_char" (func $write_char (param i32)))
(import "env" "write_i32" (func $write_i32 (param i32)))

Though emitting integers directly from WASM isn't hard, I figured this project already has enough code and some host help here would be welcome. For all the rest, only the lowest level write_char is used. For example, here's how booleans are emitted in the canonical Scheme notation (#t and #f):

(func $emit_bool (param $b (ref $BOOL))
    (call $emit (i32.const 35)) ;; '#'
    (if (i32.eqz (struct.get $BOOL 0 (local.get $b)))
        (then (call $emit (i32.const 102))) ;; 'f'
        (else (call $emit (i32.const 116))) ;; 't'
    )
)

Conclusion

This was a really fun project, and I learned quite a bit about realistic code emission to WASM. Feel free to check out the source code of WasmCompiler - it's very well documented. While it's a bit over 1000 LOC in total [4], more than half of that is actually WASM text snippets that implement the builtin types and functions needed by a basic Scheme implementation.


[1]The GC proposal is documented here. It was officially added to the WASM spec in Oct 2023.
[2]

In Bob this is currently done with bytecodealliance/wasm-tools for the text-to-binary conversion and Node.js for the execution environment, but this can change in the future.

I actually wanted to use Python bindings to wasmtime, but these don't appear to support WASM GC yet.

[3]2048 is just an arbitrary offset the compiler uses as the beginning of the section for symbols in memory. We could also use the multiple memories feature of WASM and dedicate a separate linear memory just for symbols.
[4]To be clear, this is just the WASM compiler class; it uses the Expr representation of Scheme that is created by Bob's parser (and lexer); the code of these other components is shared among all Bob implementations and isn't counted here.

January 18, 2026 06:40 AM UTC

Revisiting "Let's Build a Compiler"

There's an old compiler-building tutorial that has become part of the field's lore: the Let's Build a Compiler series by Jack Crenshaw (published between 1988 and 1995).

I ran into it in 2003 and was very impressed, but it's now 2025 and this tutorial is still being mentioned quite often in Hacker News threads. Why is that? Why does a tutorial from 35 years ago, built in Pascal and emitting Motorola 68000 assembly - technologies that are virtually unknown for the new generation of programmers - hold sway over compiler enthusiasts? I've decided to find out.

The tutorial is easily available and readable online, but just re-reading it seemed insufficient. So I've decided on meticulously translating the compilers built in it to Python and emit a more modern target - WebAssembly. It was an enjoyable process and I want to share the outcome and some insights gained along the way.

The result is this code repository. Of particular interest is the TUTORIAL.md file, which describes how each part in the original tutorial is mapped to my code. So if you want to read the original tutorial but play with code you can actually easily try on your own, feel free to follow my path.

A sample

To get a taste of the input language being compiled and the output my compiler generates, here's a sample program in the KISS language designed by Jack Crenshaw:

var X=0

 { sum from 0 to n-1 inclusive, and add to result }
 procedure addseq(n, ref result)
     var i, sum  { 0 initialized }
     while i < n
         sum = sum + i
         i = i + 1
     end
     result = result + sum
 end

 program testprog
 begin
     addseq(11, X)
 end
 .

It's from part 13 of the tutorial, so it showcases procedures along with control constructs like the while loop, and passing parameters both by value and by reference. Here's the WASM text generated by my compiler for part 13:

(module
  (memory 8)
  ;; Linear stack pointer. Used to pass parameters by ref.
  ;; Grows downwards (towards lower addresses).
  (global $__sp (mut i32) (i32.const 65536))

  (global $X (mut i32) (i32.const 0))

  (func $ADDSEQ (param $N i32) (param $RESULT i32)
    (local $I i32)
    (local $SUM i32)
    loop $loop1
      block $breakloop1
        local.get $I
        local.get $N
        i32.lt_s
        i32.eqz
        br_if $breakloop1
        local.get $SUM
        local.get $I
        i32.add
        local.set $SUM
        local.get $I
        i32.const 1
        i32.add
        local.set $I
        br $loop1
      end
    end
    local.get $RESULT
    local.get $RESULT
    i32.load
    local.get $SUM
    i32.add
    i32.store
  )

  (func $main (export "main") (result i32)
    i32.const 11
    global.get $__sp      ;; make space on stack
    i32.const 4
    i32.sub
    global.set $__sp
    global.get $__sp
    global.get $X
    i32.store
    global.get $__sp    ;; push address as parameter
    call $ADDSEQ
    ;; restore parameter X by ref
    global.get $__sp
    i32.load offset=0
    global.set $X
    ;; clean up stack for ref parameters
    global.get $__sp
    i32.const 4
    i32.add
    global.set $__sp
    global.get $X
  )
)

You'll notice that there is some trickiness in the emitted code w.r.t. handling the by-reference parameter (my previous post deals with this issue in more detail). In general, though, the emitted code is inefficient - there is close to 0 optimization applied.

Also, if you're very diligent you'll notice something odd about the global variable X - it seems to be implicitly returned by the generated main function. This is just a testing facility that makes my compiler easy to test. All the compilers are extensively tested - usually by running the generated WASM code [1] and verifying expected results.

Insights - what makes this tutorial so special?

While reading the original tutorial again, I had an opportunity to reminisce about what makes it so effective. Other than the very fluent and conversational writing style of Jack Crenshaw, I think it's a combination of two key factors:

  1. The tutorial builds a recursive-descent parser step by step, rather than giving a long preface on automata and table-based parser generators. When I first encountered it (in 2003), it was taken for granted that if you want to write a parser then lex + yacc are the way to go [2]. Following the development of a simple and clean hand-written parser was a revelation that wholly changed my approach to the subject; subsequently, hand-written recursive-descent parsers have been my go-to approach for almost 20 years now.
  2. Rather than getting stuck in front-end minutiae, the tutorial goes straight to generating working assembly code, from very early on. This was also a breath of fresh air for engineers who grew up with more traditional courses where you spend 90% of the time on parsing, type checking and other semantic analysis and often run entirely out of steam by the time code generation is taught.

To be honest, I don't think either of these are a big problem with modern resources, but back in the day the tutorial clearly hit the right nerve with many people.

What else does it teach us?

Jack Crenshaw's tutorial takes the syntax-directed translation approach, where code is emitted while parsing, without having to divide the compiler into explicit phases with IRs. As I said above, this is a fantastic approach for getting started, but in the latter parts of the tutorial it starts showing its limitations. Especially once we get to types, it becomes painfully obvious that it would be very nice if we knew the types of expressions before we generate code for them.

I don't know if this is implicated in Jack Crenshaw's abandoning the tutorial at some point after part 14, but it may very well be. He keeps writing how the emitted code is clearly sub-optimal [3] and can be improved, but IMHO it's just not that easy to improve using the syntax-directed translation strategy. With perfect hindsight vision, I would probably use Part 14 (types) as a turning point - emitting some kind of AST from the parser and then doing simple type checking and analysis on that AST prior to generating code from it.

Conclusion

All in all, the original tutorial remains a wonderfully readable introduction to building compilers. This post and the GitHub repository it describes are a modest contribution that aims to improve the experience of folks reading the original tutorial today and not willing to use obsolete technologies. As always, let me know if you run into any issues or have questions!


[1]This is done using the Python bindings to wasmtime.
[2]By the way, gcc switched from YACC to hand-written recursive-descent parsing in the 2004-2006 timeframe, and Clang has been implemented with a recursive-descent parser from the start (2007).
[3]

Concretely: when we compile subexpr1 + subexpr2 and the two sides have different types, it would be mighty nice to know that before we actually generate the code for both sub-expressions. But the syntax-directed translation approach just doesn't work that way.

To be clear: it's easy to generate working code; it's just not easy to generate optimal code without some sort of type analysis that's done before code is actually generated.

January 18, 2026 06:40 AM UTC


Armin Ronacher

Agent Psychosis: Are We Going Insane?

You can use Polecats without the Refinery and even without the Witness or Deacon. Just tell the Mayor to shut down the rig and sling work to the polecats with the message that they are to merge to main directly. Or the polecats can submit MRs and then the Mayor can merge them manually. It’s really up to you. The Refineries are useful if you have done a LOT of up-front specification work, and you have huge piles of Beads to churn through with long convoys.

Gas Town Emergency User Manual, Steve Yegge

Many of us got hit by the agent coding addiction. It feels good, we barely sleep, we build amazing things. Every once in a while that interaction involves other humans, and all of a sudden we get a reality check that maybe we overdid it. The most obvious example of this is the massive degradation in the quality of issue reports and pull requests. To a maintainer, many PRs now look like an insult to one’s time, but when one pushes back, the other person does not see what they did wrong. They thought they helped and contributed, and they get agitated when you close it down.

But it’s way worse than that. I see people develop parasocial relationships with their AIs, get heavily addicted to it, and create communities where people reinforce highly unhealthy behavior. How did we get here and what does it do to us?

I will preface this post by saying that I don’t want to call anyone out in particular, and I think I sometimes feel tendencies that I see as negative, in myself as well. I too, have thrown some vibeslop up to other people’s repositories.

Our Little Dæmons

In His Dark Materials, every human has a dæmon, a companion that is an externally visible manifestation of their soul. It lives alongside as an animal, but it talks, thinks and acts independently. I’m starting to relate our relationship with agents that have memory to those little creatures. We become dependent on them, and separation from them is painful and takes away from our new-found identity. We’re relying on these little companions to validate us and to collaborate with. But it’s not a genuine collaboration like between humans, it’s one that is completely driven by us, and the AI is just there for the ride. We can trick it to reinforce our ideas and impulses. And we act through this AI. Some people who have not programmed before, now wield tremendous powers, but all those powers are gone when their subscription hits a rate limit and their little dæmon goes to sleep.

Then, when we throw up a PR or issue to someone else, that contribution is the result of this pseudo-collaboration with the machine. When I see an AI pull request come in, or on another repository, I cannot tell how someone created it, but I can usually after a while tell when it was prompted in a way that is fundamentally different from how I do it. Yet it takes me minutes to figure this out. I have seen some coding sessions from others and it’s often done with clarity, but using slang that someone has come up with and most of all: by completely forcing the AI down a path without any real critical thinking. Particularly when you’re not familiar with how the systems are supposed to work, giving in to what the machine says and then thinking one understands what is going on creates some really bizarre outcomes at times.

But people create these weird relationships with their AI agent and once you see how some prompt their machines, you realize that it dramatically alters what comes out of it. To get good results you need to provide context, you need to make the tradeoffs, you need to use your knowledge. It’s not just a question of using the context badly, it’s also the way in which people interact with the machine. Sometimes it’s unclear instructions, sometimes it’s weird role-playing and slang, sometimes it’s just swearing and forcing the machine, sometimes it’s a weird ritualistic behavior. Some people just really ram the agent straight towards the most narrow of all paths towards a badly defined goal with little concern about the health of the codebase.

Addicted to Prompts

These dæmon relationships change not just how we work, but what we produce. You can completely give in and let the little dæmon run circles around you. You can reinforce it to run towards ill-defined (or even self-defined) goals without any supervision.

It’s one thing when newcomers fall into this dopamine loop and produce something. When Peter first got me hooked on Claude, I did not sleep. I spent two months excessively prompting the thing and wasting tokens. I ended up building and building and creating a ton of tools I did not end up using much. “You can just do things” was what was on my mind all the time but it took quite a bit longer to realize that just because you can, you might not want to. It became so easy to build something and in comparison it became much harder to actually use it or polish it. Quite a few of the tools I built I felt really great about, just to realize that I did not actually use them or they did not end up working as I thought they would.

The thing is that the dopamine hit from working with these agents is so very real. I’ve been there! You feel productive, you feel like everything is amazing, and if you hang out just with people who are into that stuff too, without any checks, you go deeper and deeper into the belief that this all makes perfect sense. You can build entire projects without any real reality check. But it’s decoupled from any external validation. For as long as nobody looks under the hood, you’re good. But when an outsider first pokes at it, it looks pretty crazy. And damn, some things look amazing. I too was blown away (and fully expected it at the same time) when Cursor’s AI-written Web Browser landed. It’s super impressive that agents were able to bootstrap a browser in a week! But holy crap, I hope nobody ever uses that thing or tries to build an actual browser out of it; at least with this generation of agents, it’s still pure slop with little oversight. It’s an impressive research and tech demo, not an approach to building software people should use. At least not yet.

There is also another side to this slop loop addiction: token consumption.

Consider how many tokens these loops actually consume. A well-prepared session with good tooling and context can be remarkably token-efficient. For instance, the entire port of MiniJinja to Go took only 2.2 million tokens. But the hands-off approaches—spinning up agents and letting them run wild—burn through tokens at staggering rates. Patterns like Ralph are particularly wasteful: you restart the loop from scratch each time, which means you lose the ability to use cached tokens or reuse context.

We should also remember that current token pricing is almost certainly subsidized. These patterns may not be economically viable for long. And those discounted coding plans we’re all on? They might not last either.

Slop Loop Cults

And then there are things like Beads and Gas Town, Steve Yegge’s agentic coding tools, which are the complete celebration of slop loops. Beads, which is basically some sort of issue tracker for agents, is 240,000 lines of code that … manages markdown files in GitHub repositories. And the code quality is abysmal.

There appears to be some competition in place to run as many of these agents in parallel with almost no quality control in some circles. And to then use agents to try to create documentation artifacts to regain some confidence of what is actually going on. Except those documents themselves read like slop.

Looking at Gas Town (and Beads) from the outside, it looks like a Mad Max cult. What are polecats, refineries, mayors, beads, and convoys doing in an agentic coding system? If the maintainer is in the loop, and the whole community is in on this mad ride, then everyone and their dæmons just throw more slop up. As an external observer, the whole project looks like an insane psychosis or a complete mad art project. Except, it’s real? Or is it not? Apparently one reason for slowdown in Gas Town is contention on figuring out the version of Beads, which takes 7 subprocess spawns. Or running the doctor command, which times out completely. Beads keeps growing and growing in complexity, and people who are using it are realizing that it’s almost impossible to uninstall. And the two might not even work well together, even though one apparently depends on the other.

I don’t want to pick on Gas Town or these projects, but they are just the most visible examples of this in-group behavior right now. But you can see similar things in some of the AI builder circles on Discord and X where people hype each other up with their creations, without much critical thinking and sanity checking of what happens under the hood.

Asymmetry and the Maintainer’s Burden

It takes you a minute of prompting, and a few minutes of waiting, for code to come out of it. But honestly reviewing a pull request takes many times longer than that. The asymmetry is completely brutal. Shooting up bad code is rude because you completely disregard the time of the maintainer. But everybody else is also creating AI-generated code; maybe theirs just passed the bar of being good. So how can you possibly tell as a maintainer when it all looks the same? And as the person writing the issue or the PR, you felt good about it. Yet what you get back is frustration and rejection.

I’m not sure how we will go ahead here, but it’s pretty clear that in projects that don’t submit themselves to the slop loop, it’s going to be a nightmare to deal with all the AI-generated noise.

Even for projects that are fully AI-generated but are setting some standard for contributions, some folks now prefer actually just getting the prompts over getting the actual code. Because then it’s clearer what the person actually intended. There is more trust in running the agent oneself than having other people do it.

Is Agent Psychosis Real?

Which really makes me wonder: am I missing something here? Is this where we are going? Am I just not ready for this new world? Are we all collectively getting insane?

Particularly if you want to opt out of this craziness right now, it’s getting quite hard. Some projects no longer accept human contributions until they have vetted the people completely. Others are starting to require that you submit prompts alongside your code, or just the prompts alone.

I am a maintainer who uses AI myself, and I know others who do. We’re not luddites and we’re definitely not anti-AI. But we’re also frustrated when we encounter AI slop on issue and pull request trackers. Every day brings more PRs that took someone a minute to generate and take an hour to review.

There is a dire need to say no now. But when one does, the contributor is genuinely confused: “Why are you being so negative? I was trying to help.” They were trying to help. Their dæmon told them it was good.

Maybe the answer is that we need better tools — better ways to signal quality, better ways to share context, better ways to make the AI’s involvement visible and reviewable. Maybe the culture will self-correct as people hit walls. Maybe this is just the awkward transition phase before we figure out new norms.

Or maybe some of us are genuinely losing the plot, and we won’t know which camp we’re in until we look back. All I know is that when I watch someone at 3am, running their tenth parallel agent session, telling me they’ve never been more productive — in that moment I don’t see productivity. I see someone who might need to step away from the machine for a bit. And I wonder how often that someone is me.

Two things are both true to me right now: AI agents are amazing and a huge productivity boost. They are also massive slop machines if you turn off your brain and let go completely.

January 18, 2026 12:00 AM UTC

January 16, 2026


PyCon

Building the Future with Python? Apply for Free Booth Space on Startup Row at PyCon US 2026



Consult just about any guide about how to build a tech startup and one of the very first pieces of advice you’ll be given is: Talk to Your Customers. If your target market just so happens to be Python-fluent developers, data scientists, researchers, students, and open-source software enthusiasts, there’s probably no better place than PyCon US to share your startup’s products and services with the Python community.

If you’re a founder of an early-stage startup that’s building something cool with Python and want to apply for free (yes, free) booth space, conference passes, and (optionally) a table at the PyCon US Job Fair for your team at PyCon US 2026 in lovely Long Beach, California this upcoming May, we have some great news for you: Applications for booth space on Startup Row are open, but not for long…

Applications close Friday, January 30, 2026. You’ll hear back with an acceptance decision from us by mid-February, so you’ll have plenty of time to book travel and get your booth materials together in time for the conference.

TL;DR: How/where to Apply. For all the action-oriented types who want to skip the rest of this post and just get to the point, here’s the Startup Row page again (where you can find eligibility criteria, etc.) and a direct link to the application form (for which you’ll need to be logged in or create an account to access). Good luck! We look forward to reviewing your application, and hope to see you at PyCon US 2026.

What Startup Row Companies Receive

Since 2011, organizers of PyCon US have set aside a row of booths for early-stage startups, straightforwardly named: Startup Row. The goal is to give early-stage companies access to the best of what PyCon US has to offer.

At no cost to them, Startup Row companies receive:
  • Booth space on Startup Row
  • Two included conference passes, with additional passes available for your team at a discount
  • Optionally, a table at the PyCon US Job Fair

The only catch? If you’re granted a spot on Startup Row, as part of the onboarding process, PyCon US organizers ask for a fully refundable $400 deposit to discourage no-shows. Teams also cover their own transportation, lodging, and booth materials (banners, swag, table coverings, etc.). Startup Row organizers will partner with your team to make sure everything runs smoothly. After the conference, PyCon US refunds deposits to startups that successfully attended.

If your company is building something cool with Python, it’s hard to beat PyCon US for sharing your work and meeting the Python software community. Startup Row is where some companies launch publicly, where others find their earliest customers and contributors, and where attendees can discover exciting, meaningful job opportunities.

What kinds of companies get a spot on Startup Row?

Python is a flexible language and has applications up and down the stack.

Over the years, Startup Row has featured software and hardware companies, consumer and enterprise offerings, open-source and proprietary codebases, and teams from a surprisingly broad range of industries—from familiar categories like developer tools and ML frameworks to foundation model developers and the occasional wonderfully weird idea (think: an e-ink portable typewriter with cloud sync, or an online wedding-planning platform).

Want recent examples? Take a look at the PyCon US blog announcements for the 2025, 2024, 2023, and 2022 batches.

When scoring applications, the selection committee is encouraged to weigh:
  • Market upside: could this be a big business?
  • Problem/solution fit: does the product truly address the stated need?
  • Team strength: does the founding team have the credibility and capability to execute?
  • “X factor”: would appearing on Startup Row materially accelerate outcomes for the company and/or the Python community?
If you can make a credible case for any one of those points, your startup stands a chance of getting featured on Startup Row at PyCon US 2026.

Who do I contact with questions about Startup Row at PyCon US 2026?

For specific Startup Row-related questions between now and the application deadline, reach out to pycon-startups@python.org.

January 16, 2026 02:43 PM UTC


Python Morsels

Self-concatenation

Strings and other sequences can be multiplied by numbers to self-concatenate them.

Table of contents

  1. Concatenating strings to numbers doesn't work
  2. Multiplying strings by numbers
  3. Self-concatenation only works with integers
  4. Sequences usually support self-concatenation
  5. Practical uses of self-concatenation
  6. Self-concatenation does not copy
  7. Don't self-concatenate lists of mutable items
  8. When to use self-concatenation

Concatenating strings to numbers doesn't work

You can't use the plus sign (+) between a string and a number in Python:

>>> prefix = "year: "
>>> year = 1999
>>> prefix + year
Traceback (most recent call last):
  File "<python-input-4>", line 1, in <module>
    prefix + year
    ~~~~~~~^~~~~~
TypeError: can only concatenate str (not "int") to str

You can use the plus sign to add two numbers:

>>> year + 1
2000

Or to concatenate two strings:

>>> prefix + str(year)
'year: 1999'

But it doesn't work between strings and numbers.

More on that: Fixing TypeError: can only concatenate str (not "int") to str.

Multiplying strings by numbers

Interestingly, you can multiply a …
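Here's a minimal sketch of the behavior this section is introducing (standard Python sequence repetition, not code taken from the full article):

>>> "abc" * 3
'abcabcabc'
>>> 2 * "-="
'-=-='
>>> [0] * 4
[0, 0, 0, 0]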

Read the full article: https://www.pythonmorsels.com/self-concatenation/

January 16, 2026 02:15 PM UTC


Real Python

The Real Python Podcast – Episode #280: Considering Fast and Slow in Python Programming

How often have you heard about the speed of Python? What's actually being measured, where are the bottlenecks (development time or run time), and which matters more for productivity? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

January 16, 2026 12:00 PM UTC


Daniel Roy Greenfeld

Writing tools to download everything

Over the years, Audrey and I have accumulated photos across a variety of services. Flickr, SmugMug, and others all have chunks of our memories sitting on their servers. Some of these services we haven't touched in years, others we pay for but rarely use. It was time to bring everything home.

Why Bother?

Two reasons pushed me to finally tackle this.

First, money. Subscriptions add up. Paying for storage on services we barely use felt wasteful. Using them as a backup made even less sense, since there are services that are cheaper and easier to use for that purpose, like Backblaze.

Second, simplicity. Having photos scattered across multiple services means hunting through different interfaces when looking for a specific memory. Consolidating everything into one place makes our photo library actually usable.

Using Claude to Write a Downloader

I decided to start with SmugMug since that had the largest collection. I could have written this script myself. I've done plenty of API work over the years. But I'm busy, and this felt like a perfect use case for AI assistance.

My approach was straightforward:

  1. Wrote a specification for a SmugMug downloader. I linked to the docs for the service, then told it to make a CLI for downloading things off that service. For the CLI I insisted on typer, but otherwise I didn't specify dependencies.

  2. Told Claude to generate code based on the spec. I provided the specification and let Claude produce a working Python script.

  3. Tested by running the scripts against real data. I started with small batches to verify the downloads worked correctly. Claude got everything right when it came to downloads on the first go, which was impressive.

  4. Adjusted for volume. We had over 5,000 files on SmugMug. Downloading everything at once took longer than I expected. I asked Claude to track files so that if the script was interrupted it could resume where it left off. Claude kept messing this up, and after the 5th or 6th attempt I gave up trying to use Claude to write this part.

I Wrote Some Code

I wrote a super simple image ID cache using a plaintext file for storage. It was effective and worked on the first go. Sometimes it's easier to just write the code yourself than to try to get an AI to do it for you.
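For illustration, here's a minimal sketch of what such a plaintext ID cache could look like; the file name and helper functions are hypothetical, not taken from the actual project:

from pathlib import Path

# Hypothetical cache location; the real script may differ.
CACHE_FILE = Path("downloaded_ids.txt")

def load_downloaded_ids() -> set[str]:
    """Return the set of image IDs already recorded as downloaded."""
    if CACHE_FILE.exists():
        return set(CACHE_FILE.read_text().split())
    return set()

def mark_downloaded(image_id: str) -> None:
    """Append an image ID so an interrupted run can resume later."""
    with CACHE_FILE.open("a", encoding="utf-8") as f:
        f.write(image_id + "\n")

Skipping the IDs returned by load_downloaded_ids() before each download is all the resume logic a script like this needs.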

The SmugMug Downloader

The project is here at SmugMug downloader. It authenticates, enumerates all albums, and downloads every photo while preserving the album structure. Nothing fancy, just practical.

I'll be working on the Flickr downloader soon, following the same pattern. There are a few other services on the list too; I'm scanning our bank statements to see what other accounts we've let linger for too long.

Was It Worth It?

Absolutely. What would have taken me a day of focused coding took an hour of iterating with Claude. Our photos are off Smugmug and we're canceling a subscription we no longer need. I think this is what they mean by "vibe engineering".

Summary

These are files which in some cases we thought we lost. Or had forgotten. So the emotional and financial investment in a vibe engineered effort was low. If this were something that was touching our finances or wedding/baby photos I would have been much more cautious. But for now, this is a fun experiment in using AI to handle the mundane parts of coding so I can focus on more critical tasks.

January 16, 2026 11:22 AM UTC

January 15, 2026


Django Weblog

DSF member of the month - Omar Abou Mrad

For January 2026, we welcome Omar Abou Mrad as our DSF member of the month! ⭐

Omar sitting on a gaming chair

Omar is a helper in the Django Discord server; he has helped and continues to help folks around the world in their Django journey! He is part of the Discord Staff Team. He has been a DSF member since June 2024.

You can learn more about Omar by visiting Omar's website and his GitHub Profile.

Let’s spend some time getting to know Omar better!

Can you tell us a little about yourself? (hobbies, education, etc)

Hello! My name is Omar Abou Mrad, a 47-year-old husband to a beautiful wife and father of three teenage boys. I’m from Lebanon (Middle East), have a Computer Science background, and currently work as a Technical Lead on a day-to-day basis. I’m mostly high on life and quite enthusiastic about technology, sports, food, and much more!

I love learning new things and I love helping people. Most of my friends, acquaintances, and generally people online know me as Xterm.

I already have an idea, but where does your nickname "Xterm" come from?

xterm is simply the terminal emulator for the X Window System. I first encountered it back in the mid-to-late 90s when I started using the Red Hat 2.0 operating system. Things weren't easy to set up back then, and the terminal was where you spent most of your time.

Nevertheless, I had to wait months (or was it years?) on end for the nickname "Xterm" to expire on Freenode back in the mid 2000s, before I snatched it up and registered it.

At last, I did! Xterm, c'est moi! >:-]

How did you start using Django?

We landed on Django (~1.1) fairly early at work, as we wanted to use Python with an ORM while building websites for different clients. The real challenge came when we took on a project responsible for managing operations, traceability, and reporting at a pipe-manufacturing company.

By that time, most of the team was already well-versed in Django (~1.6), and we went head-on into building one of the most complicated applications we had done to date, everything from the back office to operators’ devices connected to a Django-powered system.

Since then, most of our projects have been built with Django at the core.

We love Django.

What other frameworks do you know, and is there anything you would like to have in Django if you had magical powers?

I've used a multitude of frameworks professionally before Django, primarily in Java (EE, SeamFramework, ...) and .NET (ASP.NET, ASP.NET MVC) as well as sampling different frameworks for educational purposes.

I suppose if I could snap my fingers and get things to exist in Django, it wouldn't be something new so much as official support for:

But since we're finger-snapping things into existence, it would be awesome if every component of Django (core, orm, templates, forms, "all") could be installed separately, in such a way that you could cherry-pick what you want to install, so we could dismiss those pesky (cough) arguments (cough) about Django being bulky.

What projects are you working on now?

I'm involved in numerous projects currently at work, most of which are based on Django, but the one I'm working on right now consists of doing integrations and synchronizations with SAP HANA for different modules, in different applications.

It's quite the challenge, which makes it twice the fun.

Which Django libraries are your favorite (core or 3rd party)?

I would like to mention that I'm extremely thankful for any and all core and 3rd Party libraries out there!

What are the top three things in Django that you like?

In no particular order:

You are helping a lot of folks in the Django Discord. What do you think it takes to be a good helper?

First and foremost, I want to highlight what an excellent staff team we have on the Official Django Discord. While I don’t feel I hold a candle to what the rest of the team does daily, we complement each other very well.

To me, being a good helper means:

DryORM is really appreciated! What motivated you to create the project?

Imagine you're having a discussion with a djangonaut friend or colleague about some data modeling, answering a question or concern they have, reviewing some ORM code in a repository on GitHub, helping someone on IRC, Slack, Discord, or the forums... or you simply want to run a quick ORM experiment without disturbing your current project. The most common way people deal with this is by having a throw-away project: they add models to it, generate migrations, open the shell, run the queries they want, reset the db if needed, copy the models and the shell code into some code sharing site, then send the link to the recipient. Not to mention needing to store the code they experiment with in separate scripts or management commands so they can have it as a reference for later.

I loved the query transparency DDT gave me, I loved experimenting in the shell with shell_plus --print-sql, and I needed to share things online. All of this was cumbersome, and that's when DryORM came into existence, simplifying the entire process into a single code snippet.

The need grew massively when I became a helper on the Official Django Discord and noticed we (Staff) could greatly benefit from having this tool, not only to assist others but to share knowledge among ourselves. While I never truly wanted to go public with it, I was encouraged by my peers on Discord to share it, and since then they've been extremely supportive and have assisted in its evolution.

The unexpected thing, however, was for DryORM to be used in the official ticket tracker, the forums, and even in GitHub PRs! Ever since, I've decided to put a lot of focus and effort into features that can support Django contributors in their quest to evolve Django.

So here's a shout-out to everyone that uses DryORM!

I believe you are the main maintainer. Do you need help with anything?

Yes, I am and thank you! I think the application has reached a point where new feature releases will slow down, so it’s entering more of a maintenance phase now, which I can manage.

Hopefully soon we'll have the Discord bot executing ORM snippets :-]

What are your hobbies or what do you do when you’re not working?

Oh wow, not working, what's that like! :-]

Early mornings are usually reserved for weight training. Followed by a long, full workday. Then escorting and watching the kids at practice. Evenings are spent with my wife. Late nights are either light gaming or some tech-related reading and prototyping.

Weekends look very similar, just with many more kids sports matches!

Is there anything else you’d like to say?

I want to thank everyone who helped make Django what it is today.

If you’re reading this and aren’t yet part of the Discord community, I invite you to join us! You’ll find many like-minded people to discuss your interests with. Whether you’re there to help, get help, or just hang around, it’s a fun place to be.


Thank you for doing the interview, Omar!

January 15, 2026 02:14 PM UTC

January 14, 2026


Mike Driscoll

How to Type Hint a Decorator in Python

Decorators are a concept that can trip up new Python users. You may find this definition helpful: A decorator is a function that takes in another function and adds new functionality to it without modifying the original function.

Functions can be used just like any other data type in Python. A function can be passed to a function or returned from a function, just like a string or integer.

If you have jumped on the type-hinting bandwagon, you will probably want to add type hints to your decorators. That has been difficult until fairly recently.

Let’s see how to type hint a decorator!

Type Hinting a Decorator the Wrong Way

You might think that you can use a TypeVar to type hint a decorator. You will try that first.

Here’s an example:

from functools import wraps
from typing import Any, Callable, TypeVar


Generic_function = TypeVar("Generic_function", bound=Callable[..., Any])

def info(func: Generic_function) -> Generic_function:
    @wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        print('Function name: ' + func.__name__)
        print('Function docstring: ' + str(func.__doc__))
        result = func(*args, **kwargs)
        return result
    return wrapper

@info
def doubler(number: int) -> int:
    """Doubles the number passed to it"""
    return number * 2

print(doubler(4))

If you run mypy --strict info_decorator.py you will get the following output:

info_decorator.py:14: error: Incompatible return value type (got "_Wrapped[[VarArg(Any), KwArg(Any)], Any, [VarArg(Any), KwArg(Any)], Any]", expected "Generic_function")  [return-value]
Found 1 error in 1 file (checked 1 source file)

That’s a confusing error! Feel free to search for an answer.

The answers that you find will probably vary from just ignoring the function (i.e. not type hinting it at all) to using something called a ParamSpec.

Let’s try that next!

Using a ParamSpec for Type Hinting

The ParamSpec is a class in Python’s typing module. Here’s what the docstring says about ParamSpec:

class ParamSpec(object):
  """ Parameter specification variable.
  
  The preferred way to construct a parameter specification is via the
  dedicated syntax for generic functions, classes, and type aliases,
  where the use of '**' creates a parameter specification::
  
      type IntFunc[**P] = Callable[P, int]
  
  For compatibility with Python 3.11 and earlier, ParamSpec objects
  can also be created as follows::
  
      P = ParamSpec('P')
  
  Parameter specification variables exist primarily for the benefit of
  static type checkers.  They are used to forward the parameter types of
  one callable to another callable, a pattern commonly found in
  higher-order functions and decorators.  They are only valid when used
  in ``Concatenate``, or as the first argument to ``Callable``, or as
  parameters for user-defined Generics. See class Generic for more
  information on generic types.
  
  An example for annotating a decorator::
  
      def add_logging[**P, T](f: Callable[P, T]) -> Callable[P, T]:
          '''A type-safe decorator to add logging to a function.'''
          def inner(*args: P.args, **kwargs: P.kwargs) -> T:
              logging.info(f'{f.__name__} was called')
              return f(*args, **kwargs)
          return inner
  
      @add_logging
      def add_two(x: float, y: float) -> float:
          '''Add two numbers together.'''
          return x + y
  
  Parameter specification variables can be introspected. e.g.::
  
      >>> P = ParamSpec("P")
      >>> P.__name__
      'P'
  
  Note that only parameter specification variables defined in the global
  scope can be pickled.
   """

In short, you use a ParamSpec to construct a parameter specification for a generic function, class, or type alias.

To see what that means in code, you can update the previous decorator to look like this: 

from functools import wraps
from typing import Callable, ParamSpec, TypeVar


P = ParamSpec("P")
R = TypeVar("R")

def info(func: Callable[P, R]) -> Callable[P, R]:
    @wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print('Function name: ' + func.__name__)
        print('Function docstring: ' + str(func.__doc__))
        return func(*args, **kwargs)
    return wrapper

@info
def doubler(number: int) -> int:
    """Doubles the number passed to it"""
    return number * 2

print(doubler(4))

Here, you create a ParamSpec and a TypeVar. You tell the decorator that it takes in a Callable with a generic set of parameters (P), and you use TypeVar (R) to specify a generic return type.

If you run mypy on this updated code, it will pass! Good job!

What About PEP 695?

PEP 695 adds a new wrinkle to adding type hints to decorators by updating the parameter specification syntax in Python 3.12.

The main thrust of this PEP is to “simplify” the way you specify type parameters within a generic class, function, or type alias.

In a lot of ways, it does clean up the code, as you no longer need to import ParamSpec or TypeVar when using this new syntax. Instead, it feels almost magical.

Here’s the updated code:

from functools import wraps
from typing import Callable


def info[**P, R](func: Callable[P, R]) -> Callable[P, R]:
    @wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print('Function name: ' + func.__name__)
        print('Function docstring: ' + str(func.__doc__))
        return func(*args, **kwargs)
    return wrapper

@info
def doubler(number: int) -> int:
    """Doubles the number passed to it"""
    return number * 2

print(doubler(4))

Notice the square brackets right after the function name. They declare your ParamSpec implicitly. The "R" is again the generic return type. The rest of the code is the same as before.

When you run mypy against this version of the type hinted decorator, you will see that it passes happily.

Wrapping Up

Type hinting can still be a hairy subject, but the newer the Python version that you use, the better the type hinting capabilities are.

Of course, since Python itself doesn't enforce type hinting, you can just skip all this too. But if your employer likes type hinting, hopefully this article will help you out.

Related Reading

The post How to Type Hint a Decorator in Python appeared first on Mouse Vs Python.

January 14, 2026 05:04 PM UTC


Real Python

How to Create a Django Project

Before you can start building your Django web application, you need to set up your Django project. In this guide you’ll learn how to create a new Django project in four straightforward steps and only six commands:

Step | Description | Command
1a | Set up a virtual environment | python -m venv .venv
1b | Activate the virtual environment | source .venv/bin/activate
2a | Install Django | python -m pip install django
2b | Pin your dependencies | python -m pip freeze > requirements.txt
3 | Set up a Django project | django-admin startproject <projectname>
4 | Start a Django app | python manage.py startapp <appname>

The tutorial focuses on the initial steps you’ll always need to start a new web application.

Use this tutorial as your go-to reference until you’ve built so many projects that the necessary commands become second nature. Until then, follow the steps outlined below and in the command reference, or download the PDF cheatsheet as a printable reference:

Free Bonus: Click here to download the Django Project cheat sheet that assembles all important commands and tips on one page that’s easy to print.

There are also a few exercises throughout the tutorial to help reinforce what you’re learning, and you can test your knowledge in the associated quiz:

Take the Quiz: Test your knowledge with our interactive “How to Create a Django Project” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

How to Create a Django Project

Check your Django setup skills. Install safely and pin requirements, create a project and an app. Start building your first site.

Get Your Code: Click here to download the free sample code that shows you how to create a Django project.

Prerequisites

Before you start creating your Django project, make sure you have the right tools and knowledge in place. This tutorial assumes you’re comfortable working with the command line, but you don’t need to be an expert. Here’s what you’ll need to get started:

You don’t need any prior Django experience to complete this guide. However, to build functionality beyond the basic scaffolding, you’ll need to know Python basics and at least some Django.

Step 1: Prepare Your Environment

When you’re ready to start your new Django web application, create a new folder and navigate into it. In this folder, you’ll set up a new virtual environment using your terminal:

Windows PowerShell
PS> python -m venv .venv
Shell
$ python3 -m venv .venv

This command sets up a new virtual environment named .venv in your current working directory. Once the process is complete, you also need to activate the virtual environment:

Windows PowerShell
PS> .venv\Scripts\activate
Shell
$ source .venv/bin/activate

If the activation was successful, then you’ll see the name of your virtual environment, (.venv), at the beginning of your command prompt. This means that your environment setup is complete.

You can learn more about how to work with virtual environments in Python, and how to perfect your Python development setup, but for your Django setup, you have all you need. You can continue with installing the django package.

Step 2: Install Django and Pin Your Dependencies

Read the full article at https://realpython.com/django-setup/ »



January 14, 2026 02:00 PM UTC

Quiz: How to Create a Django Project

In this quiz, you’ll test your understanding of creating a Django project.

By working through this quiz, you’ll revisit how to create and activate a virtual environment, install Django and pin your dependencies, start a Django project, and start a Django app. You will also see how isolating dependencies helps others reproduce your setup.

To revisit and keep learning, watch the video course on How to Set Up a Django Project.



January 14, 2026 12:00 PM UTC


Armin Ronacher

Porting MiniJinja to Go With an Agent

Turns out you can just port things now. I already attempted this experiment in the summer, but it turned out to be a bit too much for what I had time for. However, things have advanced since. Yesterday I ported MiniJinja (a Rust Jinja2 template engine) to native Go, and I used an agent to do pretty much all of the work. In fact, I barely did anything beyond giving some high-level guidance on how I thought it could be accomplished.

In total I probably spent around 45 minutes actively with it. It worked for around 3 hours while I was watching, then another 7 hours alone. This post is a recollection of what happened and what I learned from it.

All prompting was done by voice using pi, starting with Opus 4.5 and switching to GPT-5.2 Codex for the long tail of test fixing.

What is MiniJinja

MiniJinja is a re-implementation of Jinja2 for Rust. I originally wrote it because I wanted to do an infrastructure automation project in Rust and Jinja was popular for that. The original project didn't go anywhere, but MiniJinja itself continued being useful for both me and other users.

The way MiniJinja is tested is with snapshot tests: inputs and expected outputs, using insta to verify they match. These snapshot tests were what I wanted to use to validate the Go port.

Test-Driven Porting

My initial prompt asked the agent to figure out how to validate the port. Through that conversation, the agent and I aligned on a path: reuse the existing Rust snapshot tests and port incrementally (lexer -> parser -> runtime).

This meant the agent built Go-side tooling to:

This resulted in a pretty good harness with a tight feedback loop. The agent had a clear goal (make everything pass) and a progression (lexer -> parser -> runtime). The tight feedback loop mattered particularly at the end where it was about getting details right. Every missing behavior had one or more failing snapshots.

Branching in Pi

I used Pi's branching feature to structure the session into phases. I rewound to earlier parts of the session and used the branch-switch feature to automatically inform the agent of what it had already done. This is similar to compaction, but Pi shows me what it puts into the context. When Pi switches branches it does two things:

  1. It stays in the same session so I can navigate around, but it makes a new branch off an earlier message.
  2. When switching, it adds a summary of what it did as a priming message into where it branched off. I found this quite helpful to avoid the agent doing vision quests from scratch to figure out how far it had already gotten.

Without switching branches, I would probably just make new sessions and have more plan files lying around or use something like Amp’s handoff feature which also allows the agent to consult earlier conversations if it needs more information.

First Signs of Divergence

What was interesting is that the agent went from literal porting to behavioral porting quite quickly. I didn’t steer it away from this as long as the behavior aligned. I let it do this for a few reasons. First, the code base isn’t that large, so I felt I could make adjustments at the end if needed. Letting the agent continue with what was already working felt like the right strategy. Second, it was aligning to idiomatic Go much better this way.

For instance, on the runtime it implemented a tree-walking interpreter (not a bytecode interpreter like Rust) and it decided to use Go’s reflection for the value type. I didn’t tell it to do either of these things, but they made more sense than replicating my Rust interpreter design, which was partly motivated by not having a garbage collector or runtime type information.

Where I Had to Push Back

On the other hand, the agent made some changes while making tests pass that I disagreed with. It completely gave up on all the “must fail” tests because the error messages were impossible to replicate perfectly given the runtime differences. So I had to steer it towards fuzzy matching instead.

It also wanted to regress behavior I wanted to retain (e.g., exact HTML escaping semantics, or that range must return an iterator). I think if I hadn’t steered it there, it might not have made it to completion without going down problematic paths, or I would have lost confidence in the result.

Grinding to Full Coverage

Once the major semantic mismatches were fixed, the remaining work was filling in all missing pieces: missing filters and test functions, loop extras, macros, call blocks, etc. Since I wanted to go to bed, I switched to Codex 5.2 and queued up a few “continue making all tests pass if they are not passing yet” prompts, then let it work through compaction. I felt confident enough that the agent could make the rest of the tests pass without guidance once it had the basics covered.

This phase ran without supervision overnight.

Final Cleanup

After functional convergence, I asked the agent to document internal functions and reorganize (like moving filters to a separate file). I also asked it to document all functions and filters like in the Rust code base. This was also when I set up CI, release processes, and talked through what was created to come up with some finalizing touches before merging.

Parting Thoughts

There are a few things I find interesting here.

First: these types of ports are possible now. I know porting was already possible for many months, but it required much more attention. This changes some dynamics. I feel less like technology choices are constrained by ecosystem lock-in. Sure, porting NumPy to Go would be a more involved undertaking, and getting it competitive even more so (years of optimizations in there). But still, it feels like many more libraries can be used now.

Second: for me, the value is shifting from the code to the tests and documentation. A good test suite might actually be worth more than the code. That said, this isn’t an argument for keeping tests secret — generating tests with good coverage is also getting easier. However, for keeping code bases in different languages in sync, you need to agree on shared tests, otherwise divergence is inevitable.

Lastly, there’s the social dynamic. Once, having people port your code to other languages was something to take pride in. It was a sign of accomplishment — a project was “cool enough” that someone put time into making it available elsewhere. With agents, it doesn’t invoke the same feelings. Will McGugan also called out this change.

Session Stats

Lastly, some boring stats for the main session:

This did not count the addition of docstrings and smaller fixups.

January 14, 2026 12:00 AM UTC

January 13, 2026


Gaël Varoquaux

Stepping up as probabl’s CSO to supercharge scikit-learn and its ecosystem


Probabl's get-together, in fall 2025

I’m thrilled to announce that I’m stepping up as Probabl’s CSO (Chief Science Officer) to supercharge scikit-learn and its ecosystem, pursuing my dreams of tools that help go from data to impact.

Scikit-learn, a central tool

Scikit-learn is central to data scientists' work: it is the most used machine-learning package. It has grown over more than a decade, supported by volunteers' time, donations, and grant funding, with Inria playing a central role.

Scikit-learn download numbers; reproduce and explore on clickpy

And the usage numbers keep going up…

Scikit-learn keeps growing because it enables crucial applications: machine learning that can be easily adapted to a given application. This type of AI does not make the headlines, but it is central to the value brought by data science. It is used across the board to extract insights from data and automate business-specific processes, thus ensuring the function and efficiency of a wide variety of activities.


And scikit-learn is quietly but steadily advancing. The recent releases bring progress in all directions: computational foundations (the array API enabling GPU support), user interface (rich HTML displays), new models (e.g. HDBSCAN, temperature-scaling recalibration, ...), and, as always, algorithmic improvements (release 1.8 brought marked speed-ups to linear models and to trees with MAE).

A new opportunity to boost scikit-learn and its ecosystem

Probabl recently raised a beautiful seed funding from investors who really understand the value and perspective of scikit-learn. We have a unique opportunity to accelerate scikit-learn’s development. Our analysis is that enterprises need dedicated tooling and partners to build best on scikit-learn, and we’re hard at work to provide this.

Two-thirds of Probabl's founders are scikit-learn contributors, and we have been investing in all aspects of scikit-learn: features, releases, communication, documentation, and training. In addition, part of scikit-learn's success has always been to nurture an ecosystem, for instance via its simple API that has become a standard. Thus Probabl is consolidating not only scikit-learn but also this ecosystem: the skops project, to put scikit-learn based models in production; the skrub project, which facilitates data preparation; the young skore project, to track data science; fairlearn, to help avoid machine learning that discriminates; and more upstream projects, such as joblib for parallel computing.

My obsession as Probabl CSO: serving the data scientists

As CSO (Chief Science Officer) at Probabl, my role is to nourish our development strategy with an understanding of machine learning, data science, and open source. Making sure that scikit-learn and its ecosystem are enterprise-ready will bring resources for scikit-learn's sustainability, enabling its ecosystem to grow into a standard-setting platform for the industry that continues to serve data scientists. This mission will require consolidating the existing tools and patterns, and inventing new ones.


Probabl is in a unique position for this endeavor: Our core is an amazing team of engineers with deep knowledge of data science. Working directly with businesses gives us an acute understanding of where the ecosystem can be improved. On this topic, I also profoundly enjoy working with people who have a different DNA than the historical DNA of scikit-learn, with product research, marketing, and business mindsets. I believe that the union of our different cultures will make the scikit-learn ecosystem better.

Beyond the Probabl team, we have an amazing community, with a broader group of scikit-learn contributors who do a terrific job bringing together what makes scikit-learn so versatile, and a deep ecosystem of Python data tools enriched by so many different actors. I'm deeply grateful to the many scikit-learn and pydata contributors. At Probabl, we are very attuned to enabling the open-source contributor community. Such a community is what enables a single tool, scikit-learn, to serve a long tail of diverse usages.

January 13, 2026 11:00 PM UTC


PyCoder’s Weekly

Issue #717: Unit Testing Performance, Cursor, Recursive match, and More (Jan. 13, 2026)

#717 – JANUARY 13, 2026
View in Browser »



Unit Testing Your Code’s Performance

Testing your code is important, not just for correctness but also for performance. One approach is to check how performance degrades as data sizes go up, also known as Big-O scaling.
ITAMAR TURNER-TRAURING

Tips for Using the AI Coding Editor Cursor

Learn Cursor fast: AI-powered coding with agents, project-aware chat, inline edits, and VS Code workflow – ship smarter, sooner.
REAL PYTHON course

AI Code Review With Comments You’ll Actually Implement


Unblocked is the AI code review that surfaces real issues and meaningful feedback instead of flooding your PRs with stylistic nitpicks and low-value comments. “Unblocked made me reconsider my AI fatigue.” - Senior developer, Clio. Try now for Free →
UNBLOCKED sponsor

Recursive Structural Pattern Matching

Learn how to use structural pattern matching (the match statement) to work recursively through tree-like structures.
RODRIGO GIRÃO SERRÃO

PEP 822: Dedented Multiline String (d-String) (Draft)

PYTHON.ORG

PEP 820: PySlot: Unified Slot System for the C API (Draft)

PYTHON.ORG

PEP 819: JSON Package Metadata (Draft)

PYTHON.ORG

Django Bugfix Release: 5.2.10, 6.0.1

DJANGO SOFTWARE FOUNDATION

Articles & Tutorials

Coding Python With Confidence: Live Course Participants

Are you looking for that solid foundation to begin your Python journey? Would the accountability of scheduled group classes help you get through the basics and start building something? This week, two members of the Python for Beginners live course discuss their experiences.
REAL PYTHON podcast

Regex: Searching for the Tiger

Python’s re module is a robust toolset for writing regular expressions, but its behavior often deviates from other engines. Understanding the nuances of the interpreter and the Unicode standard is essential for writing predictable patterns.
SUBSTACK.COM • Shared by Vivis Dev

The Ultimate Guide to Docker Build Cache


Docker builds feel slow because cache invalidation is working against you. Depot explains how BuildKit’s layer caching works, when to use bind mounts vs cache mounts, and how to optimize your Dockerfile so Gradle dependencies don’t rebuild on every code change →
DEPOT sponsor

How We Made Python’s Packaging Library 3x Faster

Underneath pip, and many other packaging tools, is the packaging library which deals with version numbers and other associated markers. Recent work on the library has shown significant speed-up and this post talks about how it was done.
HENRY SCHREINER

Django Quiz 2025

Last month, Adam held another quiz at the December edition of Django London. This is an annual tradition at the meetup; now you can take it yourself or just skim the answers.
ADAM JOHNSON

Live Python Courses: Already 50% Sold for 2026

Real Python’s instructor-led cohorts are filling up. Python for Beginners builds your foundation right the first time. Intermediate Python Deep Dive covers decorators, OOP, and production patterns with real-time expert feedback. Grab a seat before they’re gone at realpython.com/live →
REAL PYTHON sponsor

A Different Way to Think About Python API Clients

Paul is frustrated with how clients interact with APIs in Python, so he’s proposing a new approach inspired by the many decorator-based API server libraries.
PAULWRITES.SOFTWARE • Shared by Paul Hallett

Learn From 2025’s Most Popular Python Tutorials and Courses

Pick from the best Python tutorials and courses of 2025. Revisit core skills, 3.14 updates, AI coding tools, and project walkthroughs. Kickstart your 2026!
REAL PYTHON

Debugging With F-Strings

If you’re debugging Python code with print calls, consider using f-strings with self-documenting expressions to make your debugging a little bit easier.
TREY HUNNER
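For example, a self-documenting expression puts the variable name in the output (an illustrative snippet, not from the article itself):

>>> answer = 42
>>> print(f"{answer=}")
answer=42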

How to Switch to ty From Mypy

The folks at Astral have created a type checker known as “ty”. This post describes how to move from Mypy to ty, including in your GitHub Actions.
MIKE DRISCOLL

Recent Optimizations in Python’s Reference Counting

This article highlights some of the many optimizations to reference counting that have occurred in recent CPython releases.
ARTEM GOLUBIN

Projects & Code

yastrider: Defensive String Cleansing and Tidying

GITHUB.COM/BARRANK

gazetteer: Offline Reverse Geocoding Library

GITHUB.COM/SOORAJTS2001

bengal: High-Performance Static Site Generator

GITHUB.COM/LBLIII

PyPDFForm: The Python Library for PDF Forms

GITHUB.COM/CHINAPANDAMAN

pyauto-desktop: A Desktop Automation Tool

GITHUB.COM/OMAR-F-RASHED

Events

Weekly Real Python Office Hours Q&A (Virtual)

January 14, 2026
REALPYTHON.COM

PyData Bristol Meetup

January 15, 2026
MEETUP.COM

PyLadies Dublin

January 15, 2026
PYLADIES.COM

Chattanooga Python User Group

January 16 to January 17, 2026
MEETUP.COM

DjangoCologne

January 20, 2026
MEETUP.COM

Inland Empire Python Users Group Monthly Meeting

January 21, 2026
MEETUP.COM


Happy Pythoning!
This was PyCoder’s Weekly Issue #717.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

January 13, 2026 07:30 PM UTC


Real Python

Intro to Object-Oriented Programming (OOP) in Python

Object-oriented programming (OOP) is one of the most significant and essential topics in programming. This course will give you a foundational conceptual understanding of object-oriented programming to help you elevate your Python skills.

You’ll learn how to define custom types using classes and how to instantiate those classes into Python objects that can be used throughout your program.

Finally, you’ll discover how classes can inherit from one another, with a brief introduction to inheritance, enabling you to write maintainable and less redundant Python code.



January 13, 2026 02:00 PM UTC


Python Software Foundation

Anthropic invests $1.5 million in the Python Software Foundation and open source security

We are thrilled to announce that Anthropic has entered into a two-year partnership with the Python Software Foundation (PSF) to contribute a landmark total of $1.5 million to support the foundation’s work, with an emphasis on Python ecosystem security. This investment will enable the PSF to make crucial security advances to CPython and the Python Package Index (PyPI) benefiting all users, and it will also sustain the foundation’s core work supporting the Python language, ecosystem, and global community.

Innovating open source security

Anthropic’s funds will enable the PSF to make progress on our security roadmap, including work designed to protect millions of PyPI users from attempted supply-chain attacks. Planned projects include creating new tools for automated proactive review of all packages uploaded to PyPI, improving on the current process of reactive-only review. We intend to create a new dataset of known malware that will allow us to design these novel tools, relying on capability analysis. One of the advantages of this project is that we expect the outputs we develop to be transferable to all open source package repositories. As a result, this work has the potential to ultimately improve security across multiple open source ecosystems, starting with the Python ecosystem.

This work will build on PSF Security Developer in Residence Seth Larson's security roadmap with contributions from PyPI Safety and Security Engineer Mike Fiedler, both roles generously funded by Alpha-Omega.

Sustaining the Python language, ecosystem, and community

Anthropic’s support will also go towards the PSF’s core work, including the Developer in Residence program driving contributions to CPython, community support through grants and other programs, running core infrastructure such as PyPI, and more. We couldn’t be more grateful for Anthropic’s remarkable support, and we hope you will join us in thanking them for their investment in the PSF and the Python community.

About Anthropic


Anthropic is the AI research and development company behind Claude — the frontier model used by millions of people worldwide.

About the PSF

The Python Software Foundation is a non-profit whose mission is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers. The PSF supports the Python community using corporate sponsorships, grants, and donations. Are you interested in sponsoring or donating to the PSF so we can continue supporting Python and its community? Check out our sponsorship program, donate directly here, or contact our team!


January 13, 2026 08:00 AM UTC


Talk Python to Me

#534: diskcache: Your secret Python perf weapon

Your cloud SSD is sitting there, bored, and it would like a job. Today we're putting it to work with DiskCache, a simple, practical cache built on SQLite that can speed things up without spinning up Redis or extra services. Once you start to see what it can do, a universe of possibilities opens up. We're joined by Vincent Warmerdam to dive into DiskCache.

Episode sponsors

Talk Python Courses: https://talkpython.fm/training
Python in Production: https://talkpython.fm/devopsbook

Links from the show

diskcache docs: https://grantjenks.com/docs/diskcache/
LLM Building Blocks for Python course: https://training.talkpython.fm/courses/llm-building-blocks-for-python
JSONDisk: https://grantjenks.com/docs/diskcache/api.html#jsondisk
Git Code Archaeology Charts: https://koaning.github.io/gitcharts/#django/versioned
Talk Python Cache Admin UI: https://blobs.talkpython.fm/talk-python-cache-admin.png
Litestream SQLite streaming: https://litestream.io
Plash hosting: https://pla.sh

Watch this episode on YouTube: https://www.youtube.com/watch?v=ze7N_RE9KU0
Episode #534 deep-dive: https://talkpython.fm/episodes/show/534/diskcache-your-secret-python-perf-weapon#takeaways-anchor
Episode transcripts: https://talkpython.fm/episodes/transcript/534/diskcache-your-secret-python-perf-weapon
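If you want a quick taste before listening, here's a minimal usage sketch of the diskcache library (the cache directory and function are illustrative):

from diskcache import Cache

# A directory-backed cache; diskcache stores entries in SQLite under the hood.
cache = Cache("/tmp/demo_cache")

@cache.memoize(expire=3600)  # persist results on disk for an hour
def slow_lookup(key: str) -> str:
    # Stand-in for expensive work (API calls, heavy computation, ...).
    return key.upper()

print(slow_lookup("python"))  # computed once, then served from disk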

January 13, 2026 05:32 AM UTC