
Planet Python

Last update: February 03, 2026 09:44 PM UTC

February 03, 2026


PyCoder’s Weekly

Issue #720: Subprocess, Memray, Callables, and More (Feb. 3, 2026)

#720 – FEBRUARY 3, 2026
View in Browser »

The PyCoder’s Weekly Logo


Ending 15 Years of subprocess Polling

Python’s standard library subprocess module relies on busy-loop polling to determine whether a process has completed. Modern operating systems provide callback mechanisms for this, and Python 3.15 will take advantage of them.
GIAMPAOLO RODOLA
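For context, the busy-loop pattern being replaced looks roughly like this (a simplified sketch; CPython's actual logic lives inside subprocess.Popen.wait()):

```python
import subprocess
import sys
import time

# Spawn a short-lived child process.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])

# Busy-loop polling: repeatedly ask the OS whether the child has exited,
# sleeping briefly between checks. This is the pattern Python 3.15
# replaces with OS-level completion callbacks.
while proc.poll() is None:
    time.sleep(0.05)

print(proc.returncode)  # 0 on clean exit
```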

Django: Profile Memory Usage With Memray

Memory usage can be hard to keep under control in Python projects. Django projects can be particularly susceptible to memory bloat, as they may import many large dependencies. Learn how to use memray to see what is going on.
ADAM JOHNSON

B2B Authentication for any Situation - Fully Managed or BYO


What your sales team needs to close deals: multi-tenancy, SAML, SSO, SCIM provisioning, passkeys…What you’d rather be doing: almost anything else. PropelAuth does it all for you, at every stage. →
PROPELAUTH sponsor

Create Callable Instances With Python’s .__call__()

Learn Python callables: what “callable” means, how to use dunder call, and how to build callable objects with step-by-step examples.
REAL PYTHON course
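As a taste of the topic: defining .__call__() makes instances of a class behave like functions. A minimal illustration:

```python
class Multiplier:
    """A callable object: instances can be invoked like functions."""

    def __init__(self, factor):
        self.factor = factor

    def __call__(self, value):
        # Invoked when you write multiplier(value).
        return value * self.factor

double = Multiplier(2)
print(double(21))        # 42
print(callable(double))  # True
```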

Take the Python Developers Survey 2026

PYTHON SOFTWARE FOUNDATION

Articles & Tutorials

The C-Shaped Hole in Package Management

System package managers and language package managers are solving different problems that happen to overlap in the middle. This causes complications when languages like Python depend on system libraries. This article is a deep dive into the different pieces involved and why things are the way they are.
ANDREW NESBITT

Use \z Not $ With Python Regular Expressions

The $ in a regular expression is used to match the end of a string, but in Python it matches both at the true end and just before a trailing \n. Python 3.14 added support for \z, which is widely supported by other languages, to get around this problem.
SETH LARSON
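A quick illustration of the difference. The \z escape requires Python 3.14+, so this sketch uses \Z, Python's long-standing spelling of the same absolute-end anchor:

```python
import re

text = "secret_token\n"  # note the trailing newline

# "$" also matches just before a trailing newline, so this "passes":
print(re.search(r"^secret_token$", text))   # <re.Match object ...>

# "\Z" (and "\z" on Python 3.14+) matches only at the absolute end:
print(re.search(r"^secret_token\Z", text))  # None
```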

Python errors? Fix ‘em fast for FREE with Honeybadger


If you support web apps in production, you need intelligent logging with error alerts and de-duping. Honeybadger filters out the noise and transforms Python logs into contextual issues so you can find and fix errors fast. Get your FREE account →
HONEYBADGER sponsor

Speeding Up Pillow’s Open and Save

Hugo used the Tachyon profiler to examine the open and save calls in the Pillow image processing module. He found ways to optimize the calls and has submitted a PR; this post tells you about it.
HUGO VAN KEMENADE

Some Notes on Starting to Use Django

Julia has decided to add Django to her coding skills and has written some notes on her first experiences. See also the associated HN discussion.
JULIA EVANS

How Long Does It Take to Learn Python?

This guide breaks down how long it takes to learn Python, with realistic timelines, weekly study plans, and strategies to speed up your progress.
REAL PYTHON

uv Cheatsheet

uv cheatsheet that lists the most common and useful uv commands across project management, working with scripts, installing tools, and more!
MATHSPP.COM • Shared by Rodrigo Girão Serrão

What’s New in pandas 3

pandas 3.0 has just been released. This article uses a real-world example to explain the most important differences between pandas 2 and 3.
MARC GARCIA

GeoPandas Basics: Maps, Projections, and Spatial Joins

Dive into GeoPandas with this tutorial covering data loading, mapping, CRS concepts, projections, and spatial joins for intuitive analysis.
REAL PYTHON

Quiz: GeoPandas Basics: Maps, Projections, and Spatial Joins

REAL PYTHON

Things I’ve Learned in My 10 Years as an Engineering Manager

Non-obvious advice that Jampa wishes he’d learned sooner. Associated HN Discussion
JAMPA UCHOA

Django Views Versus the Zen of Python

Django’s generic class-based views often clash with the Zen of Python. Here’s why the base View class feels more Pythonic.
KEVIN RENSKERS

Projects & Code

oban: Job Orchestration Framework for Python

GITHUB.COM/OBAN-BG • Shared by Parker Selbert

dj-celery-panel: Celery Task Inspector for Django Admin

GITHUB.COM/YASSI

cmd-chat: Peer-to-Peer Encrypted CLI Chat

GITHUB.COM/DIORWAVE

jetbase: Database Migration Tool for Python

GITHUB.COM/JETBASE-HQ

calgebra: Set Operations for Calendar Intervals

GITHUB.COM/ASHENFAD

Events

Weekly Real Python Office Hours Q&A (Virtual)

February 4, 2026
REALPYTHON.COM

Canberra Python Meetup

February 5, 2026
MEETUP.COM

Sydney Python User Group (SyPy)

February 5, 2026
SYPY.ORG

PyDelhi User Group Meetup

February 7, 2026
MEETUP.COM

PiterPy Meetup

February 10, 2026
PITERPY.COM

Leipzig Python User Group Meeting

February 10, 2026
MEETUP.COM


Happy Pythoning!
This was PyCoder’s Weekly Issue #720.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

February 03, 2026 07:30 PM UTC


Mike Driscoll

Python Typing Book Kickstarter

Python has had type hinting support since Python 3.5, over TEN years ago! However, Python’s type annotations have changed repeatedly over the years. In Python Typing: Type Checking for Python Programmers, you will learn all you need to know to add type hints to your Python applications effectively.

You will also learn how to use Python type checkers, configure them, and set them up in pre-commit or GitHub Actions. This knowledge will give you the power to check your code and your team’s code automatically before merging, hopefully catching defects before they make it into your products.

Python Typing Book Cover

Support the Book!

What You’ll Learn

You will learn all about Python’s support for type hinting (annotations). Specifically, you will learn about the following topics:

Rewards to Choose From

There are several different rewards you can get in this Kickstarter:

Kickstart the Book

The post Python Typing Book Kickstarter appeared first on Mouse Vs Python.

February 03, 2026 06:17 PM UTC


Django Weblog

Django security releases issued: 6.0.2, 5.2.11, and 4.2.28

In accordance with our security release policy, the Django team is issuing releases for Django 6.0.2, Django 5.2.11, and Django 4.2.28. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2025-13473: Username enumeration through timing difference in mod_wsgi authentication handler

The django.contrib.auth.handlers.modwsgi.check_password() function for authentication via mod_wsgi allowed remote attackers to enumerate users via a timing attack.

Thanks to Stackered for the report.

This issue has severity "low" according to the Django security policy.

CVE-2025-14550: Potential denial-of-service vulnerability via repeated headers when using ASGI

When receiving duplicates of a single header, ASGIRequest allowed a remote attacker to cause a potential denial of service via a specially crafted request with multiple duplicate headers. The vulnerability stemmed from repeated string concatenation while combining repeated headers, which produced super-linear computation, resulting in service degradation or outage.
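The underlying performance trap is general Python knowledge rather than anything Django-specific: building a string by repeated concatenation in a loop can do quadratic work, while str.join() is linear. A hedged sketch (not Django's actual code):

```python
def combine_quadratic(values):
    # Each concatenation can copy everything accumulated so far,
    # so n headers can cost O(n^2) character copies in total.
    combined = ""
    for value in values:
        combined = combined + "," + value if combined else value
    return combined

def combine_linear(values):
    # str.join computes the final size and allocates once: O(n) work.
    return ",".join(values)

headers = ["value-%d" % i for i in range(5)]
print(combine_quadratic(headers) == combine_linear(headers))  # True
```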

Thanks to Jiyong Yang for the report.

This issue has severity "moderate" according to the Django security policy.

CVE-2026-1207: Potential SQL injection via raster lookups on PostGIS

Raster lookups on GIS fields (only implemented on PostGIS) were subject to SQL injection if untrusted data was used as a band index.

As a reminder, all untrusted user input should be validated before use.

Thanks to Tarek Nakkouch for the report.

This issue has severity "high" according to the Django security policy.

CVE-2026-1285: Potential denial-of-service vulnerability in django.utils.text.Truncator HTML methods

django.utils.text.Truncator.chars() and Truncator.words() methods (with html=True) and truncatechars_html and truncatewords_html template filters were subject to a potential denial-of-service attack via certain inputs with a large number of unmatched HTML end tags, which could cause quadratic time complexity during HTML parsing.

Thanks to Seokchan Yoon for the report.

This issue has severity "moderate" according to the Django security policy.

CVE-2026-1287: Potential SQL injection in column aliases via control characters

FilteredRelation was subject to SQL injection in column aliases via control characters when a suitably crafted dictionary was passed, with dictionary expansion, as the **kwargs to the QuerySet methods annotate(), aggregate(), extra(), values(), values_list(), and alias().

Thanks to Solomon Kebede for the report.

This issue has severity "high" according to the Django security policy.

CVE-2026-1312: Potential SQL injection via QuerySet.order_by and FilteredRelation

QuerySet.order_by() was subject to SQL injection via column aliases containing periods when the same alias was also used in FilteredRelation, via a suitably crafted dictionary passed with dictionary expansion.

Thanks to Solomon Kebede for the report.

This issue has severity "high" according to the Django security policy.

Affected supported versions

  • Django main
  • Django 6.0
  • Django 5.2
  • Django 4.2

Resolution

Patches to resolve the issues have been applied to Django's main, 6.0, 5.2, and 4.2 branches. The patches may be obtained from the following changesets.

CVE-2025-13473: Username enumeration through timing difference in mod_wsgi authentication handler

CVE-2025-14550: Potential denial-of-service vulnerability via repeated headers when using ASGI

CVE-2026-1207: Potential SQL injection via raster lookups on PostGIS

CVE-2026-1285: Potential denial-of-service vulnerability in django.utils.text.Truncator HTML methods

CVE-2026-1287: Potential SQL injection in column aliases via control characters

CVE-2026-1312: Potential SQL injection via QuerySet.order_by and FilteredRelation

The following releases have been issued

The PGP key ID used for this release is Jacob Walls: 131403F4D16D8DC7

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.

February 03, 2026 02:13 PM UTC


Real Python

Getting Started With Google Gemini CLI

This video course will teach you how to use Gemini CLI to bring Google’s AI-powered coding assistance directly into your terminal. After you authenticate with your Google account, this tool will be ready to help you analyze code, identify bugs, and suggest fixes—all without leaving your familiar development environment.

Imagine debugging code without switching between your console and browser, or picture getting instant explanations for unfamiliar projects. Like other command-line AI assistants, Google’s Gemini CLI brings AI-powered coding assistance directly into your command line, allowing you to stay focused in your development workflow.

Whether you’re troubleshooting a stubborn bug, understanding legacy code, or generating documentation, this tool acts as an intelligent pair-programming partner that understands your codebase’s context.

You’re about to install Gemini CLI, authenticate with Google’s free tier, and put it to work on an actual Python project. You’ll discover how natural language queries can help you understand code faster and catch bugs that might slip past manual review.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

February 03, 2026 02:00 PM UTC


PyBites

Coding can be super lonely

I hate coding solo.

Not in the moment or when I’m in the zone, I mean in the long run.

I love getting into that deep focus where I’m locked in and hours pass by in a second!

But I hate snapping out of it and not having anyone to chat with about it. (I’m lucky that’s not the case anymore though – thanks Bob!)

So it’s no surprise that many of the devs I chat with on Zoom calls or in person share the same sentiment.

Not everyone has a Bob though. Many people don’t have anyone in their circle that they can talk to about code.

It can be a lonely experience.

And just as bad, it leads to stagnation. You can spend years coding in a silo and feel like you haven’t grown at all. That feeling of being a junior dev becomes unshakable.

When you work in isolation, you’re operating in a vacuum. Without external input, your vacuum becomes an echo chamber.

As funny as it sounds, as devs I think we all need other devs around us who will create friction. Without the friction of other developers looking at your work, you don’t grow.

Some of my most memorable learning experiences in my first dev job were with my colleague, sharing ideas on a whiteboard and talking through code. (Thanks El!)

If you haven’t had the experience of this kind of community and support, then you’re missing out. Here’s what I want you to do this week:

  1. Go seek out a Code Review: Find someone more senior than you and ask them to give you their two cents on your coding logic. Note I’m suggesting logic and not your syntax. Let’s target your thought process!
  2. Build for Someone Else: Go build a tool for a colleague or a friend. The second another person uses your code it breaks the cycle/vacuum because you’re now accountable for the bugs, suggestions and UX.
  3. Public Accountability: Join our community, tell us what you’re going to build and document your progress! If no one is watching, it’s too easy to quit when the engineering gets hard (believe me, I know!).

At the end of the day, you don’t become a Senior Developer and break through to the next level of your Python journey by typing in a dark room alone (as enjoyable as that may be sometimes 😅)

You become one by engaging with the community, sharing what you’re doing and learning from others.

If you’re stuck in a vacuum, join the community, reply to my welcome DM, and check out our community calendar.

Julian

This was originally sent to our email list. Join here.

February 03, 2026 11:02 AM UTC


Python Bytes

#468 A bolt of Django

Topics covered in this episode:

  • django-bolt: Faster than FastAPI, but with Django ORM, Django Admin, and Django packages
  • pyleak
  • More Django (three articles)
  • Datastar
  • Extras
  • Joke

Watch on YouTube

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training
  • The Complete pytest Course
  • Patreon Supporters

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
  • Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
  • Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Brian #1: django-bolt: Faster than FastAPI, but with Django ORM, Django Admin, and Django packages

  • Farhan Ali Raza
  • High-performance, fully typed API framework for Django
  • Inspired by DRF, FastAPI, Litestar, and Robyn
  • Django-Bolt docs
  • Interview with Farhan on the Django Chat podcast
  • And a walkthrough video

Michael #2: pyleak

  • Detect leaked asyncio tasks, threads, and event loop blocking with stack traces in Python. Inspired by goleak.
  • Has patterns for context managers and decorators
  • Checks for unawaited asyncio tasks, threads, and blocking of an asyncio loop
  • Includes a pytest plugin so you can do @pytest.mark.no_leaks

Brian #3: More Django (three articles)

  • Migrating From Celery to Django Tasks (Paul Taylor): a nice intro showing how easy it is to get started with Django Tasks
  • Some notes on starting to use Django (Julia Evans): a handful of reasons why Django is a great choice for a web framework: less magic than Rails, a built-in admin, a nice ORM, automatic migrations, nice docs, SQLite in production, and built-in email
  • The definitive guide to using Django with SQLite in production: "I'm gonna have to study this a bit. The conclusion states one of the benefits is 'reduced complexity', but it still seems like quite a bit to me."

Michael #4: Datastar

  • Sent to us by Forrest Lanier
  • Lots of work by Chris May
  • Out on Talk Python soon.
  • Official Datastar Python SDK
  • Datastar is a little like HTMX, but the single source of truth is your server, and events can be sent from the server automatically (using SSE), e.g. yield SSE.patch_elements(f"""{(#HTML#)}{datetime.now().isoformat()}""")
  • Why I switched from HTMX to Datastar article

Extras

Brian:

  • Django Chat: Inverting the Testing Pyramid - Brian Okken (quite a fun interview)
  • PEP 686 – Make UTF-8 mode default (now with status "Final" and slated for Python 3.15)

Michael:

  • Prayson Daniel's Paper tracker
  • Ice Cubes (open source Mastodon client for macOS)
  • Rumdl for PyCharm, et al.
  • cURL Gets Rid of Its Bug Bounty Program Over AI Slop Overrun
  • Python Developers Survey 2026

Joke: Pushed to prod

February 03, 2026 08:00 AM UTC


Daniel Roy Greenfeld

We moved to Manila!

Last year we relocated to Metro Manila, Philippines for the foreseeable future. Audrey's mother is from here, and we wanted our daughter Uma to have the opportunity to spend time with her extended family and experience another line of her heritage.

Where are you living?

In Makati, a city that contains one of the major business districts in Metro Manila. Specifically, we're in Salcedo Village, a neighborhood in the CBD made up of towering residential and business buildings with numerous shops, markets, and a few parks. This area allows for a walkable life, which is important to us coming from London.

What about the USA?

The USA is our homeland and we're US citizens. We still have family and friends there. We're hoping to visit the US at least once a year.

What about the UK?

We loved living in London, and have many good friends there. I really enjoyed working for Kraken Tech, but my time came to an end there so our visas were no longer valid. We hope to visit the UK (and the rest of Europe) as tourists, but without the family connection it's harder to justify than trips to the homeland.

What about your daughter?

Uma loves Manila and is in second grade at an international school here within walking distance of our residence. We had looked into getting her into a local public school with a notable science program, but the paperwork required too much lead time. We do like the small class sizes at her current school, and how they accommodate the different learning speeds of students. She will probably stay there for a while.

For extracurricular activities, she's enjoying Brazilian Jiu-Jitsu, climbing, yoga, and swimming.

If I'm in Manila can I meet up with you?

Sure! Some options:

February 03, 2026 06:41 AM UTC

February 02, 2026


Python⇒Speed

Speeding up NumPy with parallelism

If your NumPy code is too slow, what next?

One option is taking advantage of the multiple cores on your CPU: using a thread pool to do work in parallel. Another option is to tune your code so it’s less wasteful. Or, since these are two different sources of speed, you can do both.

In this article I’ll cover:

Read more...
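The thread-pool option above can be sketched as follows. This is a minimal illustration, not the article's code; it relies on the fact that NumPy releases the GIL inside most ufuncs, so threads can crunch separate chunks in parallel:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process(chunk):
    # NumPy releases the GIL inside ufuncs like sqrt, so several
    # threads can run this on separate chunks at the same time.
    return np.sqrt(chunk) * 2.0

data = np.arange(1_000_000, dtype=np.float64)
chunks = np.array_split(data, 4)  # one chunk per worker

with ThreadPoolExecutor(max_workers=4) as pool:
    # pool.map preserves chunk order, so concatenation reassembles
    # the result in the original order.
    result = np.concatenate(list(pool.map(process, chunks)))

print(np.allclose(result, np.sqrt(data) * 2.0))  # True
```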

February 02, 2026 07:42 PM UTC


Real Python

The Terminal: First Steps and Useful Commands for Python Developers

The terminal provides Python developers with direct control over their operating system through text commands. Instead of clicking through menus, you type commands to navigate folders, run scripts, install packages, and manage version control. This command-line approach is faster and more flexible than graphical interfaces for many development tasks.

By the end of this tutorial, you’ll understand that:

  • Terminal commands like cd, ls, and mkdir let you navigate and organize your file system efficiently
  • Virtual environments isolate project dependencies, keeping your Python installations clean and manageable
  • pip installs, updates, and removes Python packages directly from the command line
  • Git commands track changes to your code and create snapshots called commits
  • The command prompt displays your current directory and indicates when the terminal is ready for input

This tutorial walks through the fundamentals of terminal usage on Windows, Linux, and macOS. The examples cover file system navigation, creating files and folders, managing packages with pip, and tracking code changes with Git.

Get Your Cheat Sheet: Click here to download a free cheat sheet of useful commands to get you started working with the terminal.

Install and Open the Terminal

Back in the day, the term terminal referred to some clunky hardware that you used to enter data into a computer. Nowadays, people are usually talking about a terminal emulator when they say terminal, and they mean some kind of terminal software that you can find on most modern computers.

Note: There are two other terms that you might hear now and then in combination with the terminal:

  1. A shell is the program that you interact with when running commands in a terminal.
  2. A command-line interface (CLI) is a program designed to run in a shell inside the terminal.

In other words, the shell provides the commands that you use in a command-line interface, and the terminal is the application that you run to access the shell.

If you’re using a Linux or macOS machine, then the terminal is already built in. You can start using it right away.

On Windows, you also have access to command-line applications like the Command Prompt. However, for this tutorial and terminal work in general, you should use the Windows terminal application instead.

Read on to learn how to install and open the terminal on Windows and how to find the terminal on Linux and macOS.

Windows

The Windows terminal is a modern and feature-rich application that gives you access to the command line, multiple shells, and advanced customization options. If you have Windows 11 or above, chances are that the Windows terminal is already present on your machine. Otherwise, you can download the application from the Microsoft Store or from the official GitHub repository.

Before continuing with this tutorial, you need to get the terminal working on your Windows computer. You can follow the Your Python Coding Environment on Windows: Setup Guide to learn how to install the Windows terminal.

After you install the Windows terminal, you can find it in the Start menu under Terminal. When you start the application, you should see a window that looks like this:

Windows Terminal with Windows PowerShell tab

It can be handy to create a desktop shortcut for the terminal or pin the application to your task bar for easier access.

Linux

You can find the terminal application in the application menu of your Linux distribution. Alternatively, you can press Ctrl+Alt+T on your keyboard or use the application launcher and search for the word Terminal.

After opening the terminal, you should see a window similar to the screenshot below:

Screenshot of the Linux terminal

How you open the terminal may also depend on which Linux distribution you’re using. Each one has a different way of doing it. If you have trouble opening the terminal on Linux, then the Real Python community will help you out in the comments below.

macOS

A common way to open the terminal application on macOS is by opening the Spotlight Search and searching for Terminal. You can also find the terminal app in the application folder inside Finder.

When you open the terminal, you see a window that looks similar to the image below:

Read the full article at https://realpython.com/terminal-commands/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

February 02, 2026 02:00 PM UTC

February 01, 2026


Graham Dumpleton

Developer Advocacy in 2026

I got into developer advocacy in 2010 at New Relic, followed by a stint at Red Hat. When I moved to VMware, I expected things to continue much as before, but COVID disrupted those plans. When Broadcom acquired VMware, the writing was on the wall, and though it took a while, I eventually got made redundant. That was almost 18 months ago. In the time since, I've taken an extended break with overseas travel and thoughts of early retirement. It's been a while, therefore, since I've done any direct developer advocacy.

One thing became clear during that time. I had no interest in returning to a 9-to-5 programming job in an office, working on some dull internal system. Ideally, I'd have found a company genuinely committed to open source where I could contribute to open source projects. But those opportunities are thin on the ground, and being based in Australia made it worse as such companies are typically in the US or Europe and rarely hire outside their own region.

Recently I've been thinking about getting back into developer advocacy. The job market makes this a difficult proposition though. Companies based in the US and Europe that might otherwise be good places to work tend to ignore the APAC region, and even when they do pay attention, they rarely maintain a local presence. They just send people out when they need to.

Despite the difficulties, I would also need to understand what I was getting myself into. How much had developer advocacy changed since I was doing it? What challenges would I face working in that space?

So I did what any sensible person does in 2026. I asked an AI to help me research the current state of the field. I started with broad questions across different topics, but one question stood out as an interesting starting point: What are the major forces that have reshaped developer advocacy in recent years?

This post looks at what the AI said and how it matched my own impressions.

Catching Up: What's Changed?

The AI came back with three main themes.

Force 1: AI Has Changed Everything

What the AI told me:

The data suggests a fundamental shift in how developers work. Around 84% of developers now use AI tools on a daily basis, with more than half relying on them for core development tasks. Developers are reporting 30-60% time savings on things like boilerplate generation, debugging, documentation lookup, and testing.

This has significant implications for developer advocacy. The traditional path—developer has a problem, searches Google, lands on Stack Overflow or your documentation, reads a tutorial—has been disrupted. Now, developers increasingly turn to AI assistants first. They describe their problem and get an immediate, contextual answer, often with working code included.

What this means is that your content now has two audiences: humans and AI systems. Your documentation isn't just being read by developers—it's being ingested, processed, and used by AI to generate answers. If the AI misrepresents your product or gives incorrect guidance, you may never even know it's happening.

There's even an emerging standard called llms.txt that's gaining traction. First proposed in late 2024, it's a file that tells AI systems what content to prioritise and how to interpret it. By 2026, documentation without this kind of AI-optimisation struggles to surface correctly when developers ask AI assistants for help. It's like SEO, but for language models instead of search engines.
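To make this concrete, the proposed llms.txt format is a plain markdown file served at a site's root: an H1 title, a blockquote summary, then sections of links to the documents an AI should prioritise. A minimal sketch follows; the project name, URLs, and descriptions are purely illustrative:

```markdown
# MyProject

> MyProject is a Python library for building widgets. This file points
> LLM-based tools at the documents that best describe the project.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): installation and first steps
- [API reference](https://example.com/docs/api.md): full public API

## Optional

- [Changelog](https://example.com/changelog.md): release history
```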

My take:

Yep, no surprise here. I've seen this firsthand with my own mod_wsgi and wrapt open source projects, and it goes beyond what the AI described. In the past, people who got stuck would ask questions via Stack Overflow, project-specific forums, or mailing lists. These days, direct engagement is rare. Occasionally someone might raise a problem on GitHub, but those old communication channels have largely disappeared. People using your code just ask the AI instead.

As for the suggestion that I need to put in extra work to help AI systems, such as providing an llms.txt file, I'm not convinced. From what I can see, AI is already doing a good job of working things out on its own. Besides, project documentation, while supposedly authoritative, isn't the only source of information out there. I don't think an AI will give your llms.txt file any more credibility than the third-party content scattered across blogs, conference talks, and elsewhere.

Either way, AI has drastically changed things and now sits between a user and documentation for a project. It can be demoralising to write documentation when you don't know if a real person will ever actually read it anymore.

Force 2: Content Saturation Has Reached Crisis Levels

What the AI told me:

The volume of content being produced has exploded. Current estimates put global data production at over 403 million terabytes daily, and around 80% of developers report experiencing information overload. The symptoms show up in the data: lower click-through rates, shorter time-on-page, faster scrolling behaviour, declining email open rates.

The root cause? AI has made content creation trivially easy. Anyone can generate a tutorial, write a blog post, or produce documentation in minutes. The result is a flood of content, much of it generic, shallow, and nearly indistinguishable from everything else.

Developer communities have adapted. Developers have become sophisticated at filtering low-quality content. Generic "getting started" tutorials are largely ignored because AI can generate them on demand. Surface-level blog posts get scrolled past. The attention bar has risen significantly.

This creates a dual reality. The challenge: your content competes against unprecedented noise. The opportunity: genuinely valuable content stands out more than ever, precisely because it's rare.

My take:

I'd have to agree with the AI here. I'm definitely feeling overwhelmed by the sheer volume of content when researching topics these days. That said, I'd qualify that observation. It's particularly acute for topics with a lot of buzz around them, AI being the obvious example. The same applies when searching for information on large software projects with broad user bases, like popular programming languages or major web frameworks. But for niche areas, I don't see this as much of an issue. People aren't going to generate content for obscure projects just for the sake of it, especially when there's no hype driving attention.

When talking about developer advocacy as a job, the focus is usually a company's own product. Most companies, especially early-stage ones, don't have much mind share yet, so they fall into that niche category.

That said, content saturation is still a real problem. I'm tired of the endless YouTube videos on the same topics and how self-proclaimed advocates jump on trends to outdo each other. Too often, the focus is just on shipping content, which means discussions end up shallow and repetitive with little substance.

Force 3: Platform Fragmentation Has Disrupted Communities

What the AI told me:

The platform landscape has shifted significantly. When you were last active, Slack dominated developer communities, Discord was primarily for gamers, and Twitter served as the town square for tech discourse.

None of that holds anymore.

Discord has overtaken Slack for developer communities. The drivers were primarily economic—Slack's pricing pushed communities toward free alternatives—and feature-related, with Discord offering better moderation tools, role management, and community features. Many developers who were daily Slack users between 2015-2020 have essentially stopped using Slack and are now primarily active in Discord channels.

The Twitter situation is more fragmented. Some developers moved to Mastodon, others to Bluesky, and many simply reduced their social media engagement altogether. LinkedIn has grown as a platform for technical content. There's no longer a single "town square" where developers reliably gather.

The practical implication: you can't rely on any single platform for community strategy. Presence across multiple spaces, with different approaches for each, is now necessary.

My take:

My age is probably showing here. The AI talks about people moving from Slack to Discord and the demise of Twitter. I still miss mailing lists. Back then, I found the asynchronous nature of mailing lists to be a much better forum for discussions with users. You could take your time understanding questions and drafting thoughtful responses. These days, with real-time discussion platforms, there's pressure to provide immediate answers, which often means less effort goes into truly understanding a user's problem.

To me, migrations between platforms for supporting users are inevitable, especially as technology changes. That doesn't mean the new platforms will be better, though.

Of the disruptions, I felt the demise of Twitter most acutely. It provided more community interactions for me than other discussion forums. When everyone fled Twitter, I lost those connections, and COVID and the shutdown of conferences during that time compounded this. Overall, I don't feel as connected to developer communities as I once did, especially the Python community.

Initial Reflections

Having gone through these three forces, I'm left with mixed feelings. Nothing the AI said was really a surprise though.

The main challenge in getting back into developer advocacy is adapting to how AI has changed everything.

I don't see it as insurmountable though, especially since companies expanding their developer advocacy programs are typically niche players without a huge volume of content about their product already out there. The key is ensuring the content they do have on their own site addresses what users need, and expanding from there as necessary.

Relying solely on documentation isn't the answer either. When I've done developer advocacy in the past, I found that online interactive learning platforms could supplement documentation well. That's even more true now, as users aren't willing to spend much time reading through documentation. You need something to hook them, a way to quickly show how your product might help them. Interactive platforms where they can experiment with a product without installing it locally can make a real difference here.

What's Next

Right now I'm not sure what that next step is. I'll almost certainly need to find some sort of job, at least for the next few years before I can think about retiring completely. I still work on my own open source projects, but they don't pay the bills.

One of those projects is actually an interactive learning platform, exactly the sort of thing I've been talking about above. I've invested significant time on it, but it's something I've never really discussed here on my blog. As I think through what comes next, it seems like time to change that.

February 01, 2026 10:38 AM UTC


Tryton News

Tryton News February 2026

During the last month we focused on fixing bugs, improving behaviour and addressing performance issues - building on the changes from our release last month. But we also added many new features which we would like to introduce to you in this newsletter.

For an in depth overview of the Tryton issues please take a look at our issue tracker or see the issues and merge requests filtered by label.

Changes for the User

Sales, Purchases and Projects

We now add the optional gift card field to the list of products. This makes it easier to search for gift card products.

Now we clear the former quotation_date when copying sale records, as we already do with the sale_date.

We now display the origin field of requests in the purchase request list. When the purchase request does not come from a stock supply, it is useful for the user who takes action to know the origin of the request.

Accounting, Invoicing and Payments

Now we support allowance and charge in UBL invoices.

We now fill the buyer’s item identification (BuyersItemIdentification) in the UBL invoice, when sale product customer is activated.
On the invoice line we have properties like product_name to get related supplier and customer codes.

Now we add a cron scheduler to reconcile account move lines. On larger setups the number of accounts and parties to reconcile can be very large, and running the reconciliation wizard would take too long, even with the automatic option.
In these cases it is better to run the reconciliation process as a scheduled task in the background.

We now add support for payment references on incoming invoices. As the invoice manages payment references, we now fill them using information from the incoming document.

Now Tryton warns the user before creating an overpayment. Sometimes users book a payment directly as a move line without creating a payment record. If the line is not yet reconciled (it can be a partial payment), the line to pay still shows the full amount to pay. This can lead to overpaying a party without the user noticing.
So we now ensure that the amount being paid does not exceed the payable (or receivable) amount of the party.
There is no guarantee against overpayment. The proper way to avoid it is to always use the payments functionality. But the warning will catch most mistakes.

Now we add support for Peppyrus webhooks in Tryton’s document incoming functionality.

We now set Belgian account 488 as deposit.

Now we add cy_vat as tax identifier type.

Stock, Production and Shipments

We now store the original planned date of requested internal shipments and productions.
For shipments created by sales we already store the original planned date to compute the delay. Now we do the same for the supplied shipments and productions.

Now we use a fields.Many2One to display either the product or the variant in the stock reporting instead of the former reference field. With this change the user is able to search for product- or variant-specific attributes. But the reference field is still useful to build the domain, so we keep it invisible.

We now add routings on BOM form to ease the setup.

Now we use the default warehouse when creating new product locations.

User Interface

Now we allow reordering tabs in Sao, the Tryton web client.

Now we use the default digit value to calculate the width of the float widget in Sao.

New Releases

We released bug fixes for the currently maintained long term support series
7.0 and 6.0, and for the penultimate series 7.8 and 7.6.

Security

Please update your systems to take care of a security related bug we found last month.
Mahdi Afshar and Abdulfatah Abdillahi have found that trytond sends the trace-back to the clients for unexpected errors. This trace-back may leak information about the server setup. Impact CVSS v3.0 Base Score: 4.3 Attack Vector: Network Attack Complexity: Low Privileges Required: Low User Interaction: None Scope: Unchanged Confidentiality: Low Integrity: None Availability: None Workaround A possible workaround is to configure an error handler which would remove the trace-back from the respo…
Abdulfatah Abdillahi has found that sao does not escape the completion values. The content of completion is generally the record name which may be edited in many ways depending on the model. The content may include some JavaScript which is executed in the same context as sao which gives access to sensitive data such as the session. Impact CVSS v3.0 Base Score: 7.3 Attack Vector: Network Attack Complexity: Low Privileges Required: Low User Interaction: Required Scope: Unchanged Confidentiality…
Mahdi Afshar has found that trytond does not enforce access rights for the route of the HTML editor (since version 6.0). Impact CVSS v3.0 Base Score: 7.1 Attack Vector: Network Attack Complexity: Low Privileges Required: Low User Interaction: None Scope: Unchanged Confidentiality: High Integrity: Low Availability: None Workaround A possible workaround is to block access to the html editor. Resolution All affected users should upgrade trytond to the latest version. Affected versions per ser…
Cédric Krier has found that trytond does not enforce access rights for data export (since version 6.0). Impact CVSS v3.0 Base Score: 6.5 Attack Vector: Network Attack Complexity: Low Privileges Required: Low User Interaction: None Scope: Unchanged Confidentiality: High Integrity: None Availability: None Workaround There is no workaround. Resolution All affected users should upgrade trytond to the latest version. Affected versions per series: trytond: 7.6: <= 7.6.10 7.4: <= 7.4.20 7.0: <=…

Changes for the System Administrator

Now we allow filtering the users to be notified by cron tasks. When notifying the subscribed users of a cron task, messages may make sense only for some users. For example, if the message is about a specific company, we want to notify only the users who have access to that company.

We now dump the action value of a cron notification as JSON if it is not already a string.

Now we log an exception when the Binary field retrieval of a file ID from the file-store fails.

We now support 0 as a value for max_tasks_per_child.
The ProcessPoolExecutor requires a positive number or None. But when using an environment variable to configure a startup script, it is complicated to express no value (which means skipping the argument). Now it is easier, because 0 is treated as None.
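The kind of normalization described can be sketched in a few lines. This is an illustration, not Tryton's actual code, and the environment variable name here is hypothetical:

```python
import os

# An env var cannot express "no value", so a startup script uses 0 to
# mean "unset"; we map it to None, which ProcessPoolExecutor expects
# when there should be no per-worker task limit.
raw = os.environ.get("MAX_TASKS_PER_CHILD", "0")
max_tasks_per_child = int(raw) or None  # 0 becomes None

print(max_tasks_per_child)
```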

Changes for Implementers and Developers

We now log an exception when opening the XML file of a view (the view arch) fails.

Now we format dates used as record names with the contextual language.

We now add the general test PartyCheckReplaceMixin to check replaced fields of the replace party wizard.

Authors: @dave @pokoli @udono

1 post - 1 participant

Read full topic

February 01, 2026 07:00 AM UTC

January 31, 2026


EuroPython

Humans of EuroPython: Naa Ashiorkor Nortey

Behind every inspiring talk, networking session, and workshop at EuroPython lies countless hours of dedication from our amazing volunteers. From organizing logistics and securing speakers to welcoming attendees, these passionate community members make our conference possible year after year. Without their selfless commitment and hard work, EuroPython simply wouldn't exist.

Here’s our recent conversation with Naa Ashiorkor Nortey, who led the EuroPython 2025 Speaker Mentorship Team, contributed to the Programme Team and mentored at the Humble Data workshop.

We appreciate your work on the conference, Naa!

Naa Ashiorkor Nortey, Speaker Mentorship Lead at EuroPython 2025

EP: Had you attended EuroPython before volunteering, or was volunteering your first experience with it?

My first experience volunteering at EuroPython was in 2023. I volunteered at the registration desk and as a session chair, and I’m still here volunteering.

EP: What's one task you handled that attendees might not realize happens behind the scenes at EuroPython?

I can’t think of a specific task, but I would say that some attendees might not realise the number of hours volunteers put in for EuroPython. Usually, a form might be filled out with the number of hours a volunteer can dedicate in a week, but in reality the number of hours invested might be way more than that. There are volunteers in different time zones with different personal lives, so imagine making all that work.

EP: Was there a moment when you felt your contribution really made a difference?

Generally, showing up at the venue after months of planning, it just hit me how much difference my contribution makes. Specifically at EuroPython 2025, where I had the opportunity to lead the Speaker Mentorship Team. I interviewed one of the mentees during the conference. She mentioned that it was her first time speaking and highlighted how the speaker mentorship programme and her mentor greatly impacted her. At that moment, I felt my contribution really made a difference.

EP: What surprised you most about the volunteer experience?

The dedication and commitment of some of the volunteers were so inspiring. 

EP: If you could describe the volunteer experience in three words, what would they be?

Fun learning experience. 

EP: Do you have any tips for first-time EuroPython volunteers?

Don’t be afraid to volunteer, even if it involves leading one of the teams or contributing to a team you have no experience with. You can learn the skills needed in the team while volunteering. Everyone is supportive and ready to help. Communicate as much as you can and enjoy the experience.

EP: Thank you for the interview, Naa!

January 31, 2026 01:23 PM UTC


Armin Ronacher

Pi: The Minimal Agent Within OpenClaw

If you haven’t been living under a rock, you will have noticed this week that a project of my friend Peter went viral on the internet. It has gone by many names. The most recent one is OpenClaw, but in the news you might have encountered it as ClawdBot or MoltBot, depending on when you read about it. It is an agent connected to a communication channel of your choice that just runs code.

What you might be less familiar with is that what’s under the hood of OpenClaw is a little coding agent called Pi. And Pi happens to be, at this point, the coding agent that I use almost exclusively. Over the last few weeks I became more and more of a shill for the little agent. After I gave a talk on this recently, I realized that I did not actually write about Pi on this blog yet, so I feel like I might want to give some context on why I’m obsessed with it, and how it relates to OpenClaw.

Pi is written by Mario Zechner and unlike Peter, who aims for “sci-fi with a touch of madness,” 1 Mario is very grounded. Despite the differences in approach, both OpenClaw and Pi follow the same idea: LLMs are really good at writing and running code, so embrace this. In some ways I think that’s not an accident, because Peter got both me and Mario hooked on this idea, and on agents, last year.

What is Pi?

So Pi is a coding agent. And there are many coding agents. Really, you can pick effectively any one of them off the shelf at this point and experience what agentic programming is like. In reviews on this blog I’ve spoken positively about AMP, and one of the reasons it resonated so much with me is that it felt like a product built by people who had become addicted to agentic programming, but who had also tried a few different approaches to see which ones work, rather than just building a fancy UI.

Pi is interesting to me for two main reasons:

And a little bonus: Pi itself is written like excellent software. It doesn’t flicker, it doesn’t consume a lot of memory, it doesn’t randomly break, it is very reliable and it is written by someone who takes great care of what goes into the software.

Pi is also a collection of little components that you can build your own agent on top of. That’s how OpenClaw is built, and that’s also how I built my own little Telegram bot and how Mario built his mom. If you want to build your own agent connected to something, Pi, when pointed at itself and mom, will conjure one up for you.

What’s Not In Pi

And to understand what’s in Pi, it’s even more important to understand what’s not in Pi, why it’s not there, and more importantly: why it won’t be. The most obvious omission is support for MCP. While you could build an extension for it, you can also do what OpenClaw does to support MCP, which is to use mcporter. mcporter exposes MCP calls via a CLI interface or TypeScript bindings, and maybe your agent can do something with it. Or not, I don’t know :)

And this is not a lazy omission. It follows from the philosophy of how Pi works. Pi’s entire idea is that if you want the agent to do something it doesn’t do yet, you don’t go and download an extension or a skill or something like that. You ask the agent to extend itself. It celebrates the idea of writing and running code.

That’s not to say that you cannot download extensions. It is very much supported. But rather than downloading someone else’s extension, you can also point your agent at an existing one and say: build something like that thing over there, but with these changes that I like.

Agents Built for Agents Building Agents

When you look at what Pi, and by extension OpenClaw, are doing, you see an example of software that is malleable like clay. That malleability imposes requirements on the underlying architecture, constraints that really need to go into the core design of the system.

So for instance, Pi’s underlying AI SDK is written so that a session can contain messages from many different model providers. It recognizes that the portability of sessions between model providers is somewhat limited, so it doesn’t lean too much into any model-provider-specific feature set that cannot be transferred to another.

The second is that, in addition to the model messages, it maintains custom messages in the session files. These can be used by extensions to store state, or by the system itself to maintain information that is either not sent to the AI at all or only partially.

Because this system exists and extension state can also be persisted to disk, it has built-in hot reloading so that the agent can write code, reload, test it and go in a loop until your extension actually is functional. It also ships with documentation and examples that the agent itself can use to extend itself. Even better: sessions in Pi are trees. You can branch and navigate within a session which opens up all kinds of interesting opportunities such as enabling workflows for making a side-quest to fix a broken agent tool without wasting context in the main session. After the tool is fixed, I can rewind the session back to earlier and Pi summarizes what has happened on the other branch.

This all matters because of how, for instance, MCP works: on most model providers, tools for MCP, like any tool for the LLM, need to be loaded into the system context (or its tool section) at session start. That makes it very hard, if not impossible, to fully reload what tools can do without trashing the complete cache or confusing the AI about why prior invocations behaved differently.

Tools Outside The Context

An extension in Pi can register a tool to be available to the LLM to call and every once in a while I find this useful. For instance, despite my criticism of how Beads is implemented, I do think that giving an agent access to a to-do list is a very useful thing. And I do use an agent-specific issue tracker that works locally that I had my agent build itself. And because I wanted the agent to also manage to-dos, in this particular case I decided to give it a tool rather than a CLI. It felt appropriate for the scope of the problem and it is currently the only additional tool that I’m loading into my context.

But for the most part all of what I’m adding to my agent are either skills or TUI extensions to make working with the agent more enjoyable for me. Beyond slash commands, Pi extensions can render custom TUI components directly in the terminal: spinners, progress bars, interactive file pickers, data tables, preview panes. The TUI is flexible enough that Mario proved you can run Doom in it. Not practical, but if you can run Doom, you can certainly build a useful dashboard or debugging interface.

I want to highlight some of my extensions to give you an idea of what’s possible. While you can use them unmodified, the whole idea really is that you point your agent to one and remix it to your heart’s content.

/answer

I don’t use plan mode. I encourage the agent to ask questions and there’s a productive back and forth. But I don’t like structured question dialogs that happen if you give the agent a question tool. I prefer the agent’s natural prose with explanations and diagrams interspersed.

The problem: answering questions inline gets messy. So /answer reads the agent’s last response, extracts all the questions, and reformats them into a nice input box.

The /answer extension showing a question dialog

/todos

Even though I criticize Beads for its implementation, giving an agent a to-do list is genuinely useful. The /todos command brings up all items stored in .pi/todos as markdown files. Both the agent and I can manipulate them, and sessions can claim tasks to mark them as in progress.

/review

As more code is written by agents, it makes little sense to throw unfinished work at humans before an agent has reviewed it first. Because Pi sessions are trees, I can branch into a fresh review context, get findings, then bring fixes back to the main session.

The /review extension showing review preset options

The UI is modeled after Codex and makes it easy to review commits, diffs, uncommitted changes, or remote PRs. The prompt pays attention to things I care about so I get the call-outs I want (e.g. I ask it to call out newly added dependencies).

/control

An extension I experiment with but don’t actively use. It lets one Pi agent send prompts to another. It is a simple multi-agent system without complex orchestration which is useful for experimentation.

/files

Lists all files changed or referenced in the session. You can reveal them in Finder, diff in VS Code, quick-look them, or reference them in your prompt. shift+ctrl+r quick-looks the most recently mentioned file which is handy when the agent produces a PDF.

Others have built extensions too: Nico’s subagent extension and interactive-shell which lets Pi autonomously run interactive CLIs in an observable TUI overlay.

Software Building Software

These are all just ideas of what you can do with your agent. The point of it mostly is that none of this was written by me, it was created by the agent to my specifications. I told Pi to make an extension and it did. There is no MCP, there are no community skills, nothing. Don’t get me wrong, I use tons of skills. But they are hand-crafted by my clanker and not downloaded from anywhere. For instance I fully replaced all my CLIs or MCPs for browser automation with a skill that just uses CDP. Not because the alternatives don’t work, or are bad, but because this is just easy and natural. The agent maintains its own functionality.

My agent has quite a few skills and crucially I throw skills away if I don’t need them. I for instance gave it a skill to read Pi sessions that other engineers shared, which helps with code review. Or I have a skill to help the agent craft the commit messages and commit behavior I want, and how to update changelogs. These were originally slash commands, but I’m currently migrating them to skills to see if this works equally well. I also have a skill that hopefully helps Pi use uv rather than pip, but I also added a custom extension to intercept calls to pip and python to redirect them to uv instead.

Part of the fascination that working with a minimal agent like Pi gave me is that it makes you live that idea of using software that builds more software. That taken to the extreme is when you remove the UI and output and connect it to your chat. That’s what OpenClaw does and given its tremendous growth, I really feel more and more that this is going to become our future in one way or another.

January 31, 2026 12:00 AM UTC

January 30, 2026


Kevin Renskers

Django's test runner is underrated

Every podcast, blog post, Reddit thread, and every conference talk seems to agree: “just use pytest”. Real Python says most developers prefer it. Brian Okken’s popular book calls it “undeniably the best choice”. It’s treated like a rite of passage for Python developers: at some point you’re supposed to graduate from the standard library to the “real” testing framework.

I never made that switch for my Django projects. And after years of building and maintaining Django applications, I still don’t feel like I’m missing out.

What I actually want from tests

Before we get into frameworks, let me be clear about what I need from a test suite:

  1. Readable failures. When something breaks, I want to understand why in seconds, not minutes.

  2. Predictable setup. I want to know exactly what state my tests are running against.

  3. Minimal magic. The less indirection between my test code and what’s actually happening, the better.

  4. Easy onboarding. New team members should be able to write tests on day one without learning a new paradigm.

Django’s built-in test framework delivers all of this. And honestly? That’s enough for most projects.

Django tests are just Python’s unittest

Here’s something that surprises a lot of developers: Django’s test framework isn’t some exotic Django-specific system. Under the hood, it’s Python’s standard unittest module with a thin integration layer on top.

TestCase extends unittest.TestCase. The assertEqual, assertRaises, and other assertion methods? Straight from the standard library. Test discovery, setup and teardown, skip decorators? All standard unittest behavior.
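As a quick stdlib-only illustration, the very same assertion methods work in a plain unittest.TestCase, no Django required; Django's runner drives this same machinery:

```python
import unittest

class ArithmeticTests(unittest.TestCase):
    def test_equal(self):
        # assertEqual comes straight from unittest.TestCase
        self.assertEqual(2 + 2, 4)

    def test_raises(self):
        # assertRaises works as a context manager
        with self.assertRaises(ZeroDivisionError):
            1 / 0

# Run the suite programmatically with the standard library alone.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ArithmeticTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```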

What Django adds is integration: Database setup and teardown, the HTTP client, mail outbox, settings overrides.

This means when you choose Django’s test framework, you’re choosing Python’s defaults plus Django glue. When you choose pytest with pytest-django, you’re replacing the assertion style, the runner, and the mental model, then re-adding Django integration on top.

Neither approach is wrong. But it’s objectively more layers.

The self.assert* complaint

A common argument I hear against unittest-style tests is: “I can’t remember all those assertion methods”. But let’s be honest. We’re not writing tests in Notepad in 2026. Every editor has autocomplete. Type self.assert and pick from the list.

And in practice, how many assertion methods do you actually use? In my tests, it’s mostly assertEqual and assertRaises. Maybe assertTrue, assertFalse, and assertIn once in a while. That’s not a cognitive burden.

Here’s the same test in both styles:

# Django / unittest
self.assertEqual(total, 42)
with self.assertRaises(ValidationError):
    obj.full_clean()

# pytest
assert total == 42
with pytest.raises(ValidationError):
    obj.full_clean()

Yes, pytest’s assert is shorter. It’s a bit easier on the eyes. And I’ll be honest: pytest’s failure messages are better too. When an assertion fails, pytest shows you exactly what values differed with nice diffs. That’s genuinely useful.

But here’s what makes that work: pytest rewrites your code. It hooks into Python’s AST and transforms your test files before they run so it can produce those detailed failure messages from plain assert statements. That’s not necessarily bad - it’s been battle-tested for over a decade. But it is a layer of transformation between what you write and what executes, and I prefer to avoid magic when I can.
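To see why the rewriting is needed in the first place: outside pytest, a failing plain assert carries no information about the values involved.

```python
# Without pytest's assertion rewriting, a failing assert raises a bare
# AssertionError: the compared values are not captured anywhere.
total = 41
try:
    assert total == 42
except AssertionError as exc:
    message = str(exc)  # empty string: no hint about what differed
```

pytest's AST hook is what turns that empty message into a diff of `total` against 42.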

For me, unittest’s failure messages are good enough. When assertEqual fails, it tells me what it expected and what it got. That’s usually all I need. Better failure messages are nice, but they’re not worth adding dependencies and an abstraction layer for.

The missing piece: parametrized tests

If there’s one pytest feature people genuinely miss when using Django’s test framework, it’s parametrization. Writing the same test multiple times with different inputs feels wasteful.

But you really don’t need to switch to pytest just for that. The parameterized package solves this cleanly:

from django.test import SimpleTestCase
from parameterized import parameterized

class SlugifyTests(SimpleTestCase):
    @parameterized.expand([
        ("Hello world", "hello-world"),
        ("Django's test runner", "djangos-test-runner"),
        ("  trim  ", "trim"),
    ])
    def test_slugify(self, input_text, expected):
        self.assertEqual(slugify(input_text), expected)

Compare that to pytest:

import pytest

@pytest.mark.parametrize("input_text,expected", [
    ("Hello world", "hello-world"),
    ("Django's test runner", "djangos-test-runner"),
    ("  trim  ", "trim"),
])
def test_slugify(input_text, expected):
    assert slugify(input_text) == expected

Both are readable. Both work well. The difference is that parameterized is a tiny, focused library that does one thing. It doesn’t replace your test runner, introduce a new fixture system, or bring an ecosystem of plugins. It’s a decorator, not a paradigm shift.

Once I added parameterized, I realized pytest no longer solved a problem I actually had.

Side by side: common test patterns

Let’s look at how typical Django tests compare to pytest’s approach.

Database tests

# Django
from django.test import TestCase
from myapp.models import Article

class ArticleTests(TestCase):
    def test_article_str(self):
        article = Article.objects.create(title="Hello")
        self.assertEqual(str(article), "Hello")
# pytest + pytest-django
import pytest
from myapp.models import Article

@pytest.mark.django_db
def test_article_str():
    article = Article.objects.create(title="Hello")
    assert str(article) == "Hello"

With Django, database access simply works. TestCase wraps every test in a transaction and rolls it back afterward, giving you a clean slate without extra decorators. pytest-django takes the opposite approach: database access is opt-in. Different philosophies, but I find pytest-django's annoying: most of my tests touch the database anyway, so I'd end up with @pytest.mark.django_db on almost every test.

View tests

# Django
from django.test import TestCase
from django.urls import reverse

class ViewTests(TestCase):
    def test_home_page(self):
        response = self.client.get(reverse("home"))
        self.assertEqual(response.status_code, 200)
# pytest + pytest-django
from django.urls import reverse

def test_home_page(client):
    response = client.get(reverse("home"))
    assert response.status_code == 200

In Django, self.client is right there on the test class. If you want to know where it comes from, follow the inheritance tree to TestCase. In pytest, client appears because you named your parameter client. That’s how fixtures work: injection happens by naming convention. If you didn’t know that, the code would be puzzling. And if you want to find where a fixture is defined, you might be hunting through conftest.py files across multiple directory levels.

What about fixtures?

Pytest’s fixture system is the other big feature people bring up. Fixtures compose, they handle setup and teardown automatically, and they can be scoped to function, class, module, or session.

But the mechanism is implicit. You’ve already seen the implicit injection in the view test example: name a parameter client and it appears, add db to your function signature and you get database access. Powerful, but also magic you need to learn.

For most Django tests, you need some objects in the database before your test runs. Django gives you two ways to do this:

class ArticleTests(TestCase):
    @classmethod
    def setUpTestData(cls):
        cls.author = User.objects.create(username="kevin")
    
    def test_article_creation(self):
        article = Article.objects.create(title="Hello", author=self.author)
        self.assertEqual(article.author.username, "kevin")

If you need more sophisticated object creation, factory-boy works great with either framework.

The fixture system solves a real problem - complex cross-cutting setup that needs to be shared and composed. My projects just haven’t needed that level of sophistication. And I’d rather not add the indirection until I do.

The hidden cost of flexibility

Pytest’s flexibility is a feature. It’s also a liability.

In small projects, pytest feels lightweight. But as projects grow, that flexibility can accumulate into complexity. Your conftest.py starts small, then grows into its own mini-framework. You add pytest-xdist for parallel tests (Django has --parallel built-in). You write custom fixtures for DRF’s APIClient (Django’s APITestCase just works). You add a plugin for coverage, another for benchmarking. Each one makes sense in isolation.

Then a test fails in CI but not locally, and you’re debugging the interaction between three plugins and a fixture that depends on two other fixtures.

Django’s test framework doesn’t have this problem because it doesn’t have this flexibility. There’s one way to set up test data. There’s one test client. There’s one way to run tests in parallel. Boring, but predictable.

When I’m debugging a test failure, I want to debug my code, not my test infrastructure.

When I would recommend pytest

I’m not anti-pytest. If your team already has deep pytest expertise and established patterns, switching to Django’s runner would be a net negative. Switching costs are real. If I join a project that uses pytest? I use pytest. This is a preference for new projects, not a religion.

It’s also worth noting that pytest can run unittest-style tests without modification. You don’t have to rewrite everything if you want to try it. That’s a genuinely nice feature.

But if you’re starting fresh, or you’re the one making the decision? Make it a conscious choice. “Everyone uses pytest” can be a valid consideration, but it shouldn’t be the whole argument.

My rule of thumb

Start with Django’s test runner. It’s boring, it’s stable, and it works.

Add parameterized when you need parametrized tests.

Switch to pytest only when you can name the specific problem Django’s framework can’t solve. Not because a podcast told you to, but because you’ve hit an actual wall.

I’ve been building Django applications for a long time. I’ve tried both approaches. And I keep choosing boring.

Boring is a feature in test infrastructure.

January 30, 2026 08:21 PM UTC


The Python Coding Stack

Planning Meals, Weekly Shop, Alternative Constructors Using Class Methods

I’m sure we’re not the only family with this problem: deciding what meals to cook throughout the week. There seems to be just one dish that everyone loves, but we can hardly eat the same dish every day.

So we came up with a system, and I’m writing a Python program to implement it. We keep a list of meals we try out. Each family member assigns a score to each meal. Every Saturday, before we go to the supermarket for the weekly shop, we plan which meals we’ll cook on each day of the week. It’s not based solely on the preference ratings, of course, since my wife and I have the final say to ensure a good balance. Finally, the program provides us with the shopping list with the ingredients we need for all the week’s meals.

I know, we’ve reinvented the wheel. There are countless apps that do this. But the fun is in writing your own code to do exactly what you want.

I want to keep this article focussed on just one thing: alternative constructors using class methods. Therefore, I won’t go through the whole code in this post. Perhaps I’ll write about the full project in a future article.

So, here’s what you need to know to get our discussion started.

Do you learn best from one-to-one sessions? The Python Coding Place offers one-to-one lessons on Zoom. Try them out, we bet you’ll love them. Find out more about one-to-one private sessions.

One-to-one Python sessions

Setting the Scene • Outlining the Meal and WeeklyMealPlanner Classes

Let me outline two of the classes in my code. The first is the Meal class. This class – you guessed it – deals with each meal. Here’s the class’s .__init__() method:

All code blocks are available in text format at the end of this article • #1

The meal has a name so we can easily refer to it, so there's a .name data attribute. And the meals I cook are different from the meals my wife cooks, which is why there's a .person_cooking data attribute. This matters because on some days of the week only one of us is available to prepare dinner!

There are also days when we have busy afternoons and evenings with children’s activities, so we need to cook a quick meal. The .quick_meal data attribute is a Boolean flag to help with planning for these hectic days.

Then there’s the .ingredients data attribute. You don’t need me to explain this one. And since each family member rates each meal, there’s a .ratings dictionary to keep track of the scores.

The class has more methods, such as add_ingredient(), remove_ingredient(), add_rating(), and more. There’s also code to save to and load from CSV and JSON files. But these are not necessary for today’s article, so I’ll leave them out.

There’s also a WeeklyMealPlanner class:

#2

The ._meals data attribute is a dictionary with the days of the week as keys and Meal instances as values. It’s defined as a non-public attribute to be used with the read-only property .meals. The .meals property returns a shallow copy of the ._meals dictionary. This makes it safer as it’s harder for a user to make changes directly to this dictionary. The dictionary is modified only through methods within WeeklyMealPlanner. I’ve omitted the rest of the methods in this class as they’re not needed for this article.
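The effect of returning a copy is easy to verify with a minimal version of the class (reaching into the non-public ._meals attribute here purely for demonstration): mutating the dictionary returned by .meals leaves the planner's internal state untouched.

```python
class WeeklyMealPlanner:
    def __init__(self):
        self._meals = {}  # day: meal

    @property
    def meals(self):
        return dict(self._meals)  # shallow copy, not the real dictionary

planner = WeeklyMealPlanner()
planner._meals["Monday"] = "Pasta"  # poking the non-public attribute for the demo

snapshot = planner.meals
snapshot["Monday"] = "Pizza"  # changes only the copy

print(planner.meals)  # {'Monday': 'Pasta'}
```

Note that the copy is shallow: the dictionary itself is new, but the values inside it are the same objects, so mutable Meal instances could still be modified through it.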

You can read more about properties in Python in this article: The Properties of Python’s ‘property’

So, each time we try a new dish, we create a Meal object, and each family member rates it. This meal then goes into our collection of meals to choose from each week. On Saturday, we choose the meals we want for the week, put them in a WeeklyMealPlanner instance, and we’re almost ready to go…

At the Supermarket

Well, we’re almost ready to go to the supermarket at this point. So, here’s another class:

#3

A ShoppingList object has an .ingredients data attribute. This attribute is a dictionary. The keys are the ingredients, and the values are the quantities needed for each ingredient. I’m also showing the .add_ingredient() method, which I’ll need later on. So, you can create an instance of ShoppingList in the usual way:

#4

Then, you can add ingredients as needed. But this is annoying for us on a Saturday. Here’s why…


Do you want to master Python one article at a time? Then don't miss out on the articles in The Club, which are exclusive to premium subscribers here on The Python Coding Stack

Subscribe now


Alternative Constructor

Before describing our Saturday problems, let’s briefly revisit what happens when you create an instance of a class. When you place parentheses after the class name, Python does two things: it creates a blank new object, and it initialises it. The creation of the new object almost always happens “behind the scenes”. The .__new__() method creates a new object, but you rarely need to override it. And the .__init__() method performs the object’s initialisation.
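You can watch this two-step sequence happen with a small demo class (Demo and log are just illustration names):

```python
log = []

class Demo:
    def __new__(cls, *args, **kwargs):
        log.append("__new__")   # step 1: create the blank object
        return super().__new__(cls)

    def __init__(self, value):
        log.append("__init__")  # step 2: initialise the new object
        self.value = value

demo = Demo(42)
print(log)         # ['__new__', '__init__']
print(demo.value)  # 42
```

Calling Demo(42) triggers both methods in order, which is why you almost never need to touch .__new__(): the interesting work usually belongs in .__init__().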

You can only have one .__init__() special method in a class. Does this mean there’s only one way to create an instance of a class?

Not quite, no. Although there’s no way to define more .__init__() methods, there are ways to create instances through different routes. The @singledispatchmethod decorator is a useful tool, but one I’ll discuss in a future post. Today, I want to talk about using class methods as alternative constructors.

Back to a typical Saturday in our household. We just finished choosing the seven dinners we plan to have this coming week, and we created a WeeklyMealPlanner instance. So we should now create a ShoppingList instance using ShoppingList() and then go through all the meals we chose, entering their ingredients.

Wouldn’t it be nice if we could just create a ShoppingList instance directly from the WeeklyMealPlanner instance? But that would require a different way to create an instance of ShoppingList.

Let’s define an alternative constructor, then:

#5

There’s a new method called .from_meal_planner(). However, this is not an instance method. It doesn’t belong to an instance of the class. Instead, it belongs to the class directly. The @classmethod decorator tells Python to treat this method as a class method. Note that the first parameter in this method is not self, as with the usual (instance) methods. Instead, you use cls, which is the parameter name used by convention to refer to the class.

Whereas self in an instance method represents the instance of a class, cls represents the class directly. So, unlike instance methods, class methods don’t have access to the instance. Therefore, class methods don’t have access to instance attributes.

The first line of this method creates an instance of the class. Look at the expression cls(), which comes after the = operator. Recall that cls refers to the class. So, cls is the same as ShoppingList in this example. But adding parentheses after the class creates an instance. You assign this new instance to the local variable shopping_list. You use cls rather than ShoppingList to make the class more robust in case you choose to subclass it later.
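Here's why cls matters for subclassing, shown with a minimal pair of classes (Container and TypedContainer are made-up names for illustration): because the class method calls cls() rather than a hard-coded class name, a subclass inherits a constructor that produces subclass instances.

```python
class Container:
    @classmethod
    def empty(cls):
        return cls()  # cls is whichever class the method was called on

class TypedContainer(Container):
    pass

print(type(Container.empty()).__name__)       # Container
print(type(TypedContainer.empty()).__name__)  # TypedContainer
```

Had empty() returned Container() instead, TypedContainer.empty() would silently hand back instances of the wrong class.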

Fast-forward to the end of this class method, and you’ll see that the method returns this new instance, shopping_list. However, it makes changes to the instance before returning it. The method fetches all the ingredients from each meal in the WeeklyMealPlanner instance and populates the .ingredients data attribute in the new ShoppingList instance.

In summary, the class method doesn’t have access to an instance through the self parameter. But since it has access to the class, the method uses the class to create a new instance and initialise it, adding steps to the standard .__init__() method.

Therefore, this class method creates and returns an instance of ShoppingList with its .ingredients data attribute populated with the ingredients you need for all the meals in the week.

You now have an alternative way of creating an instance of ShoppingList:

#6

This class now has two ways to create instances: the standard one using ShoppingList() and the alternative one using ShoppingList.from_meal_planner(). It's common for class methods used as alternative constructors to have names starting with from_*.

You can have as many alternative constructors as you need in a class.

Question: if it’s more useful to create a shopping list directly from the weekly meal planner, couldn’t you implement this logic directly in the .__init__() method? Yes, you could. But this would create a tight coupling between the two classes, ShoppingList and WeeklyMealPlanner. You could no longer use ShoppingList without an instance of WeeklyMealPlanner, and you could no longer easily create a blank ShoppingList instance.

Creating two constructors gives you the best of both worlds. ShoppingList is still flexible enough so you can use it as a standalone class or in conjunction with other classes in other projects. But you also have access to the alternative constructor ShoppingList.from_meal_planner() when you need it.

Alternative Constructors in the Wild

You may have already seen and used alternative constructors, perhaps without noticing.

Let’s consider dictionaries. The standard constructor is dict() – the name of the class followed by parentheses. As it happens, you have several options when using dict() – you can pass a mapping, or an iterable of pairs, or **kwargs. You can read more about these alternatives in this article: dict() is More Versatile Than You May Think.

But there’s another alternative constructor that doesn’t use the standard constructor dict() but still creates a dictionary. This is dict.fromkeys():

#7

You can have a look at help(dict.fromkeys). You’ll see the documentation text refer to this method as a class method, just like the ShoppingList.from_meal_planner() class method you defined earlier.
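As a quick aside, dict.fromkeys() also accepts a second argument to use as the value for every key (bear in mind that all keys share that same object, which matters for mutable values):

```python
# Every family member starts with a rating of 0
scores = dict.fromkeys(["James", "Bob"], 0)
print(scores)  # {'James': 0, 'Bob': 0}
```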

And if you use the datetime module, you most certainly have used alternative constructors using class methods. The standard constructor when creating a datetime.datetime instance is the following:

#8

However, there are several class methods you can use as alternative constructors:

#9

Have a look at the other datetime.datetime methods whose names start with from.

Your call…

The Python Coding Place offers something for everyone:

• a super-personalised one-to-one 6-month mentoring option: $4,750

• individual one-to-one sessions: $125

• a self-led route with access to 60+ hrs of exceptional video courses and a support forum: $400

Which The Python Coding Place student are you?

Find Out Now

Final Words

Python restricts you to defining only one .__init__() method. But there are still ways for you to create instances of a class through different routes. Class methods are a common way of creating alternative constructors for a class. You call them directly through the class and not through an instance of the class – ShoppingList.from_meal_planner(). The class method then creates an instance, modifies it as needed, and finally returns the customised instance.

Now, let me see what’s on tonight’s meal planner and, more importantly, whether it’s my turn to cook.

Photo by Katya Wolf


Code in this article uses Python 3.14

The code images used in this article are created using Snappify. [Affiliate link]

Join The Club, the exclusive area for paid subscribers for more Python posts, videos, a members’ forum, and more.

Subscribe now

You can also support this publication by making a one-off contribution of any amount you wish.

Support The Python Coding Stack


For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!

Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.

And you can find out more about me at stephengruppetta.com

Further reading related to this article’s topic:


Appendix: Code Blocks

Code Block #1
class Meal:
    def __init__(
            self,
            name,
            person_cooking,
            quick_meal=False,
    ):
        self.name = name
        self.person_cooking = person_cooking
        self.quick_meal = quick_meal
        self.ingredients = {}  # ingredient: quantity
        self.ratings = {}  # person: rating

    # ... more methods
Code Block #2
class WeeklyMealPlanner:
    def __init__(self):
        self._meals = {}  # day: Meal

    @property
    def meals(self):
        return dict(self._meals)

    # ... more methods
Code Block #3
class ShoppingList:
    def __init__(self):
        self.ingredients = {}  # ingredient: quantity

    def add_ingredient(self, ingredient, quantity=1):
        if ingredient in self.ingredients:
            self.ingredients[ingredient] += quantity
        else:
            self.ingredients[ingredient] = quantity
            
    # ... more methods
Code Block #4
ShoppingList()
Code Block #5
class ShoppingList:
    def __init__(self):
        self.ingredients = {}  # ingredient: quantity

    @classmethod
    def from_meal_planner(cls, meal_planner: WeeklyMealPlanner):
        shopping_list = cls()
        for meal in meal_planner.meals.values():
            if meal is None:
                continue
            for ingredient, quantity in meal.ingredients.items():
                shopping_list.add_ingredient(ingredient, quantity)
        return shopping_list

    def add_ingredient(self, ingredient, quantity=1):
        if ingredient in self.ingredients:
            self.ingredients[ingredient] += quantity
        else:
            self.ingredients[ingredient] = quantity
Code Block #6
# if my_weekly_planner is an instance of 'WeeklyMealPlanner', then...

shopping_list = ShoppingList.from_meal_planner(my_weekly_planner)
Code Block #7
dict.fromkeys(["James", "Bob", "Mary", "Jane"])
# {'James': None, 'Bob': None, 'Mary': None, 'Jane': None}
Code Block #8
import datetime
datetime.datetime(2026, 1, 30)
# datetime.datetime(2026, 1, 30, 0, 0)
Code Block #9
datetime.datetime.today()
# datetime.datetime(2026, 1, 30, 12, 54, 2, 243976)

datetime.datetime.fromisoformat("2026-01-30")
# datetime.datetime(2026, 1, 30, 0, 0)


January 30, 2026 02:18 PM UTC


Real Python

The Real Python Podcast – Episode #282: Testing Python Code for Scalability & What's New in pandas 3.0

How do you create automated tests to check your code for degraded performance as data sizes increase? What are the new features in pandas 3.0? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

January 30, 2026 12:00 PM UTC

January 29, 2026


PyBites

7 Software Engineering Fixes To Advance As A Developer

It’s January! If you look back at yourself from exactly one year ago, January 2025, how different are you as a developer from then to now?

Did you ship the app you were thinking about? Did you finally learn how to configure a proper CI/CD pipeline? Did you land the Senior role you were after?

Or did you just watch a lot more YouTube videos and buy a few more Udemy courses that you haven’t finished yet?

If the answer stings a little bit, you aren’t alone.

Over the last six years of coaching hundreds of developers in PDM, Bob and I have noticed a pattern. We see the same specific bottlenecks that keep smart, capable people stuck in Tutorial Hell for years.

They know the syntax and can solve code challenges, but they aren’t shipping.

In this week’s episode of the Pybites Podcast, we get straight to the fix. We aren’t talking about the latest Python library or a cool new feature in Django. We’re talking about the 7 Engineering Shifts you need to make to stop going in circles and actually become a professional software engineer this year.

We dive deep into the hard truths, including:

We are sharing the exact tips we give our PDM coaching clients to get them unstuck.

If you are tired of feeling productive but having nothing to show for it, this episode is for you.

Want the cheat sheet?

We condensed these 7 shifts into a brand new, high-impact guide: Escape Tutorial Hell. It breaks down every single point we discuss in the episode with actionable steps you can take today.

Download the free guide here.

Listen and Subscribe Here

January 29, 2026 11:18 AM UTC

January 28, 2026


Real Python

How Long Does It Take to Learn Python?

Have you read blog posts that claim you can learn Python in days and quickly secure a high-paying developer job? That’s an unlikely scenario and doesn’t help you prepare for a steady learning marathon. So, how long does it really take to learn Python, and is it worth your time investment?

By the end of this guide, you’ll understand that:

  • Most beginners can learn core Python fundamentals in about 2 to 6 months with consistent practice.
  • You can write a tiny script in days or weeks, but real confidence comes from projects and feedback.
  • Becoming job-ready often takes 6 to 12 months, depending on your background and target role.
  • Mastery takes years because the ecosystem and specializations keep growing.

The short answer for how long it takes to learn Python depends on your goals, time budget, and the level you’re aiming for.

Get the PDF Guide: Click here to download a free PDF guide that breaks down how long it takes to learn Python and what factors affect your timeline.

Take the Quiz: Test your knowledge with our interactive “Python Skill Test” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

Python Skill Test

Test your Python knowledge in a skills quiz with basic to advanced questions. Are you a Novice, Intermediate, Proficient, or Expert?

How Long Does It Take to Learn Python Basics?

Python is beginner-friendly, and you can start writing simple programs in just a few days. But reaching the basics stage still takes consistent practice because you’re learning both the language itself and how to think like a programmer.

The following timeline shows how long it typically takes to learn Python basics based on how much time you can practice each week:

Weekly practice time | Typical timeline for basics | What that feels like
2–3 hours/week | 8–12 months | Slow but steady progress
5–10 hours/week | 3–6 months | Realistic pace for busy adults
15–20 hours/week | ~2 months | Consistent focus and fast feedback
40+ hours/week | ~1 month | Full-time immersion

These ranges assume about five study days per week. If you add a sixth day, you’ll likely land toward the faster end of each range.

You’ll get better results if you use this table as a planning guide. Don’t think of it as rigid deadlines—your learning pace depends on many factors. For example, if you already know another programming language, then you can usually move faster. If you’re brand-new to coding, then expect to be at the slower end of each range.

As a general guideline, many beginners reach the basics in about 2 to 6 months with steady practice.

Note: If you’re ready to fast-track your learning with an expert-guided small cohort course that gives you live guidance and accountability, then check out Real Python’s live courses!

With a focused schedule of around four hours per day, five days per week, you can often reach the basics stage in roughly 6 to 10 weeks, assuming you’re writing and debugging code most sessions. By then, you should be able to finish several small projects on your own.

When you read online that someone learned Python quickly, they’re probably talking about this basics stage. And indeed, with the right mix of dedication, circumstances, and practice, learning Python basics can happen pretty fast!

Before you go ahead and lock in a timeline, take a moment to clarify for yourself why you want to learn Python. Understanding your motivation for learning Python will help along the way.

Learning Python means more than just learning the Python programming language. You need to know more than just the specifics of a single programming language to do something useful with your programming skills. At the same time, you don’t need to understand every single aspect of Python to be productive.

Learning Python is about learning how to accomplish practical tasks with Python programming. It’s about having a skill set that you can use to build projects for yourself or an employer.

As your next step, write down your personal goal for learning Python. Always keep that goal in mind throughout your learning journey. Your goal shapes what you need to learn and how quickly you’ll progress.

What’s a Practical 30-Day Learning Plan for Complete Beginners?

When you’re clear about your why, you can start drafting your personal Python learning roadmap.

If you’re starting from zero and can spend about 5 to 10 hours per week, the following plan keeps you moving without becoming overwhelming:

Aim to finish at least one small project by the end of the month. The project matters more than completing every tutorial or task on your checklist.

Read the full article at https://realpython.com/how-long-does-it-take-to-learn-python/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

January 28, 2026 02:00 PM UTC


PyCharm

PyCharm is designed to support the full range of modern Python workflows, from web development to data and ML/AI work, in a single IDE. An essential part of these workflows is Jupyter notebooks, which are widely used for experimentation, data exploration, and prototyping across many roles.

PyCharm provides first-class support for Jupyter notebooks, both locally and when connecting to external Jupyter servers, with IDE features such as refactoring and navigation available directly in notebooks. Meanwhile, Google Colab has become a key tool for running notebook-based experiments in the cloud, especially when local resources are insufficient.

With PyCharm 2025.3.2, we’re bringing local IDE workflows and Colab-hosted notebooks together. Google Colab support is now available for free in PyCharm as a core feature, along with basic Jupyter notebook support. If you already use Google Colab, you can now bring your notebooks into PyCharm and work with them using IDE features designed for larger projects and longer development sessions.

Getting started with Google Colab in PyCharm

Connecting PyCharm to Colab is quick and straightforward:

  1. Open a Jupyter notebook in PyCharm.
  2. Select Google Colab (Beta) from the Jupyter server menu in the top-right corner.
  3. Sign in to your Google account.
  4. Create and use a Colab-backed server for the notebook.

Once connected, your notebook behaves as usual, with navigation, inline outputs, tables, and visualizations rendered directly in the editor.

Working with data and files 

When your Jupyter notebook depends on files that are not yet available on the Colab machine, PyCharm helps you handle this without interrupting your workflow. If a file is missing, you can upload it directly from your local environment. The remote file structure is also visible in the Project tool window, so you can browse directories and inspect files as you work.

Whether you’re experimenting with data, prototyping models, or working with notebooks that outgrow local resources, this integration makes it easier to move between local work, remote execution, and cloud resources without changing how you work in PyCharm.

If you’d like to try it out:

January 28, 2026 01:40 PM UTC


EuroPython

January Newsletter: We Want Your Proposals for Kraków!

Happy New Year! We're kicking off 2026 with exciting news: EuroPython is moving to a brand new location! After three wonderful years in Prague, we're heading to Kraków, Poland for our 25th anniversary edition. Mark your calendars for July 13-19, 2026. 🎉

🏰 Welcome to Kraków!

EuroPython 2026 will take place at the ICE Kraków Congress Centre, bringing together 1,500+ Python enthusiasts for a week of learning, networking, and collaboration. 

Check out all the details: ep2026.europython.eu/krakow

alt

📣 Call for Proposals is OPEN!

The CfP is now live, and we want to hear from YOU! Whether you're a seasoned speaker or considering your first talk, tutorial or poster, we're looking for proposals on all topics and experience levels.

Deadline: February 15th, 2026 at 23:55 UTC+1 (no extension, so don’t leave it until the last minute!)

We're seeking:

No matter your level of Python or public speaking experience, EuroPython is here to help you bring yourself to our community. Represent your work, your interests, and your unique perspective!

Want to get some extra help? The first 100 proposals will get direct feedback from the Programme team, so hurry with your submissions!

👉 Submit your proposal by February 15th: programme.europython.eu


🎤 Speaker Mentorship is Open

First time speaking? Feeling nervous? The Speaker Mentorship Programme is back! We match mentees with experienced speakers who’ll help you craft strong proposals and, if accepted, prepare your talk. This programme especially welcomes folks from underrepresented backgrounds in tech.

Applications are open now for Mentees and Mentors. Don’t let uncertainty hold you back – apply and join our supportive community of speakers.

Deadline: 10th February 2026, 23:59 UTC

👉 More info: ep2026.europython.eu/mentorship

🎙️ Conversations with First-Time Speakers

Want to hear from people who’ve been in your shoes? Check out our interviews with first-time speakers who took the leap. They share their experience of what it’s really like to speak at EuroPython.

👉 With Jenny Vega: https://youtu.be/0lLrQkPtOy8

👉 With Kayode Oladapo: https://youtu.be/qy7BZUJCYD4 

🎥 Video Recap from Prague

Prague was incredible! ✨ Relive the best moments from EuroPython 2025 in our video recap.

📢 Help Us Spread the Word!

Big thanks to our speaker and community organiser Honza Král for giving a lightning talk about EuroPython at Prague Pyvo. If you’re a speaker or community organizer, we’d love your help spreading the word about the CfP!


💰 Sponsorship & Financial Aid

Sponsorship packages will be announced soon! Interested in supporting EuroPython 2026? Reach out to us at sponsoring@europython.eu.

Financial Aid applications will open in the coming weeks. We’re committed to making EuroPython accessible to everyone, regardless of financial situation. Stay tuned!

🤝  Where can you meet us this month?  

We’ll be at FOSDEM this weekend (February 1-2) with a booth alongside the Python Software Foundation and Django Software Foundation. If you’re in Brussels, come say hi, grab some stickers, and get the latest EuroPython news!

We’re also heading to Ostrava Python Pizza! Join us for tasty pizza and good conversation about all things Python on 21st February.

👋 Stay Connected

Follow us on social media and subscribe to our newsletter for all the updates:

January 28, 2026 10:56 AM UTC


Hugo van Kemenade

Speeding up Pillow's open and save

Tachyon #

I tried out Tachyon, the new “high-frequency statistical sampling profiler” coming in Python 3.15, to see if we can speed up the Pillow imaging library. I started with a simple script to open an image:

import sys
from PIL import Image

im = Image.open(f"Tests/images/hopper.{sys.argv[1]}")

Then ran:

$ python3.15 -m profiling.sampling run --flamegraph /tmp/1.py png
Captured 35 samples in 0.04 seconds
Sample rate: 1,000.00 samples/sec
Error rate: 25.71
Flamegraph data: 1 root functions, total samples: 26, 169 unique strings
Flamegraph saved to: flamegraph_97927.html

Which generates this flame graph:

Flame graph for opening a PNG with Pillow

The whole thing took 40 milliseconds, with half in Image.py’s open(). If you visit the interactive HTML page, you can see that open() calls preinit(), which in turn imports GifImagePlugin, BmpImagePlugin, PngImagePlugin and JpegImagePlugin (hover over the <module> boxes to see them).

Do we really need to import all those plugins when we’re only interested in PNG?

Okay, let’s try another kind of image:

$ python3.15 -m profiling.sampling run --flamegraph /tmp/1.py webp
Captured 59 samples in 0.06 seconds
Sample rate: 1,000.00 samples/sec
Error rate: 22.03
Flamegraph data: 1 root functions, total samples: 46, 256 unique strings
Flamegraph saved to: flamegraph_98028.html

Flame graph for opening a WebP with Pillow

Hmm, 60 milliseconds with 80% in open() and most of that in init(). The HTML page shows it imports AvifImagePlugin, PdfImagePlugin, WebpImagePlugin, DcxImagePlugin, DdsImagePlugin and PalmImagePlugin. We also have preinit importing GifImagePlugin, BmpImagePlugin and PngImagePlugin.

Again, why import even more plugins when we only care about WebP?

Loading all the plugins? #

That’s enough profiling, let’s look at the code.

When open()ing or save()ing an image, if Pillow isn’t yet initialised, we call a preinit() function. This loads five drivers for five formats by importing their plugins: BMP, GIF, JPEG, PPM and PNG.

During import, each plugin registers its file extensions, MIME types and some methods used for opening and saving.

Then we check each of these plugins in turn to see if one will accept the image. Most of Pillow’s plugins detect an image by opening the file and checking if the first few bytes match a magic prefix. For example:
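A magic-prefix check for PNG might look like this (a hedged sketch of the idea, not Pillow’s actual plugin code; the 8-byte PNG signature itself is real):

```python
# The 8-byte PNG file signature. A plugin's "accept" callable is handed a
# small prefix read from the start of the file and returns whether the
# plugin recognises it (sketch only; Pillow structures this differently).
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def accept(prefix):
    return prefix.startswith(PNG_MAGIC)

print(accept(b"\x89PNG\r\n\x1a\n\x00\x00"))  # True
print(accept(b"GIF89a"))                     # False
```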

If none of these five match, we call init(), which imports the remaining 42 plugins. We then check each of these for a match.

This has been the case since at least PIL 1.1.1 released in 2000 (this is the oldest version I have to check). There were 33 builtin plugins then and 47 now.

Lazy loading #

This is all a bit wasteful if we only need one or two image formats during a program’s lifetime, especially for things like CLIs. Longer-running programs may need a few more, but are unlikely to need all 47.

A benefit of the plugin system is that third parties can create their own plugins, but we can be more efficient with our builtins.

I opened a PR to add a mapping of file extensions to plugins. Before calling preinit() or init(), we can instead do a cheap lookup, which may save us importing, registering, and checking all those plugins.

Of course, we may have an image without an extension, or with the “wrong” extension, but that’s fine; I expect it’s rare and anyway we’ll fall back to the original preinit() -> init() flow.
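A hypothetical sketch of such an extension-to-plugin lookup (the names and entries here are made up for illustration; the real mapping in the PR is more complete and structured differently):

```python
import os

# Hypothetical extension -> plugin-module mapping (illustrative only)
EXTENSION_PLUGINS = {
    ".png": "PngImagePlugin",
    ".webp": "WebPImagePlugin",
    ".gif": "GifImagePlugin",
}

def plugin_for(filename):
    """Cheap dict lookup; None signals 'fall back to preinit()/init()'."""
    ext = os.path.splitext(filename)[1].lower()
    return EXTENSION_PLUGINS.get(ext)

print(plugin_for("hopper.PNG"))  # PngImagePlugin
print(plugin_for("hopper"))      # None -> fall back to the old flow
```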

After merging the PR, here’s a new flame graph for opening PNG (HTML page):

Much less compressed flame graph showing less work

And for WebP (HTML page):

Much less compressed for WebP

The flame graphs are scaled to the same width, but there are far fewer boxes, meaning much less work is happening now. We’re down from 40 and 60 milliseconds to 20 milliseconds each.

The PR has a bunch of benchmarks which show opening a PNG (that previously loaded five plugins) is now 2.6 times faster. Opening a WebP (that previously loaded all 47 plugins) is now 14 times faster. Similarly, saving a PNG is improved by 2.2 times and a WebP by 7.9 times. Success! This will be in Pillow 12.2.0.

See also #

January 28, 2026 10:29 AM UTC


EuroPython

Humans of EuroPython: Rodrigo Girão Serrão

EuroPython depends entirely on the dedication of volunteers who invest tremendous effort into bringing it to life. From managing sponsor relationships and designing the event schedule to handling registration systems and organizing social events, countless hours of passionate work go into ensuring each year surpasses the last.

Discover our recent conversation with Rodrigo Girão Serrão, who served on the EuroPython 2025 Programme Team.

We’re grateful for your work on the conference programme, Rodrigo!

Rodrigo Girão Serrão, member of the Programme Team at EuroPython 2025

EP: Had you attended EuroPython before volunteering, or was volunteering your first experience with it?

When I attended my first EuroPython in person I was not officially a volunteer but ended up helping a bit. Over the years, my involvement with EuroPython as a volunteer and organiser has been increasing exponentially!

EP: Are there any new skills you learned while volunteering at EuroPython? If so, which ones?

Volunteering definitely pushed me to develop many skills. As an example, hosting the sprints developed my social skills since I had to welcome all the participants and ensure they had everything they needed. It also improved my management skills, from supporting the project sprint organisers to coordinating with venue staff.

EP: Did you have any unexpected or funny experiences during EuroPython?

In a recent EuroPython someone came up to me after my tutorial and said something like “I doubted your tutorial was going to be good, but in the end it was good”. Why on Earth would that person doubt me in the first place and then come to me and admit it? 🤣

EP: Did you make any lasting friendships or professional connections through volunteering?

Yes to both! Many of these relationships grew over time through repeated interactions across multiple EuroPython editions and also other conferences. Volunteering created a sense of continuity and made it much easier to connect with the same people year after year.

EP: If you were to invite someone else, what do you think are the top 3 reasons to join the EuroPython organizing team?

Nothing beats the smiles and thank you’s you get when the conference is over. Plus, it is an amazing feeling to be part of something bigger than yourself.

EP: Would you volunteer again, and why?

Hell yeah! See above :)

EP: Thanks, Rodrigo!

January 28, 2026 10:07 AM UTC


PyCharm

Google Colab Support Is Now Available in PyCharm 2025.3.2

January 28, 2026 09:33 AM UTC


Python Morsels

All iteration is the same in Python

In Python, for loops, list comprehensions, tuple unpacking, and * unpacking all use the same iteration mechanism.

Table of contents

  1. Looping over dictionaries gives keys
  2. Looping over strings provides characters
  3. Looping is looping
  4. The ups and downs of duck typing

Looping over dictionaries gives keys

When you loop over a dictionary, you'll get the keys in that dictionary:

>>> my_dict = {'red': 2, 'blue': 3, 'green': 4}
>>> for thing in my_dict:
...     print(thing)
...
red
blue
green

If you loop over a dictionary in a list comprehension, you'll also get keys:

>>> names = [x.upper() for x in my_dict]
>>> names
['RED', 'BLUE', 'GREEN']

Iterable unpacking with * also relies on iteration. So if we use this to iterate over a dictionary, we again get the keys:

>>> print(*my_dict)
red blue green

The same thing happens if we use * to unpack a dictionary into a list:

>>> colors = ["purple", *my_dict]
>>> colors
['purple', 'red', 'blue', 'green']

And even tuple unpacking relies on iteration. Anything you can loop over can be unpacked. Since we know there are three items in our dictionary, we could unpack it:

>>> a, b, c = my_dict

And of course, as strange as it may seem, we get the keys in our dictionary when we unpack it:

>>> a
'red'
>>> b
'blue'

So what would happen if we turned our dictionary into a list by passing it to the list constructor?

>>> list(my_dict)

Well, list will loop over whatever iterable was given to it and make a new list out of it. And when we loop over a dictionary, what do we get?

The keys:

>>> list(my_dict)
['red', 'blue', 'green']

And of course, if we ask whether something is in a dictionary, we are asking about the keys:

>>> 'blue' in my_dict
True

Iterating over a dictionary object in Python will give you keys, no matter what Python feature you're using to do that iteration. All forms of iteration do the same thing in Python.
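Under the hood, every one of these constructs starts by calling iter() on the object and then repeatedly calls next() on the resulting iterator; a quick sketch of that shared mechanism:

```python
my_dict = {'red': 2, 'blue': 3, 'green': 4}

# Every looping construct begins by asking for an iterator...
it = iter(my_dict)

# ...and then pulls items from it one at a time
print(next(it))  # red
print(next(it))  # blue
```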

Aside: of course if you want key-value pairs you can get them using the dictionary items method.
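A quick illustration of the items method, which yields (key, value) tuples and is itself driven by the same iteration mechanism:

```python
my_dict = {'red': 2, 'blue': 3, 'green': 4}

# items() produces (key, value) pairs; list() iterates over them
pairs = list(my_dict.items())
print(pairs)  # [('red', 2), ('blue', 3), ('green', 4)]
```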

Looping over strings provides characters

Strings are also iterables.

Read the full article: https://www.pythonmorsels.com/all-iteration-is-the-same/

January 28, 2026 12:30 AM UTC

January 27, 2026


Giampaolo Rodola

From Python 3.3 to today: ending 15 years of subprocess polling

One of the less fun aspects of process management on POSIX systems is waiting for a process to terminate. The standard library's subprocess module has relied on a busy-loop polling approach since the timeout parameter was added to Popen.wait() in Python 3.3, around 15 years ago (see source). And psutil's Process.wait() method uses exactly the same technique (see source).

The logic is straightforward: check whether the process has exited using non-blocking waitpid(WNOHANG), sleep briefly, check again, sleep a bit longer, and so on.

import os, time

def wait_busy(pid, timeout):
    end = time.monotonic() + timeout
    interval = 0.0001
    while time.monotonic() < end:
        pid_done, _ = os.waitpid(pid, os.WNOHANG)
        if pid_done:
            return
        time.sleep(interval)
        interval = min(interval * 2, 0.04)
    raise TimeoutError  # subprocess/psutil raise TimeoutExpired here

In this blog post I'll show how I finally addressed this long-standing inefficiency, first in psutil, and most excitingly, directly in CPython's standard library subprocess module.

The problem with busy-polling

Event-driven waiting

All POSIX systems provide at least one mechanism to be notified when a file descriptor becomes ready. These are select(), poll(), epoll() (Linux) and kqueue() (BSD / macOS) system calls. Until recently, I believed they could only be used with file descriptors referencing sockets, pipes, etc., but it turns out they can also be used to wait for events on process PIDs!

Linux

In 2019, Linux 5.3 introduced a new syscall, pidfd_open(), which was added to the os module in Python 3.9. It returns a file descriptor referencing a process PID. The interesting thing is that pidfd_open() can be used in conjunction with select(), poll() or epoll() to effectively wait until the process exits. E.g. by using poll():

import os, select

def wait_pidfd(pid, timeout):
    pidfd = os.pidfd_open(pid)
    try:
        poller = select.poll()
        poller.register(pidfd, select.POLLIN)
        # block until process exits or timeout occurs
        events = poller.poll(timeout * 1000)
        if events:
            return
        raise TimeoutError
    finally:
        os.close(pidfd)

This approach has zero busy-looping. The kernel wakes us up exactly when the process terminates or when the timeout expires if the PID is still alive.

I chose poll() over select() because select() has a historical file descriptor limit (FD_SETSIZE), which typically caps it at 1024 file descriptors per process (reminded me of BPO-1685000).

I chose poll() over epoll() because it does not require creating an additional file descriptor. It also needs only a single syscall, which should make it a bit more efficient when monitoring a single FD rather than many.

macOS and BSD

BSD-derived systems (including macOS) provide the kqueue() syscall. It's conceptually similar to select(), poll() and epoll(), but more powerful (e.g. it can also handle regular files). kqueue() can be passed a PID directly, and it will return once the PID disappears or the timeout expires:

import select

def wait_kqueue(pid, timeout):
    kq = select.kqueue()
    try:
        kev = select.kevent(
            pid,
            filter=select.KQ_FILTER_PROC,
            flags=select.KQ_EV_ADD | select.KQ_EV_ONESHOT,
            fflags=select.KQ_NOTE_EXIT,
        )
        # block until process exits or timeout occurs
        events = kq.control([kev], 1, timeout)
        if events:
            return
        raise TimeoutError
    finally:
        kq.close()

Windows

Windows does not busy-loop in either psutil or the subprocess module, thanks to WaitForSingleObject. This means Windows has effectively had event-driven process waiting from the start, so there is nothing to do on that front.

Graceful fallbacks

Both pidfd_open() and kqueue() can fail for different reasons. For example, with EMFILE if the process runs out of file descriptors (usually 1024), or with EACCES / EPERM if the syscall was explicitly blocked at the system level by the sysadmin (e.g. via SECCOMP). In all cases, psutil silently falls back to the traditional busy-loop polling approach rather than raising an exception.

This fast-path-with-fallback approach is similar in spirit to BPO-33671, where I sped up shutil.copyfile() by using zero-copy system calls back in 2018. There, the more efficient os.sendfile() is attempted first, and if it fails (e.g. on network filesystems) we fall back to the traditional read() / write() approach for copying regular files.
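Putting the pidfd fast path and the busy-loop fallback together, the pattern might be sketched like this (a hypothetical Linux-oriented sketch; psutil's real implementation differs):

```python
import os
import select
import time

def wait_event_driven(pid, timeout):
    """Wait for a child pid to exit: try pidfd + poll() first, and fall
    back to WNOHANG busy-looping if pidfd_open() is unavailable or fails."""
    try:
        pidfd = os.pidfd_open(pid)  # Linux 5.3+ / Python 3.9+
    except (AttributeError, OSError):
        # Fallback: traditional busy loop with exponential backoff
        end = time.monotonic() + timeout
        interval = 0.0001
        while time.monotonic() < end:
            pid_done, status = os.waitpid(pid, os.WNOHANG)
            if pid_done:
                return status
            time.sleep(interval)
            interval = min(interval * 2, 0.04)
        raise TimeoutError
    try:
        poller = select.poll()
        poller.register(pidfd, select.POLLIN)
        if not poller.poll(timeout * 1000):
            raise TimeoutError
        _, status = os.waitpid(pid, 0)  # reap the exited child
        return status
    finally:
        os.close(pidfd)
```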

Measurement

As a simple experiment, here's a program which waits on itself for 10 seconds without terminating:

# test.py
import psutil, os
try:
    psutil.Process(os.getpid()).wait(timeout=10)
except psutil.TimeoutExpired:
    pass

We can measure the CPU context switching using /usr/bin/time -v. Before the patch (the busy-loop):

$ /usr/bin/time -v python3 test.py 2>&1 | grep context
    Voluntary context switches: 258
    Involuntary context switches: 4

After the patch (the event-driven approach):

$ /usr/bin/time -v python3 test.py 2>&1 | grep context
    Voluntary context switches: 2
    Involuntary context switches: 1

This shows that instead of spinning in userspace, the process blocks in poll() / kqueue(), and is woken up only when the kernel notifies it, resulting in just a few CPU context switches.

Sleeping state

It's also interesting to note that waiting via poll() (or kqueue()) puts the process into the exact same sleeping state as a plain time.sleep() call. From the kernel's perspective, both are interruptible sleeps: the process is de-scheduled, consumes zero CPU, and sits quietly in kernel space.

The "S+" state shown below by ps means that the process "sleeps in foreground".

$ (python3 -c 'import time; time.sleep(10)' & pid=$!; sleep 0.3; ps -o pid,stat,comm -p $pid) && fg &>/dev/null
    PID STAT COMMAND
 491573 S+   python3
$ (python3 -c 'import os,select; fd = os.pidfd_open(os.getpid(),0); p = select.poll(); p.register(fd,select.POLLIN); p.poll(10_000)' & pid=$!; sleep 0.3; ps -o pid,stat,comm -p $pid) && fg &>/dev/null
    PID STAT COMMAND
 491748 S+   python3

CPython contribution

After landing the psutil implementation (psutil/PR-2706), I took the extra step and submitted a matching pull request for CPython's subprocess module: cpython/PR-144047.

I'm especially proud of this one: this is the second time in psutil's 17+ year history that a feature developed in psutil made its way upstream into the Python standard library. The first was back in 2011, when psutil.disk_usage() inspired shutil.disk_usage() (see python-ideas ML proposal).

Funny thing: 15 years ago, Python 3.3 added the timeout parameter to subprocess.Popen.wait() (see commit). That's probably where I took inspiration when I first added the timeout parameter to psutil's Process.wait() around the same time (see commit). Now, 15 years later, I'm contributing back a similar improvement for that very same timeout parameter. The circle is complete.

Links

Topics related to this:

Discussion

January 27, 2026 11:00 PM UTC