
Planet Python

Last update: October 21, 2025 01:43 PM UTC

October 21, 2025


Reuven Lerner

Your personal mentor for Python and Pandas

I’ve been teaching Python and Pandas for decades — in companies, at conferences, and on YouTube — and I keep hearing the same frustrations everywhere I go:

That’s why I’ve spent the last few months completely rebuilding LernerPython.com. It’s a complete, structured learning experience, not just a set of courses.

Instead of leaving you on your own, here’s the system I’ve built:

✅ Dozens of courses on Python, Pandas, and Git, from “Python for non-programmers” to “Advanced Python objects.” Each course includes exercises and downloadable Jupyter notebooks.

✅ Hundreds of exercises to help everything “click,” plus new real-world data challenges every Wednesday from my Bamboo Weekly newsletter.

✅ Live mentorship: twice-monthly Zoom office hours and a private Discord where you can ask me anything.

✅ Members-only lectures on topics chosen by the community, from the Unix shell to pytest, from plotting to uv.

✅ Exclusive member perks: You’ll get discounts on Python certification, as well as access to any new courses I produce. Upcoming courses include FastAPI, PyArrow, concurrency, modern Pandas, and machine learning.

LernerPython.com isn’t just about videos. It’s about learning, practice, and personal guidance from someone who teaches this every day.

Here’s what some people have said:

“Reuven is one of the top five best teachers I’ve ever had! His clarity of expression and sense of humor make learning from him worthwhile and fun.”— Norman Eliaser, Business Systems Analyst

“I’ve gotten tremendous insight into how Python works. A great blend of not just how to do things, but why?”— Ahmed, Sr infrastructure engineer

“Reuven takes complicated subjects and makes them simple. There is a TON of fantastic material in a LernerPython subscription and more is always being added to it. It is one of my favorite and most valuable sources of Python educational materials.”— Michael Dahlberg, Systems Administrator

👉 Start your free two-week trial today, and see why learners around the world are becoming more confident with Python and Pandas. Check it out at https://LernerPython.com !

The post Your personal mentor for Python and Pandas appeared first on Reuven Lerner.

October 21, 2025 01:05 PM UTC


Tryton News

Security Release for issue #14290

Brandon Da Costa and Mahdi Afshar have found that sao executes JavaScript included in HTML documents (such as attachments). These documents may be uploaded by any authenticated user. The JavaScript is executed in the same context as sao which gives access to sensitive data such as the session.

Impact

CVSS v3.0 Base Score: 7.3

If the inbound_email and document_incoming modules are activated, the impact increases as anybody can send emails with attachments: CVSS v3.0 Base Score: 8.1

Workaround

There is no general workaround.
For inbound email, blocking emails with HTML attachments will block this attack vector.

Resolution

All affected users should upgrade sao to the latest version.

Affected versions per series:

Non affected versions per series:

Reference

Concerns?

Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/ with the confidential checkbox checked.

1 post - 1 participant

Read full topic

October 21, 2025 06:00 AM UTC


Armin Ronacher

Regulation Isn’t the European Trap — Resignation Is

Plenty has been written about how hard it is to build in Europe versus the US. The list is always the same, with little progress: brittle politics, dense bureaucracy, mandatory notaries, endless and rigid KYC and AML processes. Fine. I know, you know.

I’m not here to add another complaint to the pile (but if we meet over a beer or coffee, I’m happy to unload a lot of hilarious anecdotes on you). The unfortunate reality is that most of these constraints won’t change in my lifetime and maybe ever. Europe is not culturally aligned with entrepreneurship, it’s opposed to the idea of employee equity, and our laws reflect that.

What bothers me isn’t the rules — it’s the posture that develops from them in people who should know better. Across the system, everyone points at someone else. If a process takes 10 steps, you’ll find 10 people who feel absolved of responsibility because they can cite 9 other blockers. Friction becomes a moral license to do a mediocre job (while lamenting about it).

The vibe is: “Because the system is slow, I can be slow. Because there are rules, I don’t need judgment. Because there’s risk, I don’t need initiative.” And then we all nod along and nothing moves.

There are excellent people here; I’ve worked with them. But they are fighting upstream against a default of low agency. When the process is bad, too many people collapse into it. Communication narrows to the shortest possible message. Friday after 2pm, the notary won’t reply — and the notary surely will blame labor costs or regulation for why service ends there. The bank will cite compliance for why they don’t need to do anything. The registrar will point at some law that allows them to demand a translation of a document by a court appointed translator [^1]. Everyone has a reason. No one owns the outcome.

Meanwhile, in the US, our counsel replies when it matters, even after hours. Bankers answer the same day. The instinct is to enable progress, not enumerate reasons you can’t have it. The goal is the outcome and the rules are constraints to navigate, not a shield to hide behind.

So what’s the point? I can’t fix politics. What I can do: act with agency, and surround myself with people who do the same and speak in support of it. Work with those who start from “how do we make this work?” not “why this can’t work.” Name the absurdities without using them as cover. Be transparent, move anyway and tell people.

Nothing stops a notary from designing an onboarding flow that gets an Austrian company set up in five days — standardized KYC packets, templated resolutions, scheduled signing slots, clear checklists, async updates, a bias for same-day feedback. That could exist right now. It rarely does, and when it does, it falls short.

Yes, much in Europe is objectively worse for builders. We have to accept it. Then squeeze everything you can from what is in your control:

Select for agency. Choose partners who answer promptly when it’s material and who don’t confuse process with progress.

The trap is not only regulation. It’s the learned helplessness it breeds. If we let friction set our standards, we become the friction. We won’t legislate our way to a US-style environment anytime soon. But we don’t need permission to be better operators inside a bad one.

That’s the contrast and it’s the part we control.


Postscript: Comparing Europe to the US triggers people, and I’m conscious of that. Maturity is holding two truths at once: they do some things right and some things wrong, and so do we. You don’t win by talking others down or praying for their failure. I’d rather see both Europe and the US succeed than celebrate Europe failing slightly less.

And no, saying I feel gratitude and happiness when I get a midnight reply doesn’t make me anti-work-life balance (I am not). It means when something is truly time-critical, fast, clear action lifts everyone. The times someone sent a document in minutes, late at night, both sides felt good about it when it mattered. Responsiveness, used with judgment, is not exploitation; it’s respect for outcomes and the relationships we form.

October 21, 2025 12:00 AM UTC

October 20, 2025


Python Morsels

T-strings: Python's Fifth String Formatting Technique?

Python’s new t-strings may look like f-strings, but they work in a totally different way, allowing you to delay string interpolation.

Table of contents

  1. String formatting: a very brief history
  2. How are t-strings different
  3. The problem with f-strings
  4. T-strings return templates
  5. Lazy string interpolation with t-strings
  6. A dedent function that uses t-strings
  7. When to use t-strings
  8. T-strings are primarily for library authors

String formatting: a very brief history

Python has had many different waves of string interpolation over the years.

It's had percent-style (%) string formatting since pretty much the beginning:

>>> name = "Trey"
>>> n = 3
>>> "%s, you have %d new messages." % (name, n)
'Trey, you have 3 new messages.'

Then Python 2.4 added a Template class to the string module:

>>> from string import Template
>>> t = Template("$name, you have $n new messages.")
>>> t.substitute(name=name, n=n)
'Trey, you have 3 new messages.'

And Python 2.6 added an even easier version of string formatting with the string format method:

>>> "{name}, you have {n} new messages.".format(name=name, n=n)
'Trey, you have 3 new messages.'

Then Python 3.6 added a new string formatting syntax with f-strings:

>>> f"{name}, you have {n} new messages."
'Trey, you have 3 new messages.'

Now, Python 3.14 has yet another string interpolation syntax with t-strings:

>>> t"{name}, you have {n} new messages."
Template(strings=('', ', you have ', ' new messages.'), interpolations=(Interpolation('Trey', 'name', None, ''), Interpolation(3, 'n', None, '')))

Well, sort of!
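
That Template object can then be rendered later, whenever and however you like. As a rough sketch (not from this article, and assuming Python 3.14's string.templatelib API, where iterating a Template yields the static strings and Interpolation objects in order):

from string.templatelib import Interpolation, Template

def render(template: Template) -> str:
    # Walk the template, formatting each interpolation as we go.
    # (Conversions like !r are ignored here for brevity.)
    parts = []
    for item in template:
        if isinstance(item, Interpolation):
            parts.append(format(item.value, item.format_spec))
        else:
            parts.append(item)
    return "".join(parts)

name = "Trey"
n = 3
print(render(t"{name}, you have {n} new messages."))
# Trey, you have 3 new messages.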

How are t-strings different

Unlike f-strings, t-strings don't actually …

Read the full article: https://www.pythonmorsels.com/t-strings-in-python/

October 20, 2025 08:30 PM UTC


Ari Lamstein

When One Graph Isn’t Enough: Upcoming Streamlit Workshop

Have you ever worked with a dataset so rich, so multidimensional, that no single graph—or even a dashboard—could do it justice?

That was the challenge I kept running into as a data scientist and educator. Whether I was exploring US Census trends or trying to make immigration enforcement data more transparent, I found myself wishing for a better way to let others explore the data, not just look at my summary.

Enter: Streamlit—a Python framework that lets you build interactive web apps without writing a single line of front-end code.
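
To give a flavor of what that looks like, here is a minimal sketch (the CSV file and column names are made up for illustration and are not the workshop app):

# app.py -- run with: streamlit run app.py
import pandas as pd
import streamlit as st

st.title("State population explorer")

# Hypothetical dataset with 'state', 'year', and 'population' columns
df = pd.read_csv("state_population.csv")

state = st.selectbox("Choose a state", sorted(df["state"].unique()))
subset = df[df["state"] == state]

st.line_chart(subset, x="year", y="population")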

Next week, I’m teaching a hands-on workshop with the American Statistical Association (ASA) and Instats.org called Data Apps in Python with Streamlit. It’s designed for researchers, analysts, and educators who are comfortable with Python and want to learn how to turn their analyses into simple, shareable apps.

What You’ll Learn

Across two focused sessions—three hours on Monday and three hours on Tuesday—we’ll walk through the full lifecycle of building and deploying a data app.

This workshop includes a review of Python’s data analysis ecosystem using Jupyter Notebooks and Pandas. After this review you’ll learn how to:

You’ll build a working app that visualizes trends in US Census data—but the skills you’ll learn are transferable to any dataset or research question. The final app you deploy will have its own URL, which you can share with colleagues, friends, and family.

Why This Workshop?

If you’ve ever:

…this course is for you.

No prior experience with Streamlit or web development is required. If you’ve worked with Python before—even just a little—you’ll be able to follow along. We’ll review key tools like Pandas, Jupyter Notebooks, and GitHub as part of the workshop, so passing familiarity is plenty. And if you have a dataset you’re curious about, even better—but we’ll provide examples you can build on, so you don’t need to bring anything to participate fully.

Join Me

The course runs October 27–28, with two live Zoom sessions—just three hours each day—plus 30-day access to all materials and a monitored Q&A forum. You’ll also receive an official certificate of completion.

🔗 Learn more and enroll here

Let’s build something interactive together!

October 20, 2025 04:00 PM UTC


Real Python

When Should You Use .__repr__() vs .__str__() in Python?

One of the most common tasks that a computer program performs is to display data. The program often displays this information to the program’s user. However, a program also needs to show information to the programmer developing and maintaining it. The information a programmer needs about an object differs from how the program should display the same object for the user, and that’s where .__repr__() vs .__str__() comes in.

A Python object has several special methods that provide specific behavior. There are two similar special methods that describe the object using a string representation. These methods are .__repr__() and .__str__(). The .__repr__() method returns a detailed description for a programmer who needs to maintain and debug the code. The .__str__() method returns a simpler description with information for the user of the program.

The .__repr__() and .__str__() methods are two of the special methods that you can define for any class. They allow you to control how a program displays an object in several common forms of output, such as what you get from the print() function, formatted strings, and interactive environments.

In this tutorial, you’ll learn how to differentiate .__repr__() vs .__str__() and how to use these special methods in the classes you define. Defining these methods effectively makes the classes that you write more readable and easier to debug and maintain. So, when should you choose Python’s .__repr__() vs .__str__()?

Free Download: Get a sample chapter from Python Tricks: The Book that shows you Python’s best practices with simple examples you can apply instantly to write more beautiful + Pythonic code.

Take the Quiz: Test your knowledge with our interactive “Using .__repr__() vs .__str__() in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

Using .__repr__() vs .__str__() in Python

In this quiz, you'll test your understanding of Python's dunder repr and dunder str special methods. These methods allow you to control how a program displays an object, making your classes more readable and easier to debug and maintain.

In Short: Use .__repr__() for Programmers vs .__str__() for Users

Python classes have a number of special methods. These methods have a double leading underscore and a double trailing underscore in their names. You can informally refer to them as dunder methods because of the double underscores in their names.

The special methods .__repr__() and .__str__() both return string representations of the object. A string representation is a string that shows information about the object. You can tailor this information for different audiences, such as program users or your fellow programmers.

Like with other special methods with leading and trailing double underscores in their names, you can define these methods for any class.

The reason there are two methods to display an object is that they have different purposes:

  • .__repr__() provides the official string representation of an object, aimed at the programmer.
  • .__str__() provides the informal string representation of an object, aimed at the user.

The target audience for the string representation returned by .__repr__() is the programmer developing and maintaining the program. In general, it provides detailed and unambiguous information about the object. Another important property of the official string representation is that a programmer can normally use it to re-create an object equal to the original one.

The .__str__() method provides a string representation targeted to the program’s user, who may not necessarily be a Python programmer. Therefore, this representation enables any user to understand the data contained in the object. Usually, it’s simpler and easier to read for a user.
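
As a quick illustration (a hypothetical Book class, not one from this tutorial), you could define both methods so that each audience gets an appropriate representation:

class Book:
    def __init__(self, title, author):
        self.title = title
        self.author = author

    def __repr__(self):
        # Official representation: unambiguous, ideally a valid Python expression
        return f"Book(title={self.title!r}, author={self.author!r})"

    def __str__(self):
        # Informal representation: friendly text for the program's user
        return f"{self.title} by {self.author}"

Evaluating an instance in the REPL would then show the .__repr__() string, while print() would show the .__str__() string:

>>> book = Book("Python Tricks", "Dan Bader")
>>> book
Book(title='Python Tricks', author='Dan Bader')
>>> print(book)
Python Tricks by Dan Bader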

Note: For a discussion of these two special methods, check out The Real Python Podcast: Episode 153.

One way of displaying both representations of an object is by using Python’s standard REPL. The REPL will display the string representation from .__repr__() when you evaluate a line that only has an object on it. However, the built-in function print() shows the informal string representation returned by .__str__().

You can view the strings returned by .__repr__() vs .__str__() for an instance of the datetime class in the datetime module:

Python
>>> import datetime
>>> today = datetime.datetime.now()

>>> today
datetime.datetime(2025, 9, 17, 10, 25, 55, 515728)

>>> print(today)
2025-09-17 10:25:55.515728

You create a datetime.datetime object named today using .now(). This method returns the current date and time. When you evaluate the line containing only the variable name today, the REPL displays the string representation returned by .__repr__(). This representation shows the name of the data type and all the arguments needed to re-create the object.

When you use print(), the REPL displays the representation of today returned by .__str__(). For datetime objects, this is equivalent to calling today.isoformat(" "). This returns an ISO 8601–style format that uses a space between the date and time. Therefore, this is not a Python-specific format but a standard that’s used more broadly to represent dates and times.
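
You can verify this equivalence yourself in the same REPL session:

>>> str(today) == today.isoformat(" ")
True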

Often, the official string representation is a valid Python expression that you can use to create a new object with the same value. You can confirm this with the datetime.datetime object by copying the official string representation and assigning it to a new name. You can also attempt to use the informal string representation, but this won’t work:

Python
>>> new_date = datetime.datetime(2025, 9, 17, 10, 25, 55, 515728)
>>> new_date == today
True

>>> new_date = 2025-09-17 10:25:55.515728
Traceback (most recent call last):
  ...
  File "<input>", line 1
    new_date = 2025-09-17 10:25:55.515728
                    ^
SyntaxError: leading zeros in decimal integer literals are not permitted ...

The output you got from .__repr__() when you evaluated today in the REPL created a new object equal to the original one.

However, the string representation from .__str__(), which you got when you used print(), isn’t a valid Python expression, so it raises a SyntaxError.

You can also show the string representations for common built-in data types:

Read the full article at https://realpython.com/python-repr-vs-str/ »



October 20, 2025 02:00 PM UTC


The Python Coding Stack

Are Tuples More Like Lists or Strings? And Why We Don't Really Care • [Club]

This post is not about tuples. It’s not about lists or strings, either. It’s about Python, about its philosophy when dealing with data types, about how you should think about data types when coding in Python.

Tuples are usually introduced to Python learners after lists and strings. They’re almost always described as “like lists, but a bit different”.

And that’s not a bad thing. I say similar things when I introduce tuples to students. But then, fast-forward a few levels of proficiency, and I’m having different discussions with students. Are tuples like lists? Are they like strings? But the point is that those questions don’t really make sense. Other questions matter a lot more in Python. Let’s explore.


This is the first post in The Club, so allow me a word or three before getting back to tuples. I sent out an email last week about the new chapter in The Python Coding Stack. Paid subscribers are now members of The Club. I’ll write more frequent, shorter posts for The Club, sometimes accompanied by short videos (videos will complement the text, they won’t replace it). And all the other stuff: forum, Q&As, code reviews, and more.

I recall when I first started The Python Coding Stack, I agonised over what the first article should be about. It’s the first. It should be special. But then I just published about whatever topic I was thinking about at the time.

I took the same route when writing the first post on The Club – this one. And I’ll do this every week. I’ll write about whatever comes up in my coding, in my teaching, whatever I think you’ll find interesting. Or send me your questions, and they may inspire me to write about them. Don’t be shy.

Now back to tuples.


A long time ago, on a social media platform (now) far, far away, I wrote a series of threads exploring real-world objects from the perspective of Python data types. I recall one of them was about a row of houses. Would a dictionary, or a list, or a tuple, or something else be the “ideal data structure” if I wanted to represent this row of houses as a data type? But there was another question I needed to ask first.

Read more

October 20, 2025 10:02 AM UTC


Talk Python to Me

#524: 38 things Python developers should learn in 2025

Python in 2025 is different. Threads really are about to run in parallel, installs finish before your coffee cools, and containers are the default. In this episode, we count down 38 things to learn this year: free-threaded CPython, uv for packaging, Docker and Compose, Kubernetes with Tilt, DuckDB and Arrow, PyScript at the edge, plus MCP for sane AI workflows. Expect practical wins and migration paths. No buzzword bingo, just what pays off in real apps. Join me along with Peter Wang and Calvin Hendryx-Parker for a fun, fast-moving conversation.

Episode sponsors

  • Seer: AI Debugging, Code TALKPYTHON: https://talkpython.fm/seer
  • Agntcy: https://talkpython.fm/agntcy
  • Talk Python Courses: https://talkpython.fm/training

Links from the show

  • Calvin Hendryx-Parker: https://github.com/calvinhp
  • Peter on BSky: https://bsky.app/profile/wang.social
  • Free-Threaded Wheels: https://hugovk.github.io/free-threaded-wheels/
  • Tilt: https://tilt.dev
  • The Five Demons of Python Packaging That Fuel Our ...: https://www.youtube.com/watch?v=qA7NVwmx3gw
  • Talos Linux: https://www.talos.dev
  • Docker: Accelerated Container Application Development: https://www.docker.com
  • Scaf - Six Feet Up: https://sixfeetup.com/company/scaf-a-blueprint-for-developers
  • BeeWare: https://beeware.org
  • PyScript: https://pyscript.net
  • Cursor: The best way to code with AI: https://cursor.com
  • Cline - AI Coding, Open Source and Uncompromised: https://cline.bot

  • Watch this episode on YouTube: https://www.youtube.com/watch?v=4BQrovxemXY
  • Episode #524 deep-dive: https://talkpython.fm/episodes/show/524/38-things-python-developers-should-learn-in-2025#takeaways-anchor
  • Episode transcripts: https://talkpython.fm/episodes/transcript/524/38-things-python-developers-should-learn-in-2025

Theme Song: Developer Rap
🥁 Served in a Flask 🎸: https://talkpython.fm/flasksong

---== Don't be a stranger ==---

  • YouTube: https://talkpython.fm/youtube
  • Bluesky: https://bsky.app/profile/talkpython.fm
  • Mastodon: @talkpython@fosstodon.org
  • X.com: https://x.com/talkpython
  • Michael on Bluesky: https://bsky.app/profile/mkennedy.codes
  • Michael on Mastodon: @mkennedy@fosstodon.org
  • Michael on X.com: https://x.com/mkennedy

October 20, 2025 08:00 AM UTC


Python Bytes

#454 It's some form of Elvish

Topics covered in this episode:

  • djrest2 - A small and simple REST library for Django based on class-based views
  • Github CLI
  • caniscrape - Know before you scrape. Analyze any website's anti-bot protections in seconds
  • 🐴 GittyUp
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=VTYUtixfFg4

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training: https://training.talkpython.fm/
  • The Complete pytest Course: https://courses.pythontest.com/p/the-complete-pytest-course
  • Patreon Supporters: https://www.patreon.com/pythonbytes

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
  • Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
  • Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show), we'll never share it.

Brian #1: djrest2 - A small and simple REST library for Django based on class-based views (https://pypi.org/project/djrest2/)

  • Emma Levit
  • Based on an interesting blog post: Why, in 2025, do we still need a 3rd party app to write a REST API with Django? (https://emma.has-a.blog/articles/why-do-we-need-an-external-app-for-rest.html)
  • As opposed to using DRF (https://www.django-rest-framework.org) or Django Ninja (https://django-ninja.dev)

Michael #2: Github CLI (https://cli.github.com/)

  • GitHub's official command line tool
  • Features: checking out a pull request locally; cloning any repository using OWNER/REPO syntax: gh repo clone cli/cli; creating a pull request interactively: gh pr create
  • See all at cli.github.com/manual/examples

Brian #3: caniscrape - Know before you scrape. Analyze any website's anti-bot protections in seconds (https://github.com/ZA1815/caniscrape)

  • reddit announcement and discussion: https://www.reddit.com/r/Python/comments/1ob3na1/i_built_a_tool_that_tells_you_how_hard_a_website/
  • caniscrape checks a website for common anti-bot mechanisms and reports:
    • A difficulty score (0–10)
    • Which protections are active (e.g., Cloudflare, Akamai, hCaptcha, etc.)
    • What tools you'll likely need (headless browsers, proxies, CAPTCHA solvers, etc.)
    • Whether using a scraping API might be better
  • This helps you decide the right scraping approach before you waste time building a bot that keeps getting blocked.

Michael #4: 🐴 GittyUp (https://github.com/mikeckennedy/gittyup)

  • Never forget to pull again: Automatically discover and update all your Git repositories with one command.
  • Built initially to solve this problem
  • Rebuilt and published last week as part of my upcoming Agentic AI Programming for Python course. Get notified this week at training.talkpython.fm/getnotified
  • Update everything in a folder tree with gittyup
  • Review changes, blockers, etc. with gittyup --explain

Extras

Brian:

  • Three times faster with lazy imports - Hugo van Kemenade: https://hugovk.dev/blog/2025/lazy-imports/
  • Interesting discussion on Hugo's post, on Mastodon: https://mastodon.social/@hugovk/115401878603825459
  • Use lazy module imports now - Graham Dumpleton: https://grahamdumpleton.me/posts/2025/10/use-lazy-module-imports-now/
  • Graham's post uses wrapt (https://pypi.org/project/wrapt/), a "module for decorators, wrappers and monkey patching", to simulate lazy imports
  • Helpful comment from Adam Johnson on Graham's post to actually do the import during type checking using if TYPE_CHECKING: import ...

Michael:

  • uvloop is back! https://github.com/MagicStack/uvloop/releases/tag/v0.22.0
  • pypi+ listened. :) https://www.reddit.com/r/Python/comments/1o9dey5/i_just_released_pypipluscom_20_offlineready/
  • Feedback from my "Show me your ls" post: https://fosstodon.org/@tsvenson/115406109940120814

Joke: Some form of Elvish: https://x.com/pr0grammerhum0r/status/1977222725350416758

October 20, 2025 08:00 AM UTC


Graham Dumpleton

Use lazy module imports now

I already made a post a couple of weeks ago about how one could use wrapt to implement lazy module imports for Python. This was in response to PEP 810 (explicit lazy imports) being announced.

The means of using wrapt to implement lazy module imports was based around a lazy object proxy which was being added to wrapt version 2.0.0. This new version of wrapt is now available and to make it easier to use wrapt for this purpose, an extra top level function has been added to the wrapt public API.

Before I explain how to use this new feature of wrapt, I do acknowledge that implementations of lazy module imports have existed for many years. That said, where they exist as separate Python libraries, the few I found didn't seem to have been touched in quite a while. Sure, the concept probably works as it was implemented, but it can be disconcerting to see no updates, and one could conclude the package has been abandoned. At least in considering using the implementation provided by wrapt, you might feel more comfortable that it will be supported. 😁

As to PEP 810, the intent therefore wasn't to bring something completely new, but to make the concept a first-class feature of Python by adding new syntax.

The proposed implementation outlined in the PEP would also eliminate one of the criticisms of the lazy module importers that already existed: that it was necessary to patch sys.modules to add a stand-in for a module before it was first imported. This either had to be done explicitly, or a special import loader had to be added to the Python runtime configuration. Either way, it was necessary to modify an application's startup code to install the patch or loader.

A further issue was that the lazy import behaviour would then affect all imports of that module anywhere in the code base, since it wasn't something the code that actually used the module opted into, but was instead forced onto it by the application developer. This could have various unintended side effects.

Were PEP 810 implemented in Python (which is proposed for 3.15), the new syntax would allow code that uses a module to opt in to the lazy module import behaviour. To do this, instead of writing:

import json
from json import dumps

you would write:

lazy import json
lazy from json import dumps

In other words, the lazy keyword is proposed to mark that the import should be delayed until the first attempt to actually use something defined by the module.

My first impression is that although I understand the aim of it and the benefits it might bring in some situations, I am not sure I am that keen on having a new keyword just for this feature. There are various cases where using it would be pointless. For example, doing a lazy import and immediately creating a new class deriving from one defined by the module. But then I am sure linters could start to pick up on things like this and tell you that using a lazy import wasn't going to help much in such a case.

Anyway, if you do want to experiment with lazy module imports in Python to see what benefits it could bring without needing to compile up a special version of Python, with the new feature in wrapt 2.0.0 you can use it now. Further, you can use it with any Python version that wrapt supports, which for now is back to Python 3.8.

The way lazy module imports work with wrapt also avoids the need to modify an application's startup code like some other implementations; instead, the specific code which wants to make use of lazy module imports needs to opt in, just as proposed for PEP 810.

Using wrapt, instead of using PEP 810 syntax of:

lazy import json

you would use:

import wrapt

json = wrapt.lazy_import("json")

The argument to the lazy_import() function can also be a dotted path to a sub module of a package, in which case what is returned will evaluate to the sub module.

For importing a specific function from a module, you can use:

import wrapt

dumps = wrapt.lazy_import("json", "dumps")

How this is different to other lazy module importers is that sys.modules is not patched, nor is a custom import loader used. Instead the json and dumps objects in the above examples are instances of the LazyObjectProxy class from wrapt, which is implemented using the transparent object proxy class of wrapt but with lazy initialization of the wrapped object. Using this lazy initialization feature, we can delay the importing of the module until the proxy object is first used. Although everything then goes through the object proxy, in general you shouldn't notice any difference.
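
As a small sketch (assuming a fresh interpreter where nothing else has imported json yet), you can see that the real import only happens on first use:

import sys

import wrapt

json = wrapt.lazy_import("json")

print("json" in sys.modules)   # False: the module hasn't actually been imported yet
print(json.dumps({"a": 1}))    # first attribute access on the proxy triggers the import
print("json" in sys.modules)   # True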

So if you are excited about using lazy module imports but don't want to wait until you can use Python 3.15 (presuming PEP 810 is added), then consider giving this implementation using wrapt a go instead. If you do find issues, please let me know via the issues tracker for wrapt on GitHub.


UPDATE #1

Adam Johnson pointed out that for type checking to work, you possibly still need something like:

from typing import TYPE_CHECKING

import wrapt

json = wrapt.lazy_import("json")

if TYPE_CHECKING:
    import json

October 20, 2025 07:38 AM UTC

Wrapt version 2.0.0

After a few delays, I have finally released version 2.0.0 of wrapt.

This release has a major version bump for a number of reasons.

The first main reason is simply that there hasn't been a release of wrapt with any significant changes for quite a while. All releases of wrapt for some time have included only minor fixes or updates related to supporting new Python versions.

Next is that although support for Python 2.7 was dropped some time back, the code base still included a lot of code that was Python 2.7 specific. All these accommodations for Python 2.7 have now been removed.

Finally, there have been some subtle internal changes to how a few things are implemented in wrapt. The changes are believed to be backward compatible, but combined with the above, and the fact that wrapt seems to be seeing significant use based on PyPI download statistics, I felt it best to be cautious and release this version of wrapt as a new major version.

For more information on all the changes, see the release notes. The wrapt package can be installed from PyPI. If you find any issues or have questions, you can use the issue tracker on GitHub.

October 20, 2025 07:38 AM UTC

October 19, 2025


Hugo van Kemenade

Three times faster with lazy imports

PEP 810 proposes “explicit lazy imports” for Python 3.15:

Lazy imports defer the loading and execution of a module until the first time the imported name is used, in contrast to ‘normal’ imports, which eagerly load and execute a module at the point of the import statement.

By allowing developers to mark individual imports as lazy with explicit syntax, Python programs can reduce startup time, memory usage, and unnecessary work. This is particularly beneficial for command-line tools, test suites, and applications with large dependency graphs.

It’s not been accepted yet, but let’s try out the reference implementation on one of my CLI tools, pypistats.

Setup #

First fetch the reference implementation. From a CPython checkout:

git remote add LazyImportsCabal https://github.com/LazyImportsCabal/cpython
git fetch LazyImportsCabal
gco lazy # see https://hugovk.dev/blog/2025/my-most-used-command-line-commands/

Because we want to install NumPy and pandas, let’s pretend to be Python 3.14 so we can use the binary wheels instead of having to build from source:

--- a/Include/patchlevel.h
+++ b/Include/patchlevel.h
 /* Version parsed out into numeric values */
 /*--start constants--*/
 #define PY_MAJOR_VERSION 3
-#define PY_MINOR_VERSION 15
+#define PY_MINOR_VERSION 14
 #define PY_MICRO_VERSION 0
 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_ALPHA
 #define PY_RELEASE_SERIAL 0

 /* Version as a string */
-#define PY_VERSION "3.15.0a0"
+#define PY_VERSION "3.14.0a0"
 /*--end constants--*/
--- a/configure.ac
+++ b/configure.ac
-m4_define([PYTHON_VERSION], [3.15])
+m4_define([PYTHON_VERSION], [3.14])

Build non-debug CPython with optimisations:

GDBM_CFLAGS="-I$(brew --prefix gdbm)/include" \
 GDBM_LIBS="-L$(brew --prefix gdbm)/lib -lgdbm" \
 ./configure --enable-optimizations --with-lto \
 --with-system-libmpdec --config-cache \
 --with-openssl="$(brew --prefix openssl@3)" && make -s -j8

Install NumPy and pandas:

./python.exe -m pip install numpy pandas

And then an editable install of the CLI, because we’ll also test changing the imports:

./python.exe -m pip install -e ~/github/pypistats

Let’s check the dependencies with pipdeptree:

uvx "pipdeptree[graphviz]" --python ./python.exe --packages pypistats --graph-output svg > pipdeptree.svg

pypistats has seven direct dependencies, which result in a total of 41 dependencies six layers deep, not counting NumPy and pandas:

A tree of dependencies: seven wide, and about six layers deep.

Benchmarks #

Let’s benchmark running pypistats --help, which is meant to be quick, using hyperfine:

brew install hyperfine

Inline imports #

In the pypistats CLI, I had already applied the trick of moving heavier imports into the functions that call them (the PEP calls these “inline imports”).
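
For illustration (a made-up function, not the actual pypistats code), an inline import looks like this:

def print_table(rows):
    # pandas is only imported when a table is actually printed,
    # not when the CLI merely handles --help.
    import pandas as pd

    print(pd.DataFrame(rows))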

Instead of the lazy keyword, I’m using the PYTHON_LAZY_IMPORTS env var here to make it easy to compare two different runs.

$ hyperfine --warmup 10 --runs 20 --export-json out.json \
 "./python.exe -m pypistats --help" \
 "PYTHON_LAZY_IMPORTS=on ./python.exe -m pypistats --help"
Benchmark 1: ./python.exe -m pypistats --help
 Time (mean ± σ): 46.2 ms ± 1.1 ms [User: 38.8 ms, System: 6.4 ms]
 Range (min … max): 45.1 ms … 49.6 ms 20 runs

Benchmark 2: PYTHON_LAZY_IMPORTS=on ./python.exe -m pypistats --help
 Time (mean ± σ): 35.3 ms ± 0.5 ms [User: 29.5 ms, System: 4.8 ms]
 Range (min … max): 34.6 ms … 36.3 ms 20 runs

Summary
 PYTHON_LAZY_IMPORTS=on ./python.exe -m pypistats --help ran
 1.31 ± 0.04 times faster than ./python.exe -m pypistats --help

Plotted with plot_progression.py:

A progression chart of 20 runs for each benchmark: non-lazy runs are about 46 ms, lazy are about 35 ms.

From 46 to 35 milliseconds, or 1.31 times faster, not bad.

Fully lazy #

But we no longer need the inline imports trick with PEP 810!

I modified the CLI so all imports are at the top, and also removed if TYPE_CHECKING: guards. Here’s a diff.

$ hyperfine --warmup 10 --runs 20 --export-json out2.json \
 "./python.exe -m pypistats --help" \
 "PYTHON_LAZY_IMPORTS=on ./python.exe -m pypistats --help"
Benchmark 1: ./python.exe -m pypistats --help
 Time (mean ± σ): 104.1 ms ± 1.6 ms [User: 88.2 ms, System: 14.5 ms]
 Range (min … max): 101.9 ms … 109.5 ms 20 runs

Benchmark 2: PYTHON_LAZY_IMPORTS=on ./python.exe -m pypistats --help
 Time (mean ± σ): 35.7 ms ± 0.5 ms [User: 29.8 ms, System: 4.8 ms]
 Range (min … max): 34.7 ms … 36.5 ms 20 runs

Summary
 PYTHON_LAZY_IMPORTS=on ./python.exe -m pypistats --help ran
 2.92 ± 0.06 times faster than ./python.exe -m pypistats --help

A progression chart of 20 runs for each benchmark: non-lazy runs are about 104 ms, lazy are about 36 ms.

From 104 to 36 milliseconds, or 2.92 times faster, much better!


Header photo: “Lazy Man Fishing” at Cascade Locks on the Columbia River 05/1973 in the U.S. National Archives , with no known copyright restrictions.

October 19, 2025 04:05 PM UTC


Adarsh Divakaran

Goodbye GIL - Exploring Free Threaded Python 3.14

October 19, 2025 09:50 AM UTC


Daniel Roy Greenfeld

Using Asyncpg with FastAPI and Air

Recently I've been on a few projects using PostgreSQL where SQLAlchemy and SQLModel felt like overkill. Instead of reaching for those libraries, I wrote SQL queries and ran them directly with asyncpg, rather than going through an ORM that uses asyncpg under the hood.

Here's how I got it to work.

Defined a lifespan function for ASGIApp

Starlette-based ASGI frameworks like FastAPI (and, by extension, Air) can leverage lifespan functions, which are async generators wrapped with asynccontextmanager. I've commented the lifespan function for clarity.

from contextlib import asynccontextmanager
from os import environ
from typing import AsyncIterator

import asyncpg
from starlette.types import ASGIApp

DATABASE_URL = environ['DATABASE_URL']


@asynccontextmanager 
async def lifespan(app: ASGIApp) -> AsyncIterator[None]:
    """A lifespan for maintaining the connection to the PostgreSQL DB
        Without this, the connection will timeout and queries will fail.
    """
    # app.state is where the connection pool is stored, which can
    # be accessed later inside of views. This is only run once during
    # app startup.
    app.state.pool = await asyncpg.create_pool(
        dsn=DATABASE_URL,
        min_size=1,
        max_size=10,
    )
    try:
        # This is where the app runs all the URL route functions.
        yield
    finally:
        # This is run once when the app is shut down.
        await app.state.pool.close()

Using the lifespan function

Just add the lifespan function to the app when it is instantiated.

Using the lifespan function for FastAPI projects

All you have to do is pass the lifespan callable to the FastAPI app instantiation.

from fastapi import FastAPI

# Adding the lifespan app
app = FastAPI(lifespan=lifespan) 

@app.get('/users')
async def users(): # every function must be async
    # Use the pool object to get the database connection object
    async with app.state.pool.acquire() as conn:
        results = await conn.fetch('SELECT * from users;')

    # FastAPI responses automatically convert dicts to JSON
    return {'count': len(results), 'users': results}

Using the lifespan function for Air projects

Air is powered by FastAPI (and Starlette), so it uses this lifespan function the same way FastAPI does.

import air

# Adding the lifespan app
app = air.Air(lifespan=lifespan)


@app.get('/users')
async def users(): # every function must be async
    # Use the pool object to get the database connection object
    async with app.state.pool.acquire() as conn:
        users = await conn.fetch('SELECT * from users;')

    # Air tags are converted to HTML during the response stage
    # Jinja is also an option, but is outside the scope of this article
    return air.layouts.mvpcss(
        air.H1(f'Users: {len(users)}'),
        air.Ul(
            *[air.Li(u['email']) for u in users]
        )
    )

Incoming data

Changing data requires the conn.execute method. These examples also show how to use pydantic to validate incoming data before we allow it to touch our database.

Adding data with FastAPI via asyncpg

As part of the request process for a REST API, FastAPI uses pydantic to validate incoming data. This results in a delightfully small view for accepting data.

from fastapi import FastAPI
from pydantic import BaseModel, EmailStr

# Adding the lifespan app
app = FastAPI(lifespan=lifespan) 


class User(BaseModel):
    email: EmailStr


@app.post('/users')
async def users_add(user: User):
    # Get the conn object from the database connection pool
    async with app.state.pool.acquire() as conn:
        # Insert the record with an execute method
        await conn.execute(
            'INSERT INTO users (email, created_at) VALUES ($1, NOW())',
            user.email
        )

    return user

Adding data with Air via asyncpg

There's no consistent standard within HTML for how to construct a form, much less how to respond to a bad submission. Therefore, to handle incoming data, Air needs a bit more code than FastAPI.

import air
from pydantic import BaseModel, EmailStr

# Adding the lifespan app
app = air.Air(lifespan=lifespan) 


class User(BaseModel):
    email: EmailStr


class UserForm(air.AirForm):
    model = User    


@app.post('/users')
async def users_add(request: air.Request):
    # AirForms make handling incoming forms easier
    form = await UserForm.from_request(request)

    # AirForms, once instantiated with data, have an `is_valid` property
    # which returns a boolean of whether or not the submitted data has
    # passed pydantic.
    if form.is_valid:
        # Get the conn object from the database connection pool
        async with app.state.pool.acquire() as conn:
            # Insert the record with an execute method
            await conn.execute(
                'INSERT INTO users (email, created_at) VALUES ($1, NOW())',
                form.data.email
            )
        return air.layouts.mvpcss(
            air.H1(f"User: {form.data.email}"),
        )        

    # Simplistic handling of bad signup. 
    return air.RedirectResponse('/signup')

AirForms support reporting of bad data. I'll cover how to do that in a follow-up article.

October 19, 2025 05:07 AM UTC

October 18, 2025


Graham Dumpleton

Wrapping immutable objects

I am finally close to releasing wrapt version 2.0.0. The release has been delayed a bit as someone raised a number of questions about special Python dunder methods which the ObjectProxy class in wrapt didn't support.

Some of these were omissions due to the fact that the special methods did not exist when wrapt was first implemented, nor were they part of the initial Python 3 versions when support for Python 3 was added. In particular, the __matmul__, __rmatmul__ and __imatmul__ dunder methods, which underlie the matrix multiplication operators added in Python 3.5. In other cases, the lack of default support for specific dunder methods was a more complicated situation.

I am not going to dwell on the latter in this post, but as part of the overall discussion with the person who raised the questions, they also pointed out some issues with how the dunder methods for in-place operators were handled in the ObjectProxy class, which is going to be the subject of this post, as it may be an interesting technical exploration.

An in-place operator in Python is an operator that modifies a variable directly without creating a new object. It combines an operation with assignment.

In Python, in-place operators include +=, -=, *=, /=, //=, %=, **=, @=, <<=, >>=, &=, ^= and |=.

An obvious example of using an in-place operator is on integers.

value = 1
value += 1

The result being that value ends up being set to 2.

Other examples are tuples and lists:

tuple_values = (1, 2, 3)
tuple_values += (4, 5, 6)

list_values = [1, 2, 3]
list_values += [4, 5, 6]

In each of these cases the members of the respective data structures are the integers from 1 to 6.

A potential trap for programmers new to Python arises when combining variable aliasing and in-place operators.

tuple_values_1 = (1, 2, 3)
tuple_values_2 = tuple_values_1

tuple_values_2 += (4, 5, 6)

list_values_1 = [1, 2, 3]
list_values_2 = list_values_1

list_values_2 += [4, 5, 6]

For the case of the tuples, tuple_values_1 ends up as (1, 2, 3) and tuple_values_2 as (1, 2, 3, 4, 5, 6).

For the lists however, both list_values_1 and list_values_2 end up being set to [1, 2, 3, 4, 5, 6].

The reason for this is that although both variables for the tuple and the list in each case initially point to the same object, the alias or reference is broken in the case of the tuple when the in-place operator is applied to it. This occurs because a tuple instance is immutable, whereas a list instance can be modified in place.

In practice what this means is that although you use the += operator, for an immutable type such as tuple, it will be implemented under the covers as:

tuple_values_2 = tuple_values_2 + (4, 5, 6)

In other words, the variable which prior to that point had been an alias for the original tuple, is replaced with a new object which is the result of adding the extra member items to the tuple.

Although this highlights the problem of using in-place operators when using aliasing, there is actually a more insidious case where this can present itself.

Take for example the case of a class definition with attributes declared at class scope, where we then create an instance of the class, and apply the in-place operator to the attributes of the class via the instance.

class Class:
    tuple_values = (1, 2, 3)
    list_values = [1, 2, 3]

c = Class()

c.tuple_values += (4, 5, 6)
c.list_values += [4, 5, 6]

Without knowing better, you might think that the result will be that Class.tuple_values ends up being set to (1, 2, 3, 4, 5, 6) and that Class.list_values will be similarly set to [1, 2, 3, 4, 5, 6]. For the case of the tuple this isn't actually what happens.

Although Class.list_values does end up being set to [1, 2, 3, 4, 5, 6], the value of Class.tuple_values remains as (1, 2, 3).

The reason for this is that since the tuple is immutable, as mentioned above, it will actually effectively be implemented under the covers as:

c.tuple_values = c.tuple_values + (4, 5, 6)

And this is where the potentially non obvious happens.

To explain, think now that if instead of accessing the attribute via the instance of the class, you had accessed the class directly.

Class.tuple_values += (4, 5, 6)

As already noted, this would have been implemented as:

Class.tuple_values = Class.tuple_values + (4, 5, 6)

It is obvious then that the change would be applied to the class scoped attribute and Class.tuple_values would be (1, 2, 3, 4, 5, 6).

Look again now at what happened though when accessing the attribute via the instance of the class.

c.tuple_values = c.tuple_values + (4, 5, 6)

Because tuple is immutable, the existing value of the attribute is first read. At this point though, the instance of the class doesn't actually have an attribute tuple_values, so what happens is that it falls back to reading from the class scoped attribute of the same name.

The value (4, 5, 6) is added to the value read from the class scoped attribute, and the result assigned back to the attribute. In doing the assignment though, since it was accessed via the class instance, instead of updating the class scoped attribute, it results in the creation of a new attribute against that instance of the class.

To explain it another way, prior to updating the attribute, if we had done:

print(vars(c))

the result would have been an empty dictionary, showing that the instance had no attributes, but after updating tuple_values we see:

{'tuple_values': (1, 2, 3, 4, 5, 6)}

indicating that the instance now had a separate attribute to that defined on the class.

Just to complicate things even more, if you look at the attributes of the instance after updating list_values, you will see it defined on the instance as well. For it though, since a list is mutable, it is still an alias to the same list object defined as a class attribute. Confused yet?
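
To make the end state concrete, here is the same example with the results spelled out (a small sketch you can paste into a REPL):

class Class:
    tuple_values = (1, 2, 3)
    list_values = [1, 2, 3]

c = Class()

c.tuple_values += (4, 5, 6)
c.list_values += [4, 5, 6]

print(vars(c))
# {'tuple_values': (1, 2, 3, 4, 5, 6), 'list_values': [1, 2, 3, 4, 5, 6]}

print(Class.tuple_values)                    # (1, 2, 3): class attribute unchanged
print(Class.list_values)                     # [1, 2, 3, 4, 5, 6]: same list, mutated in place
print(c.tuple_values is Class.tuple_values)  # False: a new tuple was bound on the instance
print(c.list_values is Class.list_values)    # True: the instance attribute aliases the class list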

This is because although I said that for an immutable type it gets implemented under the covers as:

c.tuple_values = c.tuple_values + (4, 5, 6)

this isn't quite true.

It is closer to say that what occurs is:

if "tuple_values" not in vars(c):
    tmp = Class.tuple_values
else:
    tmp = c.tuple_values

tmp += (4, 5, 6)

c.tuple_values = tmp

but where because tuple is immutable, ends up being:

if "tuple_values" not in vars(c):
    tmp = Class.tuple_values
else:
    tmp = c.tuple_values

tmp = tmp + (4, 5, 6)

c.tuple_values = tmp

For the case of the list, it is similarly implemented as:

if "list_values" not in vars(c):
    tmp = Class.list_values
else:
    tmp = c.list_values

tmp += [4, 5, 6]

c.list_values = tmp

but since a list is mutable, it can be modified in place, meaning that since both the attribute on the instance and the class refer to the same list object, the change is seen when accessed via either.

There is no doubt I have explained this very badly, but if I haven't lost you, you might be thinking now what has this all got to do with using the ObjectProxy class in wrapt.

I am not going to go into details how one uses wrapt to monkey patch code, and I also question why anyone would try and wrap an instance of an immutable type in the first place, but the original problem that was raised boils down to the following code when using ObjectProxy in wrapt:

import wrapt

class Class:
    tuple_values = wrapt.ObjectProxy((1, 2, 3))

c = Class()

c.tuple_values += (4, 5, 6)

The point of ObjectProxy is that it acts as a transparent proxy for a wrapped object, where operations on the proxy object should end up with the same result as if the original object was used and it was not wrapped by the proxy.

If this was true, then we should expect that after the above code had executed, Class.tuple_values when accessed should have resulted in (1, 2, 3) and c.tuple_values would be (1, 2, 3, 4, 5, 6).

The problem is that this was not what was happening and instead the effective value was in both cases (1, 2, 3, 4, 5, 6).

In other words wrapt was breaking the rules of what should happen for an immutable type.

The first reason for this is that the ObjectProxy instance that replaces (wraps) the original immutable object is now a rather complicated class instance.

In order to handle the in-place operator for addition being applied to the proxy, the ObjectProxy class needs to implement the special dunder method __iadd__. When we say:

c.tuple_values += (4, 5, 6)

this gets translated into:

c.tuple_values = c.tuple_values.__iadd__((4, 5, 6))

In order to have the operation be applied to the wrapped object, the __iadd__ method of ObjectProxy was implemented as:

    def __iadd__(self, other):
        self.__wrapped__ += other
        return self

At face value this may seem to be correct, but it fails for the case of an immutable object.

Going back to what we said occurs under the covers when we use += we now have:

if "tuple_values" not in vars(c):
    tmp = C.tuple_values
else:
    tmp = c.tuple_values

tmp = tmp.__iadd__((4, 5, 6))

c.tuple_values = tmp

The original wrapped object at this point exists as the __wrapped__ attribute on the ObjectProxy instance referenced by the temporary value.

When += is executed, that calls __iadd__ which results in:

        self.__wrapped__ += other

which, as explained, since the wrapped object is immutable, is effectively implemented as:

        self.__wrapped__ = self.__wrapped__ + other

Both the attribute on the instance and the one on the class still reference the same proxy object, and although we are replacing the tuple with the updated value, that replacement only occurs against the __wrapped__ attribute of the ObjectProxy instance itself.

What is meant to happen, in order to be able to replace the original attribute reference, is that __iadd__ should return the new object that replaces it, but as the code was written, it always returned self. Thus, the original proxy object is what gets set as the attribute on the instance, since the temporary value reference doesn't change.

As much as this is an obscure corner case which in practice would probably never arise since wrapping immutable objects is a questionable use case, the question now is how to fix this and do something different when an immutable object is being wrapped.

Although Python has various builtin immutable types, there isn't a single test one can run to determine whether an object is immutable. What we can do, though, for our specific case of the __iadd__ method implementation, is assume that if the wrapped object does not itself implement __iadd__, then it is immutable for the purposes of that operation.

The simplistic approach for an immutable object would then be to discard the fact that an ObjectProxy wrapper was being used, and simply return the result of adding the += argument to the wrapped object.

    def __iadd__(self, other):
        if hasattr(self.__wrapped__, "__iadd__"):
            self.__wrapped__ += other
            return self
        else:
            return self.__wrapped__ + other

This isn't a good solution though, as there is going to be some reason ObjectProxy was used in the first place, and we have just thrown it away, losing any custom behaviour the proxy object implemented.

The next alternative is to return the result of using addition within a new instance of ObjectProxy.

    def __iadd__(self, other):
        if hasattr(self.__wrapped__, "__iadd__"):
            self.__wrapped__ += other
            return self
        else:
            return ObjectProxy(self.__wrapped__ + other)

Because any custom behaviour of an object proxy is going to be implemented by a class derived from ObjectProxy, though, we are again changing the expected overall behaviour of the proxy object, as only the ObjectProxy base class behaviour is preserved.

Python being a dynamic programming language with built-in introspection capabilities does mean, though, that we can work out what class type was used to create the proxy object in the first place. We could therefore instead use:

    def __iadd__(self, other):
        if hasattr(self.__wrapped__, "__iadd__"):
            self.__wrapped__ += other
            return self
        else:
            return type(self)(self.__wrapped__ + other)

Unfortunately though, this can also fail.

The problem now is that a custom object proxy type derived from ObjectProxy could override __init__() such that, instead of taking a single argument which is the object to be wrapped, it takes one or more distinct arguments which are used in the creation of the wrapped object. If this is the case, creation of the new proxy object could fail due to a mismatched number of arguments, or because the meaning of the argument differs.

A further issue is that a custom object proxy could maintain additional state within the proxy object which, because it isn't transferred into the new proxy object, would be lost.

To cut to the solution, what can be done is for ObjectProxy to be implemented as:

class ObjectProxy:
    ...

    @property
    def __object_proxy__(self):
        return ObjectProxy

    def __iadd__(self, other):
        if hasattr(self.__wrapped__, "__iadd__"):
            self.__wrapped__ += other
            return self
        else:
            return self.__object_proxy__(self.__wrapped__ + other)

That is, we add a property __object_proxy__ to ObjectProxy which returns a callable (by default ObjectProxy) which can be used to create a new instance of the proxy object for the modified wrapped object.

The reason for this convoluted approach is that although it is likely going to be a rare situation, it does allow for a custom object proxy to override how a new proxy object is created.

class CustomObjectProxy(ObjectProxy):
    def __init__(self, arg1, arg2, *, wrapped=None):
        """The arg1 and arg2 values are the inputs originally used to
        create the object to be wrapped. If wrapped is not None, then we
        adopt that as the wrapped object, but still record the inputs for
        later use.
        """

        self._self_arg1 = arg1
        self._self_arg2 = arg2

        # Create object to be wrapped based on inputs.

        wrapped = ...

        super().__init__(wrapped)

    @property
    def __object_proxy__(self):
        def __ctor__(wrapped):
            # We need to override what __init__() does so construct
            # class using more manual steps.

            instance = ObjectProxy.__new__(CustomObjectProxy)

            instance._self_arg1 = self._self_arg1
            instance._self_arg2 = self._self_arg2

            ObjectProxy.__init__(instance, wrapped)

            return instance

        return __ctor__

One caveat on having the __object_proxy__ property return ObjectProxy by default is that even if a custom object proxy type still accepts the wrapped object directly when being initialized, it would need to override __object_proxy__ to return the custom object proxy type, if it is ever intended to be used to wrap immutable objects.
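For example, a derived proxy that still accepts the wrapped object directly might only need to add the following, where SimpleObjectProxy is just a made up name for illustration:

class SimpleObjectProxy(ObjectProxy):

    @property
    def __object_proxy__(self):
        # Ensure in-place operators on immutable wrapped objects produce
        # a SimpleObjectProxy rather than a plain ObjectProxy.
        return SimpleObjectProxy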

The alternative is to define ObjectProxy as:

class ObjectProxy:
    ...

    @property
    def __object_proxy__(self):
        return type(self)

This means it will work out of the box for custom object proxies which are initialized with the object to be wrapped, but then you will have the problem mentioned before, where a derived class has a custom __init__() function which takes different arguments.

Either way, if not overridden, you will get a silent failure resulting in different behaviour after an in-place operator is used, or for the case of custom arguments to __init__(), an outright exception.

For now, I am at least opting to return ObjectProxy rather than the type of a derived custom object proxy. Someone would already need to be doing something crazy to need to wrap immutable types, and so is highly unlikely to encounter the whole issue anyway. Thus returning ObjectProxy seems the slightly safer choice.

Having a solution, we now just need to update all the other in-place operators with a similar pattern of code. For example, in-place multiplication:

    def __imul__(self, other):
        if hasattr(self.__wrapped__, "__imul__"):
            self.__wrapped__ *= other
            return self
        else:
            return self.__object_proxy__(self.__wrapped__ * other)

Or at least we are done for the pure Python implementation. Since wrapt actually uses a C extension and only falls back to the pure Python implementation if the C extension is not available for some reason, all this had to be done for the C extension as well. How it is done in the C extension will be left to the reader's imagination.

The end result is that, although I have never had anyone report this as a real world problem, and it was only reported as being technically wrong, it will be addressed in wrapt version 2.0.0 as discussed above. Since I have not seen it as a problem in the real world, I will not be back porting the fix to version 1.17.X.

Anyway, hope this was an interesting exploration of a strange corner case.

October 18, 2025 12:00 AM UTC


Stéphane Wirtel

Back to school: my artificial intelligence training at Alyra

Since late September, I have been taking a course on artificial intelligence development at Alyra. An exciting return to the fundamentals of machine learning and deep learning, blending mathematics, code, and the joy of learning.

October 18, 2025 12:00 AM UTC

October 17, 2025


Real Python

The Real Python Podcast – Episode #270: Evolving Teaching Python in the Classroom

How is teaching young students Python changing with the advent of LLMs? This week on the show, Kelly Schuster-Paredes from the Teaching Python podcast joins us to discuss coding and AI in the classroom.



October 17, 2025 12:00 PM UTC


Armin Ronacher

Building an Agent That Leverages Throwaway Code

In August I wrote about my experiments with replacing MCP (Model Context Protocol) with code. In the time since, I have used that idea to explore non-coding agents at Earendil. And I’m not alone! In the meantime, multiple people have explored this space and I felt it was worth sharing some updated findings. The general idea is pretty simple. Agents are very good at writing code, so why don’t we let them write throw-away code to solve problems that are not related to code at all?

I want to show you how and what I’m doing to give you some ideas of what works and why this is much simpler than you might think.

Pyodide is the Dark Horse

The first thing you have to realize is that Pyodide is secretly becoming a pretty big deal for a lot of agentic interactions. What is Pyodide? Pyodide is an open source project that makes a standard Python interpreter available via a WebAssembly runtime. What is neat about it is that it has an installer called micropip that allows it to install dependencies from PyPI. It also targets the emscripten runtime environment, which means there is a pretty good standard Unix setup around the interpreter that you can interact with.

Getting Pyodide to run is shockingly simple if you have a Node environment. You can directly install it from npm. What makes this so cool is that you can also interact with the virtual file system, which allows you to create a persistent runtime environment that interacts with the outside world. You can also get hosted Pyodide at this point from a whole bunch of startups, but you can actually get this running on your own machine and infrastructure very easily if you want to.

The way I found this to work best is if you banish Pyodide into a web worker. This allows you to interrupt it in case it runs into time limits.

A big reason why Pyodide is such a powerful runtime is that Python has an amazing ecosystem of well established libraries that the models know about. From manipulating PDFs or Word documents to creating images, it’s all there.
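As a rough illustration, the sort of throwaway code an agent writes inside Pyodide can pull in whatever it needs via micropip and then use it like any other Python library. This is only a sketch: the package choice and the /data/report.pdf path (assumed to be a file exposed through the virtual file system, covered in the next section) are made up for the example.

# Runs inside Pyodide, where top-level await is available.
import micropip

# Install a pure-Python wheel from PyPI into the sandboxed interpreter.
await micropip.install("pypdf")

from pypdf import PdfReader

# Read a PDF that was placed on the virtual file system from outside.
reader = PdfReader("/data/report.pdf")
print(len(reader.pages), "pages")
print(reader.pages[0].extract_text()[:200])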

File Systems Are King

Another vital ingredient to a code interpreter is having a file system.

Not just any file system though. I like to set up a virtual file system that I intercept so that I can provide it with access to remote resources from specific file system locations. For instance, you can have a folder on the file system that exposes files which are just resources that come from your own backend API. If the agent then chooses to read from those files, you can from outside the sandbox make a safe HTTP request to bring that resource into play. The sandbox itself does not have network access, so it’s only the file system that gates access to resources.

The reason the file system is so good is that agents just know so much about how they work, and you can provide safe access to resources through some external system outside of the sandbox. You can provide read-only access to some resources and write access to others, then access the created artifacts from the outside again.

Now actually doing that is a tad tricky because the emscripten file system is sync, and most of the interesting things you can do are async. The option that I ended up going with is to move the fetch-like async logic into another web worker and use Atomics.wait to block. If your entire Pyodide runtime is in a web worker, that’s not as bad as it looks.

That said, I wish the emscripten file system API was changed to support stack switching instead of this. While it’s now possible to hide async promises behind sync abstractions within Pyodide with call_sync, the same approach does not work for the emscripten JavaScript FS API.

I have a full example of this at the end, but the simplified pseudocode that I ended up with looks like this:

// main thread: wrap a worker so fetch() looks synchronous
fetch(url) {
  const signalBuffer = new SharedArrayBuffer(4);
  const signal = new Int32Array(signalBuffer);
  const { port1, port2 } = new MessageChannel();
  this.worker.postMessage({url, signalBuffer, port: port2}, [port2]);

  Atomics.wait(signal, 0, 0);                   // park until worker flips the signal
  const message = receiveMessageOnPort(port1);  // MessageChannel gives the payload
  port1.close();

  if (message.message.status !== "ok") {
    throw new Error(message.message.error.message);
  }
  return message.message.data;
}

// worker thread: perform async fetch, then wake the main thread
parentPort.on("message", async ({ url, signalBuffer, port }) => {
  const signal = new Int32Array(signalBuffer);
  try {
    const bytes = await fetch(url).then(r => {
      if (!r.ok) throw new Error(`HTTP ${r.status}`);
      return r.arrayBuffer();
    });
    port.postMessage({ status: "ok", data: new Uint8Array(bytes) });
    Atomics.store(signal, 0, 1);          // mark success
  } catch (error) {
    port.postMessage({ status: "error", error: serialize(error) });
    Atomics.store(signal, 0, -1);         // mark failure
  } finally {
    Atomics.notify(signal, 0);            // unblock the waiting main thread
    port.close();
  }
});

Durable Execution

Lastly, now that you have agents running, you really need durable execution. I would describe durable execution as the idea of being able to retry a complex workflow safely without losing progress. The reason for this is that agents can take a very long time, and if they are interrupted, you want to bring them back to the state they were in. This has become a pretty hot topic. There are a lot of startups in that space and you can buy yourself a tool off the shelf if you want to.

What is a little bit disappointing is that there is no truly simple durable execution system. By that I mean something that just runs on top of Postgres and/or Redis in the same way as, for instance, there is pgmq.

The easiest way to shoehorn this yourself is to use queues to restart your tasks and to cache away the temporary steps from your execution. Basically, you compose your task from multiple steps and each of the steps just has a very simple cache key. It’s really just that simple:

function myAgenticLoop(taskID, initialState) {
  let stepCount = 0;
  let state = initialState;
  while (stepCount < MAX_STEPS) {
    let cacheKey = `${taskID}:${stepCount}`;
    let cachedState = loadStateFromCache(cacheKey);
    if (cachedState !== null) {
      state = cachedState.state;
    } else {
      state = runAgenticStep(state);
      storeStateInCache(cacheKey, state);
    }
    stepCount++;
    if (reachedEndCondition(state)) {
      break;
    }
  }
  return state;
}

You can improve on this greatly, but this is the general idea. The state is basically the conversation log and whatever else you need to keep around for the tool execution (e.g., whatever was thrown on the file system).

What Other Than Code?

What tools does an agent need that are not code? Well, the code needs to be able to do something interesting so you need to give it access to something. The most interesting access you can provide is via the file system, as mentioned. But there are also other tools you might want to expose. What Cloudflare proposed is connecting to MCP servers and exposing their tools to the code interpreter. I think this is a quite interesting approach and to some degree it’s probably where you want to go.

Some tools that I find interesting:

Putting it Together

If you want to see what this roughly looks like, I vibe-coded a simple version of this together. It uses a made-up example but it does show how a sandbox with very little tool availability can create surprising results: mitsuhiko/mini-agent.

When you run it, it looks up the current IP from a special network drive that triggers an async fetch, and then it (usually) uses pillow or matplotlib to make an image of that IP address. Pretty pointless, but a lot of fun!

What an agent run looks like
Prompt: Figure out the current ip address and make me a picture of it.

Step 1:
Stop reason: tool_use
Tool call: execute_python {
  code: '\n' +
    '# Read the current IP address\n' +
    "with open('/network/current-ip', 'r') as f:\n" +
    '    ip_address = f.read().strip()\n' +
    '    \n' +
    'print(f"Current IP Address: {ip_address}")\n'
}
Tool result: {
  stdout: 'Current IP Address: 255.255.255.255',
  stderr: '',
  success: true
}
[Cache] Saved state to: agent-cache/task-0-step-1.json

Step 2:
Stop reason: tool_use
Tool call: execute_python {
  code: '\n' +
    'import matplotlib.pyplot as plt\n' +
    'import matplotlib.patches as patches\n' +
    'from matplotlib.patches import FancyBboxPatch\n' +
    'import numpy as np\n' +
    '\n' +
    '# Create figure and axis\n' +
    'fig, ax = plt.subplots(1, 1, figsize=(12, 8))\n' +
    'ax.set_xlim(0, 10)\n' +
    'ax.set_ylim(0, 10)\n' +
    "ax.axis('off')\n" +
    '\n' +
    '# Add background gradient\n' +
    'gradient = np.linspace(0, 1, 256).reshape(1, -1)\n' +
    'gradient = np.vstack([gradient] * 256)\n' +
    "ax.imshow(gradient, extent=[0, 10, 0, 10], aspect='auto', cmap='cool', alpha=0.3)\n" +
    '\n' +
    '# Title\n' +
    "ax.text(5, 9, 'Current IP Address', fontsize=28, weight='bold', \n" +
    "        ha='center', va='top', color='#1a1a1a')\n" +
    '\n' +
    '# IP Address Box\n' +
    'fancy_box = FancyBboxPatch((0.5, 4.5), 9, 2.5, \n' +
    '                           boxstyle="round,pad=0.2", \n' +
    "                           edgecolor='#2196F3', facecolor='#E3F2FD',\n" +
    '                           linewidth=3)\n' +
    'ax.add_patch(fancy_box)\n' +
    '\n' +
    '# IP Address Text\n' +
    "ax.text(5, 5.75, ip_address, fontsize=48, weight='bold', \n" +
    "        ha='center', va='center', color='#1976D2', family='monospace')\n" +
    '\n' +
    '# Decorative elements - network nodes\n' +
    'circle_positions = [(2, 2), (8, 2), (2, 8), (8, 8), (5, 1), (5, 9)]\n' +
    'for pos in circle_positions:\n' +
    "    circle = patches.Circle(pos, 0.3, color='#FF5722', alpha=0.7)\n" +
    '    ax.add_patch(circle)\n' +
    '\n' +
    '# Connection lines\n' +
    'for i in range(len(circle_positions)):\n' +
    '    for j in range(i+1, len(circle_positions)):\n' +
    '        x = [circle_positions[i][0], circle_positions[j][0]]\n' +
    '        y = [circle_positions[i][1], circle_positions[j][1]]\n' +
    "        ax.plot(x, y, color='#2196F3', alpha=0.2, linewidth=1)\n" +
    '\n' +
    '# Info text\n' +
    "ax.text(5, 3.8, '🌐 Your Network Location 🌐', fontsize=14, \n" +
    "        ha='center', va='top', color='#555', style='italic')\n" +
    '\n' +
    'plt.tight_layout()\n' +
    "plt.savefig('/output/ip_address.png', dpi=150, bbox_inches='tight', facecolor='white')\n" +
    'print("✅ Picture saved to /output/ip_address.png")\n' +
    'plt.close()\n' +
    '\n' +
    'print(f"\\n📍 Your IP Address: {ip_address}")\n'
}
Tool result: {
  stdout: 'Loading Pillow, contourpy, cycler, fonttools, kiwisolver, matplotlib, numpy, packaging, pyparsing, python-dateutil, pytz, sixLoaded Pillow, contourpy, cycler, fonttools, kiwisolver, matplotlib, numpy, packaging, pyparsing, python-dateutil, pytz, six',
  stderr: 'Traceback (most recent call last):\n' +
    '  File "/lib/python313.zip/_pyodide/_base.py", line 597, in eval_code_async\n' +
    '    await CodeRunner(\n' +
    '    ...<9 lines>...\n' +
    '    .run_async(globals, locals)\n' +
    '  File "/lib/python313.zip/_pyodide/_base.py", line 411, in run_async\n' +
    '    coroutine = eval(self.code, globals, locals)\n' +
    '  File "<exec>", line 8, in <module>\n' +
    '  File "/lib/python3.13/site-packages/matplotlib/pyplot.py", line 1613, in subplots\n' +
    '    fig = figure(**fig_kw)\n' +
    '  File "/lib/python3.13/site-packages/matplotlib/pyplot.py", line 934, in figure\n' +
    '    manager = new_figure_manager(\n' +
    '        num, figsize=figsize, dpi=dpi,\n' +
    '        facecolor=facecolor, edgecolor=edgecolor, frameon=frameon,\n' +
    '        FigureClass=FigureClass, **kwargs)\n' +
    '  File "/lib/python3.13/site-packages/matplotlib/pyplot.py", line 464, in new_figure_manager\n' +
    '    _warn_if_gui_out_of_main_thread()\n' +
    '    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^\n' +
    '  File "/lib/python3.13/site-packages/matplotlib/pyplot.py", line 441, in _warn_if_gui_out_of_main_thread\n' +
    '    canvas_class = cast(type[FigureCanvasBase], _get_backend_mod().FigureCanvas)\n' +
    '                                                ~~~~~~~~~~~~~~~~^^\n' +
    '  File "/lib/python3.13/site-packages/matplotlib/pyplot.py", line 280, in _get_backend_mod\n' +
    '    switch_backend(rcParams._get("backend"))  # type: ignore[attr-defined]\n' +
    '    ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^\n' +
    '  File "/lib/python3.13/site-packages/matplotlib/pyplot.py", line 342, in switch_backend\n' +
    '    module = importlib.import_module(cbook._backend_module_name(newbackend))\n' +
    '  File "/lib/python313.zip/importlib/__init__.py", line 88, in import_module\n' +
    '    return _bootstrap._gcd_import(name[level:], package, level)\n' +
    '           ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n' +
    '  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import\n' +
    '  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load\n' +
    '  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked\n' +
    '  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked\n' +
    '  File "<frozen importlib._bootstrap_external>", line 1026, in exec_module\n' +
    '  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed\n' +
    '  File "/lib/python3.13/site-packages/matplotlib/backends/backend_webagg.py", line 23, in <module>\n' +
    '    from js import document\n' +
    "ImportError: cannot import name 'document' from 'js' (unknown location)\n",
  success: false
}
[Cache] Saved state to: agent-cache/task-0-step-2.json

Step 3:
Stop reason: tool_use
Tool call: execute_python {
  code: '\n' +
    'from PIL import Image, ImageDraw, ImageFont\n' +
    'import os\n' +
    '\n' +
    '# Read the current IP address\n' +
    "with open('/network/current-ip', 'r') as f:\n" +
    '    ip_address = f.read().strip()\n' +
    '\n' +
    '# Create a new image with a nice gradient-like background\n' +
    'width, height = 1200, 800\n' +
    "image = Image.new('RGB', (width, height), color='white')\n" +
    'draw = ImageDraw.Draw(image)\n' +
    '\n' +
    '# Draw background with gradient effect using rectangles\n' +
    'for i in range(height):\n' +
    '    ratio = i / height\n' +
    '    r = int(30 + (100 * ratio))\n' +
    '    g = int(100 + (150 * ratio))\n' +
    '    b = int(200 + (55 * ratio))\n' +
    '    draw.rectangle([(0, i), (width, i+1)], fill=(r, g, b))\n' +
    '\n' +
    '# Draw main box\n' +
    'box_margin = 100\n' +
    'box_top = 250\n' +
    'box_height = 300\n' +
    'draw.rectangle(\n' +
    '    [(box_margin, box_top), (width - box_margin, box_top + box_height)],\n' +
    '    fill=(240, 248, 255),\n' +
    '    outline=(33, 150, 243),\n' +
    '    width=5\n' +
    ')\n' +
    '\n' +
    '# Draw rounded corners effect\n' +
    'for i in range(20):\n' +
    '    draw.arc([(box_margin, box_top), (box_margin + 40, box_top + 40)], \n' +
    '             0, 90, fill=(33, 150, 243), width=5)\n' +
    '\n' +
    '# Add decorative network nodes\n' +
    'node_radius = 15\n' +
    'nodes = [(150, 150), (1050, 150), (150, 650), (1050, 650), (600, 100), (600, 700)]\n' +
    'for node in nodes:\n' +
    '    draw.ellipse(\n' +
    '        [(node[0] - node_radius, node[1] - node_radius),\n' +
    '         (node[0] + node_radius, node[1] + node_radius)],\n' +
    '        fill=(255, 87, 34),\n' +
    '        outline=(200, 50, 0),\n' +
    '        width=2\n' +
    '    )\n' +
    '\n' +
    '# Draw connection lines\n' +
    'for i, node1 in enumerate(nodes):\n' +
    '    for node2 in nodes[i+1:]:\n' +
    '        draw.line([node1, node2], fill=(33, 150, 243, 50), width=1)\n' +
    '\n' +
    '# Try to use a large default font\n' +
    'try:\n' +
    '    title_font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 60)\n' +
    '    ip_font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuMono-Bold.ttf", 90)\n' +
    '    subtitle_font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 30)\n' +
    'except:\n' +
    '    title_font = ImageFont.load_default()\n' +
    '    ip_font = ImageFont.load_default()\n' +
    '    subtitle_font = ImageFont.load_default()\n' +
    '\n' +
    '# Draw title\n' +
    'draw.text((width // 2, 80), "Current IP Address", font=title_font, \n' +
    '          fill=(255, 255, 255), anchor="mm")\n' +
    '\n' +
    '# Draw IP address\n' +
    'draw.text((width // 2, box_top + box_height // 2), ip_address, font=ip_font, \n' +
    '          fill=(25, 118, 210), anchor="mm")\n' +
    '\n' +
    '# Draw subtitle\n' +
    'draw.text((width // 2, box_top + box_height + 50), "🌐 Your Network Location 🌐", \n' +
    '          font=subtitle_font, fill=(100, 100, 100), anchor="mm")\n' +
    '\n' +
    '# Save the image\n' +
    "output_path = '/output/ip_address.png'\n" +
    'image.save(output_path)\n' +
    'print(f"✅ Picture saved to {output_path}")\n' +
    'print(f"\\n📍 Your IP Address: {ip_address}")\n'
}
Tool result: {
  stdout: 'Pillow already loaded from default channelNo new packages to load✅ Picture saved to /output/ip_address.png📍 Your IP Address: 255.255.255.255',
  stderr: '',
  success: true
}
[Cache] Saved state to: agent-cache/task-0-step-3.json

Step 4:
Stop reason: end_turn

Final result: Perfect! 🎉 I've successfully:

1. **Found your IP address**: **255.255.255.255**
2. **Created a picture** showing your IP address with:
   - A nice gradient blue background
   - Your IP address prominently displayed in the center
   - Decorative network nodes and connection lines
   - Professional styling and formatting

The image has been saved to `/output/ip_address.png` and is ready for you to download!
[Cache] Saved state to: agent-cache/task-0-step-4.json

Total steps: 4

Making 1 file(s) available in ./output:
  ✓ ip_address.png

The same approach has also been leveraged by Anthropic and Cloudflare. There is some further reading that might give you more ideas:

October 17, 2025 12:00 AM UTC

October 16, 2025


Ari Lamstein

Visualizing Border Patrol Encounters Under the Second Trump Administration

Summary: I just updated my Immigration Enforcement Streamlit App with Border Patrol data from the current administration. This post walks through what changed and why it matters.

Last week, I published Visualizing 25 Years of Border Patrol Data in Python, which explored long-term trends in Border Patrol “encounters” along the Southwest Land Border. At the time, the dataset ended in November 2024.

I’ve now extended the dataset to include the current fiscal year—and the results are striking.

What’s new?

The encounters module now integrates two sources:

You can view the updated visualization in my Immigration Enforcement Streamlit App.

What does the data show?

The lowest monthly encounter values in the entire 25-year dataset all occur after the start of President Trump’s second term, beginning in February 2025.

To visualize this, you can run:


import encounters

encounters.get_sw_border_encounters_graph()

The graph includes vertical lines marking presidential transitions, allowing readers to compare enforcement trends across administrations.

When sorting the dataset by monthly encounter totals, the seven lowest months all fall under the current administration. The next lowest values? They appear at the start of Trump’s first term in 2017.

This pattern echoes a point made by a DHS statistician during my earlier research: while the number of people attempting to cross the border is unknowable, it’s likely sensitive to changes in administration policy. The encounters dataset doesn’t capture causality, but it’s probably the best proxy we have—and it offers a clear visual correlation that invites further exploration.

To reproduce this analysis

You can sort the dataset by monthly encounter totals using:


df = encounters.get_sw_border_encounters()

df.sort_values('encounters').reset_index(drop=True).head(10)

This returns the ten lowest monthly values in the dataset:

   date        encounters
0  2025-07-01        4596
1  2025-06-01        6068
2  2025-08-01        6321
3  2025-03-01        7180
4  2025-02-01        8353
5  2025-04-01        8377
6  2025-05-01        8725
7  2017-04-01       11130
8  2017-03-01       12200
9  2017-05-01       14520

These values are not only historically low—they’re also tightly clustered, suggesting a sustained shift rather than a one-off anomaly.

Why this matters

By extending the dataset and updating the app, I hope to support more timely, transparent conversations about immigration enforcement. The code is open-source, the data sources are documented, and the graph is annotated to support hypothesis generation—not partisan conclusions.

This project doesn’t claim causality, but it does offer a reproducible lens for exploring how policy shifts may correlate with enforcement outcomes. It’s designed to empower journalists, educators, and analysts to ask better questions—not just accept surface-level narratives.

What’s next?

I’m exploring ways to surface encounter demographics and regional breakdowns—details that could help contextualize enforcement patterns and policy shifts.

If you have ideas, feedback, or use cases you’d like to see supported, I’d love to hear from you. Feel free to reach out.

October 16, 2025 04:00 PM UTC


Python Bytes

#453 Python++

Topics covered in this episode:

  • PyPI+ (https://pypiplus.com)
  • uv-ship - a CLI-tool for shipping with uv (https://github.com/floRaths/uv-ship)
  • How fast is 3.14? (https://blog.miguelgrinberg.com/post/python-3-14-is-here-how-fast-is-it)
  • air - a new web framework built with FastAPI, Starlette, and Pydantic (https://github.com/feldroy/air)
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=VwJFlrvT7sk

About the show

Sponsored by us! Support our work through our courses at Talk Python Training, The Complete pytest Course, and our Patreon supporters.

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
  • Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
  • Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list at pythonbytes.fm/friends-of-the-show; we'll never share it.

Michael #1: PyPI+ (https://pypiplus.com)

  • Very nice search and exploration tool for PyPI
  • Minor but annoying bug: content-types ≠ content_types on PyPI+, but they are in Python itself. Minimum Python version seems to be interpreted as max Python version.
  • See dependency graphs and more
  • Examples: content-types, jinja-partials, fastapi-chameleon

Brian #2: uv-ship - a CLI-tool for shipping with uv (https://github.com/floRaths/uv-ship)

  • "uv-ship is a lightweight companion to uv that removes the risky parts of cutting a release. It verifies the repo state, bumps your project metadata and optionally refreshes the changelog. It then commits, tags & pushes the result, while giving you the chance to review every step."

Michael #3: How fast is 3.14? (https://blog.miguelgrinberg.com/post/python-3-14-is-here-how-fast-is-it)

  • by Miguel Grinberg
  • A big focus on threaded vs. non-threaded Python
  • Sometimes it's faster, other times it's slower

Brian #4: air - a new web framework built with FastAPI, Starlette, and Pydantic (https://github.com/feldroy/air)

  • A very new project in Alpha stage by Daniel & Audrey Feldroy, the "Two Scoops of Django" people.
  • Air Tags are an interesting thing.
  • The "Why?" page (https://feldroy.github.io/air/why/) is also amazing: "Don't use AIR" … "Every release could break your code! If you have to ask why you should use it, it's probably not for you." … "If you want to use Air, you can. But we don't recommend it." … "It'll likely infect you, your family, and your codebase with an evil web framework mind virus, …"

Extras

Brian:

  • Python 3.15 alpha 1 is available (https://discuss.python.org/t/python-3-15-alpha-1/104358); uv python install 3.15 already works
  • Python lazy imports you can use today (https://pythontest.com/python-lazy-imports-now/) - one of two blog posts I threatened to write recently
  • Testing against Python 3.14 (https://pythontest.com/testing-with-python-3-14) - the other one
  • Free Threading has some trove classifiers

Michael:

  • Blog post about the book: Talk Python in Production book is out! (https://mkennedy.codes/posts/talk-python-in-production-book-is-out/) In particular, the extras are interesting.
  • AI Usage TUI (https://github.com/mikeckennedy/aiusage)
  • Show me your ls (https://mkennedy.codes/posts/show-me-your-ls/)
  • Helium Browser (https://helium.computer) is interesting, and Python plays a big role: GitHub says Languages are Python 97.4%, Shell 1.9%, Other 0.7%
  • Smallest Python release? 3.13.9 (https://docs.python.org/release/3.13.9/whatsnew/changelog.html#python-3-13-9)

Joke: An unforgivable crime (https://x.com/itsfoss2/status/1973983885005324607)

October 16, 2025 08:00 AM UTC

October 15, 2025


Real Python

Polars vs pandas: What's the Difference?

Polars and pandas both provide DataFrame-based data analysis in Python, but they differ in syntax, performance, and features. In this tutorial on Polars vs pandas, you’ll compare their method chaining styles, run timed performance tests, explore LazyFrame optimizations in Polars, convert data between the two libraries, and create plots with their built-in tools. You’ll also examine scenarios where each library’s strengths make it the better choice.

By the end of this tutorial, you’ll understand that:

  • Polars expressions and contexts let you build clear, optimized query pipelines without mutating your original data.
  • LazyFrames with query optimization in Polars can outperform pandas for grouped and aggregated workloads.
  • Streaming in Polars enables processing datasets that don’t fit in memory, which pandas can’t handle natively.
  • .to_pandas() and from_pandas() let you convert between DataFrame formats, and Narwhals offers a library-agnostic API.
  • Built-in plotting uses Altair for Polars and Matplotlib for pandas, allowing quick visualization directly from DataFrames.

To get the most out of this tutorial, it’s recommended that you already have a basic understanding of how to work with both pandas and Polars DataFrames, as well as Polars LazyFrames.

To complete the examples in this tutorial, you’ll use various tools and the Python REPL. You’ll use the command line to run some scripts that time your code and reveal how pandas and Polars compare. You’ll also take advantage of the plotting capabilities of Jupyter Notebook.

Much of the data you’ll use will be random and self-generated. You’ll also use a cleansed and reformatted Apache Parquet version of some freely available retail data from the UC Irvine Machine Learning Repository. Parquet files are optimized to store data and analyze it efficiently. This enables you to achieve optimal performance from the pandas and Polars libraries.

Before you start, you should download the online_retail.parquet file from the tutorial downloadables and place it into your project directory.

You’ll need to install the pandas and Polars libraries, as well as PyArrow, Matplotlib, Vega-Altair, and Narwhals, to make sure your code has everything it needs to run. You’ll also use NumPy, which is currently installed automatically when you install pandas.

You may also want to consider creating your own virtual environment within your project folder to install the necessary libraries. This will prevent them from interfering with your current setup.

You can install the required libraries using these commands at your command prompt:

Shell
$ python -m pip install polars \
                        pandas \
                        pyarrow \
                        narwhals \
                        altair \
                        jupyterlab \
                        matplotlib

All the code examples are provided in the downloadable materials for this tutorial, which you can download by clicking the link below:

Get Your Code: Click here to download the free sample code you’ll use to learn the differences between Polars and pandas.

Now that you’re set up, it’s time to get started and learn about the main differences between Polars and pandas.

Take the Quiz: Test your knowledge with our interactive “Polars vs pandas: What's the Difference?” quiz. You’ll receive a score upon completion to help you track your learning progress.

Do Polars and pandas Use the Same Syntax?

There are similarities between Polars and pandas. For example, they both support Series and DataFrames and can perform many of the same data analysis computations. However, there are some differences in their syntax.

To explore this, you’ll use the order details in your online_retail.parquet file to analyze both pandas and Polars DataFrames. This file contains the following data:

Column Name   Description
InvoiceNo     Invoice number
StockCode     Stock code of item
Description   Item description
Quantity      Quantity purchased
InvoiceDate   Date invoiced
UnitPrice     Item price
CustomerID    Customer identifier
Country       Country of purchase made

Next, you’ll analyze some of this data with pandas and then with Polars.

Using Index-Based Syntax in pandas

Suppose you want a DataFrame with a new Total column that contains the total cost of each purchase. You also want to apply filtering so you can concentrate on specific data.

To achieve this, you might write the following pandas code in your REPL:

Python pandas_polars_demo.py
>>> import pandas as pd

>>> orders_pandas = pd.read_parquet("online_retail.parquet")

>>> orders_pandas["Total"] = (
...     orders_pandas["Quantity"] * orders_pandas["UnitPrice"]
... )

>>> orders_pandas[["InvoiceNo", "Quantity", "UnitPrice", "Total"]][
...     orders_pandas["Total"] > 100
... ].head(3)
    InvoiceNo  Quantity  UnitPrice  Total
46     536371        80       2.55  204.0
65     536374        32      10.95  350.4
82     536376        48       3.45  165.6

This code uses pandas index-based syntax, inspired by NumPy, on which pandas was originally built. First, you add a new Total column to your DataFrame. The column is calculated by multiplying the values of the Quantity and UnitPrice columns together. This operation permanently changes your original DataFrame.
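For contrast, here is a rough sketch of how the same result might be expressed with Polars (an illustrative example, not code taken from the full tutorial; the orders_polars name is just chosen for symmetry):

import polars as pl

orders_polars = pl.read_parquet("online_retail.parquet")

# Build the result with expressions, leaving the original DataFrame unchanged.
result = (
    orders_polars
    .with_columns((pl.col("Quantity") * pl.col("UnitPrice")).alias("Total"))
    .filter(pl.col("Total") > 100)
    .select(["InvoiceNo", "Quantity", "UnitPrice", "Total"])
    .head(3)
)
print(result)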

Read the full article at https://realpython.com/polars-vs-pandas/ »



October 15, 2025 02:00 PM UTC

Quiz: Polars vs pandas: What's the Difference?

In this quiz, you’ll test your understanding of the Polars vs pandas: What’s the Difference? tutorial. Most of the answers are somewhere in the tutorial, but you may need to use some of the links to clarify some of the questions.



October 15, 2025 12:00 PM UTC


Lucas Cimon

Spotlight on pdfly, the Swiss Army knife for PDF files

pdfly logo

Project documentation: pdfly.readthedocs.io

pdfly is the youngest project of the py-pdf organization. It has been created by Martin Thoma in 2022.

It's simply a CLI tool to manipulate PDF files, written in Python and based on the fpdf2 & pypdf libraries.

I'm a maintainer of the project 🙂

What can it do & what's next?

Find out by reading the full article


Permalink

October 15, 2025 11:48 AM UTC


Seth Michael Larson

Re(blog, tweet, toot, skoot, skeеt, post)

Have you noticed the similar terms used for sharing someone else's content with attribution from your own account? Reblogging was the original term for “blogging” another user's content, first developed by project “reBlog” and popularized by Tumblr. Remember that “blog” is a truncation of “weblog” (... should it have been ’blog to capture the shortening?)

Anyway, here's a railroad diagram of all the different words I could think of:

[Railroad diagram of the shared terms: “re” followed by “blog”, “tweet”/“toot” (Twitter & Mastodon), “skeet”/“skoot” (Bluesky), “post”, or “boost”.]

Tweet was coined for Twitter, a “micro-blogging” platform, so retweets were what reblogging was called on that platform. That naming has since changed, but wouldn't you know it: tweets used to be known as “twits”. Given Twitter's owner, that name makes more sense now than ever.

“Toots” are an elephant-themed “tweet” for Mastodon, with “boost” being the official term. “Skoot” was initially proposed for Bluesky, but cleverer folks suggested “skeеt”, much to Jay's dismay (stop trying to make skoot happen).

Now you see less “platform-specific” terminology being thrown around, like “post” and “repost”. Personally, I'm not a fan: check your posts at the door, we're goin’ ‘bloggin’! 🏄︎

I and many other blogs publish a “blogroll”, or a list of other blogs and pages that we've “reblogged” to our own website. If you're interested, give those a read and discover something new by surfing the web like we did in the 90s.



Thanks for keeping RSS alive! ♥

October 15, 2025 12:00 AM UTC

October 14, 2025


The Python Coding Stack

Creating a Singleton Class in Python And Why You (Probably) Don’t Need It

If you spend long enough in the programming world, you’ll come across the term singleton at some point. And if you hadn’t seen this term yet, well, now you have!

In Python, you don’t need singleton classes. Almost never. But creating one is a great exercise in understanding how Python creates objects. And discussing alternatives to a singleton class helps you explore other aspects of Python.

“Hey Stephen, sounds great, but you haven’t told us what this singleton thing is yet!”

Fair point – here we go.

What’s a Singleton?

The singleton pattern is a design pattern in which a class can produce only one instance. Not two, not three, just one. And each time you try to create a new instance of a singleton class, you get the same object again.

Let me pick a trivial example of when you may need this. You’re writing a game. Perhaps several players can play games simultaneously. And you need a leaderboard. But you only want one leaderboard. You don’t want each game to create its own leaderboard. Otherwise, what’s the point of the leaderboard?

There are other examples in programming when singletons appear to be the solution: creating a connection to a database or to a hardware device – you only want one connection – creating a logger or a configuration manager. But they sound too serious and proper. So I’ll stick with the leaderboard in a game example for this discussion.

Creating a Leaderboard • First Attempt

Let’s say you have a Game class and you create Game instances each time a game is played. I won’t write this class as it’s not needed, and I want to keep this article relatively brief (famous last words!).

The Game class needs to access a leaderboard. Each Game instance needs to access a leaderboard – the same leaderboard. Let’s say you create a class to manage this leaderboard:

All code blocks are available in text format at the end of this article • #1 • The code images used in this article are created using Snappify. [Affiliate link]

You add the .scores data attribute, which is a dictionary, in the class’s .__init__() method. If this is all you need, you definitely don’t need a class. But you add some methods to manage the leaderboard:

#2
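As a rough sketch of what such a class might look like (the method names add_score() and show() are assumptions based on how they are used below):

class Leaderboard:
    def __init__(self):
        self.scores = {}

    def add_score(self, player, points):
        # Accumulate points for a player, starting from zero if needed.
        self.scores[player] = self.scores.get(player, 0) + points

    def show(self):
        # Display players sorted by score, highest first.
        for player, points in sorted(
            self.scores.items(), key=lambda item: item[1], reverse=True
        ):
            print(f"{player}: {points}")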

Now you have a bit more functionality. Let’s review these methods briefly:

Looks good? Let’s try it out. For simplicity, you can just add these lines to the same script where you define your class:

#3

You create an instance of Leaderboard and call .add_score() three times. If we had a Game class, the first line, which creates a Leaderboard instance and assigns it to an identifier, would be included in the Game.__init__(), but here I’m just creating this variable in the main script. Here’s the displayed leaderboard:

Kate: 15
Stephen: 14

All seems fine. Kate is leading with 15 points. I’m second (also last) with 14 points.

But, later in your program, possibly within a different Game instance, you write the following:

#4

And here’s the output now:

Kate: 15
Stephen: 14
​
Now Dealing With ‘another_leaderboard’
Sarah: 13
Max: 7

Recall that the first two lines of the output are from the code you wrote earlier.

But where’s Kate in the new leaderboard? And where am I? This code creates a new Leaderboard object, unrelated to the previous one. You can confirm this by showing the identity of each object, using id(), or by using the is operator:

#5

The outputs from these three calls to print() are below:

4347130752
4347064592
False

The two objects have different identities. They’re not the same object. The identity values you get will be different from mine, but what matters here is that they’re different from each other.

You could make sure you don’t call Leaderboard() again in your code. But this is not always possible or straightforward. And you’d still need to make sure your documentation makes this really clear. And will your users read the documentation? Who knows. You may be opening the door to bugs.

Creating a Leaderboard • Second Attempt Using a Singleton Class

Now, you may be thinking: “Hey, I read about this great design pattern: the singleton pattern. Surely, this is a great time to use it…”.

The singleton is an important pattern in some languages. (Spoiler alert: it’s not so useful in Python – you’ll see why later). So let’s try to implement it in Python.

Let’s start with this question: “What’s the first thing that happens when you create an instance of a class in Python?”

If you answered: “Python calls its .__init__() method to initialise the object”, then you’re not quite right. Something else happens first. The .__init__() method initialises a “blank” instance of the class. But what creates that instance in the first place? It’s not .__init__().

It’s .__new__(). In most cases, you don’t need to worry about defining .__new__() when you define a class. The default behaviour is good enough. But in this case, you want to have more control over what happens when you create a new instance of the Leaderboard class:

#6

You add a few things to the class definition:

When you don’t define a .__new__() special method, Python uses the default implementation, which is inherited from the base class object. All Python classes inherit from the object class. However, since you define a .__new__() method in your class, Python uses this method instead of the default .__new__().

But then you need to call super().__new__(), which creates the new blank object using object.__new__(). However, you only do this if the ._instance class attribute is None – that’s what if cls._instance is None: does. Let’s understand what’s happening here.

The first time you create an instance of the class, you create the new blank instance since ._instance is None at first. You then assign this new instance to the class attribute ._instance. The .__new__() method should return the instance, which is then passed on to .__init__(). But we’ll get there later.

What happens if you try to create a new Leaderboard object again? The second time your code calls Leaderboard.__new__(), the class attribute ._instance is no longer None. It now refers to an object of the class. Therefore, the code bypasses the creation of a new object and returns the existing one.
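Here is a sketch of the pattern just described, with the ._instance class attribute and a .__new__() that only ever creates one instance:

class Leaderboard:
    _instance = None

    def __new__(cls):
        # Only create a new blank instance the first time the class is called.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        self.scores = {}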

So, is the problem solved? Let’s find out. Here’s a reminder of the code used to explore this class (with a few extra print() calls):

#7

Here’s the output now:

Show leaderboard:
Kate: 15
Stephen: 14
​
Show leaderboard:
Sarah: 13
Max: 7
​
4344263552
4344263552
True

There’s good news and bad news – which one do you want first?

Let’s start with the good news. The variable names leaderboard and another_leaderboard refer to the same object. Notice how the identity returned by id() is the same, and the is expression returns True. They’re the same object. When you call Leaderboard() the second time, your code doesn’t create a new instance. It uses the existing one.

Great.

But the leaderboards displayed are still different. Why?

You now have the same object – you’re not creating a new one. But you’re reinitialising it the second time you call Leaderboard(). When you call a class to create an instance by using Leaderboard() (with parentheses), Python first calls the class’s .__new__(). But you dealt with that already – it doesn’t create a new instance. However, the instance returned by .__new__() is passed to the class’s .__init__().

And here’s a reminder of what your Leaderboard.__init__() does:

#8

Your instance already exists. It already has a .scores data attribute with some data in it. However, when you call .__init__() again, you overwrite .scores with an empty dictionary. Any data you already had is gone.

So, you now need to fix this, too. What a pain:

#9

If you haven’t seen the built-in hasattr() function yet, it stands for has attribute. You pass an object and the name of an attribute, and it returns True or False depending on whether that object has that attribute!

In this case, you pass self as the first argument. And self is the name that refers to the instance you’re dealing with. You also pass the string “initialised” as the second argument.

You won’t overwrite .scores when you call Leaderboard() a second time, or a third time… You’re ensuring that an object can only be initialised once.

Run the code now:

Show leaderboard:
Kate: 15
Stephen: 14
​
Show leaderboard:
Kate: 15
Stephen: 14
Sarah: 13
Max: 7
​
4336350080
4336350080
True

There’s still only one Leaderboard object. But now, you’re not overwriting any data it already has, either. As you can see, when you call another_leaderboard = Leaderboard(), you don’t create a new instance. Instead, you reuse the one you already have. And you don’t erase its existing data, but add on to it.

Now, you can only ever have one instance of the Leaderboard class. It’s a singleton!

That’s a lot of work to create a useable singleton class.

And I won’t open the can of worms of the scenario where you may have multiple concurrent threads. Two or more threads may access .__new__() at the same time. And that’s not good!


The Python Coding Stack is getting bigger and better, and not just because there are more of you reading every week. I’ll send out an email announcing more articles, more goodies and more value soon. Stay tuned.

And make sure you upgrade to a paid subscription so you don’t miss anything – now is a good time to upgrade, before monthly and yearly subscription prices go up (they only go up for new paid subscribers, never for existing ones).

Subscribe now


We’ve Learnt Stuff By Creating a Singleton

Creating a singleton class, as you did above, is a useful exercise to help you understand how Python creates and initialises new objects. From a learning perspective, it’s a great exercise.

But, do you need this in your code?

Generally, no.

It’s a lot of work.

There’s an easier way to achieve the same effect in Python – a way that may not exist in other programming languages.

And maybe you don’t really need a single global object that you refer to from all over your program.

Let’s explore some options. And no, I won’t cover all the options. I said I want to keep this article brief (but I’m already on 2k+ words). My aim here is to start you off on a journey to think about what goes where, when, and how…

Alternative to the Singleton Class • Move The Class to a New Module

Let’s roll back the changes to the Leaderboard class. Delete the .__new__() method and the ._instance class attribute, and revert to the original, simpler .__init__(). However, place the class in its own script. Let’s call it leaderboard.py:

#10

Note how .__new__() is no longer there and .__init__() simply creates the .scores data attribute.

There’s also one more line of code after the class definition – and only one. You create an instance of the class.

Now, let’s go back to your main script, which could be anywhere else in your program. Let’s call the main script main.py:

#11

The class defined in leaderboard.py is not a singleton class. It’s a regular class. However, you create the one and only instance you’ll ever create within leaderboard.py. Then, you simply import this instance using from leaderboard import leaderboard. The variable name and module name don’t have to be the same, of course, but in this case they are.

Ah, what if you import the module more than once? I can hear you think… Python only loads a module once. Here’s a demonstration. Add the following print() call to leaderboard.py:

#12

Now, go back to main.py and import leaderboard a second time anywhere else in your code:

#13

Run this code. How many times do you see the text leaderboard.py loaded in your output?

leaderboard.py loaded
Show leaderboard:
Kate: 15
Stephen: 14

Show leaderboard:
Kate: 15
Stephen: 14
Sarah: 13
Max: 7

Once. Modules are only loaded once. And the final output still shows the correct, combined leaderboard.

And there you go – you can only have one instance of Leaderboard now, without any of the hassle of messing around with .__new__() and .__init__().

Modules in Python give you a shortcut to create singleton-like behaviour.
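Under the hood, Python keeps every loaded module in a cache: the sys.modules dictionary. That’s why the second import doesn’t re-run leaderboard.py. Here’s a small standalone sketch you can run from the same folder as leaderboard.py to see the cache at work:

# check_modules.py – a sketch for illustration

import sys

import leaderboard            # loads and runs leaderboard.py the first time
import leaderboard as lb      # no reload – Python finds the module in sys.modules

print("leaderboard" in sys.modules)               # True – the module is cached
print(lb is leaderboard)                          # True – both names refer to the same module object
print(lb.leaderboard is leaderboard.leaderboard)  # True – so there's only one instance, too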

Sometimes, You Can Simplify Further

In fact, do you really need the Leaderboard class? Sometimes, you do, or you prefer to have everything relating to the leaderboard in a single object. In that case, the solutions in the previous section and in the one after this one are great.

But can you live with a simpler option?

#14

Note that some of the function names are updated to keep them readable now that they’re no longer methods in a class.

Sure, this option may not always be possible. And some purists will scoff at these functions accessing and mutating a global variable (although you can modify the functions to accept the dictionary as one of the arguments, if you prefer).

The leading underscore in _scores indicates that you don’t intend this variable to be used by the user. It’s not meant for public use. Users should only access it through the functions.

If you prefer, you can place _scores and the functions in a separate module and import them. As you saw earlier, Python imports a module only once. Therefore, anything defined within a module is effectively a singleton! In Python, the behaviour of modules makes creating a single instance of a class to share across your code much easier – even trivial. Many other languages don’t have this option, which is why the singleton pattern exists.
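The leading underscore also matters if anyone uses a wildcard import. Suppose you move _scores and the functions into a hypothetical scores.py module – this is a sketch, not part of the article’s code:

# another_script.py – sketch

from scores import *           # imports add_score, display_leaderboard, ... but not _scores

add_score("Stephen", 10)
display_leaderboard()

print("add_score" in dir())    # True
print("_scores" in dir())      # False – wildcard imports skip names with a leading underscore

Wildcard imports aren’t great style anyway, but this shows how the underscore keeps the dictionary out of reach of casual users of the module.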

So, if you think a singleton class is the solution to your problem, consider whether this simpler option will do!

Need More Flexibility and Future-Proof Code?

Here’s yet another option. Create a file called game_services.py:

#15

You can also define the Leaderboard class within this module, if you prefer, but here I’m leaving it in its own module, leaderboard.py. The GameServices class has a single data attribute, .leaderboard, which contains an instance of Leaderboard. This instance of Leaderboard is created when you create an instance of GameServices, which you do in the final line in this script.

You’re using composition in this case. The GameServices class has access to Leaderboard by having a Leaderboard instance as one of its attributes. You can read more about composition in this pair of articles: Choose Your Fighter • Let’s Play (#1 in Inheritance vs Composition Pair) and Choose Your Fighter • Inheritance vs. Composition (#2 in Inheritance vs Composition Pair).

Back in main.py, you can now import this game_services instance:

#16

At first sight, this version seems similar to the first alternative I presented above, just a bit more complex. However, instead of creating an instance of Leaderboard that is then used elsewhere, in this version, the Leaderboard instance is included in a container, the GameServices object. You then use the instance of the GameServices object wherever needed.

There’s more boilerplate code, but you also get more flexibility with this version. What if you want to replace the leaderboard with a different one for testing purposes? The classic singleton class is hard to test. This option simplifies things because you can assign a new Leaderboard to game_services.leaderboard or create a separate GameServices object for this purpose. Nothing else needs to change in your code.
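Here’s a rough sketch of what such a test might look like – the FakeLeaderboard class and the test function are hypothetical, just to illustrate the idea:

# test_game.py – sketch

from game_services import game_services

class FakeLeaderboard:
    # A stand-in that records calls instead of keeping real scores
    def __init__(self):
        self.calls = []

    def add_score(self, player, score):
        self.calls.append((player, score))

def test_scoring_talks_to_the_leaderboard():
    game_services.leaderboard = FakeLeaderboard()   # swap in the fake
    game_services.leaderboard.add_score("Stephen", 10)
    assert game_services.leaderboard.calls == [("Stephen", 10)]

In a real test suite, you’d probably restore the original leaderboard afterwards, or create a fresh GameServices object for each test.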

You can also swap in a different implementation of Leaderboard – say, an AlternativeLeaderboard class you want to experiment with. It’s easier and safer to make these changes when the leaderboard is wrapped in the GameServices object.

And what if you later decide you want multiple leaderboards? Perhaps one for each version of the game? You no longer want a singleton! But with this version of the code, you can easily add another data attribute to GameServices. Sure, you could also do this when using Leaderboard directly, as in the first alternative. But this option makes it safer and easier to expand your code.

And perhaps you have other services you want to share, not just a leaderboard. You can add more data attributes for those, too.
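For example, a future version of game_services.py might grow like this – a sketch, and the extra attributes are hypothetical:

# game_services.py – a possible future version (sketch)

from leaderboard import Leaderboard

class GameServices:
    def __init__(self):
        self.leaderboard = Leaderboard()          # the original leaderboard
        self.arcade_leaderboard = Leaderboard()   # a second leaderboard for another game mode
        # self.settings = Settings()              # other shared services could live here, too

game_services = GameServices()

The rest of your code keeps importing game_services and picks whichever attribute it needs.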


Support The Python Coding Stack


Final Words

Note how the alternatives to the singleton class use standard classes that don’t need a .__new__() or extra work in .__init__(), or they don’t use classes at all. They rely on composition within another class and on the fact that Python loads a module only once per program, so instances created in a module are effectively singletons when used elsewhere in the program.

There are other alternatives for situations where you may be tempted to use a singleton. And there may be some less common scenarios when the answer is still to create a singleton class.

So, I’m not stating that you absolutely never need to create a singleton class in Python. But in most cases, there are neater and more Pythonic alternatives.

Still, creating a singleton class, as we did earlier in this article, is a useful learning exercise!


This publication is entirely supported by its readers – there are no adverts, no sponsors! But it takes a lot of effort and time to get one of these articles out. If you want to support this publication further, and get exclusive articles, videos, and more goodies, you can become a paid subscriber.

Subscribe now

You can also support this publication by making a one-off contribution of any amount you wish.

Photo by Marek Piwnicki: https://www.pexels.com/photo/born-by-the-stars-17806401/


Code in this article uses Python 3.14

The code images used in this article are created using Snappify. [Affiliate link]



For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!

Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.

And you can find out more about me at stephengruppetta.com

Further reading related to this article’s topic:


Appendix: Code Blocks

Code Block #1
class Leaderboard:
    def __init__(self):
        self.scores = {}
Code Block #2
class Leaderboard:
    def __init__(self):
        self.scores = {}

    def add_score(self, player, score):
        if player in self.scores:
            self.scores[player] += score
        else:
            self.scores[player] = score

    def get_leaderboard(self):
        return sorted(
            self.scores.items(),
            key=lambda item: item[1],
            reverse=True,
        )

    def display(self):
        for player, score in self.get_leaderboard():
            print(f"{player}: {score}")

    def reset(self):
        # You may want to add a confirmation step in a real application,
        # or save a backup to a file first
        self.scores.clear()
Code Block #3
# ...

leaderboard = Leaderboard()
leaderboard.add_score("Stephen", 10)
leaderboard.add_score("Kate", 15)
leaderboard.add_score("Stephen", 4)
leaderboard.display()
Code Block #4
# ...
# later in the game, or in another game instance...
print("\nNow Dealing With 'another_leaderboard'")
another_leaderboard = Leaderboard()
another_leaderboard.add_score("Max", 7)
another_leaderboard.add_score("Sarah", 13)
another_leaderboard.display()
Code Block #5
# ...
print(id(leaderboard))
print(id(another_leaderboard))
print(leaderboard is another_leaderboard)
Code Block #6
class Leaderboard:
    _instance = None
    
    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance
      
    # ...
Code Block #7
# ...

leaderboard = Leaderboard()
leaderboard.add_score("Stephen", 10)
leaderboard.add_score("Kate", 15)
leaderboard.add_score("Stephen", 4)
print("Show leaderboard:")
leaderboard.display()

# later in the game, or in another game instance...
another_leaderboard = Leaderboard()
another_leaderboard.add_score("Max", 7)
another_leaderboard.add_score("Sarah", 13)
print("\nShow leaderboard:")
another_leaderboard.display()

print()
print(id(leaderboard))
print(id(another_leaderboard))
print(leaderboard is another_leaderboard)
Code Block #8
class Leaderboard:
    # ...
    
    def __init__(self):
        self.scores = {}
		
    # ...
Code Block #9
class Leaderboard:
    # ...

    def __init__(self):
        # Prevent reinitialisation
        if not hasattr(self, "initialised"):
            self.scores = {}
            self.initialised = True

    # ...
Code Block #10
# leaderboard.py

class Leaderboard:
    def __init__(self):
        self.scores = {}

    def add_score(self, player, score):
        if player in self.scores:
            self.scores[player] += score
        else:
            self.scores[player] = score

    def get_leaderboard(self):
        return sorted(
            self.scores.items(),
            key=lambda item: item[1],
            reverse=True,
        )

    def display(self):
        for player, score in self.get_leaderboard():
            print(f"{player}: {score}")

    def reset(self):
        # You may want to add a confirmation step in a real application,
        # or save a backup to a file first
        self.scores.clear()
        

# Create a single instance of Leaderboard
leaderboard = Leaderboard()
Code Block #11
# main.py

from leaderboard import leaderboard

leaderboard.add_score("Stephen", 10)
leaderboard.add_score("Kate", 15)
leaderboard.add_score("Stephen", 4)
print("Show leaderboard:")
leaderboard.display()

# later in the game, or in another game instance...
leaderboard.add_score("Max", 7)
leaderboard.add_score("Sarah", 13)
print("\nShow leaderboard:")
leaderboard.display()
Code Block #12
# leaderboard.py

print("leaderboard.py loaded")

class Leaderboard:
    # ...
Code Block #13
# main.py

from leaderboard import leaderboard

leaderboard.add_score("Stephen", 10)
leaderboard.add_score("Kate", 15)
leaderboard.add_score("Stephen", 4)
print("Show leaderboard:")
leaderboard.display()

# Note, we usually never place imports in the middle of a file,
# but this is just to illustrate that the singleton instance
# is shared even if we import it again.
from leaderboard import leaderboard

# later in the game, or in another game instance...
leaderboard.add_score("Max", 7)
leaderboard.add_score("Sarah", 13)
print("\nShow leaderboard:")
leaderboard.display()
Code Block #14
_scores = {}

def add_score(player, score):
    if player in _scores:
        _scores[player] += score
    else:
        _scores[player] = score

def get_leaderboard():
    return sorted(
        _scores.items(),
        key=lambda item: item[1],
        reverse=True,
    )

def display_leaderboard():
    for player, score in get_leaderboard():
        print(f"{player}: {score}")

def reset_leaderboard():
    # You may want to add a confirmation step in a real application,
    # or save a backup to a file first
    _scores.clear()

add_score("Stephen", 10)
add_score("Kate", 15)
add_score("Stephen", 4)
print("Show leaderboard:")
display_leaderboard()

# later in the game, or in another game instance...
add_score("Max", 7)
add_score("Sarah", 13)
print("\nShow leaderboard:")
display_leaderboard()
Code Block #15
# game_services.py

from leaderboard import Leaderboard

class GameServices:
    def __init__(self):
        self.leaderboard = Leaderboard()

game_services = GameServices()
Code Block #16
# main.py

from game_services import game_services

game_services.leaderboard.add_score("Stephen", 10)
game_services.leaderboard.add_score("Kate", 15)
game_services.leaderboard.add_score("Stephen", 4)
print("Show leaderboard:")
game_services.leaderboard.display()

# later in the game, or in another game instance...
game_services.leaderboard.add_score("Max", 7)
game_services.leaderboard.add_score("Sarah", 13)
print("\nShow leaderboard:")
game_services.leaderboard.display()


October 14, 2025 10:04 PM UTC