
Planet Python

Last update: March 09, 2026 04:43 PM UTC

March 09, 2026


The Python Coding Stack

Field Notes: First, Second, and Five Hundred and Twenty-Third • [Club]

Have you ever needed to convert numbers like 22 into the string “twenty-two” or 523 into “five hundred and twenty-third”?

March 09, 2026 03:25 PM UTC


Real Python

Python Gains frozendict and Other Python News for March 2026

Catch up on the latest Python news: frozendict joins the built-ins, Django patches SQL injections, and AI SDKs race to add WebSocket transport.

March 09, 2026 02:00 PM UTC

Quiz: Introduction to Python SQL Libraries

Learn to connect Python to databases, run queries, and manage data using SQLite, PostgreSQL, MySQL, and SQL basics.

March 09, 2026 12:00 PM UTC

Quiz: Pydantic AI: Build Type-Safe LLM Agents in Python

Learn the trade-offs of using Pydantic AI in production, including validation retries, structured outputs, tool usage, and token costs.

March 09, 2026 12:00 PM UTC


Python Anywhere

How PythonAnywhere Became a Publish Target for BeeWare Apps

tl;dr

You can now deploy BeeWare apps as web apps on PythonAnywhere with a single command. Install the pythonanywhere-briefcase-plugin, run briefcase publish web static, done. There’s a step-by-step tutorial if you want to try it right now.

March 09, 2026 06:00 AM UTC

March 08, 2026


Django Weblog

DSF member of the month - Theresa Seyram Agbenyegah

For March 2026, we welcome Theresa Seyram Agbenyegah as our DSF member of the month! ⭐

Theresa's portrait: a Black woman with short hair, looking at the camera with a big smile, wearing a white t-shirt with green lettering.

Theresa is a passionate community builder serving in the DSF Events Support Working Group. She has demonstrated strong leadership by taking on roles such as LOC Programmes Lead at PyCon Africa 2024 and Programs Chair for PyCon Ghana 2025. She also organized DjangoGirls events across multiple PyCons, including PyCon Ghana 2022 and PyCon Africa 2024.

You can learn more about Theresa by visiting Theresa's LinkedIn profile and her GitHub Profile.

Let’s spend some time getting to know Theresa better!

Can you tell us a little about yourself (hobbies, education, etc)?

I’m Theresa Seyram Agbenyegah, mostly referred to in the community as Stancy; a backend engineer, social entrepreneur, and an open source advocate/contributor passionate about using technology for impact. My background is in technology, community management, and systems design. Over the years, I have grown into roles that combine engineering, leadership, and ecosystem building.

I know many folks call you Stancy, me included, why specifically this name?

So “Stancy” comes from my initials 😁. People think it's my nickname.

How did you start using Django?

I was introduced to Django through a Django Girls workshop — and oh, I'm a Django girl. I loved how opinionated yet flexible it was. The “batteries-included” philosophy made backend architecture feel structured without being restrictive.

The admin interface especially blew my mind early on; being able to scaffold powerful internal tools so quickly felt magical.

What other frameworks do you know, and if you had magical powers, what would you add to Django?

I have worked with Flask, FastAPI, and explored the Dart framework. Each has strengths, especially FastAPI in performance and modern async patterns.

If I had magical powers, I would:

But overall, Django’s maturity and ecosystem are hard to beat.

What projects are you working on now?

I'm not working on any big projects at the moment; I'm mostly working on client projects at work.

Which Django libraries are your favorite (core or 3rd party)?

Some of my favorites:

The ecosystem really makes Django powerful.

What are the top three things in Django that you like?

  1. The admin interface
  2. The ORM
  3. The strong community and documentation (FYI: it gives me a sense of belonging)

Django feels stable, mature, and production-ready, which builds developer confidence.

You have been in the organization of PyCon Africa and DjangoGirls that happen during this conference in 2024. That's great, do you have any advice for people who would like to join or create their own DjangoGirls event in their city?

Start small and start with intention.

You don’t need a massive budget. What you need is:

Most importantly, center the participants. The goal isn’t just teaching Django, it’s building confidence and introducing them to the Tech industry.

How did you become a leader of the PyLadies Ghana chapter?

My Leadership journey in the PyLadies Ghana community began with a simple step: attending a Django Girls workshop at Ho while I was in school. At the time, I was just curious and eager to learn more about programming. After the workshop, I was introduced to the PyLadies Ghana community and added to the group. That was my first real connection to a tech community.

I started by simply showing up, participating in conversations, attending events, and learning from others in the community. Over time, I became more involved. I joined the PyLadies Ghana Tema Chapter, where I supported the community lead with organizing activities such as bootcamps and meetups. Through that experience, I had the opportunity to contribute more actively.

Because of my commitment and willingness to help, I was later asked to volunteer as a co-lead of PyLadies Ghana Tema Chapter. I accepted the opportunity and began working more closely with the Lead to organize events, support members, and grow the community. It was a period of learning, collaboration, and service.

As I continued contributing and volunteering, more opportunities opened up. When there was a chance to volunteer with PyLadies Ghana programs and events, I stepped forward again and volunteered as PyLadies Ghana Programs and Events Lead. That experience eventually led to me becoming a lead.

Looking back, my journey with PyLadies Ghana has been shaped by community, consistency, and volunteering. What started as attending a workshop grew into leadership and the chance to help create opportunities for others. It reminds me that sometimes all it takes is showing up, contributing where you can, and being willing to grow with the community.

You have been organizing a lot of events in Africa, especially in Ghana. How do you envision organizing an event? Would you like additional support?

For me, events are ecosystems, not just gatherings.

Focus on:

Yes, more funding support, institutional partnerships for internships, and long-term sponsorship pipelines would significantly help African tech communities scale sustainably.

International Women’s Day is a reminder that representation is not a trend, it's a necessity.

We need more women building systems, shaping infrastructure, leading conversations, and owning technical spaces.

And to every woman in tech: your presence is powerful. Keep building. Keep speaking. Keep leading. Keep mentoring and raising the next tech women.

What are your hobbies or what do you do when you’re not working?

When I’m not working, I’m usually reading books/articles, mentoring, watching movies or documentaries, cooking, reflecting, or exploring new ideas around technology and social impact. I also enjoy quiet strategy sessions with myself, thinking about how to build things that outlive me.

Is there anything else you’d like to say?

Technology is more than code, it's access, power, and possibility.

I hope more people see themselves not just as users of technology, but as architects of it.


Thank you for doing the interview, Stancy!

March 08, 2026 06:00 AM UTC

March 07, 2026


EuroPython

Humans of EuroPython: Cristián Maureira-Fredes

Ever wonder what powers EuroPython? 🐍 No, it's not coffee—it's volunteers! From stage MCs to sponsor ambassadors, Wi-Fi wizards to vibe guardians, we're the invisible threads weaving community magic. No title, no capes—just passion.

Join us in celebrating one

March 07, 2026 08:55 PM UTC

March 06, 2026


Talk Python to Me

#539: Catching up with the Python Typing Council

You're adding type hints to your Python code, your editor is happy, autocomplete is working great. But then you switch tools and suddenly there are red squiggles everywhere. Who decides what a float annotation actually means? Or whether passing None where an int is expected should be an error? It turns out there's a five-person council dedicated to exactly these questions -- and two brand-new Rust-based type checkers are raising the bar. On this episode, I sit down with three members of the Python Typing Council -- Jelle Zijlstra, Rebecca Chen, and Carl Meyer -- to learn how the type system is governed, where the spec and the type checkers agree and disagree, and get the council's official advice on how much typing is just enough.

March 06, 2026 04:58 PM UTC


Peter Bengtsson

logger.error or logger.exception in Python

Consider this Python code:


import logging

logger = logging.getLogger(__name__)

try:
    1 / 0
except Exception as e:
    logger.error("An error occurred while dividing by zero.: %s", e)

The output of this is:


An error occurred while dividing by zero.: division by zero

No traceback. Perhaps you don't care because you don't need it.
I see code like this quite often, and it's curious that you'd use logger.error at all if it's not actually a problem. It's also curious to include the stringified exception in the log message itself.

Another common pattern I see is use of exc_info=True like this:


try:
    1 / 0
except Exception:
    logger.error("An error occurred while dividing by zero.", exc_info=True)

Its output is:


An error occurred while dividing by zero.
Traceback (most recent call last):
  File "/Users/peterbengtsson/dummy.py", line 23, in <module>
    1 / 0
    ~~^~~
ZeroDivisionError: division by zero

Ok, now you get the traceback and the error value (division by zero in this case).

But a more convenient function is logger.exception which looks like this:


try:
    1 / 0
except Exception:
    logger.exception("An error occurred while dividing by zero.")

Its output is:


An error occurred while dividing by zero.
Traceback (most recent call last):
  File "/Users/peterbengtsson/dummy.py", line 9, in <module>
    1 / 0
    ~~^~~
ZeroDivisionError: division by zero

So it's sugar for logger.error with exc_info=True.
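As a quick check that the two calls really produce equivalent records, here's a minimal sketch (the logger name "demo" and the StringIO capture are just for demonstration):

```python
import io
import logging

# Capture log output in a string so we can inspect it.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.ERROR)

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("via exception()")          # ERROR level + traceback
    logger.error("via error()", exc_info=True)   # same record contents

output = stream.getvalue()
# Both log lines end with the same traceback text.
```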

Also, a common logging config is something like this:

import logging

logger = logging.getLogger(__name__)
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", level=logging.ERROR
)

So if you use logger.exception what will it print? In short, the same as if you used logger.error. For example, with the logger.exception("An error occurred while dividing by zero.") line above:


2026-03-06 10:45:23,570 - ERROR - __main__ - An error occurred while dividing by zero.
Traceback (most recent call last):
  File "/Users/peterbengtsson/dummy.py", line 12, in <module>
    1 / 0
    ~~^~~
ZeroDivisionError: division by zero

Bonus - add_note

You can, if it's applicable, inject some more information about the exception. Consider:


n = 123
try:
    n / 0
except Exception as exception:
    exception.add_note(f"The numerator was {n}.")
    logger.exception("An error occurred while dividing by zero.")

The net output of this is:


2026-03-06 10:48:34,279 - ERROR - __main__ - An error occurred while dividing by zero.
Traceback (most recent call last):
  File "/Users/peterbengtsson/dummy.py", line 13, in <module>
    n / 0
    ~~^~~
ZeroDivisionError: division by zero
The numerator was 123.

March 06, 2026 03:27 PM UTC


Real Python

Quiz: Python Stacks, Queues, and Priority Queues in Practice

Test your knowledge of stacks, queues, deques, and priority queues with practical questions and Python coding exercises.

March 06, 2026 12:00 PM UTC


Anarcat

Wallabako retirement and Readeck adoption

Today I have made the tough decision of retiring the Wallabako project. I have rolled out a final (and trivial) 1.8.0 release which fixes the uninstall procedure and rolls out a bunch of dependency updates.

Why?

The main reason why I'm retiring Wallabako is that I have completely stopped using it. It's not the first time: for a while, I wasn't reading Wallabag articles on my Kobo anymore. But I had started working on it again about four years ago. Wallabako itself is about to turn 10 years old.

This time, I stopped using Wallabako because there's simply something better out there. I have switched away from Wallabag to Readeck!

And I'm also tired of maintaining "modern" software. Most of the recent commits on Wallabako are from renovate-bot. This feels futile and pointless. I guess it must be done at some point, but it also feels we went wrong somewhere there. Maybe Filippo Valsorda is right and one should turn dependabot off.

I did consider porting Wallabako to Readeck for a while, but there's a perfectly fine Koreader plugin that I've been pretty happy to use. I was worried it would be slow (because the Wallabag plugin is slow), but it turns out that Readeck is fast enough that this doesn't matter.

Moving from Wallabag to Readeck

Readeck is pretty fantastic: it's fast, it's lightweight, everything Just Works. All sorts of concerns I had with Wallabag are just gone: questionable authentication, questionable API, weird bugs, mostly gone. I am still looking for multiple tags filtering but I have a much better feeling about Readeck than Wallabag: it's written in Golang and under active development.

In any case, I don't want to throw shade at the Wallabag folks either. They did solve most of the issues I raised with them and even accepted my pull request. They have helped me collect thousands of articles for a long time! It's just time to move on.

The migration from Wallabag was impressively simple. The importer is well-tuned, fast, and just works. I wrote about the import in this issue, but it took about 20 minutes to import essentially all articles, and another 5 hours to refresh all the contents.

There are minor issues with Readeck which I have filed (after asking!):

But overall I'm happy and impressed with the result.

I'm also both happy and sad at letting go of my first (and only, so far) Golang project. I loved writing in Go: it's a clean language, fast to learn, and a beauty to write parallel code in (at the cost of a rather obscure runtime).

It would have been much harder to write this in Python, but my experience in Golang helped me think about how to write more parallel code in Python, which is kind of cool.

The GitLab project will remain publicly accessible, but archived, for the foreseeable future. If you're interested in taking over stewardship for this project, contact me.

Thanks Wallabag folks, it was a great ride!

March 06, 2026 03:05 AM UTC


Israel Fruchter

Maybe ORM/ODM are not dead? Yet...

So, let’s pick up where we left off. A couple of weeks ago, I wrote about how I took a 4-year-old fever dream—an ODM for Cassandra and ScyllaDB called coodie—and let an AI build the whole thing while I sipped my morning coffee.

It was a fun experiment. But then, a funny coincidence happened (or maybe the algorithm just has a sick sense of humor).

Right after I hit publish and started feeling good about my newfound “prompt engineer” status, I was listening to an episode of the pythonbytes podcast discussing Michael Kennedy’s recent post, Raw+DC: The ORM pattern of 2026?. The overarching thesis of their discussion? ORMs and ODMs are fundamentally dead. They are a relic of the past, bloated, abstraction-heavy, and ultimately, absolute performance killers.

I actually fired off a Twitter thread in response to it. And honestly, at first, I had to concede. They make a really good point. I spend my days deep in the ScyllaDB trenches, where we fight for every single microsecond. Putting a thick Python abstraction layer on top of a highly optimized driver usually sounds like a brilliant way to turn a sports car into a tractor.

But it got me thinking. How bad was coodie? Was my AI-generated Beanie-wannabe actually a performance disaster waiting to happen?

Giving it a test run

Like any respectable developer looking for an excuse to avoid real work, I decided to put my money where my AI-generated mouth is. Instead of sitting at my desk, I just offloaded the whole task to the Copilot agent from my phone to run some extensive benchmarks.

I didn’t just want to compare coodie to existing solutions like cqlengine. I wanted to establish an absolute performance floor. I wanted to test it against the Raw+DC pattern (Python dataclasses + hand-written CQL with prepared statements) to see exactly how much the “ORM tax” was really costing us.
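For context, the Raw+DC pattern is just a plain dataclass for the row shape plus a hand-written CQL string that the driver prepares once. A minimal sketch, assuming a hypothetical users table — no driver is actually invoked here:

```python
from dataclasses import astuple, dataclass

# The "DC" half: a plain dataclass describes the row shape.
@dataclass(frozen=True)
class User:
    id: int
    name: str

# The "Raw" half: hand-written CQL, prepared once by the driver in a real app.
INSERT_CQL = "INSERT INTO users (id, name) VALUES (?, ?)"

user = User(id=1, name="Ada")
params = astuple(user)  # positional parameters bound to the prepared statement
```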

The test spun up a local ScyllaDB node, hammered it with various workloads—inserts, reads, conditional updates, batch operations—and fetched the results back.

The surprising results

I had the agent run the script. I fully expected coodie to be heavily penalized. We all accept slower performance in exchange for autocomplete, declarative schemas, and not writing raw CQL strings.

I stared at the results the agent sent back to my screen. Then I told it to clear the cache, restart the Scylla container, and run it again just to be sure.

The results were genuinely surprising, and running them actually highlighted a few spots where we could squeeze out even more performance, leading straight to PR #190 to apply those lessons learned.

Here is the breakdown of what the agent and I found across the board.

Three-way Benchmark Results (scylla driver)

| Benchmark | Raw+DC (µs) | coodie (µs) | cqlengine (µs) | coodie vs Raw+DC | coodie vs cqlengine |
|---|---|---|---|---|---|
| single-insert | 456 | 485 | 615 | 1.06× | 0.79× ✅ |
| insert-if-not-exists | 1,180 | 1,170 | 1,370 | ~1.00× | 0.85× ✅ |
| insert-with-ttl | 448 | 469 | 640 | 1.05× | 0.73× ✅ |
| get-by-pk | 461 | 520 | 665 | 1.13× | 0.78× ✅ |
| filter-secondary-index | 1,370 | 2,740 | 8,530 | 2.00× 🟠 | 0.32× ✅ |
| filter-limit | 575 | 627 | 1,220 | 1.09× | 0.51× ✅ |
| count | 904 | 1,500 | 1,590 | 1.66× 🟡 | 0.94× ✅ |
| partial-update | 409 | 960 | 542 | 2.35× 🔴 | 1.77× ❌ |
| update-if-condition (LWT) | 1,140 | 1,620 | 1,340 | 1.42× 🟡 | 1.21× ❌ |
| single-delete | 941 | 925 | 1,190 | ~1.00× | 0.78× ✅ |
| bulk-delete | 872 | 921 | 1,200 | 1.06× | 0.77× ✅ |
| batch-insert-10 | 596 | 634 | 1,700 | 1.06× | 0.37× ✅ |
| batch-insert-100 | 42,800 | 1,960 | 52,900 | 0.05× 🚀 | 0.04× ✅ |
| collection-write | 448 | 485 | 679 | 1.08× | 0.71× ✅ |
| collection-read | 478 | 508 | 689 | 1.06× | 0.74× ✅ |
| collection-roundtrip | 939 | 1,060 | 1,380 | 1.13× | 0.77× ✅ |
| model-instantiation | 0.671 | 2.02 | 12.1 | 3.01× 🔴 | 0.17× ✅ |
| model-serialization | 10.1 | 2.05 | 4.56 | 0.20× 🚀 | 0.45× ✅ |

Summary of Findings

Near Parity on the Hot Paths: For standard reads (get-by-pk), inserts, deletes, and basic limited queries, coodie is operating with a completely negligible overhead compared to writing raw CQL by hand (usually hovering around 5-13% tax). It also outperforms cqlengine across the board on these operations.

Pydantic V2 is a beast: We need to remember that Pydantic V2 is basically a blazing-fast Rust engine wearing a Python trench coat. The data validation, object instantiation, and serialization happen at the speed of Rust, keeping the overhead minimal from the start. Just look at the serialization and batch performance multipliers!

The AI kept it lean: The code the LLM generated wasn’t building massive ASTs or doing unnecessary query translations. coodie essentially formats a clean dictionary and hands it straight to the underlying Scylla driver to do its native magic. It gets out of the way.

Lessons Learned (PR #190): Even with great initial numbers, running the full benchmark suite revealed bottlenecks on things like partial-update and count. We realized we were doing redundant data validation during read operations when fetching data we already knew was valid straight from the database. By bypassing the extra validation pass and loading the raw rows more directly into the Pydantic models (using model_construct() for DB data), we can shave off the remaining overhead.

Wrapping up

So, maybe ORM/ODM are not dead? Yet…

If the final overhead of using an ODM on hot paths is a measly 5-13%, but in exchange I get full type-safety, declarative schemas, and zero boilerplate, I am taking the ODM every single time.

It seems my digital sidekick built something that is actually production-ready, and with a little bit of human-driven optimization, it screams.

You can check out the code, star it, or run your own tests over at github.com/fruch/coodie.

PRs are welcome. Yalla, let's see how fast we can make it.

March 06, 2026 12:00 AM UTC

March 05, 2026


The Python Coding Stack

You Store Data and You Do Stuff With Data • The OOP Mindset

Why use classes and objects?

March 05, 2026 10:58 PM UTC


Real Python

Quiz: Spyder: Your IDE for Data Science Development in Python

Test your knowledge of the Spyder IDE for Python data science, including its Variable Explorer, Plots pane, and Profiler.

March 05, 2026 12:00 PM UTC

Quiz: How to Use the OpenRouter API to Access Multiple AI Models via Python

Test your Python skills with OpenRouter: learn unified API access, model switching, provider routing, and fallback strategies.

March 05, 2026 12:00 PM UTC


scikit-learn

Update on array API adoption in scikit-learn

Author: Lucy Liu Note: this blog post is a cross-post of a Quansight Labs blog post.

March 05, 2026 12:00 AM UTC


Armin Ronacher

AI And The Ship of Theseus

March 05, 2026 12:00 AM UTC

March 04, 2026


PyCharm

Cursor is now available as an AI agent inside JetBrains IDEs through the Agent Client Protocol. Select it from the agent picker, and it has full access to your project. If you’ve spent any time in the AI coding space, you already know Cursor. It has been one of the most requested additions to the […]

March 04, 2026 04:41 PM UTC


Python Morsels

Invent your own comprehensions in Python

Python doesn't have tuple, frozenset, or Counter comprehensions, but you can invent your own by passing a generator expression to any iterable-accepting callable.

Table of contents

  1. Generator expressions pair nicely with iterable-accepting callables
  2. Tuple comprehensions
  3. frozenset comprehensions
  4. Counter comprehensions
  5. Aggregate with reducer functions
  6. Invent your own comprehensions with generator expressions

Generator expressions pair nicely with iterable-accepting callables

Generator expressions work really nicely with Python's any and all functions:

>>> numbers = [2, 1, 3, 4, 7, 11, 18]
>>> any(n > 1 for n in numbers)
True
>>> all(n > 1 for n in numbers)
False

In fact, I rarely see any and all used without a generator expression passed to them.

Note that generator expressions are made with parentheses:

>>> (n**2 for n in numbers)
<generator object <genexpr> at 0x74c535589b60>

But when a generator expression is the sole argument passed into a function:

>>> all((n > 1 for n in numbers))
False

The double set of parentheses (one to form the generator expression and one for the function call) can be turned into just a single set of parentheses:

>>> all(n > 1 for n in numbers)
False

This special allowance was added to Python's syntax because it's very common to see generator expressions passed in as the sole argument to specific functions.

Note that passing generator expressions into iterable-accepting functions and classes makes something that looks a bit like a custom comprehension. Every iterable-accepting function/class is a comprehension-like tool waiting to happen.
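To make that concrete, here's a sketch of the three “invented” comprehensions the table of contents promises, reusing the numbers list from above:

```python
from collections import Counter

numbers = [2, 1, 3, 4, 7, 11, 18]

# "Tuple comprehension": a generator expression passed to tuple()
squares = tuple(n**2 for n in numbers)

# "frozenset comprehension": the same trick with frozenset()
evens = frozenset(n for n in numbers if n % 2 == 0)

# "Counter comprehension": count a derived key for each element
parity = Counter("even" if n % 2 == 0 else "odd" for n in numbers)
```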

Tuple comprehensions

Python does not have tuple …

Read the full article: https://www.pythonmorsels.com/custom-comprehensions/

March 04, 2026 03:30 PM UTC


PyCharm

Cursor Joined the ACP Registry and Is Now Live in Your JetBrains IDE

March 04, 2026 03:28 PM UTC


Real Python

How to Use the OpenRouter API to Access Multiple AI Models via Python

Access models from popular AI providers in Python through OpenRouter's unified API with smart routing, fallbacks, and cost controls.

March 04, 2026 02:00 PM UTC

Quiz: Build Your Weekly Python Study Schedule: 7 Days to Consistent Progress

Build a consistent Python study habit with a repeatable 7-day plan. Learn to set specific goals, schedule your week, and make practice stick.

March 04, 2026 12:00 PM UTC


Glyph Lefkowitz

What Is Code Review For?

Code review is not for catching bugs.

March 04, 2026 05:24 AM UTC


Seth Michael Larson

Relative “Dependency Cooldowns” in pip v26.0 with crontab

March 04, 2026 12:00 AM UTC

March 03, 2026


PyCoder’s Weekly

Issue #724: Unit Testing Performance, Ordering, FastAPI, and More (March 3, 2026)

March 03, 2026 07:30 PM UTC