
Planet Python

Last update: November 12, 2025 09:43 PM UTC

November 12, 2025


Real Python

The Python Standard REPL: Try Out Code and Ideas Quickly

The Python standard REPL (Read-Eval-Print Loop) lets you run code interactively, test ideas, and get instant feedback. You start it by running the python command, which opens an interactive shell included in every Python installation.

In this tutorial, you’ll learn how to use the Python REPL to execute code, edit and navigate code history, introspect objects, and customize the REPL for a smoother coding workflow.

By the end of this tutorial, you’ll understand that:

  • You can enter and run simple or compound statements in a REPL session.
  • The implicit _ variable stores the result of the last evaluated expression and can be reused in later expressions.
  • You can reload modules dynamically with importlib.reload() to test updates without restarting the REPL.
  • The modern Python REPL supports auto-indentation, history navigation, syntax highlighting, quick commands, and autocompletion, which improves your user experience.
  • You can customize the REPL with a startup file, color themes, and third-party libraries like Rich for a better experience.

With these skills, you can move beyond just running short code snippets and start using the Python REPL as a flexible environment for testing, debugging, and exploring new ideas.
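
For example, here’s a quick look at two of those features in a session (mymodule stands in for any module of your own that you’re editing):

>>> 40 + 2
42
>>> _ * 2
84
>>> import importlib
>>> import mymodule
>>> importlib.reload(mymodule)  # picks up your saved edits without restarting
<module 'mymodule' from '/path/to/mymodule.py'>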

Get Your Code: Click here to download the free sample code that you’ll use to explore the capabilities of Python’s standard REPL.

Take the Quiz: Test your knowledge with our interactive “The Python Standard REPL: Try Out Code and Ideas Quickly” quiz. You’ll receive a score upon completion to help you track your learning progress.

Getting to Know the Python Standard REPL

In computer programming, you’ll find two kinds of programming languages: compiled and interpreted languages. Compiled languages like C and C++ have an associated compiler program that converts the language’s code into machine code.

This machine code is typically saved in an executable file. Once you have an executable, you can run your program on any compatible computer system without needing the compiler or the source code.

In contrast, interpreted languages like Python need an interpreter program. This means that you need to have a Python interpreter installed on your computer to run Python code. Some may consider this characteristic a drawback because it can make your code distribution process much more difficult.

However, in Python, having an interpreter offers one significant advantage that comes in handy during your development and testing process. The Python interpreter allows for what’s known as an interactive Read-Eval-Print Loop (REPL), or shell, which reads a piece of code, evaluates it, and then prints the result to the console in a loop.

The Python REPL is a built-in interactive coding playground that you can start by typing python in your terminal. Once in a REPL session, you can run Python code:

>>> "Python!" * 3
'Python!Python!Python!'
>>> 40 + 2
42

In the REPL, you can use Python as a calculator, but also try any Python code you can think of, and much more! Jump to starting and terminating REPL interactive sessions if you want to get your hands dirty right away, or keep reading to gather more background context first.

Note: In this tutorial, you’ll learn about the CPython standard REPL, which is available in all the installers of this Python distribution. If you don’t have CPython yet, then check out How to Install Python on Your System: A Guide for detailed instructions.

The standard REPL has changed significantly since Python 3.13 was released. Several limitations from earlier versions have been lifted. Throughout this tutorial, version differences are indicated when appropriate.


The Python interpreter can execute Python code in two modes:

  1. Script, or program
  2. Interactive, or REPL

In script mode, you use the interpreter to run a source file—typically a .py file—as an executable program. In this case, Python loads the file’s content and runs the code line by line, following the script or program’s execution flow.

Alternatively, interactive mode is when you launch the interpreter using the python command and use it as a platform to run code that you type in directly.

In this tutorial, you’ll learn how to use the Python standard REPL to run code interactively, which allows you to try ideas and test concepts when using and learning Python. Are you ready to take a closer look at the Python REPL? Keep reading!

What Is Python’s Interactive Shell or REPL?

When you run the Python interpreter in interactive mode, you open an interactive shell, also known as an interactive session. In this shell, your keyboard is the input source, and your screen is the output destination.

Note: In this tutorial, the terms interactive shell, interactive session, interpreter session, and REPL session are used interchangeably.

Here’s how the REPL works: it takes input consisting of Python code, which the interpreter parses and evaluates. Next, the interpreter displays the result on your screen, and the process starts again as a loop.

Read the full article at https://realpython.com/python-repl/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

November 12, 2025 02:00 PM UTC


Peter Bengtsson

Using AI to rewrite blog post comments

Using AI to correct and edit blog post comments as part of the moderation process.

November 12, 2025 12:42 PM UTC


Python Software Foundation

Python is for everyone: Join in the PSF year-end fundraiser & membership drive!

The Python Software Foundation (PSF) is the charitable organization behind Python, dedicated to advancing, supporting, and protecting the Python programming language and the community that sustains it. That mission and cause are more than just words we believe in. Our tiny but mighty team works hard to deliver the projects and services that allow Python to be the thriving, independent, community-driven language it is today. Some of what the PSF does includes producing PyCon US, hosting the Python Package Index (PyPI), supporting 5 Developers-in-Residence, maintaining critical community infrastructure, and more.

Python is for teaching, learning, playing, researching, exploring, creating, working: the list goes on and on and on! Support this year's fundraiser with your donations and memberships to help the PSF, the Python community, and the language stay strong and sustainable. Because Python is for everyone, thanks to you.

There are two direct ways to join through donate.python.org

 

>>> Donate or Become a Member Today! <<<

 

If you already donated and/or you’re already a member, you can:

 

Your donations and support:

 

Highlights from 2025:

November 12, 2025 11:39 AM UTC


Python Morsels

Unnecessary parentheses in Python

Python's ability to use parentheses for grouping often tempts new Python users into over-using parentheses in places where they aren't needed.

Table of contents

  1. Parentheses can be used for grouping
  2. Python's if statements don't use parentheses
  3. Parentheses can go anywhere
  4. Parentheses for wrapping lines
  5. Parentheses that make statements look like functions
  6. Parentheses can go in lots of places
  7. Use parentheses sometimes
  8. Consider readability when adding or removing parentheses

Parentheses can be used for grouping

Parentheses are used for 3 things in Python: calling callables, creating empty tuples, and grouping.

Functions, classes, and other callable objects can be called with parentheses:

>>> print("I'm calling a function")
I'm calling a function

Empty tuples can be created with parentheses:

>>> empty = ()

Lastly, parentheses can be used for grouping:

>>> 3 * (4 + 7)
33

Sometimes parentheses are necessary to convey the order of execution for an expression. For example, 3 * (4 + 7) is different than 3 * 4 + 7:

>>> 3 * (4 + 7)
33
>>> 3 * 4 + 7
19

Those parentheses around 4 + 7 are for grouping that sub-expression, which changes the meaning of the larger expression.

All confusing and unnecessary uses of parentheses are caused by this third use: grouping parentheses.

Python's if statements don't use parentheses

In JavaScript if statements look …

Read the full article: https://www.pythonmorsels.com/unnecessary-parentheses/

November 12, 2025 03:30 AM UTC


Seth Michael Larson

Blogrolls are the Best(rolls)

Happy 6-year blogiversary to me! 🎉 To celebrate I want to talk about other peoples’ blogs, more specifically the magic of “blogrolls”. Blogrolls are “lists of other sites that you read, are a follower of, or recommend”. Any blog can host a blogroll, or sometimes websites can be one big blogroll.

I’ve hosted a blogroll on my own blog since 2023 and encourage other bloggers to do so. My own blogroll is generated from the list of RSS feeds I subscribe to and articles that I “favorite” within my RSS reader. If you want to be particularly fancy you can add an RSS feed (example) to your blogroll that provides readers a method to “subscribe” for future blogroll updates.

Blogrolls are like catnip for me: I cannot resist opening and Ctrl-clicking every link until I can’t see my tabs anymore. The feeling is akin to the first deep breath of air before starting a hike: there’s a rush of new information, topics, and potential new blogs to follow.

Blogrolls can bridge the “effort chasm” I frequently hear as an issue when I recommend folks try an RSS feed reader. We’re not used to empty feeds anymore; self-curating blogs until you receive multiple articles per day takes time and effort. Blogrolls can help here, especially ones that publish using the importable OPML format.

You can instantly populate your feed reader app with hundreds of feeds from blogs that are likely relevant to you. Simply create an account on a feed reader, import the blogroll OPML document from a blogger you enjoy, and watch the articles “roll” in. Blogrolls are almost like Bluesky “Starter Packs” in this way!

Hopefully this has convinced you to either curate your own blogroll or to start looking for (or asking for!) blogrolls from your favorite writers on the Web. Share your favorite blogroll with me on email or social media. Title inspired by “Hexagons are the Best-agons”.



Thanks for keeping RSS alive! ♥

November 12, 2025 12:00 AM UTC

November 11, 2025


Ahmed Bouchefra

Let’s be honest. There’s a huge gap between writing code that works and writing code that’s actually good. It’s the number one thing that separates a junior developer from a senior, and it’s something a surprising number of us never really learn.

If you’re serious about your craft, you’ve probably felt this. You build something, it functions, but deep down you know it’s brittle. You’re afraid to touch it a year from now.

Today, we’re going to bridge that gap. I’m going to walk you through eight design principles that are the bedrock of professional, production-level code. This isn’t about fancy algorithms; it’s about a mindset. A way of thinking that prepares your code for the future.

And hey, if you want a cheat sheet with all these principles plus the code examples I’m referencing, you can get it for free. Just sign up for my newsletter from the link in the description, and I’ll send it right over.

Ready? Let’s dive in.

1. Cohesion & Single Responsibility

This sounds academic, but it’s simple: every piece of code should have one job, and one reason to change.

High cohesion means you group related things together. A function does one thing. A class has one core responsibility. A module contains related classes.

Think about a UserManager class. A junior dev might cram everything in there: validating user input, saving the user to the database, sending a welcome email, and logging the activity. At first glance, it looks fine. But what happens when you want to change your database? Or swap your email service? You have to rip apart this massive, god-like class. It’s a nightmare.

The senior approach? Break it up. You’d have something like a UserValidator for input checks, a UserRepository for database access, an EmailService for the welcome email, and an ActivityLogger (the exact names are up to you).

Then, your main UserService class delegates the work to these other, specialized classes. Yes, it’s more files. It looks like overkill for a small project. I get it. But this is systems-level thinking. You’re anticipating future changes and making them easy. You can now swap out the database logic or the email provider without touching the core user service. That’s powerful.
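
A minimal sketch of that delegation, with illustrative names (not the exact code from the video):

class UserService:
    """Coordinates specialized collaborators; each has one reason to change."""

    def __init__(self, validator, repository, email_service, logger):
        self.validator = validator          # input validation only
        self.repository = repository        # database access only
        self.email_service = email_service  # welcome emails only
        self.logger = logger                # activity logging only

    def register(self, user_data):
        self.validator.validate(user_data)
        user = self.repository.save(user_data)
        self.email_service.send_welcome(user)
        self.logger.log(f"registered {user_data['email']}")
        return user

Swapping the database or email provider now means passing in a different collaborator; UserService itself never changes.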

2. Encapsulation & Abstraction

This is all about hiding the messy details. You want to expose the behavior of your code, not the raw data.

Imagine a simple BankAccount class. The naive way is to just have public attributes like balance and transactions. What could go wrong? Well, another developer (or you, on a Monday morning) could accidentally set the balance to a negative number. Or set the transactions list to a string. Chaos.

The solution is to protect your internal state. In Python, we use a leading underscore (e.g., _balance) as a signal: “Hey, this is internal. Please don’t touch it directly.”

Instead of letting people mess with the data, you provide methods: deposit(), withdraw(), get_balance(). Inside these methods, you can add protective logic. The deposit() method can check for negative amounts. The withdraw() method can check for sufficient funds.

The user of your class doesn’t need to know how it all works inside. They just need to know they can call deposit(), and it will just work. You’ve hidden the complexity and provided a simple, safe interface.
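
A small sketch of that interface (a guess at the shape, not the article’s exact code):

class BankAccount:
    def __init__(self, opening_balance=0):
        self._balance = opening_balance  # leading underscore: internal, hands off
        self._transactions = []

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit amount must be positive")
        self._balance += amount
        self._transactions.append(("deposit", amount))

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount
        self._transactions.append(("withdraw", amount))

    def get_balance(self):
        return self._balance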

3. Loose Coupling & Modularity

Coupling is how tightly connected your code components are. You want them to be as loosely coupled as possible. A change in one part shouldn’t send a ripple effect of breakages across the entire system.

Let’s go back to that email example. A tightly coupled OrderProcessor might create an instance of EmailSender directly inside itself. Now, that OrderProcessor is forever tied to that specific EmailSender class. What if you want to send an SMS instead? You have to change the OrderProcessor code.

The loosely coupled way is to rely on an “interface,” or what Python calls an Abstract Base Class (ABC). You define a generic Notifier class that says, “Anything that wants to be a notifier must have a send() method.”

Then, your OrderProcessor just asks for a Notifier object. It doesn’t care if it’s an EmailNotifier or an SmsNotifier or a CarrierPigeonNotifier. As long as the object you give it has a send() method, it will work. You’ve decoupled the OrderProcessor from the specific implementation of the notification. You can swap them in and out interchangeably.
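
In code, the idea looks roughly like this (the print calls stand in for real email and SMS APIs):

from abc import ABC, abstractmethod

class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> None:
        """Deliver a message somehow."""

class EmailNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"emailing: {message}")

class SmsNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"texting: {message}")

class OrderProcessor:
    def __init__(self, notifier: Notifier) -> None:
        self.notifier = notifier  # depends on the interface, not a concrete class

    def process(self, order_id: int) -> None:
        # ...order-handling logic goes here...
        self.notifier.send(f"Order {order_id} confirmed")

OrderProcessor(SmsNotifier()).process(42)  # swap in EmailNotifier without edits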


A quick pause. I want to thank boot.dev for sponsoring this discussion. It’s an online platform for backend development that’s way more interactive than just watching videos. You learn Python and Go by building real projects, right in your browser. It’s gamified, so you level up and unlock content, which is surprisingly addictive. The core content is free, and with the code techwithtim, you get 25% off the annual plan. It’s a great way to put these principles into practice. Now, back to it.

4. Reusability & Extensibility

This one’s a question you should always ask yourself: Can I add new functionality without editing existing code?

Think of a ReportGenerator function that has a giant if/elif/else block to handle different formats: if format == 'text', elif format == 'csv', elif format == 'html'. To add a JSON format, you have to go in and add another elif. This is not extensible.

The better way is, again, to use an abstract class. Create a ReportFormatter interface with a format() method. Then create separate classes: TextFormatter, CsvFormatter, HtmlFormatter, each with their own format() logic.

Your ReportGenerator now just takes any ReportFormatter object and calls its format() method. Want to add JSON support? You just create a new JsonFormatter class. You don’t have to touch the ReportGenerator at all. It’s extensible without being modified.
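
A rough sketch of that design (names are illustrative):

import json
from abc import ABC, abstractmethod

class ReportFormatter(ABC):
    @abstractmethod
    def format(self, data: dict) -> str: ...

class TextFormatter(ReportFormatter):
    def format(self, data: dict) -> str:
        return "\n".join(f"{key}: {value}" for key, value in data.items())

class JsonFormatter(ReportFormatter):
    # Adding JSON support meant adding this class -- nothing else changed.
    def format(self, data: dict) -> str:
        return json.dumps(data)

class ReportGenerator:
    def __init__(self, formatter: ReportFormatter) -> None:
        self.formatter = formatter

    def generate(self, data: dict) -> str:
        return self.formatter.format(data)

print(ReportGenerator(JsonFormatter()).generate({"revenue": 42}))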

5. Portability

This is the one everyone forgets. Will your code work on a different machine? On Linux instead of Windows? Without some weird version of C++ installed?

The most common mistake I see is hardcoding file paths. If you write C:\Users\Ahmed\data\input.txt, that code is now guaranteed to fail on every other computer in the world.

The solution is to use libraries like Python’s os and pathlib to build paths dynamically. And for things like API keys, database URLs, and other environment-specific settings, use environment variables. Don’t hardcode them! Create a .env file and load them at runtime. This makes your code portable and secure.
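
For example (paths and variable names here are illustrative):

import os
from pathlib import Path

# Build paths relative to the project, never from a hardcoded C:\Users\... root.
DATA_FILE = Path(__file__).parent / "data" / "input.txt"

# Environment-specific values come from the environment, not the source code.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
API_KEY = os.environ["API_KEY"]  # fail loudly if the secret isn't configured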

6. Defensibility

Write your code as if an idiot is going to use it. Because someday, that idiot will be you.

This means validating all inputs. Sanitizing data. Setting safe default values. Ask yourself, “What’s the worst that could happen if someone provides bad input?” and then guard against it.

In a payment processor, don’t have debug_mode=True as the default. Don’t set the maximum retries to 100. Don’t forget a timeout. These are unsafe defaults.

And for the love of all that is holy, validate your inputs! Don’t just assume the amount is a number or that the account_number is valid. Check it. Raise clear errors if it’s wrong. Protect your system from bad data.
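
Putting both ideas together in one hypothetical function signature:

def process_payment(amount, account_number, *, retries=3, timeout=10.0, debug=False):
    """Safe defaults: debug off, a handful of retries, and a timeout."""
    if not isinstance(amount, (int, float)) or amount <= 0:
        raise ValueError(f"amount must be a positive number, got {amount!r}")
    if not str(account_number).strip():
        raise ValueError("account_number must not be empty")
    # ...charge the account here, honoring retries and timeout...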

7. Maintainability & Testability

The most expensive part of software isn’t writing it; it’s maintaining it. And you can’t maintain what you can’t test.

Code that is easy to test is, by default, more maintainable.

Look at a complex calculate function that parses an expression, performs the math, handles errors, and writes to a log file all at once. How do you even begin to test that? There are a million edge cases.

The answer is to break it down. Have a separate OperationParser. Have simple add, subtract, multiply functions. Each of these small, pure components is incredibly easy to test. Your main calculate function then becomes a simple coordinator of these tested components.
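
A toy version of that decomposition, simplified to two operations:

def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

OPERATIONS = {"+": add, "-": subtract}

def parse(expression):
    """'3 + 4' -> ('+', 3.0, 4.0). Small enough to test exhaustively."""
    left, operator, right = expression.split()
    return operator, float(left), float(right)

def calculate(expression):
    """A thin coordinator over pieces that are already tested in isolation."""
    operator, a, b = parse(expression)
    return OPERATIONS[operator](a, b)

assert calculate("3 + 4") == 7.0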

8. Simplicity (KISS, DRY, YAGNI)

Finally, after all that, the highest goal is simplicity: keep it simple (KISS), don’t repeat yourself (DRY), and don’t build things before you actually need them (YAGNI).

Phew, that was a lot. But these patterns are what it takes to level up. It’s a shift from just getting things done to building things that last.

If you enjoyed this, let me know. I’d love to make more advanced videos like this one. See you in the next one.

November 11, 2025 09:03 PM UTC


PyCoder’s Weekly

Issue #708: Debugging Live Code, NiceGUI, Textual, and More (Nov. 11, 2025)

#708 – NOVEMBER 11, 2025
View in Browser »



Debugging Live Code With CPython 3.14

Python 3.14 added new capabilities to attach to and debug a running process. Learn what this means for debugging and examining your running code.
SURISTER

NiceGUI Goes 3.0

Talk Python interviews Rodja Trappe and Falko Schindler, creators of the NiceGUI toolkit. They talk about what it can do and how it works.
TALK PYTHON

AI Code Reviews Without the Noise


Sentry’s AI Code Review has caught more than 30,000 bugs before they hit production. 🤯 What it hasn’t caught: about a million spammy style nitpicks. Plus, it now predicts bugs 50% faster, and provides agent prompts to automate your fixes. Learn more about Sentry’s AI Code Review →
SENTRY sponsor

Building UIs in the Terminal With Python Textual

Learn to build rich, interactive terminal UIs in Python with Textual: a powerful library for modern, event-driven TUIs.
REAL PYTHON course

PEP 810: Explicit Lazy Imports (Accepted)

PYTHON.ORG

PEP 791: math.integer — submodule for integer-specific mathematics functions (Final)

PYTHON.ORG

PyCon US, Long Beach CA, 2026: Call for Proposals Open

PYCON.BLOGSPOT.COM

Django Security Release: 5.2.8, 5.1.14, and 4.2.26

DJANGO SOFTWARE FOUNDATION

EuroPython 2025 Videos Available

YOUTUBE.COM video

Python Jobs

Python Video Course Instructor (Anywhere)

Real Python

Python Tutorial Writer (Anywhere)

Real Python

More Python Jobs >>>

Articles & Tutorials

How Often Does Python Allocate?

How often does Python allocate? The answer is “very often”. This post demonstrates how you can see that for yourself. See also the associated HN discussion
ZACK RADISIC

Improving Security and Integrity of Python Package Archives

Python packages are built on top of archive formats like ZIP which can be problematic as features of the format can be abused. A recent white paper outlines dangers to PyPI and what can be done about it.
PYTHON SOFTWARE FOUNDATION

The 2025 AI Stack, Unpacked


Temporal’s industry report explores how teams like Snap, Descript, and ZoomInfo are building production-ready AI systems, including what’s working, what’s breaking, and what’s next. Download today to see how your stack compares →
TEMPORAL sponsor

10 Smart Performance Hacks for Faster Python Code

Some practical optimization hacks, from data structures to built-in modules, that boost speed, reduce overhead, and keep your Python code clean.
DIDO GRIGOROV

Understanding the PSF’s Current Financial Outlook

A summary of the Python Software Foundation’s current financial outlook and what that means to the variety of community groups it supports.
PYTHON SOFTWARE FOUNDATION

__dict__: Where Python Stores Attributes

Most Python objects store their attributes in a __dict__ dictionary. Modules and classes always use __dict__, but not everything does.
TREY HUNNER

My Favorite Django Packages

A descriptive list of Matthias’s favorite Django packages divided into areas, including core helpers, data structures, CMS, PDFs, and more.
MATTHIAS KESTENHOLZ

A Close Look at a FastAPI Example Application

Set up an example FastAPI app, add path and query parameters, and handle CRUD operations with Pydantic for clean, validated endpoints.
REAL PYTHON

Quiz: A Close Look at a FastAPI Example Application

Practice FastAPI basics with path parameters, request bodies, async endpoints, and CORS. Build confidence to design and test simple Python web APIs.
REAL PYTHON

An Annual Release Cycle for Django

Carlton wants Django to move to an annual release cycle. This post explains why he thinks this way and what the benefits might be.
CARLTON GIBSON

Behave: ML Tests With Behavior-Driven Development

This walkthrough shows how to use the Behave library to bring behavior-driven testing to data and machine learning Python projects.
CODECUT.AI • Shared by Khuyen Tran

Polars and Pandas: Working With the Data-Frame

This post compares the syntax of Polars and pandas with a quick peek at the changes coming in pandas 3.0.
JUMPINGRIVERS.COM • Shared by Aida Gjoka

Projects & Code

moneyflow: Personal Finance Data Interface for Power Users

GITHUB.COM/WESM

wove: Beautiful Python Async

GITHUB.COM/CURVEDINF

tiny8: A Tiny CPU Simulator Written in Python

GITHUB.COM/SQL-HKR

FuncToWeb: Transform Python Functions Into a Web Interface

GITHUB.COM/OFFERRALL

dj-spinners: Pure SVG Loading Spinners for Django

GITHUB.COM/ADAMGHILL

Events

Weekly Real Python Office Hours Q&A (Virtual)

November 12, 2025
REALPYTHON.COM

Python Leiden User Group

November 13, 2025
PYTHONLEIDEN.NL

Python Kino-Barcamp Südost

November 14 to November 17, 2025
BARCAMPS.EU

Python Atlanta

November 14, 2025
MEETUP.COM

PyCon Wroclaw 2025

November 15 to November 16, 2025
PYCONWROCLAW.COM

PyCon Ireland 2025

November 15 to November 17, 2025
PYCON.IE


Happy Pythoning!
This was PyCoder’s Weekly Issue #708.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

November 11, 2025 07:30 PM UTC


Daniel Roy Greenfeld

Visiting Tokyo, Japan from November 12 to 24

I'm excited to announce that Audrey and I will be visiting Japan from November 12 to November 24, 2025! This will be our first time in Japan, and we can't wait to explore Tokyo. Yes, we'll be in Tokyo for most of it, near the Shinjuku area, working from coffee shops, meeting some colleagues, and exploring the city during our free time. Our six-year-old daughter is with us, so our explorations will be family-friendly.

Unfortunately, we'll be between Python meetups in the Tokyo area. However, if you are in Tokyo and write software in any shape or form, and would like to get together for coffee or a meal, please let me know!

If you do Brazilian Jiu-Jitsu in Tokyo, please let me know as well! I'd love to drop by a gym while I'm there.

November 11, 2025 02:45 PM UTC


Real Python

Python Operators and Expressions

Python operators enable you to perform computations by combining objects and operators into expressions. Understanding Python operators is essential for manipulating data effectively.

This video course covers arithmetic, comparison, Boolean, identity, membership, bitwise, concatenation, and repetition operators, along with augmented assignment operators. You’ll also learn how to build expressions using these operators and explore operator precedence to understand the order of operations in complex expressions.

By the end of this video course, you’ll understand how each of these operator groups works and how to combine them into correct, readable expressions.



November 11, 2025 02:00 PM UTC


Glyph Lefkowitz

The “Dependency Cutout” Workflow Pattern, Part I

Tell me if you’ve heard this one before.

You’re working on an application. Let’s call it “FooApp”. FooApp has a dependency on an open source library, let’s call it “LibBar”. You find a bug in LibBar that affects FooApp.

To envisage the best possible version of this scenario, let’s say you actively like LibBar, both technically and socially. You’ve contributed to it in the past. But this bug is causing production issues in FooApp today, and LibBar’s release schedule is quarterly. FooApp is your job; LibBar is (at best) your hobby. Blocking on the full upstream contribution cycle and waiting for a release is an absolute non-starter.

What do you do?

There are a few common reactions to this type of scenario, all of which are bad options.

I will enumerate them specifically here, because I suspect that some of them may resonate with many readers:

  1. Find an alternative to LibBar, and switch to it.

This is a bad idea because migrating away from a core infrastructure component could be extremely expensive.

  2. Vendor LibBar into your codebase and fix your vendored version.

    This is a bad idea because carrying this one fix now requires you to maintain all the tooling associated with a monorepo[1]: you have to be able to start pulling in new versions from LibBar regularly, reconcile your changes even though you now have a separate version history on your imported version, and so on.

  3. Monkey-patch LibBar to include your fix.

    This is a bad idea because you are now extremely tightly coupled to a specific version of LibBar. By modifying LibBar internally like this, you’re inherently violating its compatibility contract, in a way which is going to be extremely difficult to test. You can test this change, of course, but as LibBar changes, you will need to replicate any relevant portions of its test suite (which may be its entire test suite) in FooApp. Lots of potential duplication of effort there.

  4. Implement a workaround in your own code, rather than fixing it.

    This is a bad idea because you are distorting the responsibility for correct behavior. LibBar is supposed to do LibBar’s job, and unless you have a full wrapper for it in your own codebase, other engineers (including “yourself, personally”) might later forget to go through the alternate, workaround codepath, and invoke the buggy LibBar behavior again in some new place.

  5. Implement the fix upstream in LibBar anyway, because that’s the Right Thing To Do, and burn credibility with management while you anxiously wait for a release with the bug in production.

    This is a bad idea because you are betraying your users — by allowing the buggy behavior to persist — for the workflow convenience of your dependency providers. Your users are probably giving you money, and trusting you with their data. This means you have both ethical and economic obligations to consider their interests.

    As much as it’s nice to participate in the open source community and take on an appropriate level of burden to maintain the commons, this cannot sustainably be at the explicit expense of the population you serve directly.

    Even if we only care about the open source maintainers here, there’s still a problem: as you are likely to come under immediate pressure to ship your changes, you will inevitably relay at least a bit of that stress to the maintainers. Even if you try to be exceedingly polite, the maintainers will know that you are coming under fire for not having shipped the fix yet, and are likely to feel an even greater burden of obligation to ship your code fast.

    Much as it’s good to contribute the fix, it’s not great to put this on the maintainers.

The respective incentive structures of software development — specifically, of corporate application development and open source infrastructure development — make options 1-4 very common.

On the corporate / application side, these issues are:

But there are problems on the open source side as well. Those problems are all derived from one big issue: because we’re often working with relatively small sums of money, it’s hard for upstream open source developers to consume either money or patches from application developers. It’s nice to say that you should contribute money to your dependencies, and you absolutely should, but the cost-benefit function is discontinuous. Before a project reaches the fiscal threshold where it can be at least one person’s full-time job to worry about this stuff, there’s often no-one responsible in the first place. Developers will therefore gravitate to the issues that are either fun, or relevant to their own job.

These mutually-reinforcing incentive structures are a big reason that users of open source infrastructure, even teams who work at corporate users with zillions of dollars, don’t reliably contribute back.

The Answer We Want

All those options are bad. If we had a good option, what would it look like?

It is both practically necessary[3] and morally required[4] for you to have a way to temporarily rely on a modified version of an open source dependency, without permanently diverging.

Below, I will describe a desirable abstract workflow for achieving this goal.

Step 0: Report the Problem

Before you get started with any of these other steps, write up a clear description of the problem and report it to the project as an issue; specifically, in contrast to writing it up as a pull request. Describe the problem before submitting a solution.

You may not be able to wait for a volunteer-run open source project to respond to your request, but you should at least tell the project what you’re planning on doing.

If you don’t hear back from them at all, you will have at least made sure to comprehensively describe your issue and strategy beforehand, which will provide some clarity and focus to your changes.

If you do hear back from them, in the worst case scenario, you may discover that a hard fork will be necessary because they don’t consider your issue valid, but even that information will save you time, if you know it before you get started. In the best case, you may get a reply from the project telling you that you’ve misunderstood its functionality and that there is already a configuration parameter or usage pattern that will resolve your problems with no new code. But in all cases, you will benefit from early coordination on what needs fixing before you get to how to fix it.

Step 1: Source Code and CI Setup

Fork the source code for your upstream dependency to a writable location where it can live at least for the duration of this one bug-fix, and possibly for the duration of your application’s use of the dependency. After all, you might want to fix more than one bug in LibBar.

You want to have a place where you can put your edits, that will be version controlled and code reviewed according to your normal development process. This probably means you’ll need to have your own main branch that diverges from your upstream’s main branch.

Remember: you’re going to need to deploy this to your production, so testing gates that your upstream only applies to final releases of LibBar will need to be applied to every commit here.

Depending on your LibBar’s own development process, this may result in slightly unusual configurations where, for example, your fixes are written against the last LibBar release tag, rather than its current[5] main; if the project has a branch-freshness requirement, you might need two branches, one for your upstream PR (based on main) and one for your own use (based on the release branch with your changes).

Ideally for projects with really good CI and a strong “keep main release-ready at all times” policy, you can deploy straight from a development branch, but it’s good to take a moment to consider this before you get started. It’s usually easier to rebase changes from an older HEAD onto a newer one than it is to go backwards.

Speaking of CI, you will want to have your own CI system. The fact that GitHub Actions has become a de-facto lingua franca of continuous integration means that this step may be quite simple, and your forked repo can just run its own instance.

Optional Bonus Step 1a: Artifact Management

If you have an in-house artifact repository, you should set that up for your dependency too, and upload your own build artifacts to it. You can often treat your modified dependency as an extension of your own source tree and install from a GitHub URL, but if you’ve already gone to the trouble of having an in-house package repository, you can pretend you’ve taken over maintenance of the upstream package temporarily (which you kind of have) and leverage those workflows for caching and build-time savings as you would with any other internal repo.

Step 2: Do The Fix

Now that you’ve got somewhere to edit LibBar’s code, you will want to actually fix the bug.

Step 2a: Local Filesystem Setup

Before you have a production version on your own deployed branch, you’ll want to test locally, which means having both repositories in a single integrated development environment.

At this point, you will want to have a local filesystem reference to your LibBar dependency, so that you can make real-time edits, without going through a slow cycle of pushing to a branch in your LibBar fork, pushing to a FooApp branch, and waiting for all of CI to run on both.

This is useful in both directions: as you prepare the FooApp branch that makes any necessary updates on that end, you’ll want to make sure that FooApp can exercise the LibBar fix in any integration tests. As you work on the LibBar fix itself, you’ll also want to be able to use FooApp to exercise the code and see if you’ve missed anything - and this, you wouldn’t get in CI, since LibBar can’t depend on FooApp itself.

In short, you want to be able to treat both projects as an integrated development environment, with support from your usual testing and debugging tools, just as much as you want your deployment output to be an integrated artifact.

Step 2b: Branch Setup for PR

However, for continuous integration to work, you will also need to have a remote resource reference of some kind from FooApp’s branch to LibBar. You will need 2 pull requests: the first to land your LibBar changes to your internal LibBar fork and make sure it’s passing its own tests, and then a second PR to switch your LibBar dependency from the public repository to your internal fork.

At this step it is very important to ensure that there is an issue filed on your own internal backlog to drop your LibBar fork. You do not want to lose track of this work; it is technical debt that must be addressed.

Until it’s addressed, automated tools like Dependabot will not be able to apply security updates to LibBar for you; you’re going to need to manually integrate every upstream change. This type of work is itself very easy to drop or lose track of, so you might just end up stuck on a vulnerable version.

Step 3: Deploy Internally

Now that you’re confident that the fix will work, and that your temporarily-internally-maintained version of LibBar isn’t going to break anything on your site, it’s time to deploy.

Some deployment heritage should help to provide some evidence that your fix is ready to land in LibBar, but at the next step, please remember that your production environment isn’t necessarily emblematic of that of all LibBar users.

Step 4: Propose Externally

You’ve got the fix, you’ve tested the fix, you’ve got the fix in your own production, you’ve told upstream you want to send them some changes. Now, it’s time to make the pull request.

You’re likely going to get some feedback on the PR, even if you think it’s already ready to go; as I said, despite having been proven in your production environment, you may get feedback about additional concerns from other users that you’ll need to address before LibBar’s maintainers can land it.

As you process the feedback, make sure that each new iteration of your branch gets re-deployed to your own production. It would be a huge bummer to go through all this trouble, and then end up unable to deploy the next publicly released version of LibBar within FooApp because you forgot to test that your responses to feedback still worked on your own environment.

Step 4a: Hurry Up And Wait

If you’re lucky, upstream will land your changes to LibBar. But, there’s still no release version available. Here, you’ll have to stay in a holding pattern until upstream can finalize the release on their end.

Depending on some particulars, it might make sense at this point to archive your internal LibBar repository and move your pinned release version to a git hash of the LibBar version where your fix landed, in their repository.

Before you do this, check in with the LibBar core team and make sure that they understand that’s what you’re doing and they don’t have any wacky workflows which may involve rebasing or eliding that commit as part of their release process.

Step 5: Unwind Everything

Finally, you eventually want to stop carrying any patches and move back to an official released version that integrates your fix.

You want to do this because this is what the upstream will expect when you are reporting bugs. Part of the benefit of using open source is benefiting from the collective work to do bug-fixes and such, so you don’t want to be stuck off on a pinned git hash that the developers do not support for anyone else.

As I said in step 2b[6], make sure to maintain a tracking task for doing this work, because leaving this sort of relatively easy-to-clean-up technical debt lying around is something that can potentially create a lot of aggravation for no particular benefit. Make sure to put your internal LibBar repository into an appropriate state at this point as well.

Up Next

This is part 1 of a 2-part series. In part 2, I will explore in depth how to execute this workflow specifically for Python packages, using some popular tools. I’ll discuss my own workflow, standards like PEP 517 and pyproject.toml, and of course, by the popular demand that I just know will come, uv.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!


  1. if you already have all the tooling associated with a monorepo, including the ability to manage divergence and reintegrate patches with upstream, you already have the higher-overhead version of the workflow I am going to propose, so, never mind. but chances are you don’t have that, very few companies do. 

  2. In any business where one must wrangle with Legal, 3 hours is a wildly optimistic estimate. 

  3. c.f. @mcc@mastodon.social 

  4. c.f. @geofft@mastodon.social 

  5. In an ideal world every project would keep its main branch ready to release at all times, no matter what, but we do not live in an ideal world. 

  6. In this case, there is no question. It’s 2b only, no not-2b. 

November 11, 2025 01:44 AM UTC


Ahmed Bouchefra

Tired of Pip and Venv? Meet UV, Your New All-in-One Python Tool

Hey there, how’s it going?

Let’s talk about the Python world for a second. If you’ve been around for a while, you know the drill. You start a new project, and the ritual begins: create a directory, set up a virtual environment with venv, remember to activate it, pip install your packages, and then pip freeze everything into a requirements.txt file.

It works. It’s fine. But it always felt a bit… clunky. A lot of steps. A lot to explain to newcomers.

Well, I’ve been playing with a new tool that’s been gaining a ton of steam, and honestly? I don’t think I’m going back. It’s called UV, and it comes from Astral, the same team behind the super-popular linter, ruff.

The goal here is ambitious. UV wants to be the single tool that replaces pip, venv, pip-tools, and even pipx. It’s an installer, an environment manager, and a tool runner all rolled into one. And because it’s written in Rust, it’s ridiculously fast.

So, let’s walk through what a typical project setup looks like the old way… and then see how much simpler it gets with UV.

The Old Way: The Pip & Venv Dance

Okay, so let’s say we’re starting a new Flask app. The old-school workflow would look something like this:

  1. mkdir old-way-project && cd old-way-project
  2. python3 -m venv .venv (Create the virtual environment)
  3. source .venv/bin/activate (Activate it… don’t forget!)
  4. pip install flask requests (Install our packages)
  5. pip freeze > requirements.txt (Save our dependencies for later)

It’s a process we’ve all done a hundred times. But it’s also a process with a few different tools and concepts you have to juggle. For someone just starting out, it’s a lot to take in.

The New Way: Just uv

Now, let’s do the same thing with UV.

Instead of creating a directory myself, I can just run:

uv init new-app

This one command creates a new directory and sets up a modern Python project structure inside it (you just cd in afterward). It initializes a Git repository, creates a sensible .gitignore, and gives us a pyproject.toml file. This is the modern way to manage project metadata and dependencies.

But wait… where’s the virtual environment? Where’s the activation step?

Here’s the magic. You don’t have to worry about it.

Let’s add Flask and Requests to our new project. Instead of pip, we use uv add:

uv add flask requests

When I run this, a few amazing things happen:

  1. UV sees I don’t have a virtual environment yet, so it creates one for me automatically.
  2. It installs Flask and Requests into that environment at lightning speed.
  3. It updates my pyproject.toml file to list flask and requests as dependencies.
  4. It creates a uv.lock file, which records the exact versions of every single package and sub-dependency. This is what solves the classic “but it works on my machine!” problem.

All of that, with one command, and I never had to type source ... activate.

Running Your Code (This Is the Coolest Part)

“Okay,” you might be thinking, “but how do I run my code if the environment isn’t active?”

Simple. You just tell UV to run it for you.

uv run main.py

UV finds the project’s virtual environment and runs your script inside it, even though your main shell doesn’t have it activated.

Now, get ready for the part that really blew my mind.

Let’s say I accidentally delete my virtual environment.

rm -rf .venv

Normally, this would be a disaster. I’d have to recreate the environment, activate it, and reinstall everything from my requirements.txt file. It would be a whole thing.

But with UV? I just run the same command again:

uv run main.py

UV sees the environment is gone. It reads the uv.lock file, instantly recreates the exact same environment with the exact same packages, and then runs the code. It all happens in a couple of seconds. It’s just… seamless.

If you’re sharing the project with a teammate, they just clone it and run uv sync. That’s it. Their environment is ready to go, perfectly matching yours.

It Even Replaces Pipx for Tools

Another thing I love is how it handles command-line tools. I used to use pipx to install global tools like linters and formatters. UV has that built-in, too.

Want to install ruff?

uv tool install ruff

This installs it in an isolated environment but makes it available everywhere.

But even better is the uvx command, which lets you run a tool without permanently installing it.

Let’s say I want to quickly check my code with ruff but I don’t want to install it.

uvx ruff check .

UV will download ruff to a temporary environment, run the command, and then clean up after itself. It’s perfect for trying out new tools or running one-off commands without cluttering your system.

My Takeaway

I know, I know… another new tool to learn. It can feel overwhelming. But this one is different. It doesn’t just add another layer; it simplifies and replaces a whole stack of existing tools with something faster, smarter, and more intuitive.

The smart caching alone is a huge win. If you have ten projects that all use Flask, UV only stores it once on your disk, saving a ton of space and making new project setups almost instantaneous.

I’ve fully switched my workflow over to UV, and I can’t see myself going back. It just gets out of the way and lets me focus on the code.

November 11, 2025 12:00 AM UTC

The Anatomy of a Scalable Python Project

Ever start a Python project that feels clean and simple, only to have it turn into a tangled mess a few months later? Yeah, I’ve been there more times than I can count.

Today, I want to pull back the curtain and show you the anatomy of a Python project that’s built to last. This is the setup I use for all my production projects. It’s a blueprint that helps keep things sane, organized, and ready to grow without giving you a massive headache.

We’ll walk through everything—folder structure, config, logging, testing, and tooling. The whole package.

So, What Does “Scalable” Even Mean?

It’s a word that gets thrown around a lot, right? “Scalable.” But what does it actually mean in practice?

For me, it boils down to a few things:

  1. Scales with Size: Your codebase is going to grow. That’s a good thing! It means you’re adding features. A scalable structure means you don’t have to constantly refactor everything just to add something new. The foundation is already there.
  2. Scales with Your Team: If you bring on another developer, they shouldn’t need a two-week onboarding just to figure out where to put a new function. The boundaries should be clear, and the layout should be predictable.
  3. Scales with Environments: Moving from your local machine to staging and then to production should be… well, boring. In a good way. Your config should be centralized, making environment switching a non-event.
  4. Scales with Speed: Your local setup should be a breeze. Tests should run fast. Docker should just work. You want to eliminate friction so you can actually focus on building things.

Over the years, I’ve worked with everything from TypeScript to Java to C++, and while the specifics change, the principles of good structure are universal. This is the flavor that I’ve found works beautifully for Python.

The Blueprint: A Balanced Folder Structure

You want just enough structure to keep things organized, but not so much that you’re digging through ten nested folders to find a single file. It’s a balance.

Here’s the high-level view:

/
├── app/      # Your application's source code
├── tests/    # Your tests
├── .env      # Environment variables (for local dev)
├── Dockerfile
├── docker-compose.yml
├── pyproject.toml
└── ... other config files

Right away, you see the most important separation: your app code and your tests live in their own top-level directories. This is crucial. Don’t mix them.

Diving Into the app Folder

This is where the magic happens. Inside app, I follow a simple pattern. For this example, we’re looking at a FastAPI app, but the concepts apply anywhere.

app/
├── api/
│   └── v1/
│       └── users.py   # The HTTP layer (routers)
├── core/
│   ├── config.py    # Centralized configuration
│   └── logging.py   # Logging setup
├── db/
│   └── schema.py    # Database models (e.g., SQLAlchemy)
├── models/
│   └── user.py      # Data contracts (e.g., Pydantic schemas)
├── services/
│   └── user.py      # The business logic!
└── main.py          # App entry point

Let’s break it down.

main.py - The Entry Point

This file is kept as lean as possible. Seriously, there’s almost nothing in it. It just initializes the FastAPI app and registers the routers from the api folder. That’s it.
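
Something like this sketch (module names follow the layout above):

# app/main.py -- create the app and register routers; nothing else.
from fastapi import FastAPI

from app.api.v1 import users

app = FastAPI()
app.include_router(users.router, prefix="/api/v1")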

api/ - The Thin HTTP Layer

This is where your routes live. If you look inside api/v1/users.py, you won’t find any business logic. You’ll just see the standard GET, POST, PUT, DELETE endpoints. Their only job is to handle the HTTP request and response. They act as a thin translator, calling into the real logic somewhere else.
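
A sketch of that thin layer; get_user_service is an assumed helper that builds the service with a database session:

# app/api/v1/users.py -- translate HTTP to service calls and back.
from fastapi import APIRouter, Depends

from app.services.user import UserService, get_user_service

router = APIRouter()

@router.get("/users")
def list_users(service: UserService = Depends(get_user_service)):
    return service.list_users()  # no business logic at this layer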

core/ - The Cross-Cutting Concerns

This folder is for things that are used all over your application: config.py centralizes settings (environment variables, defaults, secrets), and logging.py sets up consistent logging once for the whole app.

db/ and models/ - The Data Layers

db/ holds your database models (e.g., SQLAlchemy classes in schema.py), while models/ holds your data contracts (e.g., Pydantic schemas) that define the shapes your API accepts and returns. Keeping them separate means your API contract can evolve independently of your storage layout.

services/ - The Heart of Your Application

This is the most important folder, in my opinion. This is where your actual business logic lives. The UserService takes a database session and does the real work: querying for users, creating a new user, running validation logic, etc.
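
Sketched out, assuming SQLAlchemy sessions and a User model in db/schema.py:

# app/services/user.py -- business logic, no HTTP in sight.
from sqlalchemy.orm import Session

from app.db.schema import User

class UserService:
    def __init__(self, session: Session) -> None:
        self.session = session

    def list_users(self) -> list[dict]:
        # Plain dicts for brevity; the real app would return the
        # Pydantic schemas from models/.
        users = self.session.query(User).all()
        return [{"name": u.name, "email": u.email} for u in users]

    def create_user(self, name: str, email: str) -> dict:
        # Validation and other business rules live here, not in the router.
        user = User(name=name, email=email)
        self.session.add(user)
        self.session.commit()
        return {"name": user.name, "email": user.email}

def get_user_service():
    """Dependency provider; the real one would yield a request-scoped session."""
    ...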

Why is this so great? Because the business logic doesn’t know anything about HTTP or FastAPI. You can call it from a CLI script, a background job, or a test without spinning up a web server, and you can unit test it with nothing but a database session.

Let’s Talk About Testing

Your tests folder should mirror your app folder’s structure. This makes it incredibly easy to find the tests for any given piece of code.

tests/
└── api/
    └── v1/
        └── test_users.py

For testing, I use an in-memory SQLite database. This keeps my tests completely isolated from my production database and makes them run super fast.

FastAPI has a fantastic dependency injection system that makes testing a dream. In my tests, I can just “override” the dependency that provides the database session and swap it with my in-memory test database. Now, when I run a test that hits my API, it’s running against a temporary, clean database every single time.
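
Here’s what that override looks like, reusing the names from the sketches above (all illustrative):

# tests/api/v1/test_users.py -- run the API against in-memory SQLite.
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from app.db.schema import Base  # assumed declarative base for the models
from app.main import app
from app.services.user import UserService, get_user_service

engine = create_engine("sqlite://")  # in-memory: fast and fully isolated
Base.metadata.create_all(engine)
TestSession = sessionmaker(bind=engine)

app.dependency_overrides[get_user_service] = lambda: UserService(TestSession())

def test_list_users_starts_empty():
    client = TestClient(app)
    response = client.get("/api/v1/users")
    assert response.status_code == 200
    assert response.json() == []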

Tooling That Ties It All Together

How It All Flows Together

So, let’s trace a request:

  1. A GET /users request hits the router in api/v1/users.py.
  2. FastAPI’s dependency injection system automatically creates a UserService instance, giving it a fresh database session.
  3. The route calls the list_users method on the service.
  4. The service runs a query against the database, gets the results, and returns them.
  5. The router takes those results, formats them as a JSON response, and sends it back to the client.

The beauty of this is the clean separation of concerns. The API layer handles HTTP. The service layer handles business logic. The database layer handles persistence.

This structure lets you start small and add complexity later without making a mess. The boundaries are clear, which makes development faster, testing easier, and onboarding new team members a whole lot smoother.

Of course, this is a starting point. You might need a scripts/ folder for data migrations or other custom tasks. But this foundation… it’s solid. It’s been a game-changer for me, and I hope it can be for you too.

November 11, 2025 12:00 AM UTC

November 10, 2025


Brian Okken

Explore Python dependencies with `pipdeptree` and `uv pip tree`

Sometimes you just want to know about your dependencies, and their dependencies.

I’ve been using pipdeptree for a while, but recently switched to uv pip tree.
Let’s take a look at both tools.

pipdeptree

pipdeptree is pip installable, but I don’t want pipdeptree itself to be reported alongside everything else installed, so I usually install it outside of a project, where it’s available system wide.
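
One way to do that (pipx is just my example; uv tool install pipdeptree works the same way):

pipx install pipdeptree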

usage
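
To show the tree for whatever environment you’re currently in:

pipdeptree --python auto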

The --python auto tells pipdeptree to look at the current environment.

November 10, 2025 11:24 PM UTC


Patrick Altman

Using Vite with Vue and Django


I've been building web applications with Vue and Django for a long time. I don't remember my first one—certainly before Vite was available. As soon as I switched to using Vite, I ended up building a template tag to join the frontend and backend together rather than having separate projects. I've always found it simpler to have Django serve everything.

While preparing this post to share the latest version of what is essentially a small set of files we copy between projects, I started exploring the idea of open-sourcing the solution.

The goal was twofold:

  1. To create a reusable package instead of relying on copy-and-paste code, and
  2. To contribute something back to the open-source community.

In the process, I stumbled upon an excellent existing project — django-vite.

So now I think we might give django-vite a serious look as something to switch to, adding a Redis backend along the way.

For now though, I think it's still worth sharing our simple solution in case it's a better fit for you (I haven't fully examined django-vite yet).

The Problem

The problem we are trying to solve: we use Vite to bundle and build our Vue frontend, yet we want Django to serve the bundled JS and CSS entry points automatically. Running vite build will yield output like:

main-2uqS21f4.js
main-BCI6Z1XL.css

Without any extra tooling, we'd have to commit the build output and hard-code these cache-busting file names into the base template every time we made a change that could affect the bundle.

This was completely unacceptable.

The Solution

Vite can generate a manifest file that maps each cache-busting file name to its base name in a machine-readable format. This lets builds happen on CI/CD as part of our Docker image build; Django then reads the manifest Vite produced, keeping everything neat and simple.

Here is the key setting in vite.config.ts:

{
  // ...
  build: {
    manifest: true,
    // ...
  }
  // ...
}

This will produce a file in your output folder (under .vite/) called manifest.json.

Here is a snippet; note that you typically won’t need to inspect it manually:

"main.ts": {
    "file": "assets/main-2uqS21f4.js",
    "name": "main",
    "src": "main.ts",
    "isEntry": true,
    "imports": [
      "_runtime-D84vrshd.js",
      "_forms-OJiVtksU.js",
      "_analytics-CCPQRNnj.js",
      "_forms-pro-qreHBaUb.js",
      "_icons-3wXMhf1p.js",
      "_pv-DzJUpav-.js",
      "_vue-mapbox-BRpo1ix7.js",
      "_mapbox--vATkUHK.js"
    ],
    "dynamicImports": [
      "views/HomeView.vue",
      "views/dispatch/DispatchNewOrdersView.vue",
      ...

This is the key to tying things together dynamically. We constructed a template tag so that we could dynamically add our entry point in our base template:

{% load vite %}

<html>
  <head>
    <!-- ... base head template stuff -->
    {% vite_styles 'main.ts' %}
  </head>
  <body>
    <!-- ... base template stuff -->
  
    {% vite_scripts 'main.ts' %}
  </body>
</html>

The idea behind this type of solution is conceptually pretty simple: the template tag reads manifest.json, finds the referenced entry point main.ts, and returns the staticfiles-based path from the file key (e.g., assets/main-2uqS21f4.js) before rendering the template.

Given this, we need to reduce file I/O on every request, and since we'll use caching for that, we must also handle cache invalidation. Every deployment is a candidate for invalidation, because the bundle can change at deployment time, but not in between.

We'll solve the caching with Redis; since we have multiple nodes in our web app cluster, local memory isn't an option. We'll solve cache invalidation with a management command that runs at the end of each deployment. It maintains a short stack (keeping only the latest n versions) instead of deleting outright.

We use a stack so we can push the new manifest version to the top while leaving older references around. Requests to updated nodes can then fetch the latest bundle, while older nodes keep working and serving their existing (older) bundle. This enables rolling upgrades on our cluster, letting us push updates in the middle of a workday without disrupting end users.

All of this is done with basically a template tag Python module and a management command.

Template Tag

We have this template tag module stored as vite.py, so that you can load it with {% load vite %}, which then exposes the {% vite_styles %} and {% vite_scripts %} template tags.

import json
import re
import typing

from django import template
from django.conf import settings
from django.core.cache import cache
from django.templatetags.static import static
from django.utils.safestring import mark_safe


if typing.TYPE_CHECKING:  # pragma: no cover
    from django.utils.safestring import SafeString

    ChunkType = typing.TypedDict("ChunkType", {"file": str, "css": list[str], "imports": list[str]})
    ManifestType = typing.Mapping[str, ChunkType]
    ScriptsStylesType = typing.Tuple[list[str], list[str]]


DEV_SERVER_ROOT = "http://localhost:3001/static"


register = template.Library()


def is_absolute_url(url: str) -> bool:
    return re.match("^https?://", url) is not None


def set_manifest() -> "ManifestType":
    with open(settings.MANIFEST_LOADER["output_path"]) as fp:
        manifest: "ManifestType" = json.load(fp)

    cache.set(settings.MANIFEST_LOADER["cache_key"], manifest, None)
    return manifest


def get_manifest() -> "ManifestType":
    if manifest := cache.get(settings.MANIFEST_LOADER["cache_key"]):
        if settings.MANIFEST_LOADER["cache"]:
            return manifest

    return set_manifest()


def vite_manifest(entries_names: typing.Sequence[str]) -> "ScriptsStylesType":
    if settings.DEBUG:
        scripts = [f"{DEV_SERVER_ROOT}/@vite/client"] + [
            f"{DEV_SERVER_ROOT}/{name}"
            for name in entries_names
        ]
        styles = []
        return scripts, styles

    manifest = get_manifest()

    _processed = set()

    def _process_entries(names: typing.Sequence[str]) -> "ScriptsStylesType":
        scripts = []
        styles = []

        for name in names:
            if name in _processed:
                continue
            chunk = manifest[name]

            import_scripts, import_styles = _process_entries(chunk.get("imports", []))
            scripts.extend(import_scripts)
            styles.extend(import_styles)

            scripts.append(chunk["file"])
            styles.extend(chunk.get("css", []))

            _processed.add(name)
        return scripts, styles

    return _process_entries(entries_names)


@register.simple_tag(name="vite_styles")
def vite_styles(*entries_names: str) -> "SafeString":
    _, styles = vite_manifest(entries_names)
    styles = map(lambda href: href if is_absolute_url(href) else static(href), styles)
    return mark_safe("\n".join(map(lambda href: f'<link rel="stylesheet" href="{href}" />', styles)))  # nosec


@register.simple_tag(name="vite_scripts")
def vite_scripts(*entries_names: str) -> "SafeString":
    scripts, _ = vite_manifest(entries_names)
    scripts = map(lambda src: src if is_absolute_url(src) else static(src), scripts)
    return mark_safe("\n".join(map(lambda src: f'<script type="module" src="{src}"></script>', scripts)))  # nosec

Here are a few features this supports:

  1. If running in local development, it bypasses the manifest entirely, loading @vite/client and pointing to the dev server running in a Docker Compose instance, so we get HMR (Hot Module Replacement).
  2. It relies on settings that control whether caching is enabled and what the cache key is (we build the key from RELEASE_VERSION, which is pulled from the environment and tied to the git SHA or tag; see the sketch after the settings below).
  3. It leverages the Django cache backend for getting from and setting to the cache, independently of what the actual cache backend is. This layer of indirection only works for this tag, though, not for our cache invalidation management command.

The settings we use:

MANIFEST_LOADER = {
    "cache": not DEBUG,
    "cache_key": f"vite_manifest:{RELEASE_VERSION}",
    "output_path": f"{STATIC_ROOT}/.vite/manifest.json",
}
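
For context, here's a hedged sketch of how RELEASE_VERSION might be defined in settings.py; the environment variable name and the "dev" fallback are our conventions, with CI/CD setting the value to the git SHA or tag at image build time:

import os

# Set by CI/CD at image build time; tied to the git SHA or tag.
# The "dev" fallback is an assumption for local runs.
RELEASE_VERSION = os.environ.get("RELEASE_VERSION", "dev")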

The management command gets a bit fancy with invalidation, mainly to support running a multi-node cluster.

If you run a single web instance, this probably doesn't offer much benefit.

However, we encountered issues when spinning up additional nodes: some were updated, others weren't, and we saw 500 errors during deployment because the cache needed to support both versions at once.

Our short-term solution was to put the entire site into maintenance mode during deploys, but that's annoying when you're pushing out simple fixes. This technique solved that for us, via a management command that lives in post_deploy.py:

from django.conf import settings
from django.core.cache import cache
from django.core.management import BaseCommand
from redis.exceptions import RedisError

from ...templatetags.vite import set_manifest


class Command(BaseCommand):

    def success(self, message: str):
        self.stdout.write(self.style.SUCCESS(message))

    def warning(self, message: str):
        self.stdout.write(self.style.WARNING(message))

    def error(self, message: str):
        self.stdout.write(self.style.ERROR(message))

    def set_new_manifest_in_cache(self):
        current_version = settings.RELEASE_VERSION
        if not current_version:
            self.warning(
                "RELEASE_VERSION is empty; skipping cleanup to avoid deleting default keys."
            )
            return

        prefix = "vite_manifest:*"  # Match all versionsed keys
        recent_versions_key = "recent-manifest-versions"  # Redis key for tracking versions

        try:
            redis_client = cache._client.get_client()

            # Add current version to the front of the list (in bytes)
            redis_client.lpush(recent_versions_key, current_version.encode("utf-8"))

            # Keep only the six most recent versions (list indices 0-5)
            redis_client.ltrim(recent_versions_key, 0, 5)

            # Get recent versions as a set for quick lookup (decoding to strings)
            recent_versions = {
                v.decode("utf-8")
                for v in redis_client.lrange(recent_versions_key, 0, -1)
            }

            self.success(f"Recent versions: {recent_versions}")

            cursor = "0"
            deleted_count = 0
            while cursor != 0:
                cursor, keys = redis_client.scan(cursor=cursor, match=prefix, count=100)  # Batch scan
                for key in keys:
                    key_str = key.decode("utf-8")
                    self.success(f"Checking key: {key_str}")
                    # If the key&aposs version is not in recent versions, delete it
                    if not any(key_str.endswith(f":{version}") for version in recent_versions):
                        redis_client.delete(key)
                        deleted_count += 1
                        self.success(f"Deleted old manifest cache key: {key_str}")

            self.success(
                f"Added current version &apos{current_version}&apos and deleted {deleted_count} old manifest cache keys."
            )

            set_manifest()
            self.success("Updated Vite manifest in cache.")
        except RedisError as e:
            self.error(f"Redis error: {e}")

    def handle(self, *args, **options):
        self.set_new_manifest_in_cache()

This isn't the prettiest code. We could probably tidy it up by extracting the Redis operations and/or the main loop to make things more readable. But for now it's working and we haven't had to touch it in a while.

The latest six versions in our cache:

[Screenshot: the six most recent manifest versions in our Redis cache]

We had to break out of the pure Django cache backend here to get access to Redis-specific operations for the stack. This is something that might be worth tidying up if we build a generic cache backend for django-vite, though it may not be necessary if we build a Redis-specific backend.

Not only do we invalidate old entries by pushing the new version key onto the stack, but we also seed the cache with the current version to save time on a lazy load.

Summary

Next up is for us to take a hard look at django-vite, as it seems to be a well-structured and maintained project. Perhaps we can move to it, retire our custom code, and then contribute whatever is still missing, either to the project or via a sidecar package.

Have you dealt with these problems in a different way? If so, we'd love to hear from you and learn about your approach.

November 10, 2025 05:58 PM UTC


PyCharm

This blog post was brought to you by Damaso Sanoja, draft.dev.

Deciding whether to use Python or Rust isn’t just a syntax choice; it’s a career bet. According to the StackOverflow Developer Survey, Python dominates in accessibility, with 66.4% of people learning to code choosing it as their entry point. Python usage skyrocketed from 32% in 2017 to 57% in 2024, making it the tool of choice for “more than half of the world’s programmers.” GitHub has also reported that Python overtook JavaScript as their most-used language for the first time in over a decade.

But Rust’s trajectory tells a different story. Despite its complexity, its usage has grown from 7% to 11% since 2020. It’s also been the “Most Admired programming language” for nine straight years, with over 80% of developers wanting to continue using it.

The real question isn’t which language will “win” – it’s which one positions you for the problems you want to solve. This article explores their syntax, performance, learning curves, type systems, memory management, concurrency, ecosystems, and tooling to help you make a decision that aligns with both market realities and your career goals.

TL;DR:
Python and Rust serve different goals. Python wins on accessibility, flexibility, and a vast ecosystem, making it ideal for rapid development, data science, and automation. Rust shines in performance, safety, and concurrency, making it perfect for systems programming and large-scale, reliable software.
In short: choose Python to build fast, iterate faster, and reach users quickly; choose Rust to build software that runs fast, scales safely, and lasts. Many developers now combine both, using Rust for performance-critical parts inside Python apps.

Syntax and Readability

Python’s syntax is praised for its clean, English-like structure that closely resembles written algorithms or planning notes. It prioritizes developer approachability and rapid prototyping through dynamic typing, which means you don’t need to declare variable types explicitly. This allows for very concise code. Python’s PEP 8 style guide establishes consistent formatting conventions that make Python code predictable and easy to scan.

For example, you can define a simple add function, assign a string to a variable, and create a formatted message with just a few lines of code:

# Python: Flexible and concise

def add(a, b):
    return a + b

name = "World"
message = f"Hello, {name}!"

Rust uses C-style syntax that requires you to specify types (fn add(a: i32, b: i32) -> i32) and declare when variables can change (let mut message = String::new();). This looks more verbose and adds learning challenges like ownership (like &str for borrowed string slices), but it’s intentional. Rust’s explicit approach catches more errors at compile time and makes program structure clearer, even though you might write more code. Tools like rustfmt handle code formatting automatically.

Here’s how the same functionality looks in Rust, with explicit type declarations and mutability:

// Rust: Explicit types, mutability, and function signature

// Function definition
fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    let name: &str = "World";
    let message: String = format!("Hello, {}!", name);
    println!("{}", message);

    let mut count: i32 = 0;
    count += 1;
    println!("Count: {}", count);

    // Using the add function
    let sum = add(5, 10);
    println!("Sum: {}", sum);
}

This core difference – Python’s easy flow versus Rust’s strict structure – is often what developers first notice when choosing between them, and it affects both how you write code and how fast it runs.

Performance and Speed

The differences between Rust and Python become even more apparent when looking at their raw performance.

Rust compiles directly to machine code before you run it, so it’s fast and efficient. Python reads and translates your code line by line as it runs, which is more flexible but slower. Rust’s zero-cost abstractions and lack of a garbage collector also contribute to its speed advantage, especially for CPU-heavy tasks.

You can see the difference in real-world benchmarks where Rust consistently runs faster and uses less memory. However, it’s worth noting that benchmark results are frequently updated and can vary based on the specific tests and implementations used.

In commonly observed performance patterns, Rust consistently wins for heavy computational work, though the exact differences will depend on your specific setup and code.

The speed gap means Rust works better for large, time-critical systems, while Python’s speed is often good enough when you want to build and test ideas quickly rather than squeeze out maximum performance.

Ease of Use and Learning Curve

Python is famous for its gentle learning curve; its simple syntax and flexibility make it great for beginners who want to build things quickly. For example, reading user input and displaying a personalized greeting takes just a few straightforward lines:

# Python: Reading and printing a name

name = input("Enter your name: ")
print(f"Hello, {name}!")

Rust’s strict compiler and ownership system require much more upfront learning. The same greeting task in Rust immediately throws several complex concepts at you. This code shows how you need explicit imports for basic input/output, function signatures that handle errors (io::Result<()>), mutable string creation (String::new()), explicit error checking (the ? operator), and details like flushing output and trimming input:

// Rust: Reading and printing a name with error handling

use std::io::{self, Write}; // Import necessary modules

fn main() -> io::Result<()> { // main function returns a Result for I/O errors
    print!("Enter your name: ");
    io::stdout().flush()?; // Ensure the prompt is displayed before input

    let mut name = String::new(); // Declare a mutable string to store input
    io::stdin().read_line(&mut name)?; // Read line, handle potential error

    println!("Hello, {}!", name.trim()); // Trim whitespace (like newline) and print
    Ok(()) // Indicate success
}

This heavy upfront learning makes Rust harder to start with, but it’s part of Rust’s design. The language acts like a strict teacher, forcing you to write safe, efficient code from the beginning. You get more control and fewer runtime crashes in exchange for the initial difficulty.

Python gets you started immediately, while Rust makes you work harder upfront for more reliable code.

Typing System (Static vs. Dynamic)

Python uses dynamic typing, where types are checked at runtime, providing great flexibility and accelerating initial development as you rarely need explicit type declarations. This freedom, however, means type-related errors (like TypeError) may only surface during execution, which is a potential risk in larger systems. To address this, Python developers often use tools like mypy, a static type checker that analyzes code before runtime to catch type-related errors early and improve code safety in larger projects.
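
For instance, here's a minimal sketch of the kind of error mypy reports before the program ever runs (the file name is ours):

# add.py -- check with `mypy add.py`
def add(a: int, b: int) -> int:
    return a + b

result = add(1, 2)  # OK
# add(1, "2")  # mypy: error: Argument 2 to "add" has incompatible
#              # type "str"; expected "int"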

Rust uses static typing, which means it checks all your types before your code even runs. While this demands more upfront effort to satisfy the compiler’s rigor, it’s a powerful early warning system and significantly reduces type-related runtime errors and improves code maintainability. Rust’s type system includes enums like Option<T> and Result<T, E> to handle the absence of values and recoverable errors explicitly. You need to weigh Python’s development speed against Rust’s early error detection and type safety checks that happen before the program runs.

Memory Management

Python uses automatic garbage collection with reference counting and a cycle detector to clean up memory automatically. This frees you from the burden of memory management, which simplifies coding. But it can cause unpredictable pauses and performance overhead, which can be a problem for time-critical applications like financial trading systems or robotics, where consistent timing is crucial.

Rust takes a completely different approach with its ownership system, which handles memory management while your code compiles. Through ownership rules, borrowing (utilizing & for immutable and &mut for mutable references), and lifetimes, Rust prevents common memory errors like null pointer dereferences, dangling pointers, and use-after-free errors in safe code – all without a runtime garbage collector. While unsafe blocks allow you to bypass these protections when needed, most Rust code benefits from these safety guarantees. This results in highly predictable performance and fine-grained control. However, mastering the borrow checker and its concepts can be challenging if you’re used to garbage-collected languages.

The choice here is stark: Python offers easy memory management, while Rust demands more work upfront for memory safety and performance.

Concurrency and Parallelism

The threading module in Python’s common CPython implementation allows you to easily create threads:

# Python: Basic thread creation

import threading

def python_task():
    print("Hello from a Python thread!")

# Run it:
thread = threading.Thread(target=python_task)
thread.start()
thread.join()

However, Python’s Global Interpreter Lock (GIL) restricts multiple native threads from executing Python bytecode simultaneously for CPU-bound tasks, which limits true parallelism on multicore processors. This means that even on a system with multiple CPU cores, only one Python thread can actually execute at any given time – threads must take turns rather than running simultaneously on different cores. This makes Python’s threading module most effective for I/O-bound operations, where threads can release the GIL while waiting for external operations.

For efficient single-threaded concurrency in I/O tasks, Python offers asyncio. For true CPU parallelism, you need the multiprocessing module, which creates separate processes to bypass the GIL but uses more memory and requires complex communication between processes. While experimental efforts like PEP 703 aim to make the GIL optional, Python’s concurrency requires navigating these trade-offs.
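
As a minimal sketch, here's what bypassing the GIL with multiprocessing looks like for a CPU-bound function:

# Python: CPU-bound work in separate processes (no GIL contention)
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:  # four worker processes
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]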

Rust is known for its fearless concurrency, thanks to its ownership and type systems that prevent data races at compile time. Creating a basic thread is straightforward using std::thread:

// Rust: Basic thread creation

use std::thread;

fn rust_task() {
    println!("Hello from a Rust thread!");
}

fn main() {
    let handle = thread::spawn(rust_task);
    handle.join().unwrap();
}

Critically, threads like this in Rust can run in true parallel on different CPU cores because Rust does not have a GIL. Compile-time checks involving Send and Sync traits ensure that data shared between threads is handled safely. For more complex scenarios, Rust offers powerful tools like async/await for asynchronous programming and types like Arc and Mutex for safe shared-state concurrency.

Summing up, Rust excels at building highly concurrent, multithreaded applications that demand maximum hardware efficiency.

Ecosystem and Libraries

Python has an extensive, mature ecosystem; PyPI (the Python Package Index) offers an unparalleled array of libraries for nearly every domain, especially data science (e.g., NumPy, pandas, and TensorFlow), web development (e.g., Django, Flask, and FastAPI), and automation, complemented by a batteries-included standard library.

Rust’s ecosystem, managed via the Cargo build tool and package manager sourcing from crates.io, is younger but expanding. It’s particularly good for systems programming, WebAssembly, networking, and command line tools, with libraries engineered for performance and safety. Interestingly, Rust is increasingly used to build high-performance tools for the Python world (e.g., Polars, Ruff, and uv), showing how the two languages can also work together rather than just compete.

The ecosystem choice often depends on project requirements: Python’s mature, extensive libraries make it ideal for rapid development and established domains like data science and web development, while Rust’s performance-focused ecosystem shines when you need maximum efficiency and safety or are building foundational tools that others will depend on.

IDE and Tooling Support

Python’s great tooling matches its popularity; it’s been the #2 programming language since early 2020. Comprehensive IDEs like PyCharm and interactive tools like Jupyter Notebooks give you everything you need: debuggers, linters, test runners, and more. All of which makes Python even easier to use.

Rust’s tooling evolution reflects its remarkable developer satisfaction story. How does a systems language achieve 83% developer retention and “Most Admired” status for nine consecutive years? Partly through exceptional tooling that makes complex concepts manageable. Cargo revolutionized build management, rust-analyzer provides precise code intelligence, and Clippy offers intelligent linting. As more companies adopt Rust, IDEs are also becoming more mature, with tools like JetBrains RustRover providing specialized support for Rust’s ownership model.

[Screenshot: color-coded inlay error descriptions in the IDE]

It’s a classic feedback loop: Python’s popularity drives better tooling, which makes Python even more popular. Rust’s exceptional tooling experience explains why developers who try it stick with it. Both languages benefit from this virtuous cycle, serving different developer needs and project requirements.

Comparison Table: Rust vs. Python

Here’s an overview of the key differences between Rust and Python:

| Feature | Rust | Python |
| --- | --- | --- |
| Syntax & Readability | Explicit, C-family syntax; can be verbose but aims for clarity and may offer lower cognitive complexity; rustfmt for style. | Minimalist, near-pseudocode style; generally beginner-friendly and concise; PEP 8 for style conventions. |
| Performance & Speed | Compiled to optimized machine code; very high raw performance and no GC pauses; ideal for CPU-bound and time-sensitive systems. | Primarily interpreted; slower execution speed for CPU-bound tasks; potential GC pauses; often adequate for I/O-bound tasks and rapid development. |
| Ease of Use & Learning Curve | Steeper learning curve due to its strict compiler, ownership model, and lifetimes; rewards effort with safe and efficient code. | Gentle learning curve with simple syntax and dynamic typing; facilitates rapid development and prototyping; very accessible to beginners. |
| Typing System | Static, strong compile-time type enforcement; significantly reduces runtime type errors and enhances maintainability; expressive type system. | Dynamic typing with runtime type checking; offers flexibility and reduces boilerplate, but type errors may surface at runtime; optional static type hints available. |
| Memory Management | Compile-time enforcement via ownership, borrowing, and lifetimes; no garbage collector, leading to predictable performance and control. | Automatic garbage collection (primarily reference counting with a cycle detector); simplifies memory management for developers but can introduce overhead and pauses. |
| Concurrency & Parallelism | “Fearless concurrency” due to compile-time data-race prevention; no GIL, enabling true multicore parallelism for CPU-bound tasks. | GIL in CPython limits true CPU-bound parallelism in threads; asyncio is effective for I/O-bound concurrency; multiprocessing for CPU-bound tasks. |
| Ecosystem & Libraries | Rapidly growing ecosystem via Cargo and crates.io; strong in systems programming, WebAssembly, networking, and CLI tools; performance-focused libraries. | Vast, mature ecosystem via PyPI; dominant in data science, machine learning, web development, and automation; extensive “batteries-included” standard library. |
| IDE & Tooling Support | Significantly improved and maturing; strong support from rust-analyzer, Cargo, and Clippy; dedicated IDEs like RustRover by JetBrains enhance productivity. | Excellent and mature tooling; widely supported by IDEs like PyCharm and VS Code, Jupyter Notebooks for data science, and a rich collection of linters and debuggers. |


Career Insights

So, the differences are clear, but how do those differences actually influence which you should use?

The career implications of choosing Python versus Rust extend far beyond syntax preferences—they shape your earning potential and job market positioning. Python dominates the hiring landscape with 42% of recruiters actively seeking Python skills, translating into substantial opportunities and a more mature, stable job market. This demand drives competitive compensation, with Python developers earning up to $151,000 annually in the US, according to ZipRecruiter.

Python’s career appeal lies in its versatility across high-growth sectors. The language powers data science, machine learning, web backends, enterprise applications, and automation, which are all in high demand.

Rust presents a different but compelling career trajectory. While job openings are fewer, the compensation can be higher: some Rust developers earn up to $198,375 annually. However, Rust’s lower market share also means more volatile job opportunities: according to ZipRecruiter, Rust developer wages range dramatically from $40.38 to $64.66 per hour, with an average of $53 per hour. This wage variation suggests that Rust’s job market is less predictable than Python’s, which enjoys consistent demand for developers.

Overall, Rust’s career advantage lies in specialization. 43% of organizations are currently using Rust for non-trivial production use cases like server backends, web3 services, and cloud technologies. These infrastructure roles have the potential to command premium salaries due to their critical nature and the specialized expertise required.

Your career choice comes down to whether you want Python’s broad job market or Rust’s growing specialization in high-performance, reliable systems.

Hybrid Development Strategies: Rust-Enhanced Python Development

The choice between Python and Rust isn’t always black and white. Many developers are going for a third option: use both languages strategically. Keep Python for your main application logic where readability matters, but drop in Rust for the parts that need serious speed.

This hybrid approach is gaining real momentum. Major Python tools are being rewritten in Rust with dramatic results. Pydantic-core rewrote its validation logic in Rust and got 17x faster performance. Ruff provides Python linting that’s 10–100x faster than existing tools while installing with a simple pip install ruff.

The PyO3 ecosystem makes this integration surprisingly easy. You can write performance-critical functions in Rust and call them directly from Python. Start by prototyping in Python’s friendly syntax, then optimize the slow parts with Rust. Tools like pyo3_bindgen even automatically generate the connections between your Python and Rust code.

This co-usage model is appearing in production across major projects, including FastAPI, Jupyter, Pydantic, and Polars, showing its viability at scale. This approach offers immediate performance gains without requiring complete codebase rewrites, making it an attractive middle ground between Python’s accessibility and Rust’s performance characteristics.

Conclusion: Choosing Your Language – Rust and Python

This comparison shows that Rust and Python are built for different needs and approaches. Modern software development increasingly reveals that their strengths work together rather than compete.

Python remains excellent for rapid application development, data analysis, machine learning, and scripting, propelled by its accessible syntax, flexibility, and mature ecosystem. On the other hand, Rust is the best choice for performance-critical applications, systems programming, embedded development, and projects where memory safety and efficient concurrency are essential.

If you’re trying to build reliable, high-performance systems with Rust or exploring the language professionally, JetBrains RustRover offers a minimally configured environment with deep language support. With a non-commercial license available for learners and hobbyists, it’s a superb tool to grow with as you delve into Rust.

Curious about how Rust can benefit from other programming languages? Check out our previous blog post comparing Rust with Java and Go.

November 10, 2025 04:40 PM UTC


Real Python

Python 3.14 Released and Other Python News for November 2025

Python 3.14 is out now, bringing t-strings, deferred annotations, better error messages, and plenty more to explore. As developers start adopting the new version, Python 3.15 begins its alpha phase, and Python 3.9 officially retires. Meanwhile, Django 6.0 enters beta, new PEPs propose lazy imports and changes to the assert syntax, and the PSF makes waves with a notable funding decision.

Here’s what’s been happening in the world of Python!

Join Now: Click here to join the Real Python Newsletter and you’ll never miss another Python tutorial, course, or news update.

Python Releases

Last month’s headline news was the release of Python 3.14—a milestone update that introduces significant enhancements to the language itself and its runtime. Meanwhile, hot on its heels, the core team has begun development of Python 3.15. As Python continues to evolve, we must also bid farewell to an older release. Python 3.9 has reached its end of life, reminding everyone to keep their environments up to date for safety and support.

Python 3.14 Final Released

Python 3.14.0 was released on October 7, 2025, delivering a packed set of improvements and new capabilities. This is a major release that the community has been eagerly awaiting, and it doesn’t disappoint. Some of the most notable features of Python 3.14 include:

  • Deferred annotations by default: Following years of work on PEP 563, PEP 649, and PEP 749, Python now evaluates type annotations lazily. Forward references in annotations no longer need a special __future__ import, and a new annotationlib module provides tools to introspect annotations as real objects instead of strings.
  • Template strings (t-strings): PEP 750 introduces t-strings, a new string literal prefix t"" that returns a Template object, capturing both the static and interpolated parts of the string. This feature enables custom processing of string templates and safer substitution patterns, offering a more controlled alternative to f-strings (see the sketch after this list).
  • Modernized REPL and error messages: Building on improvements from Python 3.13, the interactive Python REPL gets real-time syntax highlighting and smarter auto-completion. Syntax and runtime errors are also more informative and user-friendly, helping developers diagnose issues faster.
  • Multiple interpreters in the standard library: PEP 734 adds a concurrent.interpreters module, finally exposing Python’s long-existing multiple interpreter support to Python code. This allows spawning isolated interpreters within a single process, unlocking new concurrency models and multi-core usage without separate processes.
  • Free-threaded Python: An officially supported build variant allows running Python without the Global Interpreter Lock (GIL), paving the way for true multi-core parallelism.
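
Here's a minimal sketch of the t-string behavior described above, assuming Python 3.14; the variable names are ours:

from string.templatelib import Interpolation

name = "World"
template = t"Hello, {name}!"  # a Template object, not a str

print(template.strings)  # ('Hello, ', '!')
print(template.values)   # ('World',)

# Custom processing: walk the parts and handle interpolations yourself.
rendered = "".join(
    str(part.value).upper() if isinstance(part, Interpolation) else part
    for part in template
)
print(rendered)  # Hello, WORLD!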

In addition to these headline features, Python 3.14 comes with numerous smaller enhancements and optimizations. There’s a new compression.zstd module for Zstandard compression (PEP 784) and support for UUID versions 6-8 with faster generation for existing versions. Additionally, except statements can now omit the parentheses around multiple exception types (PEP 758), and built-in HMAC implementations from the HACL* project improve security.
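
For example, here's a minimal sketch of the unparenthesized except syntax from PEP 758 (it only applies when you don't capture the exception with as):

# Python 3.14+ (PEP 758)
try:
    value = int("not a number")
except ValueError, TypeError:  # previously had to be (ValueError, TypeError)
    value = 0
print(value)  # 0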

The standard library tools like unittest, argparse, json, and calendar now produce colored output on the terminal, and the new zero-overhead debugger interface (PEP 768) lays the groundwork for more powerful debugging tools. Official installers for Python 3.14 even include an experimental JIT compiler enabled by default, hinting at performance boosts on the horizon.

With so much new in Python 3.14, now is a great time to experiment with it. Many of these features, like t-strings and annotation changes, are fully available by default, while others, such as the no-GIL build or JIT, may require special opt-in. You can read more in the official What’s new in Python 3.14 document for a comprehensive overview.

As always, be sure to check that your critical third-party libraries support Python 3.14 before upgrading in production, but initial support is strong with many popular projects already shipping wheels for 3.14. Congratulations to the core developers and community on this significant release!

Python 3.15 Alpha 1 Released

Even as Python 3.14 takes center stage, the core development team has promptly turned the page to Python 3.15. In mid-October, Python 3.15.0a1 was released as the first alpha of the next version. This overlap is part of Python’s now-annual cadence—while one version is finalized, planning and development for the next are already in motion.

Python 3.15 is scheduled for final release in October 2026. The alpha period, which will run through April 2026, is when new features land. As of alpha 1, only a few initial enhancements are present, since many features are still under development.

Notably, Python 3.15.0a1 includes PEP 686, which will make UTF-8 the default character encoding for open files, eliminating locale-dependent default encodings. It also has a new dedicated profiling API (PEP 799) to better support performance, and a C API for more efficient bytes object creation (PEP 782). Naturally, improvements to error messages continue as well.
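
As a small sketch of what PEP 686 changes: today the default encoding depends on the locale, so passing it explicitly is the portable choice; under Python 3.15, both calls below will use UTF-8:

# Explicit encoding: portable on any Python version.
with open("notes.txt", "w", encoding="utf-8") as f:
    f.write("héllo")

# Implicit encoding: locale-dependent before 3.15, UTF-8 after PEP 686.
with open("notes.txt") as f:
    print(f.read())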

Alpha releases are intended for testing only. You wouldn’t use them in production, but they’re invaluable for library maintainers and curious users to try out emerging changes. If you maintain a distribution package, now is a good time to start ensuring compatibility with Python 3.15. And if you’re just interested in what’s coming, you can install a pre-release version alongside your stable Python.

The Python 3.15 Release Schedule lays out the roadmap with monthly alpha releases through early 2026, a feature freeze by May (beta phase), and a final 3.15.0 release. This overlapping development cycle means we get to enjoy new Python features every year without missing a beat.

One side benefit of testing alpha releases is that you can provide feedback or catch regressions early. So, if you have time, give 3.15.0a1 a spin in a test environment. It’s an exciting glimpse into Python’s future, even if most of the big changes are yet to come in subsequent alphas.

Python 3.9 Reaches End of Life

Exactly five years after its initial release, Python 3.9 has officially reached its end of life (EOL) as of October 2025. This means that Python 3.9 will no longer receive security fixes or bug patches going forward. If you’re still using Python 3.9 in any projects, then now is the time to plan an upgrade to a later version—Python 3.10 or later—because continuing to run an EOL Python version poses security risks.

The end of life comes as no surprise. PEP 596 set Python 3.9’s support timeline to expire in October 2025. The core developers prepared one final security release, Python 3.9.25, which was published on Halloween (October 31, 2025) as a last hurrah for the 3.9 series. In a lighthearted announcement, release manager Łukasz Langa quipped that:

Python 3.9 is now officially dead… since it’s Halloween 🎃 (Source)

With that release, the 3.9 branch in CPython’s GitHub repository is now closed, and no further updates will be made.

Why does this matter? Running an unsupported Python means that any new vulnerabilities discovered in the interpreter or the standard library won’t be patched for that version. In today’s security climate, that’s a serious concern.

Read the full article at https://realpython.com/python-news-november-2025/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

November 10, 2025 02:00 PM UTC


PyCharm

Rust vs. Python: Finding the right balance between speed and simplicity

November 10, 2025 12:02 PM UTC


Talk Python to Me

#527: MCP Servers for Python Devs

Today we’re digging into the Model Context Protocol, or MCP. Think LSP for AI: build a small Python service once and your tools and data show up across editors and agents like VS Code, Claude Code, and more. My guest, Den Delimarsky from Microsoft, helps build this space and will keep us honest about what’s solid versus what's just shiny. We’ll keep it practical: transports that actually work, guardrails you can trust, and a tiny server you could ship this week. By the end, you’ll have a clear mental model and a path to plug Python into the internet of agents.

Episode sponsors

  • Sentry AI Monitoring, Code TALKPYTHON: https://talkpython.fm/sentryagents
  • NordStellar: https://talkpython.fm/nordstellar
  • Talk Python Courses: https://talkpython.fm/training

Links from the show

  • Den Delimarsky: https://den.dev
  • Agentic AI Programming for Python Course: https://training.talkpython.fm/courses/agentic-ai-programming-for-python
  • Model Context Protocol: https://modelcontextprotocol.io
  • Model Context Protocol Specification (2025-03-26): https://modelcontextprotocol.io/specification/2025-03-26
  • MCP Python Package (PyPI): https://pypi.org/project/mcp
  • Awesome MCP Servers (punkpeye) GitHub repo: https://github.com/punkpeye/awesome-mcp-servers
  • Visual Studio Code docs: Copilot MCP servers: https://code.visualstudio.com/docs/copilot/customization/mcp-servers
  • GitHub MCP Server (GitHub repo): https://github.com/github/github-mcp-server
  • GitHub Blog: Meet the GitHub MCP Registry: https://github.blog/ai-and-ml/github-copilot/meet-the-github-mcp-registry-the-fastest-way-to-discover-mcp-servers
  • MultiViewer App: https://multiviewer.app
  • GitHub Blog: Spec-driven development with AI (open source toolkit): https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/
  • Model Context Protocol Registry (GitHub): https://github.com/modelcontextprotocol/registry
  • mcp (GitHub organization): https://github.com/mcp
  • Tailscale: https://tailscale.com
  • Watch this episode on YouTube: https://www.youtube.com/watch?v=0V3Tah-BDy4
  • Episode #527 deep-dive: https://talkpython.fm/episodes/show/527/mcp-servers-for-python-devs#takeaways-anchor
  • Episode transcripts: https://talkpython.fm/episodes/transcript/527/mcp-servers-for-python-devs

November 10, 2025 08:00 AM UTC

November 09, 2025


Ned Batchelder

Three releases, one new organization

It’s been a busy, bumpy week with coverage.py. Some things did not go smoothly, and I didn’t handle everything as well as I could have.

It started with trying to fix issue 2064 about conflicts between the “sysmon” measurement core and a concurrency setting.

To measure your code, coverage.py needs to know what code got executed. To know that, it collects execution events from the Python interpreter. CPython now has two mechanisms for this: trace functions and sys.monitoring. Coverage.py has two implementations of a trace function (in C and in Python), and an implementation of a sys.monitoring listener. These three components are the measurement cores, known as “ctrace”, “pytrace”, and “sysmon”.

The fastest is sysmon, but there are coverage.py features it doesn’t yet support. With Python 3.14, sysmon is the default core. Issue 2064 complained that when the defaulted core conflicted with an explicit concurrency choice, the conflict resulted in an error. I agreed with the issue: since the core was defaulted, it shouldn’t be an error; we should choose a different core.

But I figured if you explicitly asked for the sysmon core and also a conflicting setting, that should be an error because you’ve got two settings that can’t be used together.

Implementing all that got a little involved because of “metacov”: coverage.py coverage-measuring itself. The sys.monitoring facility in Python was added in 3.12, but wasn’t fully fleshed out enough to do branch coverage until 3.14. When we measure ourselves, we use branch coverage, so 3.12 and 3.13 needed some special handling to avoid causing the error that sysmon plus branch coverage would cause.

I got it all done, and released 7.11.1 on Friday.

Soon, issue 2077 arrived. Another fix in 7.11.1 involved some missing branches when using the sysmon core. That fix required parsing the source code during execution. But sometimes the “code” can’t be parsed: Jinja templates compile html files to Python and use the html file as the file name for the code. When coverage.py tries to parse the html file as Python, of course it fails. My fix didn’t account for this. I fixed that on Saturday and released 7.11.2.

In the meantime, issue 2076 and issue 2078 both pointed out that now some settings combinations that used to produce warnings now produced errors. This is a breaking change, they said, and should not have been released as a patch version.

To be honest, my first reaction was that it wasn’t that big a deal, the settings were in conflict. Fix the settings and all will be well. It’s hard to remember all of the possibilities when making changes like this, it’s easy to make mistakes, and semantic versioning is bound to have judgement calls anyway. I had already spent a while getting 7.11.1 done, and .2 followed just a day later. I was annoyed and didn’t want to have to re-think everything.

But the more I thought about it, I decided they were right: it does break pipelines that used to work. And falling back to a different core is fine: the cores differ in speed and compatibility but (for the most part) produce the same results. Changing the requested core with a warning is a fine way to deal with the settings conflict without stopping test suites from running.

So I just released 7.11.3 to go back to the older behavior. Maybe I won’t have to do another release tomorrow!

While all this was going on, I also moved the code from my personal GitHub account to a new coveragepy GitHub organization!

Coverage.py is basically a one-man show. Maybe the GitHub organization will make others feel more comfortable chiming in, but I doubt it. I’d like to have more people to talk through changes with. Maybe I wouldn’t have had to make three releases in three days if someone else had been around as a sounding board.

I’m in the #coverage-py channel if you want to talk about any aspect of coverage.py, or I can be reached in lots of other ways. I’d love to talk to you.

November 09, 2025 11:27 PM UTC


Chris Warrick

Distro Hopping, Server Edition

I’ve recently migrated my VPS from Fedora to Ubuntu. Here’s a list of things that might be useful to keep in mind before, during, and after a migration of a server that hosts publicly accessible Web sites and applications, as well as other personal services, and how to get rid of the most annoying parts of Ubuntu.

Why switch?

Fedora is a relatively popular distro, so it’s well supported by software vendors. Its packagers adopt a no-nonsense approach, making very few changes that deviate from upstream.

Ubuntu is not my favorite distro, far from it. While it is perhaps the most popular distro out there, its packages contain many more patches compared to Fedora, and Canonical (the company behind Ubuntu) are famous for betting on the wrong horse (Unity, upstart, Mir…). But one thing Ubuntu does well is stability. Fedora makes releases every 6 months, and those releases are supported for just 13 months, which means upgrading at least every year. Every upgrade may introduce incompatibilities, almost every upgrade requires recreating Python venvs. That gets boring fast, and it does not necessarily bring benefits. Granted, the Fedora system upgrade works quite well, and I upgraded through at least eight releases without a re-install, but I would still prefer to avoid it. That’s why I went with Ubuntu LTS, which is supported for five years, with a new release every two years, but which still comes with reasonably new software (and with many third-party repositories if something is missing or outdated).

Test your backups

I have a backup “system” that’s a bunch of Bash scripts. After upgrading one of the services that is being backed up, the responsible script started crashing, and thus backups stopped working. Another thing that broke was e-mails from cron, so I didn’t know anything was wrong.

While I do have full disk backups enabled at Hetzner (disclaimer: referral link), my custom backups are more fine-grained (e.g. important configuration files, database dumps, package lists), so they are quite useful in migrating between OSes.

So, here’s a reminder not only to test your backups regularly, but also to make sure they are being created at all, and to make sure cron can send you logs somewhere you can see them.

Bonus cron tip: set MAILFROM= and MAILTO= in your crontab if your SMTP server does not like the values cron uses by default.
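
For example, a crontab sketch (the addresses and the script path are made up):

MAILFROM=cron@example.com
MAILTO=admin@example.com
0 3 * * * /usr/local/bin/backup.sh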

Think about IP address reassignment (or pray to the DNS gods)

A new VPS or cloud server probably means a new IP address. But if you get a new IP address, that might complicate the migration of your publicly accessible applications. If you’re proxying all your Web properties through Cloudflare or something similar, that’s probably not an issue. But if you have a raw A record somewhere, things can get complicated. DNS servers and operating systems do a lot of caching. The conventional wisdom is to wait 24 or even 48 hours after changing DNS values. This might be true if your TTL is set to a long value, but if your TTL is short, the only worry is DNS servers that ignore TTL values and cache records for longer. If you plan a migration, it’s good to check your TTL well in advance and not worry too much about broken DNS servers.
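
Checking the current TTL is quick with dig; for example (the domain and address here are placeholders):

$ dig +noall +answer example.com A
example.com.   300   IN   A   203.0.113.10

The second field is the TTL in seconds as seen by the resolver you queried.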

But you might not need a new IP. Carefully review your cloud provider’s IP management options before making any changes. Hetzner is more flexible than other hosts in this regard, as it is possible to move primary public IP addresses (not “floating” or “elastic” IPs) between servers, as long as you’re okay with a few minutes’ downtime (you will need to shut down the source and destination servers).

If you’re not okay with any downtime, you would probably want to leverage the floating/elastic IP feature, or hope DNS propagates quickly enough.

Trim the fat

My VPS ran a lot of services I don’t need anymore, but never really got around to decommissioning. For example, I had a full Xfce install with VNC access (the VNC server was only running when needed). I haven’t actually used the desktop for ages, so I just dropped it.

I also had an OpenVPN setup. It was useful years ago, when mobile data allowances were much smaller and speeds much worse. These days, I don’t use public WiFi networks at all, unless I’m going abroad, and I just buy one month of Mullvad VPN for €5 whenever that happens. So, add another service to the “do not migrate” list.

One thing that I could not just remove was the e-mail server. Many years ago, I ran a reasonably functional e-mail server on my VPS. I’ve since then migrated to Zoho Mail (which costs €10.80/year), in part due to IP reputation issues after changing hosting providers, and also to avoid having to fight spam. When I did that, I kept Postfix around, but as a local server for things like cron or Django to send e-mail with, and I configured it to send all e-mails via Zoho. But I did not really want to move over all the configuration, hoping that Ubuntu’s Postfix packages can work with my hacked together config from Fedora. So I replaced the server with OpenSMTPD (from the OpenBSD project), and all the Postfix configuration files with just one short configuration file:

table aliases file:/etc/aliases
table secrets file:/etc/mail-secrets

listen on localhost
listen on 172.17.0.1 # Docker

action "relay" relay host smtp+tls://smtp@smtp.example.net:587 auth <secrets> mail-from "@example.com"

match from any for any action "relay"

Dockerize everything…

My server runs a few different apps, some of which are exposed on the public Internet, while some do useful work in the background. The services I have set up most recently are containerized with the help of Docker. The only Docker-based service that was stateful (and did not just use folders mounted as volumes) was a MariaDB database. Migrating that is straightforward with a simple dump-and-restore.

Of course, not everything on my server is in Docker. The public-facing nginx install isn’t, and neither is PostgreSQL (but that was also a quick dump-and-restore migration with some extra steps).

…especially Python

But then, there are the Python apps. Python the language is cool (if a little slow), but the packaging story is a total dumpster fire.

By the way, here’s a quick recap of 2024/2025 in Python packaging: the most hyped Python package manager (uv) is written in Rust, which screams “Python is a toy language in which you can’t even write a tool as simple as a package manager”. (I know, dependency resolution is computationally expensive, so doing that in Rust makes sense, but everything else could easily be done in pure Python. And no, the package manager should not manage Python installs.) Of course, almost all the other contenders listed in my 2023 post are still being developed. On the standards front, the community finally produced a lockfile standard after years of discussions.

Anyway, I have three Python apps. One of them is Isso, which is providing the comments box below this post. I used to run a modified version of Isso a long time ago, but I don’t need to anymore. I looked at the docs, and they offer a pre-built Docker image, which means I could just quickly deploy it on my server with Docker and skip the pain of managing Python environments.

The other two apps are Django projects built by yours truly. They are not containerized; they exist in venvs created using the system Python. Moving venvs between machines is generally impossible, so I had to re-create them. Of course, I hit a deprecation, because the Python maintainers (especially in the packaging world) do not understand their responsibility as maintainers of the most popular programming language. This time, it was caused by an old editable install with setuptools (using setup.py develop, not PEP 660), and installs with more recent pip/setuptools versions would not have this error… although some people want to remove the fallback to setuptools if there is no pyproject.toml, so you need to stay up to date with the whims of the Python packaging industry if you want to use Python software.

Don’t bother with ufw

Ubuntu ships with ufw, the “uncomplicated firewall”, in the default install. I was previously using firewalld, a Red Hat-adjacent project, but I decided to give ufw a try: since it’s part of the default install, it might be better supported by the system.

It turns out that Docker and ufw don’t play together. Someone has built a set of rules that are supposed to fix it, but that did not work for me.

Docker does integrate with firewalld, and Ubuntu has packages for it, so I just installed it, enabled the services that need to be publicly available and things were working again.
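
The switch amounted to something like this sketch (pick the services you actually expose):

sudo apt autoremove ufw
sudo apt install firewalld
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload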

Kill the ads (and other nonsense too)

Red Hat makes money by selling a stable OS with at least 10 years of support to enterprises, and their free offering is Fedora, with just 13 months of support; RHEL releases are branched off from Fedora. SUSE also sells SUSE Linux Enterprise and has openSUSE as the free offering (but the relationship between the paid and free version is more complicated).

Ubuntu chose a different monetization strategy: the enterprise offering is the same OS as the free offering, but it gets extra packages and extra updates. The free OS advertises the paid services. It is fairly simple to get rid of them all:

sudo apt autoremove ubuntu-pro-client
sudo chmod -x /etc/update-motd.d/*

Also, Ubuntu installs snap by default. Snap is a terrible idea. Luckily, there are no snaps installed by default on a Server install, so we can just remove snapd. We’ll also remove lxd-installer to save ~25 kB of disk space, since the installer requires snap, and lxd is another unsuccessful Canonical project.

sudo apt autoremove snapd lxd-installer

The cost of downgrading

Going from Fedora 42 (April 2025) to Ubuntu 24.04 (April 2024) means some software will be downgraded in the process. In general, this does not matter, as most software does not mind downgrades as much. One notable exception is WeeChat, the IRC client, whose config files are versioned, and Ubuntu’s version is not compatible with the one in Fedora. But here’s where Ubuntu’s popularity shines: WeeChat has its own repositories for Debian and Ubuntu, so I could just get the latest version without building it myself or trying to steal packages from a newer version.

Other than WeeChat, I haven’t experienced any other issues with software due to a downgrade. Some of it is luck (or not using new/advanced features), some of it is software caring about backwards compatibility.

Conclusion

Was it worth it? Time will tell. Upgrading Fedora itself was not that hard, and I expect Ubuntu upgrades to be OK too — the annoying part was cleaning up and getting things to work after the upgrade, and the switch means I will have to do it only every 2-4 years instead of every 6-12 months.

The switchover took a few hours, especially since I didn’t have much up-to-date documentation of what was actually installed and running, and there are always minor differences between distros that require some adjustment. I think a migration like this is worth trying if rolling-release or frequently-released distros are too unstable for your needs.

November 09, 2025 06:00 PM UTC


The Python Coding Stack

The Misunderstood Hashable Types and Why Dictionaries Are Called Dictionaries • [Club]

Pick up a dictionary. No, not that one. The real dictionary you have on your bookshelf, the one that has pages made of paper, which you use to look up the meaning of English words. Or whatever other language. But let’s assume it’s an English dictionary. Now, look up zymology.

I’ll wait…

Done? It probably didn’t take you too long to find zymology. Probably it took you longer to find the dictionary on your bookshelf, the one you haven’t used in years!

You know z is the last letter of the alphabet, so you opened the dictionary on a page towards the end of the book. Then you looked at a random word on the page or maybe the word listed in the header. Does it start with z? Yes? Look at the word’s second letter. No? Open the book to another page further in the dictionary.

You get the idea. I know you know how to find a word in a dictionary.

But now, imagine that you didn’t know the alphabet. Bear with me. Just assume you can recognise letters, so you can read words, but you don’t know the order of the letters in the alphabet. You don’t know that A comes first and D is the fourth one, and so on.

How would you find zymology now? And how long do you reckon it would take you to find the word?

There you go, now you understand the purpose of using hash values for hashable objects. Or you can read on…
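
In Python terms, the alphabet is the hash value. Here’s a rough sketch (the toy dictionary below is made up): hash() maps a key to a number, and a dict uses that number to jump almost straight to the matching entry instead of scanning every key:

words = {
    "zymology": "the science of fermentation",
    "aardvark": "a burrowing African mammal",
}

print(hash("zymology"))   # An integer; the exact value changes between runs.
print(words["zymology"])  # Average O(1), however large the dict grows.

# Without hashing, you'd check pair by pair, like not knowing the alphabet:
print(next(d for w, d in words.items() if w == "zymology"))  # O(n)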

Today, I also explore the topic in a short video where I can communicate differently. But if you prefer to just read, then go ahead and skip the video.

Let’s continue exploring dictionaries and hashable objects.

Read more

November 09, 2025 01:09 PM UTC

November 07, 2025


Rodrigo Girão Serrão

Module compression overview

A high-level overview of how to use the package compression, new in Python 3.14.

This article will teach you how to use the package compression, new in Python 3.14. You will learn which compression modules are available, how to import them, and how to use their basic compress/decompress interface.

Compression modules available

The package compression makes five compression modules available to you:

  1. bz2 – comprehensive interface for compressing and decompressing data using the bzip2 compression algorithm;
  2. gzip – adds support for working with gzip files through a simple interface to compress and decompress files like the GNU programs gzip and gunzip would (see the file-interface sketch after this list);
  3. lzma – interface for compressing and decompressing data using the LZMA compression algorithm;
  4. zlib – interface for the zlib library (lower-level than gzip); and
  5. zstd – interface for compressing and decompressing data using the Zstandard compression algorithm.
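
For instance, the file interface mentioned in item 2 can be sketched as follows (compression.gzip re-exports the standard gzip module, so gzip.open behaves as it always has):

from compression import gzip

# Write text into a gzip-compressed file, then read it back.
with gzip.open("hello.txt.gz", "wt", encoding="utf-8") as f:
    f.write("Hello, world!")

with gzip.open("hello.txt.gz", "rt", encoding="utf-8") as f:
    print(f.read())  # Hello, world!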

Importing the modules

The first four modules (bz2, gzip, lzma, and zlib) were already available in earlier Python 3 versions as standalone modules. This means you can import these modules directly in earlier versions of Python:

# Python 3.12
>>> import bz2, gzip, lzma, zlib
>>> # No exception raised.

In Python 3.14, they continue to be importable directly and through the package compression:

# Python 3.14
>>> import bz2, gzip, lzma, zlib
>>> # No exception raised.

>>> from compression import bz2, gzip, lzma, zlib
>>> # No exception raised.

The module compression.zstd is new in Python 3.14 and can only be imported as compression.zstd:

# Python 3.14
>>> import zstd
# ModuleNotFoundError: No module named 'zstd'
>>> from compression import zstd  # ✅
>>> # No exception raised.

When possible (for example, in new programs), it is recommended that you import any of the five compression modules through the package compression.

Basic interface

At the most basic level, all five modules provide the functions compress and decompress. These functions can be given bytes-like objects to perform one-shot compression/decompression, as the snippet of code below shows:

from compression import zstd

data = ("Hello, world!" * 1000).encode()
compressed = zstd.compress(data)
print(compressed)
# b'(\xb5/\xfd`\xc81\xad\x00\x00hHello, world!\x01\x00\xb8\x12\xf8\xf9\x05'
print(zstd.decompress(compressed) == data)
# True

Using the same toy data (the string "Hello, world!" repeated 1000 times), we can use each module’s compress function to determine how well it compresses this data. The table below shows the ratio of the compressed size to the original size, so a smaller number is better:

module ratio
bz2 0.0058
gzip 0.0060
lzma 0.0098
zlib 0.0051
zstd 0.0024

The table above shows that, for this toy example, zstd compressed the data at least twice as effectively as any other compression algorithm.

This table was produced by the following snippet of Python 3.14 code, which also proves that all five modules provide the functions compress and decompress:...
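
A minimal sketch of such a snippet (not the original code, but it reproduces the measurements above with the same toy data) could look like this:

from compression import bz2, gzip, lzma, zlib, zstd

data = ("Hello, world!" * 1000).encode()
modules = [("bz2", bz2), ("gzip", gzip), ("lzma", lzma), ("zlib", zlib), ("zstd", zstd)]
for name, module in modules:
    compressed = module.compress(data)
    assert module.decompress(compressed) == data  # Round-trips correctly.
    print(name, f"{len(compressed) / len(data):.4f}")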

November 07, 2025 10:55 PM UTC


Real Python

The Real Python Podcast – Episode #273: Advice for Writing Maintainable Python Code

What are techniques for writing maintainable Python code? How do you make your Python more readable and easier to refactor? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.



November 07, 2025 12:00 PM UTC


Django Weblog

Django at PyCon FR 2025 🇫🇷

Last week, we had a great time at PyCon FR 2025 - a free (!) gathering for Pythonistas in France. Here are some of our highlights.

Sprints on Django, our website, AI, marketing

The conference started with two days of sprints, where 27 contributors joined us to contribute to Django, our website, and our online presence. Half of the people in the room were complete newcomers to open source, wanting to get a taste of what it’s like behind the scenes. We also had people who were new to Django, taking the excellent Django Girls tutorial to get up to speed with the project. The tutorial is translated into 20 languages(!), which makes it excellent in situations like this, where people come from all over Europe.

Two contributors working together on a laptop pair programming

Carmen, one of our sprint contributors, took the time to test that our software for ongoing Board elections is accessible 💚

Discussing Django’s direction

At the sprints, we also organized discussions on Django’s direction - specifically on marketing, Artificial Intelligence, and technical decisions. Some recurring topics were:

We had a great time during those two days of sprints ❤️ thank you to everyone involved, we hope you stick around!

Design systems with JinjaX, Stimulus, and Cube CSS

Mads demonstrated how to bring a design-system mindset to Django projects by combining JinjaX, Stimulus JS, and Cube CSS, supported by modern tooling like Figma, Vite, and Storybook. JinjaX in particular allows you to take a more component-driven “lego blocks” approach to front-end development with Django.

Mads on stage with title slide about design systems at PyCon FR

Three years of htmx in Django

Céline Martinet Sanchez shared her takeaways from using htmx with Django over three years. The verdict? A joyful developer experience, with some (solved) challenges around testing.

Her recommended additions to make the most of the two frameworks:

Slide with libraries in use in the project stack of Céline

Becoming an open-source contributor in 2025

In her talk, Amanda Savluchinske explored how newcomers can get involved in open source, highlighting the Django community’s Djangonaut Space program. She explained why contributing is rewarding, how to engage with busy maintainers, and which specific actions people can take to get started.

We really liked that she shared a prompt she uses with AI to iterate on questions to maintainers before hitting “send”:

“You are an expert in technical writing. I'm trying to write a message about a question I have about this open-source project I'm contributing to. Here's the link to its repo ‹Add link here›. I want to convey my question to the maintainers in a clear, concise way, at the same time that I want it to have enough context so that the communication happens with the least back and forth possible. I want this question to contain a short, max two sentence summary upfront, and then more context in the text's body. Ask me whatever questions you need about my question and context in order to produce this message.”

Amanda showcases contributor programs Google Summer of Code and Djangonaut Space

La Suite numérique: government collaboration powered by Django

PyCon FR also featured La Suite numérique, the French government’s collaborative workspace—developed with partners in Germany, the Netherlands (Mijn Bureau), and Italy. Their platform includes collaborative documents, video calls, chat, and an AI assistant — all powered by Django 🤘. This work is now part of a wider European Union initiative for sovereign digital infrastructure based on open source, for more information see: Commission to launch Digital Commons EDIC to support sovereign European digital infrastructure and technology.

Manuel on stage introducing La suite numerique

Up next…

Up next, we have the first ever Django Day India event! And closer to France, DjangoCon Europe 2026 will take place in Athens, Greece 🇬🇷🏖️🏛️☀️


We’re elated to support events like PyCon FR 2025. To help us do more of this, take a look at this great offer from JetBrains: 30% Off PyCharm Pro – 100% for Django – All money goes to the Django Software Foundation!

November 07, 2025 09:01 AM UTC

November 06, 2025


Django Weblog

2026 DSF Board Candidates

Thank you to the 19 individuals who have chosen to stand for election. This page contains their candidate statements submitted as part of the 2026 DSF Board Nominations.

Our deepest gratitude goes to our departing board members who are at the end of their term and chose not to stand for re-election: Sarah Abderemane and Thibaud Colas; thank you for your contributions and commitment to the Django community ❤️.

Those eligible to vote in this election will receive information on how to vote shortly. Please check for an email with the subject line “2026 DSF Board Voting”. Voting will be open until 23:59 on November 26, 2025 Anywhere on Earth. There will be three seats open for election this year.

Any questions? Reach out on our dedicated forum thread or via email to foundation@djangoproject.com.

All candidate statements

To make it simpler to review all statements, here they are as a list of links. Voters: please take a moment to read all statements before voting! It will take some effort to rank all candidates on the ballot. We believe in you.

  1. Aayush Gauba (he/him) — St. Louis, MO
  2. Adam Hill (he/him) — Alexandria, VA
  3. Andy Woods (he/they) — UK
  4. Apoorv Garg (he/him) — India, now living in Japan
  5. Ariane Djeupang (she/her) — Cameroon
  6. Arunava Samaddar (he/him) — India
  7. Chris Achinga (he/him) — Mombasa, Kenya
  8. Dinis Vasco Chilundo (he/him) — Cidade de Inhambane, Mozambique
  9. Jacob Kaplan-Moss (he/him) — Oregon, USA
  10. Julius Nana Acheampong Boakye (he/him) — Ghana
  11. Kyagulanyi Allan (he/him) — Kampala, Uganda
  12. Nicole Buque (she) — Maputo, Mozambique
  13. Nkonga Morel (he/him) — Cameroun
  14. Ntui Raoul Ntui Njock (he/his) — Buea, Cameroon
  15. Priya Pahwa (she/her) — India, Asia
  16. Quinter Apondi Ochieng (she) — Kenya-Kisumu City
  17. Rahul Lakhanpal (he/him) — Gurugram, India
  18. Ryan Cheley (he/him) — California, United States
  19. Sanyam Khurana (he/him) — Toronto, Canada

Aayush Gauba (he/him) St. Louis, MO

View personal statement

I’m Aayush Gauba, a Django developer and Djangonaut Space mentee passionate about open source security and AI integration in web systems. I’ve spoken at DjangoCon US and actively contribute to the Django community through projects like AIWAF. My focus is on using technology to build safer and more inclusive ecosystems for developers worldwide.

Over the past few years, I’ve contributed to multiple areas of technology, ranging from web development and AI security to research in quantum-inspired computing. I’ve presented talks across these domains, including at DjangoCon US, where I spoke about AI-powered web security and community-driven innovation. Beyond Django, I’ve published academic papers exploring the intersection of ethics, quantum AI, and neural architecture design, presented at IEEE and other research venues. These experiences have helped me understand both the technical and philosophical challenges of building responsible and transparent technology. As a Djangonaut Space mentee, I’ve been on the learning side of Django’s mentorship process and have seen firsthand how inclusive guidance and collaboration can empower new contributors. I bring a perspective that connects deep research with community growth, balancing innovation with the values that make Django strong: openness, ethics, and accessibility.

As part of the DSF board, I would like to bridge the gap between experienced contributors and new voices. I believe mentorship and accessibility are key to Django’s future. I would also like to encourage discussions around responsible AI integration, web security, and community growth, ensuring Django continues to lead both technically and ethically. My goal is to help the DSF stay forward-looking while staying true to its open, supportive roots.

Adam Hill (he/him) Alexandria, VA

View personal statement

I have been a software engineer for over 20 years and have been deploying Django in production for over 10. When not writing code, I'm probably playing pinball, watching a movie, or shouting into the void on social media.

I have been working with Django in production for over 10 years at The Motley Fool where I am a Staff Engineer. I have also participated in the Djangonauts program for my Django Unicorn library, gave a talk at DjangoCon EU (virtual) and multiple lightning talks at DjangoCon US conferences, built multiple libraries for Django and Python, have a semi-regularly updated podcast about Django with my friend, Sangeeta, and just generally try to push the Django ecosystem forward in positive ways.

The key issue I would like to get involved with is updating the djangoproject.com website. The homepage itself hasn't changed substantially in over 10 years and I think Django could benefit from a fresh approach to selling itself to developers who are interested in a robust, stable web framework. I created a forum post around this here: Want to work on a homepage site redesign?. I also have a document where I lay out some detailed ideas about the homepage here: Django Homepage Redesign.

Andy Woods (he/they) UK

View personal statement

I am based in academia as a senior Creative Technologist and Psychologist. I have a PhD in Multisensory Perception. I make web apps and love combining new technologies. I’ve worked in academia (Sheffield, Dublin, Bangor, Manchester, Royal Holloway) and industry (Unilever, NL), and founded three technology-based startups. I am proud of my neurodiversity.

I was on the review team of DjangoCon Europe 2021. I have had several blog posts included in the Django Newsletter (e.g. django htmx modal popup loveliness). I have written a scientific article on using Django for academic research (under peer review). I have several projects mentioned on Django Packages, e.g. the MrBenn Toolbar Plugin. I am part of a cohort of people, started by Michael Kennedy, who regularly meet to discuss Python-based software they are developing in the context of startups. Here is an example of an open-source Django-based project I am developing there: Tootology.

I am keen on strengthening the link between Django and the academic community. Django has enormous potential as a research and teaching tool, but we academics don’t know about this! I would like to make amends by advocating for members of our community to appear on academic podcasts and social media platforms to promote Django’s versatility, and to reach new audiences.

In my professional life, I lead work on Equality, Diversity, and Inclusion, and am committed to creating fair and supportive environments. I will bring this to the DSF. The Django community already takes great strides in this area, and I would like to build upon this progress. Python recently turned down a $1.5 million grant, which I feel exemplifies the awesomeness of the greater community we are a part of.

Apoorv Garg (he/him) India, now living in Japan

View personal statement

I’m Apoorv Garg, a Django Software Foundation Member and open source advocate. I actively organize and volunteer for community events around Django, Grafana, and Postgres. Professionally, I work as a software engineer at a startup, focusing on building scalable backend systems and developer tools. I’m also part of the Google Summer of Code working groups with Django and JdeRobot, contributing to mentorship and open source development for over four years.

I have been actively speaking at various tech communities including Python, FOSSASIA, Django, Grafana, and Postgres. Over time, I’ve gradually shifted from just speaking to also organizing and volunteering at these community events, helping others get involved and build connections around open source technologies.

Beyond work, I’ve been mentoring students through Google Summer of Code with Django and JdeRobot. I also teach high school students the fundamentals of Python, Django, and robotics, helping them build curiosity and confidence in programming from an early stage.

Last year, I joined the Accessibility Working Group of the World Wide Web Consortium (W3C), which focuses on improving web accessibility standards and ensuring inclusive digital experiences for all users. My goal is to bring these learnings into the Django ecosystem, aligning its community and tools with global accessibility best practices.

Looking at the issues, I believe the opportunity of Google Summer of Code is currently very limited in Django. I know Django already has a lot of contributions, but as a core member of the JdeRobot organization, which is a small open source group, I understand the pain points we face when trying to reach that level of contribution. The way we utilize GSoC in JdeRobot has helped us grow, improve productivity, and bring in long-term contributors. I believe Django can benefit from adopting a similar approach.

Funding is another major issue faced by almost every open source organization. There are continuous needs around managing grants for conferences, supporting local communities and fellows, and sponsoring initiatives that strengthen the ecosystem. Finding sustainable ways to handle these challenges is something I want to focus on.

I also plan to promote Django across different open source programs. In my opinion, Django should not be limited to Python or Django-focused events. It can and should have a presence in database and infrastructure communities such as Postgres, Grafana, FOSSASIA, and W3C conferences around the world. This can help connect Django with new audiences and create more collaboration opportunities.

Ariane Djeupang (she/her) Cameroon

View personal statement

I’m Ariane Djeupang from Cameroon (Central Africa), an ML Engineer, Project Manager, and Community Organizer passionate about building sustainable, inclusive tech ecosystems across Africa. As a Microsoft MVP in the Developer Technologies category, an active DSF member, and a leader in open source communities, I believe in the power of collaboration, documentation, and mentorship to unlock global impact.

My efforts focus on lowering the barriers to meaningful participation. My work sits at the intersection of production engineering, clear technical communication, and community building. I’ve spent years building production-ready ML systems with Django, FastAPI, Docker, and cloud platforms, while also ensuring that the knowledge behind those systems is accessible to others. I’ve turned complex workflows into approachable, accessible guides and workshops that empower others to build confidently. I’ve also collaborated with global networks to promote ethical ML/AI and sustainable tech infrastructure in resource-constrained environments.

Through my extensive experience organizing major events like DjangoCon Africa, UbuCon Africa, PyCon Africa, DjangoCon US, and EuroPython, I’ve created inclusive spaces where underrepresented voices lead, thrive, and are celebrated. This has equipped me with the skills and insights needed to drive inclusivity, sustainability, and community engagement. I volunteer on both the DSF’s CoC working group and the D&I working group (as Chair). I also contribute to the scientific community through projects like NumPy, Pandas, SciPy, and the DISCOVER COOKBOOK (under NumFOCUS’ DISC Program).

I am the very first female Cameroonian to be awarded Microsoft MVP, a recognition that reflects years of consistent contribution, technical excellence, and community impact. The program connects me with a global network that I actively leverage to bring visibility, resources, and opportunities back to the Django and Python communities, bridging local initiatives with global platforms to amplify Django’s reach and relevance. It demonstrates that my work is recognized at the highest levels of the industry.

As a young Black African woman in STEM from a region of Africa with pretty limited resources and an active DSF member, I’ve dedicated my career to fostering inclusivity and representation in the tech and scientific spaces and I am confident that I bring a unique perspective to the table.

I will push the DSF to be more than a steward of code, to be a catalyst for global belonging. My priorities are:

  • Radical inclusion: I'll work to expand resources and support for contributors from underrepresented regions, especially in Africa, Latin America, and Southeast Asia. This includes funding for local events, mentorship pipelines, and multilingual documentation sprints.
  • Sustainable community infrastructure: I’ll advocate for sustainable models of community leadership, ones that recognize invisible labor, prevent burnout, and promote distributed governance. We need to rethink how we support organizers, maintainers, and contributors beyond code.
  • Ethical tech advocacy: I’ll help the DSF navigate the ethical dimensions of Django’s growing role in AI and data-driven systems. From privacy to fairness, Django can lead by example. And I’ll work to ensure our framework reflects our values.
  • Global partnerships: I want to strengthen partnerships with regional communities and allied open-source foundations, ensuring Django’s growth is global and socially conscious.

I will bring diversity, a young and energized spirit that I think most senior boards lack. My vision is for the DSF to not only maintain Django but to set the standard for inclusive, ethical, and sustainable open source. My goal is simple: to make Django the most welcoming, resilient, and socially conscious web framework in the world.

Arunava Samaddar (he/him) India

View personal statement

15 years of experience in Information Technology.

Skills: Microsoft technologies, Python, MongoDB, cloud technology, testing, people management and supervision, and L2 production support and maintenance.

Well experienced in software sales, product delivery, operations, Agile Scrum, and marketing.

Chris Achinga (he/him) Mombasa, Kenya

View personal statement

I am a software developer, primarily using Python and JavaScript, building web and mobile applications. At my workplace, I lead the Tech Department and the Data team.

I love developer communities and have supported emerging developers through meetups, training, and community events, including PyCon Kenya, local Django meetups, and university outreach.

At Swahilipot Hub, I built internal tools, supported digital programs, and mentored over 300 young developers through industrial attachment programs. I primarily use Django and React to develop internal tools and websites (Swahilipot Hub), including our radio station site (Swahilipot FM).

I also work with Green World Campaign Kenya on the AIRS platform, where we use AI, cloud technology, and blockchain to support environmental projects and rural communities.

Outside of engineering, I write technical content and actively organise and support developer communities along the Kenyan coast to help more young people grow into tech careers - Chris Achinga’s Articles and Written Stuff

I want to get more involved on the community side: diversity in terms of regional representation, and awareness of Django and the Django Software Foundation. While a lot of effort is already in place, there is no African entity of the DSF, which makes it difficult for companies and organizations in Africa to donate to and support the DSF. I would love to champion and pioneer work in that direction, not only for Africa but also for other under-represented geographical areas.

I wasn't so sure about this last year, but I am more confident now, with a better understanding of the Django ecosystem, and I know I have the capability to bring more contributions to Django, both financially and code-wise. I would also love to make sure that Django and its ecosystem are well known through proper communication channels. I know this differs between countries; the goal is to make sure that the DSF is present wherever it is needed. Create the feeling that Django is for everyone, everywhere!

Dinis Vasco Chilundo (he/him) Cidade de Inhambane, Mozambique

View personal statement

I am a Rural Engineer from Universidade Eduardo Mondlane with practical experience in technology, data management, telecommunications, and sustainability.

In recent years, I have worked as a trainer and coach, as well as a researcher, empowering young people with programming, digital skills, and data analysis. I have also contributed to open-source projects, promoting access to technology and remote learning in several cities across Mozambique. These experiences have strengthened my belief in the power of open-source communities to create opportunities, foster collaboration, and drive innovation in regions with limited resources.

What I want the DSF to do is expand its support for students and early-career professionals. Personally, what I want to achieve is collaboration and transparency in actions, as integrity is non-negotiable.

Jacob Kaplan-Moss (he/him) Oregon, USA

View personal statement

I was one of the original maintainers of Django, and was the original founder and first President of the DSF. I re-joined the DSF board in 2023, and have served as Treasurer since 2024. I used to be a software engineer and security consultant (REVSYS, Latacora, 18F, Heroku), before mostly retiring from tech in 2025 to become an EMT.

I've been a member of the DSF Board for 3 years, so I bring some institutional knowledge there. I've been involved in the broader Django community as long as there has been a Django community, though the level of my involvement has waxed and waned. The accomplishments I'm the most proud of in the Django community are creating our Code of Conduct (djangoproject.com/conduct/), and more recently establishing the DSF's Working Groups model (django/dsf-working-groups).

Outside of the Django community, I have about 15 years of management experience, at companies small and large (and also in the US federal government).

I'm running for re-election with three goals for the DSF: (a) hire an Executive Director, (b) build more "onramps" into the DSF and Django community, and (c) expand and update our Grants program.

Hire an ED: this is my main goal for 2026, and the major reason I'm running for re-election. The DSF has grown past the point where being entirely volunteer-run is working; we need to transition the organization towards a more professional non-profit operation, which means paid staff. Members of the Board worked on this all throughout 2025, mostly behind the scenes, and we're closer than ever, but not quite there. We need to make this happen in 2026.

Build ""onramps"": this was my main goal when I ran in 2024 (see my statement at 2024 DSF Board Candidates). We've had some success there: several Working Groups are up and running, and over on the technical side we helped the Steering Council navigate a tricky transition, and they're now headed in a more positive direction. I'm happy with our success there, but there's still work to do; helping more people get involved with the DSF and Django would continue to be a high-level goal of mine. And, I'd like to build better systems for recognition of people who contribute to the DSF/Django — there are some incredible people working behind the scenes that most of the community has heard of.

Expand and update our grants program: our grants program is heavily overdue for a refresh. I'd like to update our rules and policies, make funding decisions clearer and less ad-hoc, increase the amount of money we're giving per grant, and (funding allowing) expand to other kinds of grants (e.g. travel grants, feature grants, and more). I'd also like to explore turning over grant decisions to a Working Group (or a subcommittee of the board), to free up Board time for more strategic work.

Julius Nana Acheampong Boakye (he/him) Ghana

View personal statement

I’m a proud Individual Member of the Django Software Foundation and a full-stack software engineer with a strong focus on Django and mobile development. Beyond code, I’m deeply involved in the global Python, Django, Google, and FlutterFlow communities, actively contributing to the organization of several major conferences around the world.

I am a passionate full-stack software engineer with a strong focus on Django and mobile development. Over the years, I’ve contributed to the global Python and Django communities through volunteering, organizing, and speaking. I served as the Opportunity Grant Co-Chair for DjangoCon US (2024 & 2025), where I helped ensure accessibility and inclusion for underrepresented groups. I also helped organise DjangoCon Europe, where my impact was felt (see LinkedIn post).

I was also the Design Lead for PyCon Africa 2024 and PyCon Ghana 2025, where I worked on everything design-related to make the conference feel like home (see LinkedIn post), and I helped organise other regional events, including DjangoCon Africa, PyCon Namibia, and PyCon Portugal. Beyond organising, I’ve spoken at several local and international conferences, sharing knowledge and promoting community growth, including PyCon Africa, DjangoCon Africa, PyCon Nigeria, and PyCon Togo.

I’m also an Individual Member of the Django Software Foundation, and my work continues to center on empowering developers, building open communities, and improving access for newcomers in tech.

As a board member, I want to help strengthen Django’s global community by improving accessibility, diversity, and engagement, especially across regions where Django adoption is growing but still lacks strong community infrastructure, such as Africa and other underrepresented areas.

My experience as Opportunity Grant Co-Chair for DjangoCon US and Design Lead for PyCon Africa has shown me how powerful community-driven support can be when it’s backed by inclusion and transparency. I want the DSF to continue building bridges between developers, organizers, and contributors, making sure that everyone, regardless of location or background, feels seen and supported.

I believe the DSF can take a more active role in empowering local communities, improving mentorship pathways, and creating better visibility for contributors who work behind the scenes. I also want to support initiatives that make Django more approachable to new developers through clearer learning materials and global outreach programs.

Personally, I want to help the DSF improve communication with international communities, expand partnerships with educational programs and tech organizations, and ensure the next generation of developers see Django as not just a framework, but a welcoming and sustainable ecosystem.

My direction for leadership is guided by collaboration, empathy, and practical action: building on Django’s strong foundation while helping it evolve for the future.

Kyagulanyi Allan (he/him) Kampala, Uganda

View personal statement

I am Kyagulanyi Allan, a software developer and co-founder at Grin Mates. Grin Mates is an eco-friendly EVM dApp with an inbuilt crypto wallet that awards Green points for verified sustainable activities. I am very excited about the potential of web3 and saddened by some other parts of it.

I am a developer, and I have been lucky to volunteer and also contribute. I worked on diverse projects like AROC and Grin Mates. I volunteered as a Google student developer lead at my university while working at After Query Experts on Project Pluto, where I used Python to train an LLM model on bash/Linux commands.

My position on key issues is advancing and advocating for inclusiveness, with priority on children from rural areas.

Nicole Buque (she) Maputo, Mozambique

View personal statement

My name is Nicole Buque, a 20-year-old finalist student in Computer Engineering from Mozambique. I am deeply passionate about data analysis, especially in the context of database systems, and I love transforming information into meaningful insights that drive innovation.

During my academic journey, I have worked with Vodacom, contributing to website development projects that improved digital communication and accessibility. I also participated in the WT Bootcamp for Data Analysis, where I gained strong analytical, technical, and teamwork skills. As an aspiring IT professional, I enjoy exploring how data, systems, and community collaboration can create sustainable solutions. My experience has helped me develop both technical expertise and a people-centered approach to technology, understanding that real progress comes from empowering others through knowledge.

Nkonga Morel (he/him) Cameroun

View personal statement

Curious, explorer, calm, patient

My experience with Django is intermediate.

My direction for the DSF is one of growth, mentorship, and openness, ensuring Django remains a leading framework not just technically, but socially.

Ntui Raoul Ntui Njock (he/his) Buea, Cameroon

View personal statement

I'm a software engineer passionate about AI/ML and solving problems in the healthcare sector in collaboration with others.

I'm a skilled software engineer in the domains of AI/ML, Django, React.js, and TailwindCSS. I have been building software for over 2 years now, and growing in this space has made some impact in the community: I have been organizing workshops at the University of Buea, teaching people about the Django framework, and I had the privilege to participate in the Deep Learning Indaba Cameroon, where I was interviewed by CRTV to share knowledge about deep learning. You can see all of this on my LinkedIn profile (Ntui Raoul).

I believe that, in collaboration with others at the DSF, I'll help the DSF improve the ways it accomplishes its goals. I believe we can improve the Django framework's codebase and its interoperability with other frameworks, making the framework easier for its users to adopt and use. I'll also help bring the Django framework to people across the world.

Priya Pahwa (she/her) India, Asia

View personal statement

I'm Priya Pahwa (she/her), an Indian woman who found both community and confidence through Django. I work as a Software Engineer (Backend and DevOps) at a fintech startup and love volunteering in community spaces. From leading student communities as a GitHub Campus Expert to contributing as a GitHub Octern and supporting initiatives in the Django ecosystem, open-source is an integral part of my journey as a developer.

My belonging to the Django community has been shaped by serving as the Session Organizer of the global Djangonaut Space program, where I work closely with contributors and mentors from diverse geographies, cultures, age groups, and both coding and non-coding backgrounds. Having been part of the organizing team for Sessions 3 and 4, and the ongoing Session 5, each experience has evolved my approach towards more intentional community stewardship and collaboration.

I have also served as Co-Chair of the DSF Fundraising Working Group since its formation in mid-2024. As we enter the execution phase, we are focused on establishing additional long-term funding streams for the DSF. I intend to continue this work by:

  • Running sustained fundraising campaigns rather than one-off appeals
  • Building corporate sponsorship relationships for major donations
  • Focusing on the funding of the Executive Director for financial resilience

My commitment to a supportive ecosystem guides my work. I am a strong advocate of psychological safety in open-source, a topic I've publicly talked about (“Culture Eats Strategy for Breakfast” at PyCon Greece and DjangoCongress Japan). This belief led me to join the DSF Code of Conduct Working Group because the health of a community is determined not only by who joins, but by who feels able to stay.

If elected to the board, I will focus on:

  • Moving the fundraising WG from “effort” to infrastructure (already moving in that direction by forming the DSF prospectus)
  • Initiating conference travel grants to lower barriers and increase participation for active community members
  • Strengthening cross-functional working groups' collaboration to reduce organizational silos
  • Designing inclusive contributor lifecycles to support pauses for caregiving or career breaks
  • Highlighting diverse user stories and clearer “here’s how to get involved” community pathways
  • Amplifying DSF’s public presence and impact through digital marketing strategies

Quinter Apondi Ochieng (she) Kenya-Kisumu City

View personal statement

My name is Quinter Apondi Ochieng, and I am a web developer from Kisumu City. Django has been part of my professional development journey for the past two years. I have contributed to local meetups as a community leader and developed several websites, one being an e-commerce website. I also organized a Django Girls Kisumu workshop, which didn't come to fruition due to financial constraints; it was to take place on 1st November but was postponed.

In my current position, I lead a small team building Django-based applications. I have also volunteered as a Python Kisumu community committee member, serving on a passion-driven non-profit tech board. The experience has strengthened my skills in collaboration, decision making, long-term project planning, and governance. I understand how important it is for the DSF to balance technical progress with sustainability and transparency.

The challenge I can help address is limited mentorship and unemployment. It has always blown my mind why IT, computer science, and SWE graduates struggle after campus life. In my country, SWE, IT, and computer science courses have final-year projects that students pass but that are never presented to any educational institute. I believe that if those projects were shipped, unemployment would be cut by over 50%.

Rahul Lakhanpal (he/him) Gurugram, India

View personal statement

I am a software architect based out of Gurugram, India, with over 13 years in the field of software development. For the past 8 years, I have been working 100% remotely as an independent contractor under my own company, deskmonte.

As a kid I was always the one breaking more toys than I played with and was super curious. Coming from a normal family background, we always had a focus on academics. Although I did not break into the top tier colleges, the intent and curiosity to learn more stayed.

As of now, I am happily married with a year-old kid.

My skills are primarily Python and Django; I have been using the same tech stack for the last decade. I have used it to create beautiful admin interfaces for my clients and have written APIs in both REST (using the django rest framework package) and GraphQL (using django-graphene). Alongside, I have almost always integrated Postgres and Celery+Redis with my core tech stack.

In terms of volunteering, I have been an active code mentor at Code Institute, Ireland and have been with them since 2019, helping students pick up code using Python and Django for the most part.

I love the django rest framework and I truly believe that the admin interface is extremely powerful and the utility of the overall offering is huge.

I would love to take Django to people who are just starting out, support and promote more meetups/conferences focused on Django, and advance Django's utility in the age of AI.

Ryan Cheley (he/him) California, United States

View personal statement

I'm Ryan and I’m running for the DSF Board in the hopes of being the Treasurer. I've been using Django since 2018. After several years of use, I finally had a chance to attend DjangoCon US in 2022. I felt like I finally found a community where I belonged and knew that I wanted to do whatever I could to give back.

My involvement with the community over the last several years includes being a:

If elected to the board, I would bring valuable skills to benefit the community, including:

  • Managing technical teams for nearly 15 years
  • Nearly 20 years of project management experience
  • Overseeing the financial operations for a team of nearly 30
  • Consensus-building on large projects

I'm particularly drawn to the treasurer role because my background in financial management and budgeting positions me to help ensure the DSF's continued financial health and transparency.

For more details on my implementation plan, see my blog post Details on My Candidate Statement for the DSF.

If elected to the DSF Board I have a few key initiatives I'd like to work on:

  1. Getting an Executive Director to help run the day-to-day operations of the DSF
  2. Identifying small to midsized companies for sponsorships
  3. Implementing a formal strategic planning process
  4. Setting up a fiscal sponsorship program to allow support of initiatives like Django Commons

I believe these are achievable in the next 2 years.

Sanyam Khurana (he/him) Toronto, Canada

View personal statement

I’m Sanyam Khurana (“CuriousLearner”), a seasoned Django contributor and member of the djangoproject.com Website Working Group, as well as a CPython bug triager and OSS maintainer. I’ve worked in India, the U.K., and Canada, and I’m focused on inclusion, dependable tooling, and turning first-time contributors into regulars.

I’ve contributed to Django and the wider Python ecosystem for years as a maintainer, reviewer, and issue triager. My Django-focused work includes django-phone-verify (auth flows), django-postgres-anonymizer (privacy/data handling), and Django-Keel (a production-ready project template). I also build developer tooling like CacheSniper (a tiny Rust CLI to sanity-check edge caching).

Repos: django-phone-verify, django-postgres-anonymizer, django-keel, cache_sniper

CPython & Django contributions: django commits, djangoproject.com commits, CPython commits

Beyond code, I’ve supported newcomers through docs-first guidance, small PR reviews, and patient issue triage. I’m a CPython bug triager and listed in Mozilla credits, which taught me to balance openness with careful review and clear process. I’ve collaborated across India, the UK, and Canada, so I’m used to async work, time zones, and transparent communication.

I owe my learnings to the community and want to give back. I understand the DSF Board is non-technical leadership: fundraising, grants/sponsorships, community programs, CoC support, and stewardship of Django’s operations, not deciding framework features. That’s exactly where I want to contribute.

I’ll push for an easy, skimmable annual “Where your donation went” report (fellows, events, grants, infra) plus lightweight quarterly updates. Clear storytelling helps retain individual and corporate sponsors and shows impact beyond core commits.

I want to grow contributors globally by turning first PRs into regular contributions. I want to make this path smoother by funding micro-grants for mentorship/sprints and backing working groups with small, delegated budgets under clear guardrails, so they can move fast without waiting on the Board.

I propose a ready-to-use “starter kit” for meetups/sprints: budget templates, venue ask letters, a CoC, diversity travel-grant boilerplates, and a sponsor prospectus. We should prioritize regions with high Django usage but fewer historic DSF touchpoints (South Asia, Africa, LATAM). This comes directly from organizing over 120 meetups and an annual conference, PyCon India, for 3 years.

Your move now

That’s it, you’ve read it all 🌈! Be sure to vote if you’re eligible, by using the link shared over email. To support the future of Django, donate to the Django Software Foundation on our website or via GitHub Sponsors. We also have our 30% Off PyCharm Pro – 100% for Django 💚.

November 06, 2025 05:00 AM UTC