
Planet Python

Last update: January 20, 2026 09:44 PM UTC

January 20, 2026


PyCoder’s Weekly

Issue #718: pandas 3.0, deque, tprof, and More (Jan. 20, 2026)

#718 – JANUARY 20, 2026
View in Browser »



What’s New in pandas 3.0

Learn what’s new in pandas 3.0: pd.col expressions for cleaner code, Copy-on-Write for predictable behavior, and PyArrow-backed strings for 5-10x faster operations.
CODECUT.AI • Shared by Khuyen Tran
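
As a rough, hedged sketch of the new expression style (the exact pd.col semantics are covered in the article and the pandas 3.0 release notes; the calls below are illustrative assumptions, not the article's code):

import pandas as pd

df = pd.DataFrame({"price": [10.0, 25.0, 40.0], "qty": [3, 1, 2]})

# Hypothetical pd.col usage (pandas 3.0+); the exact accepted contexts may differ.
totals = df.assign(total=pd.col("price") * pd.col("qty"))
print(totals)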

Python’s deque: Implement Efficient Queues and Stacks

Use a Python deque to efficiently append and pop elements from both ends of a sequence, build queues and stacks, and set maxlen for history buffers.
REAL PYTHON
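
A quick sketch of both uses mentioned above:

from collections import deque

# FIFO queue: append on the right, pop from the left -- both are O(1).
queue = deque()
queue.append("first")
queue.append("second")
print(queue.popleft())  # first

# Bounded history buffer: maxlen silently discards the oldest entry.
history = deque(maxlen=3)
for page in ["home", "docs", "blog", "about"]:
    history.append(page)
print(history)  # deque(['docs', 'blog', 'about'], maxlen=3)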

B2B Authentication for any Situation - Fully Managed or BYO


What your sales team needs to close deals: multi-tenancy, SAML, SSO, SCIM provisioning, passkeys…What you’d rather be doing: almost anything else. PropelAuth does it all for you, at every stage. →
PROPELAUTH sponsor

Introducing tprof, a Targeting Profiler

Adam has written tprof, a targeting profiler for Python 3.12+. This article introduces you to the tool and explains why he wrote it.
ADAM JOHNSON

Python 3.15.0 Alpha 4 Released

CPYTHON DEV BLOG

Articles & Tutorials

Anthropic Invests $1.5M in the PSF

Anthropic has entered a two-year partnership with the PSF, contributing $1.5 million. The investment will focus on Python ecosystem security including advances to CPython and PyPI.
PYTHON SOFTWARE FOUNDATION

The Coolest Feature in Python 3.14

Savannah has written a debugging tool called debugwand that helps you access Python applications running in Kubernetes and Docker containers using Python 3.14's sys.remote_exec() function.
SAVANNAH OSTROWSKI

AI Code Review with Comments You’ll Actually Implement


Unblocked is the AI code review that surfaces real issues and meaningful feedback instead of flooding your PRs with stylistic nitpicks and low-value comments. “Finally, a tool that surfaces context only someone with a full view of the codebase could provide.” - Senior developer, Clio →
UNBLOCKED sponsor

Avoiding Duplicate Objects in Django Querysets

When filtering Django querysets across relationships, you can easily end up with duplicate objects in your results. Learn why this happens and the best ways to avoid it.
JOHNNY METZ
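
The usual culprit is a join that multiplies rows. One common fix (a sketch against hypothetical Author and Book models, not necessarily the approach the article settles on) is to deduplicate with .distinct():

# Filtering across a one-to-many relationship can return the same Author
# once per matching Book, so deduplicate the queryset.
authors = (
    Author.objects
    .filter(book__published_year__gte=2020)
    .distinct()
)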

diskcache: Your Secret Python Perf Weapon

Talk Python interviews Vincent Warmerdam and they discuss DiskCache, an SQLite-based caching mechanism that doesn’t require you to spin up extra services like Redis.
TALK PYTHON podcast

How to Create a Django Project

Learn how to create a Django project and app in clear, guided steps. Use it as a reference for any future Django project and tutorial you’ll work on.
REAL PYTHON

Get Job-Ready With Live Python Training

Real Python’s 2026 cohorts are open. Python for Beginners teaches fundamentals the way professional developers actually use them. Intermediate Python Deep Dive goes deeper into decorators, clean OOP, and Python’s object model. Live instruction, real projects, expert feedback. Learn more at realpython.com/live →
REAL PYTHON sponsor

Quiz: How to Create a Django Project

REAL PYTHON

Intro to Object-Oriented Programming (OOP) in Python

Learn Python OOP fundamentals fast: master classes, objects, and constructors with hands-on lessons in this beginner-friendly video course.
REAL PYTHON course

Fun With Mypy: Reifying Runtime Relations on Types

This post describes how to implement a safer version of typing.cast which guarantees a cast type is also an appropriate sub-type.
LANGSTON BARRETT
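
As a rough illustration of the idea (my own simplified sketch, not the author's implementation), a runtime-checked cast can verify the value before narrowing the type:

from typing import TypeVar

T = TypeVar("T")

def checked_cast(typ: type[T], value: object) -> T:
    # Unlike typing.cast, this fails loudly if the value is not an instance of the target type.
    if not isinstance(value, typ):
        raise TypeError(f"expected {typ.__name__}, got {type(value).__name__}")
    return value

x: object = 42
n = checked_cast(int, x)  # n is an int for both the type checker and at runtime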

How to Type Hint a Decorator in Python

Writing a decorator itself can be a little tricky, but adding type hints makes it a little harder. This article shows you how.
MIKE DRISCOLL
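
One common pattern (Python 3.10+, not necessarily the exact one from the article) uses ParamSpec so the wrapper preserves the decorated function's signature for type checkers:

import functools
import time
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def timed(func: Callable[P, R]) -> Callable[P, R]:
    @functools.wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            # Report how long the wrapped call took.
            print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
    return wrapper

@timed
def add(x: int, y: int) -> int:
    return x + y

print(add(2, 3))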

How to Integrate ChatGPT’s API With Python Projects

Learn how to use the ChatGPT Python API with the openai library to build AI-powered features in your Python applications.
REAL PYTHON
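
A minimal hedged sketch of the kind of call the tutorial covers, assuming the openai package is installed, an OPENAI_API_KEY environment variable is set, and a model name that may differ from the one used in the article:

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice of model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain Python's GIL in one sentence."},
    ],
)
print(response.choices[0].message.content)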

Quiz: How to Integrate ChatGPT’s API With Python Projects

REAL PYTHON

Raw String Literals in Python

Exploring the pitfalls of raw string literals in Python and why backslash can still escape some things in raw mode.
SUBSTACK.COM • Shared by Vivis Dev
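
For example:

# Backslashes are kept literally, so r"\n" is two characters, not a newline.
print(len(r"\n"))        # 2
print(r"\d+\.\d+")       # \d+\.\d+

# But a raw string still cannot end with an odd number of backslashes:
# path = r"C:\data\"     # SyntaxError -- the final backslash escapes the closing quote
path = r"C:\data" + "\\"  # one workaround
print(path)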

Need a Constant in Python? Enums Can Come in Useful

Python doesn’t have constants, but it does have enums. Learn when you might want to use them in your code.
STEPHEN GRUPPETTA
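
A small example of the pattern:

from enum import Enum

class Status(Enum):
    # Named, grouped constants that cannot be reassigned.
    PENDING = "pending"
    ACTIVE = "active"
    CLOSED = "closed"

print(Status.ACTIVE.value)   # active
print(Status("closed"))      # Status.CLOSED -- lookup by value
# Status.ACTIVE = "oops"     # AttributeError: cannot reassign member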

Projects & Code

usqlite: μSQLite Library Module for MicroPython

GITHUB.COM/SPATIALDUDE

transtractor-lib: PDF Bank Statement Extraction

GITHUB.COM/TRANSTRACTOR

sharepoint-to-text: Sharepoint to Text

GITHUB.COM/HORSMANN

graphqlite: Graph Database SQLite Extension

GITHUB.COM/COLLIERY-IO

chanx: WebSocket Framework for Django Channels, FastAPI, and ASGI-based Applications

GITHUB.COM/HUYNGUYENGL99

Events

Weekly Real Python Office Hours Q&A (Virtual)

January 21, 2026
REALPYTHON.COM

Python Leiden User Group

January 22, 2026
PYTHONLEIDEN.NL

PyDelhi User Group Meetup

January 24, 2026
MEETUP.COM

PyLadies Amsterdam: Robotics Beginner Class With MicroPython

January 27, 2026
MEETUP.COM

Python Sheffield

January 27, 2026
GOOGLE.COM

Python Southwest Florida (PySWFL)

January 28, 2026
MEETUP.COM


Happy Pythoning!
This was PyCoder’s Weekly Issue #718.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

January 20, 2026 07:30 PM UTC


Python Software Foundation

Announcing Python Software Foundation Fellow Members for Q4 2025! 🎉

 

The PSF is pleased to announce its fourth batch of PSF Fellows for 2025! Let us welcome the new PSF Fellows for Q4! The following people continue to do amazing things for the Python community:

Chris Brousseau

Website, LinkedIn, GitHub, Mastodon, X, PyBay, PyBay GitHub

Dave Forgac

Website, Mastodon, GitHub, LinkedIn

Inessa Pawson

GitHub, LinkedIn

James Abel

Website, LinkedIn, GitHub, Bluesky

Karen Dalton

Website

Mia Bajić

Website

Tatiana Andrea Delgadillo Garzofino

Website

Thank you for your continued contributions. We have added you to our Fellows Roster.

The above members help support the Python ecosystem by being phenomenal leaders, sustaining the growth of the Python scientific community, maintaining virtual Python communities, maintaining Python libraries, creating educational material, organizing Python events and conferences, starting Python communities in local regions, and overall being great mentors in our community. Each of them continues to help make Python more accessible around the world. To learn more about the new Fellow members, check out their links above.

Let's continue recognizing Pythonistas all over the world for their impact on our community. The criteria for Fellow membership are available on our PSF Fellow Membership page. If you would like to nominate someone to be a PSF Fellow, please send a description of their Python accomplishments and their email address to psf-fellow at python.org. We are accepting nominations for Quarter 1 of 2026 through February 20th, 2026.

Are you a PSF Fellow and want to help the Work Group review nominations? Contact us at psf-fellow at python.org.

January 20, 2026 02:49 PM UTC


Real Python

uv vs pip: Python Packaging and Dependency Management

When it comes to Python package managers, the choice often comes down to uv vs pip. You may choose pip for out-of-the-box availability, broad compatibility, and reliable ecosystem support. In contrast, uv is worth considering if you prioritize fast installs, reproducible environments, and clean uninstall behavior, or if you want to streamline workflows for new projects.

In this video course, you’ll compare both tools. To keep this comparison meaningful, you’ll focus on the overlapping features, primarily package installation and dependency management.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

January 20, 2026 02:00 PM UTC


PyCharm

Why Is Python So Popular in 2025?

While other programming languages come and go, Python has stood the test of time and firmly established itself as a top choice for developers of all levels, from beginners to seasoned professionals.

Whether you’re working on intelligent systems or data-driven workflows, Python has a pivotal role to play in how your software is built, scaled, and optimized.

Many surveys, including our Developer Ecosystem Survey 2025, confirm Python’s continued popularity. The real question is why developers keep choosing it, and that’s what we’ll explore. 

Whether you’re choosing your first language or building production-scale services, this post will walk you through why Python remains a top choice for developers.

How popular is Python in 2025?

In our Developer Ecosystem Survey 2025, Python ranks as the second most-used programming language in the last 12 months, with 57% of developers reporting that they use it.

More than a third (34%) said Python is their primary programming language. This places it ahead of JavaScript, Java, and TypeScript in terms of primary use. It’s also performing well despite fierce competition from newer systems and niche domain tools.

These stats tell a story of sustained relevance across diverse developer segments, from seasoned backend engineers to first-time data analysts.

This continued success is down to Python’s ability to grow with you. It doesn’t just serve as a first step; it continues adding value in advanced environments as you gain skills and experience throughout your career.

Let’s explore why Python remains a popular choice in 2025.

1. Dominance in AI and machine learning

Our recently released report, The State of Python 2025, shows that 41% of Python developers use the language specifically for machine learning.

This is because Python drives innovation in areas like natural language processing, computer vision, and recommendation systems.

Python’s strength in this area comes from the fact that it offers support at every stage of the process, from prototyping to production. It also integrates into machine learning operations (MLOps) pipelines with minimal friction and high flexibility.

One of the most significant reasons for Python’s popularity is its syntax, which is expressive, readable, and dynamic. This allows developers to write training loops, manipulate tensors, and orchestrate workflows without boilerplate friction. 

However, it’s Python’s ecosystem that makes it indispensable.

Core frameworks include TensorFlow, PyTorch, scikit-learn, and Keras.

These frameworks are mature, well-documented, and interoperable, benefitting from rapid open-source development and extensive community contributions. They support everything from GPU acceleration and distributed training to model export and quantization.

Python also integrates cleanly across the machine learning (ML) pipeline, from data preprocessing with pandas and NumPy to model serving via FastAPI or Flask to inference serving for LLMs with vLLM.

It all comes together to provide a solution that allows you to deliver a working AI solution without ever really having to work outside Python.

2. Strength in data science and analytics

From analytics dashboards to ETL scripts, Python’s flexibility drives fast, interpretable insights across industries. It’s particularly adept at handling complex data, such as time-series analyses. 

The State of Python 2025 reveals that 51% of respondents are involved in data exploration and processing.

Core libraries such as pandas, NumPy, Matplotlib, Plotly, and Jupyter Notebook form a mature ecosystem that’s supported by strong documentation and active community development.

Python offers a unique balance. It’s accessible enough for non-engineers, but powerful enough for production-grade pipelines. It also integrates with cloud platforms, supports multiple data formats, and works seamlessly with SQL and NoSQL data stores.

3. Syntax that’s simple and scalable

Python’s most visible strength remains its readability. Developers routinely cite Python’s low barrier to entry and clean syntax as reasons for initial adoption and longer-term loyalty. In Python, even model training syntax reads like plain English:

def train(model):
    for item in model.data:
        model.learn(item)

Code snippets like this require no special decoding. That clarity isn’t just beginner-friendly; it also lowers maintenance costs, shortens onboarding time, and improves communication across mixed-skill teams.

This readability brings practical advantages. Teams spend less time deciphering logic and more time improving functionality. Bugs surface faster. Reviews run more smoothly. And non-developers can often read Python scripts without assistance.

The State of Python 2025 revealed that 50% of respondents had less than two years of total coding experience. Over a third (39%) had been coding in Python for two years or less, even in hobbyist or educational settings.

This is where Python really stands out. Though its simple syntax makes it an ideal entry point for new coders, it scales with users, which means retention rates remain high. As projects grow in complexity, Python’s simplicity becomes a strength, not a limitation.

Add to this the fact that Python supports multiple programming paradigms (procedural, object-oriented, and functional), and it becomes clear why readability is important. It’s what enables developers to move between approaches without friction.

4. A mature and versatile ecosystem

Python’s power lies in its vast network of libraries that span nearly every domain of modern software development.

Our survey shows that developers rely on Python for everything from web applications and API integration to data science, automation, and testing. 

Its deep, actively maintained toolset means you can use Python at all stages of production.

Here’s a snapshot of Python’s core domains and the main libraries developers reach for:

Web development: Django, Flask, FastAPI
AI and ML: TensorFlow, PyTorch, scikit-learn, Keras
Testing: pytest, unittest, Hypothesis
Automation: Click, APScheduler, Rich
Data science: pandas, NumPy, Plotly, Matplotlib

This breadth translates to real-world agility. Developers can move between back-end APIs and machine learning pipelines without changing language or tooling. They can prototype with high-level wrappers and drop to lower-level control when needed.

Critically, Python’s packaging and dependency management systems like pip, conda, and poetry support modular development and reproducible environments. Combined with frameworks like FastAPI for APIs, pytest for testing, and pandas for data handling, Python offers unrivaled scalability.

5. Community support and shared knowledge

Python’s enduring popularity owes much to its global, engaged developer community.

From individual learners to enterprise teams, Python users benefit from open forums, high-quality tutorials, and a strong culture of mentorship. The community isn't just helpful; it's fast-moving and inclusive, fostering a welcoming environment for developers of all levels.

Key pillars include open forums and Q&A sites, well-maintained documentation and tutorials, conferences and local meetups, and active mentorship.

This network doesn’t just solve problems; it also shapes the language’s evolution. Python’s ecosystem is sustained by collaboration, continual refinement, and shared best practices.

When you choose Python, you tap into a knowledge base that grows with the language and with you over time.

6. Cross-domain versatility

Python’s reach is not limited to AI and ML or data science and analytics. It’s equally at home in automation, scripting, web APIs, data workflows, and systems engineering. Its ability to move seamlessly across platforms, domains, and deployment targets makes it the default language for multipurpose development.

The State of Python 2025 shows just how broadly developers rely on Python:

Data analysis: 48%
Web development: 46%
Machine learning: 41%
Data engineering: 31%
Academic research: 27%
DevOps and systems administration: 26%

That spread illustrates Python’s domain elasticity. The same language that powers model training can also automate payroll tasks, control scientific instruments, or serve REST endpoints. Developers can consolidate tools, reduce context-switching, and streamline team workflows.

Python’s platform independence (Windows, Linux, macOS, cloud, and browser) reinforces this versatility. Add in a robust packaging ecosystem and consistent cross-library standards, and the result is a language equally suited to both rapid prototyping and enterprise production.

Few languages match Python’s reach, and fewer still offer such seamless continuity. From frontend interfaces to backend logic, Python gives developers one cohesive environment to build and ship full solutions.

That completeness is part of the reason people stick with it. Once you’re in, you rarely need to reach for anything else.

Python in the age of intelligent development

As software becomes more adaptive, predictive, and intelligent, Python is strongly positioned to retain its popularity. 

Its abilities in areas like AI, ML, and data handling, as well as its mature libraries, make it a strong choice for systems that evolve over time.

Python’s popularity comes from its ability to easily scale across your projects and platforms. It continues to be a great choice for developers of all experience levels and across projects of all sizes, from casual automation scripts to enterprise AI platforms.

And when you pair it with PyCharm, working with Python becomes even faster, cleaner, and more intelligent.

For a deeper dive, check out The State of Python 2025 by Michael Kennedy, Python expert and host of the Talk Python to Me podcast. 

Michael analyzed over 30,000 responses from our Python Developers Survey 2024, uncovering fascinating insights and identifying the latest trends.

Whether you’re a beginner or seasoned developer, The State of Python 2025 will give you the inside track on where the language is now and where it’s headed. 

As tools like Astral’s uv show, Python’s evolution is far from over, despite its relative maturity. With a growing ecosystem and proven staying power, it’s well-positioned to remain a popular choice for developers for years to come.

January 20, 2026 01:40 PM UTC

Whether you’re building APIs, dashboards, or machine learning pipelines, choosing the right framework can make or break your project.

Every year, we survey thousands of Python developers to help you understand how the ecosystem is evolving, from tooling and languages to frameworks and libraries. Our insights from the State of Python 2025 offer a snapshot of what frameworks developers are using in 2025.

In this article, we’ll look at the most popular Python frameworks and libraries. While some long-standing favorites like Django and Flask remain strong, newer contenders like FastAPI are rapidly gaining ground in areas like AI, ML, and data science.

1. FastAPI

2024 usage: 38% (+9% from 2023)

Top of the table is FastAPI, a modern, high-performance web framework for building APIs with Python 3.8+. It was designed to combine Python’s type hinting, asynchronous programming, and OpenAPI standards into a single, developer-friendly package. 

Built on top of Starlette (for the web layer) and Pydantic (for data validation), FastAPI offers automatic request validation, serialization, and interactive documentation, all with minimal boilerplate.

FastAPI is ideal for teams prioritizing speed, simplicity, and standards. It’s especially popular among both web developers and data scientists.
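
A minimal sketch of the style described above (my own example, not taken from the survey article):

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
async def create_item(item: Item) -> dict:
    # The request body is validated against Item automatically; bad input returns a 422.
    return {"name": item.name, "price_with_tax": item.price * 1.2}

# Run with `uvicorn main:app --reload`, then open /docs for the interactive documentation.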

FastAPI advantages

FastAPI disadvantages

2. Django

2024 usage: 35% (+2% from 2023)

Django once again ranks among the most popular Python frameworks for developers.

Originally built for rapid development with built-in security and structure, Django has since evolved into a full-stack toolkit. It’s trusted for everything from content-heavy websites to data science dashboards and ML-powered services.

It follows the model-template-view (MTV) pattern and comes with built-in tools for routing, data access, and user management. This allows teams to move from idea to deployment with minimal setup.

Django advantages

Django disadvantages

3. Flask

2024 usage: 34% (+1% from 2023)

Flask is one of the most popular Python frameworks for small apps, APIs, and data science dashboards. 

It is a lightweight, unopinionated web framework that gives you full control over application architecture. Flask is classified as a “microframework” because it doesn’t enforce any particular project structure or include built-in tools like ORM or form validation.

Instead, it provides a simple core and lets you add only what you need. Flask is built on top of Werkzeug (a WSGI utility library) and Jinja2 (a templating engine). It’s known for its clean syntax, intuitive routing, and flexibility.

It scales well when paired with extensions like SQLAlchemy, Flask-Login, or Flask-RESTful. 
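
For comparison, a minimal Flask app (a sketch) looks like this:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/ping")
def ping():
    # A single route is all the structure Flask imposes; everything else is opt-in.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(debug=True)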

Flask advantages

Flask disadvantages

4. Requests

2024 usage: 33% (+3% from 2023)

Requests isn't a web framework; it's a Python library for making HTTP requests. Even so, its influence on the Python ecosystem is hard to overstate. It's one of the most downloaded packages on PyPI and is used in everything from web scraping scripts to production-grade microservices.

Requests is often paired with frameworks like Flask or FastAPI to handle outbound HTTP calls. It abstracts away the complexity of raw sockets and urllib, offering a clean, Pythonic interface for sending and receiving data over the web.
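
For example, fetching and decoding JSON takes a few lines:

import requests

# One call handles the connection, redirects, and response decoding.
response = requests.get("https://api.github.com/repos/psf/requests", timeout=10)
response.raise_for_status()
print(response.json()["stargazers_count"])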

Requests advantages

Requests disadvantages

5. Asyncio

2024 usage: 23% (+3% from 2023)

Asyncio is Python’s native library for asynchronous programming. It underpins many modern async frameworks and enables developers to write non-blocking code using coroutines, event loops, and async/await syntax.

While not a web framework itself, Asyncio excels at handling I/O-bound tasks such as network requests and subprocesses. It’s often used behind the scenes, but remains a powerful tool for building custom async workflows or integrating with low-level protocols.
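
A small sketch of the coroutine and event-loop model:

import asyncio

async def fetch(name: str, delay: float) -> str:
    # Simulates an I/O-bound call; the event loop runs other coroutines while this one waits.
    await asyncio.sleep(delay)
    return f"{name} finished after {delay}s"

async def main() -> None:
    results = await asyncio.gather(fetch("a", 1.0), fetch("b", 1.5))
    print(results)

asyncio.run(main())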

Asyncio advantages

Asyncio disadvantages

6. Django REST Framework

2024 usage: 20% (+2% from 2023)

Django REST Framework (DRF) is the most widely used extension for building APIs on top of Django. It provides a powerful, flexible toolkit for serializing data, managing permissions, and exposing RESTful endpoints – all while staying tightly integrated with Django’s core components.

DRF is especially popular in enterprise and backend-heavy applications where teams are already using Django and want to expose a clean, scalable API without switching stacks. It’s also known for its browsable API interface, which makes testing and debugging endpoints much easier during development.
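
As a sketch of how little code a CRUD API takes (assuming a hypothetical Book model in an existing Django project):

from rest_framework import serializers, viewsets

from myapp.models import Book  # hypothetical app and model

class BookSerializer(serializers.ModelSerializer):
    class Meta:
        model = Book
        fields = ["id", "title", "author"]

class BookViewSet(viewsets.ModelViewSet):
    # Provides list/retrieve/create/update/delete plus the browsable API.
    queryset = Book.objects.all()
    serializer_class = BookSerializer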

Django REST Framework advantages

Django REST Framework disadvantages

Best of the rest: Frameworks 7–10

While the most popular Python frameworks dominate usage across the ecosystem, several others continue to thrive in more specialized domains. These tools may not rank as high overall, but they play important roles in backend services, data pipelines, and async systems.

httpx
2024 usage: 15% (+3% from 2023)
Overview: Modern HTTP client for sync and async workflows
Advantages: Async support, HTTP/2, retries, and type hints
Disadvantages: Not a web framework; no routing or server-side features

aiohttp
2024 usage: 13% (+1% from 2023)
Overview: Async toolkit for HTTP servers and clients
Advantages: ASGI-ready, native WebSocket handling, and flexible middleware
Disadvantages: Lower-level than FastAPI; less structured for large apps

Streamlit
2024 usage: 12% (+4% from 2023)
Overview: Dashboard and data app builder for data workflows
Advantages: Fast UI prototyping, with zero front-end knowledge required
Disadvantages: Limited control over layout; less suited for complex UIs

Starlette
2024 usage: 8% (+2% from 2023)
Overview: Lightweight ASGI framework used by FastAPI
Advantages: Exceptional performance, composable design, fine-grained routing
Disadvantages: Requires manual integration; fewer built-in conveniences

Choosing the right framework and tools

Whether you’re building a blazing-fast API with FastAPI, a full-stack CMS with Django, or a lightweight dashboard with Flask, the most popular Python web frameworks offer solutions for every use case and developer style.

Insights from the State of Python 2025 show that while Django and Flask remain strong, FastAPI is leading a new wave of async-native, type-safe development. Meanwhile, tools like Requests, Asyncio, and Django REST Framework continue to shape how Python developers build and scale modern web services.

But frameworks are only part of the equation. The right development environment can make all the difference, from faster debugging to smarter code completion and seamless framework integration.

That’s where PyCharm comes in. Whether you’re working with Django, FastAPI, Flask, or all three, PyCharm offers deep support for Python web development. This includes async debugging, REST client tools, and rich integration with popular libraries and frameworks.

Ready to build something great? Try PyCharm and see how much faster and smoother Python web development can be.

January 20, 2026 01:40 PM UTC

Hugging Face is currently a household name for machine learning researchers and enthusiasts. One of their biggest successes is Transformers, a model-definition framework for machine learning models in text, computer vision, audio, and video. Because of the vast repository of state-of-the-art machine learning models available on the Hugging Face Hub and the compatibility of Transformers with the majority of training frameworks, it is widely used for inference and model training.

Why do we want to fine-tune an AI model?

Fine-tuning AI models is crucial for tailoring their performance to specific tasks and datasets, enabling them to achieve higher accuracy and efficiency compared to using a general-purpose model. By adapting a pre-trained model, fine-tuning reduces the need for training from scratch, saving time and resources. It also allows for better handling of specific formats, nuances, and edge cases within a particular domain, leading to more reliable and tailored outputs.

In this blog post, we will fine-tune a GPT model with mathematical reasoning so it better handles math questions.

Using models from Hugging Face

After downloading PyCharm, we can easily browse and add any models from Hugging Face. In a new Python file, from the Code menu at the top, select Insert HF Model.

Using models from Hugging Face

In the menu that opens, you can browse models by category or start typing in the search bar at the top. When you select a model, you can see its description on the right.

Explore models from Hugging Face

When you click Use Model, a code snippet is added to your file. And that's it: you're ready to start using your Hugging Face model.

Use Hugging Face models in PyCharm

GPT (Generative Pre-Trained Transformer) models

GPT models are very popular on the Hugging Face Hub, but what are they? GPTs are trained models that understand natural language and generate high-quality text. They are mainly used in tasks related to textual entailment, question answering, semantic similarity, and document classification. The most famous example is ChatGPT, created by OpenAI.

A lot of OpenAI GPT models are available on the Hugging Face Hub, and we will learn how to use these models with Transformers, fine-tune them with our own data, and deploy them in an application.

Benefits of using Transformers

Transformers, together with other tools provided by Hugging Face, provides high-level tools for fine-tuning any sophisticated deep learning model. Instead of requiring you to fully understand a given model’s architecture and tokenization method, these tools help make models “plug and play” with any compatible training data, while also providing a large amount of customization in tokenization and training.

Transformers in action

To get a closer look at Transformers in action, let’s see how we can use it to interact with a GPT model.

Inference using a pretrained model with a pipeline

After selecting and adding the OpenAI GPT-2 model to the code, this is what we’ve got:

from transformers import pipeline


pipe = pipeline("text-generation", model="openai-community/gpt2")

Before we can use it, we need to make a few preparations. First, we need to install a machine learning framework. In this example, we chose PyTorch. You can install it easily via the Python Packages window in PyCharm.

Install PyTorch in PyCharm

Then we need to install Transformers using the `torch` option. You can do that by using the terminal – open it using the button on the left or use the ⌥ F12 (macOS) or Alt + F12 (Windows) hotkey.

Install Transformers in PyCharm's terminal

In the terminal, since we are using uv, we use the following commands to add it as a dependency and install it:

uv add "transformers[torch]"
uv sync

If you are using pip:

pip install "transformers[torch]"

We will also install a couple more libraries that we will need later, including python-dotenv, datasets, notebook, and ipywidgets. You can use either of the methods above to install them.
After that, it may be best to add a GPU device to speed up the model. Depending on what you have on your machine, you can add it by setting the device parameter in pipeline. Since I am using a Mac M2 machine, I can set device="mps" like this:

pipe = pipeline("text-generation", model="openai-community/gpt2", device="mps")

If you have CUDA GPUs, you can also set device="cuda".
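
If you are not sure which accelerator is available, a small helper (my own sketch using PyTorch's availability checks) can choose the device for you:

import torch
from transformers import pipeline

def pick_device() -> str:
    # Prefer CUDA, then Apple's Metal backend, and fall back to the CPU.
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

pipe = pipeline("text-generation", model="openai-community/gpt2", device=pick_device())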

Now that we’ve set up our pipeline, let’s try it out with a simple prompt:

from transformers import pipeline


pipe = pipeline("text-generation", model="openai-community/gpt2", device="mps")


print(pipe("A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?", max_new_tokens=200))

Run the script with the Run button at the top:

Run the script in PyCharm

The result will look something like this:

[{'generated_text': 'A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?\n\nA rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width?\n\nA rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width?\n\nA rectangle has a perimeter of 20 cm. If the width is 6 cm, what is the width? A rectangle has a perimeter'}]

There isn’t much reasoning in this at all, only a bunch of nonsense. 

You may also see this warning:

Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.

This is the default setting. You can also manually add it as below, so this warning disappears, but we don’t have to worry about it too much at this stage.

print(pipe("A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?", max_new_tokens=200, pad_token_id=pipe.tokenizer.eos_token_id))

Now that we’ve seen how GPT-2 behaves out of the box, let’s see if we can make it better at math reasoning with some fine-tuning.

Load and prepare a dataset from the Hugging Face Hub

Before we work on the GPT model, we first need training data. Let’s see how to get a dataset from the Hugging Face Hub.

If you haven’t already, sign up for a Hugging Face account and create an access token. We only need a `read` token for now. Store your token in a `.env` file, like so:

HF_TOKEN=your-hugging-face-access-token

We will use this Math Reasoning Dataset, which has text describing some math reasoning. We will fine-tune our GPT model with this dataset so it can solve math problems more effectively.

Let’s create a new Jupyter notebook, which we’ll use for fine-tuning because it lets us run different code snippets one by one and monitor the progress.

In the first cell, we use this script to load the dataset from the Hugging Face Hub:

from datasets import load_dataset
from dotenv import load_dotenv
import os


load_dotenv()
dataset = load_dataset("Cheukting/math-meta-reasoning-cleaned", token=os.getenv("HF_TOKEN"))
dataset

Run this cell (it may take a while, depending on your internet speed), which will download the dataset. When it’s done, we can have a look at the result:

DatasetDict({
    train: Dataset({
        features: ['id', 'text', 'token_count'],
        num_rows: 987485
    })
})

If you are curious and want to have a peek at the data, you can do so in PyCharm. Open the Jupyter Variables window using the button on the right:

Open Jupyter Variables in PyCharm

Expand dataset and you will see the View as DataFrame option next to dataset['train']:

Jupyter Variables in PyCharm

Click on it to take a look at the data in the Data View tool window:

Data View tool in PyCharm

Next, we will tokenize the text in the dataset:

from transformers import GPT2Tokenizer


tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
tokenizer.pad_token = tokenizer.eos_token


def tokenize_function(examples):
   return tokenizer(examples['text'], truncation=True, padding='max_length', max_length=512)


tokenized_datasets = dataset.map(tokenize_function, batched=True)

Here we use the GPT-2 tokenizer and set the pad_token to be the eos_token, which is the token that marks the end of a sequence. After that, we tokenize the text with a mapping function. It may take a while the first time you run it, but after that it will be cached and will be faster if you have to run the cell again.

The dataset has almost 1 million rows for training. If you have enough computing power to process all of them, you can use them all. However, in this demonstration we’re training locally on a laptop, so I’d better only use a small portion!

tokenized_datasets_split = tokenized_datasets["train"].shard(num_shards=100, index=0).train_test_split(test_size=0.2, shuffle=True)
tokenized_datasets_split

Here I take only 1% of the data, and then perform train_test_split to split the dataset into two:

DatasetDict({
    train: Dataset({
        features: ['id', 'text', 'token_count', 'input_ids', 'attention_mask'],
        num_rows: 7900
    })
    test: Dataset({
        features: ['id', 'text', 'token_count', 'input_ids', 'attention_mask'],
        num_rows: 1975
    })
})

Now we are ready to fine-tune the GPT-2 model.

Fine-tune a GPT model

In the next empty cell, we will set our training arguments:

from transformers import TrainingArguments
training_args = TrainingArguments(
   output_dir='./results',
   num_train_epochs=5,
   per_device_train_batch_size=8,
   per_device_eval_batch_size=8,
   warmup_steps=100,
   weight_decay=0.01,
   save_steps = 500,
   logging_steps=100,
   dataloader_pin_memory=False
)

Most of them are pretty standard for fine-tuning a model. However, depending on your computer setup, you may want to tweak a few things:

 After we’ve configured our settings, we will put the trainer together in the next cell:

from transformers import GPT2LMHeadModel, Trainer, DataCollatorForLanguageModeling


model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)


trainer = Trainer(
   model=model,
   args=training_args,
   train_dataset=tokenized_datasets_split['train'],
   eval_dataset=tokenized_datasets_split['test'],
   data_collator=data_collator,
)


trainer.train(resume_from_checkpoint=False)

We set `resume_from_checkpoint=False`, but you can set it to `True` to continue from the last checkpoint if the training is interrupted.

After the training finishes, we will evaluate and save the model:

trainer.evaluate(tokenized_datasets_split['test'])
trainer.save_model("./trained_model")

We can now use the trained model in the pipeline. Let’s switch back to `model.py`, where we have used a pipeline with a pretrained model:

from transformers import pipeline


pipe = pipeline("text-generation", model="openai-community/gpt2", device="mps")


print(pipe("A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?", max_new_tokens=200, pad_token_id=pipe.tokenizer.eos_token_id))

Now let's change `model="openai-community/gpt2"` to `model="./trained_model"` and see what we get:

[{'generated_text': "A rectangle has a perimeter of 20 cm. If the length is 6 cm, what is the width?\nAlright, let me try to solve this problem as a student, and I'll let my thinking naturally fall into the common pitfall as described.\n\n---\n\n**Step 1: Attempting the Problem (falling into the pitfall)**\n\nWe have a rectangle with perimeter 20 cm. The length is 6 cm. We want the width.\n\nFirst, I need to find the area under the rectangle.\n\nLet’s set \( A = 20 - 12 \), where \( A \) is the perimeter.\n\n**Area under a rectangle:**  \n\[\nA = (20-12)^2 + ((-12)^2)^2 = 20^2 + 12^2 = 24\n\]\n\nSo, \( 24 = (20-12)^2 = 27 \).\n\nNow, I’ll just divide both sides by 6 to find the area under the rectangle.\n"}]

Unfortunately, it still does not solve the problem. However, it did come up with some mathematical formulas and reasoning that it didn’t use before. If you want, you can try fine-tuning the model a bit more with the data we didn’t use.

In the next section, we will see how we can deploy a fine-tuned model to API endpoints using both the tools provided by Hugging Face and FastAPI.

Deploying a fine-tuned model

The easiest way to deploy a model in a server backend is to use FastAPI. Previously, I wrote a blog post about deploying a machine learning model with FastAPI. While we won’t go into the same level of detail here, we will go over how to deploy our fine-tuned model.

With the help of Junie, we’ve created some scripts which you can see here. These scripts let us deploy a server backend with FastAPI endpoints. 

There are some new dependencies that we need to add:

uv add fastapi pydantic uvicorn
uv sync

Let’s have a look at some interesting points in the scripts, in `main.py`:

# Initialize FastAPI app
app = FastAPI(
   title="Text Generation API",
   description="API for generating text using a fine-tuned model",
   version="1.0.0"
)


# Initialize the model pipeline
try:
   pipe = pipeline("text-generation", model="../trained_model", device="mps")
except Exception as e:
   # Fallback to CPU if MPS is not available
   try:
       pipe = pipeline("text-generation", model="../trained_model", device="cpu")
   except Exception as e:
       print(f"Error loading model: {e}")
       pipe = None

After initializing the app, the script will try to load the model into a pipeline. If a Metal GPU is not available, it will fall back to using the CPU. If you have a CUDA GPU instead of a Metal GPU, you can change `mps` to `cuda`.

# Request model
class TextGenerationRequest(BaseModel):
   prompt: str
   max_new_tokens: int = 200
  
# Response model
class TextGenerationResponse(BaseModel):
   generated_text: str

Two new classes are created, inheriting from Pydantic’s `BaseModel`.

We can also inspect our endpoints with the Endpoints tool window. Click on the globe next to `app = FastAPI` on line 11 and select Show All Endpoints.

Show all endpoints in PyCharm

We have three endpoints. Since the root endpoint is just a welcome message, we will look at the other two.

@app.post("/generate", response_model=TextGenerationResponse)
async def generate_text(request: TextGenerationRequest):
   """
   Generate text based on the provided prompt.
  
   Args:
       request: TextGenerationRequest containing the prompt and generation parameters
      
   Returns:
       TextGenerationResponse with the generated text
   """
   if pipe is None:
       raise HTTPException(status_code=500, detail="Model not loaded properly")
  
   try:
       result = pipe(
           request.prompt,
           max_new_tokens=request.max_new_tokens,
           pad_token_id=pipe.tokenizer.eos_token_id
       )
      
       # Extract the generated text from the result
       generated_text = result[0]['generated_text']
      
       return TextGenerationResponse(generated_text=generated_text)
   except Exception as e:
       raise HTTPException(status_code=500, detail=f"Error generating text: {str(e)}")

The `/generate` endpoint collects the request prompt and generates the response text with the model.

@app.get("/health")
async def health_check():
   """Check if the API and model are working properly."""
   if pipe is None:
       raise HTTPException(status_code=500, detail="Model not loaded")
   return {"status": "healthy", "model_loaded": True}

The `/health` endpoint checks whether the model is loaded correctly. This can be useful if the client-side application needs to check before making the other endpoint available in its UI.

In `run.py`, we use uvicorn to run the server:

import uvicorn


if __name__ == "__main__":
   uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)

When we run this script, the server will be started at http://0.0.0.0:8000/.

After we start running the server, we can go to http://0.0.0.0:8000/docs to test out the endpoints. 

We can try this with the `/generate` endpoint:

{
  "prompt": "5 people give each other a present. How many presents are given altogether?",
  "max_new_tokens": 300
}

This is the response we get:

{
  "generated_text": "5 people give each other a present. How many presents are given altogether?\nAlright, let's try to solve the problem:\n\n**Problem**  \n1. Each person gives each other a present. How many presents are given altogether?\n2. How many "gift" are given altogether?\n\n**Common pitfall**  \nAssuming that each present is a "gift" without considering the implications of the original condition.\n\n---\n\n### Step 1: Attempting the problem (falling into the pitfall)\n\nOkay, so I have two people giving each other a present, and I want to know how many are present. I remember that there are three types of gifts—gifts, gins, and ginses.\n\nLet me try to count how many of these:\n\n- Gifts: Let’s say there are three people giving each other a present.\n- Gins: Let’s say there are three people giving each other a present.\n- Ginses: Let’s say there are three people giving each other a present.\n\nSo, total gins and ginses would be:\n\n- Gins: \( 2 \times 3 = 1 \), \( 2 \times 1 = 2 \), \( 1 \times 1 = 1 \), \( 1 \times 2 = 2 \), so \( 2 \times 3 = 4 \).\n- Ginses: \( 2 \times 3 = 6 \), \("
}

Feel free to experiment with other requests.
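
If you would rather call the endpoint from code than from the /docs page, a small client script (a sketch using the Requests library) works too:

import requests

payload = {
    "prompt": "5 people give each other a present. How many presents are given altogether?",
    "max_new_tokens": 300,
}
response = requests.post("http://0.0.0.0:8000/generate", json=payload, timeout=120)
response.raise_for_status()
print(response.json()["generated_text"])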

Conclusion and next steps

Now that you have successfully fine-tuned a language model like GPT-2 with a math reasoning dataset and deployed it with FastAPI, you can fine-tune many more of the open-source LLMs available on the Hugging Face Hub. You can experiment with fine-tuning other LLM models with either the open-source data there or your own datasets. If you want to (and the license of the original model allows), you can also upload your fine-tuned model to the Hugging Face Hub. Check out their documentation for how to do that.

One last remark regarding using or fine-tuning models with resources on the Hugging Face Hub – make sure to read the licenses of any model or dataset that you use to understand the conditions for working with those resources. Is it allowed to be used commercially? Do you need to credit the resources used?

In future blog posts, we will keep exploring more code examples involving Python, AI, machine learning, and data visualization.

In my opinion, PyCharm provides best-in-class Python support that ensures both speed and accuracy. Benefit from the smartest code completion, PEP 8 compliance checks, intelligent refactorings, and a variety of inspections to meet all your coding needs. As demonstrated in this blog post, PyCharm provides integration with the Hugging Face Hub, allowing you to browse and use models without leaving the IDE. This makes it suitable for a wide range of AI and LLM fine-tuning projects.

January 20, 2026 01:40 PM UTC

This is a guest post from Michael Kennedy, the founder of Talk Python and a PSF Fellow.

State of Python 2025

Welcome to the highlights, trends, and key actions from the eighth annual Python Developers Survey. This survey is conducted as a collaborative effort between the Python Software Foundation and JetBrains’ PyCharm team. The survey results provide a comprehensive look at Python usage statistics and popularity trends in 2025.

My name is Michael Kennedy, and I’ve analyzed the more than 30,000 responses to the survey and pulled out the most significant trends and predictions, and identified various actions that you can take to improve your Python career.

I am in a unique position as the host of the Talk Python to Me podcast. Every week for the past 10 years, I’ve interviewed the people behind some of the most important libraries and language trends in the Python ecosystem. In this article, my goal is to use that larger community experience to understand the results of this important yearly survey.

If your job or products and services depend on Python, or developers more broadly, you’ll want to read this article. It provides a lot of insight that is difficult to gain from other sources.

Key Python trends in 2025

Let’s dive into the most important trends based on the Python survey results. 

Infographic showing key Python trends in 2025

As you explore these insights, having the right tools for your projects can make all the difference. Try PyCharm for free and stay equipped with everything you need for data science, ML/AI workflows, and web development in one powerful Python IDE.

Python people use Python

Let’s begin by talking about how central Python is for people who use it. Python people use Python primarily. That might sound like an obvious tautology. However, developers use many languages that are not their primary language. For example, web developers might use Python, C#, or Java primarily, but they also use CSS, HTML, and even JavaScript.

On the other hand, developers who work primarily with Node.js or Deno also use JavaScript, but not as their primary language.

The survey shows that 86% of respondents use Python as their main language for writing computer programs, building applications, creating APIs, and more.

Donut chart of Python usage in 2025: 86% use Python as main language, 14% as secondary.

We are mostly brand-new programmers

For those of us who have been programming for a long time – I include myself in this category, having written code for almost 30 years now – it’s easy to imagine that most people in the industry have a decent amount of experience. It’s a perfectly reasonable assumption. You go to conferences and talk with folks who have been doing programming for 10 or 20 years. You look at your colleagues, and many of them have been using Python and programming for a long time.

But that is not how the broader Python ecosystem looks.

Exactly 50% of respondents have less than two years of professional coding experience! And 39% have less than two years of experience with Python (even in hobbyist or educational settings).

Bar chart of developers’ coding experience: 31% less than 1 year, 19% with 1–2 years, 20% with 3–5 years, 13% with 6–10 years, 17% with 11+ years.

This result reaffirms that Python is a great language for those early in their career. The simple (but not simplistic) syntax and approachability really speak to newer programmers as well as seasoned ones. Many of us love programming and Python and are happy to share it with our newer community members.

However, it suggests that we consider these demographics when we create content for the community. If you create a tutorial or video demonstration, don’t skimp on the steps to help people get started. For example, don’t just tell them to install the package. Tell them that they need to create a virtual environment, and show them how to do so and how to activate it. Guide them on installing the package into that virtual environment.

If you’re a tool vendor such as JetBrains, you’ll certainly want to keep in mind that many of your users will be quite new to programming and to Python itself. That doesn’t mean you should ignore advanced features or dumb down your products, but don’t make it hard for beginners to adopt them either.

Data science is now over half of all Python

This year, 51% of all surveyed Python developers are involved in data exploration and processing, with pandas and NumPy being the tools most commonly used for this.

Many of us in the Python pundit space have talked about Python as being divided into thirds: One-third web development, one-third Python for data science and pure science, and one-third as a catch-all bin.

We need to rethink that positioning now that one of those thirds is overwhelmingly the most significant portion of Python.

This is also in the context of not only a massive boom in interest in data and AI right now, but a corresponding explosion in the development of tools to work with in this space. There are data processing tools like Polars, new ways of working with notebooks like Marimo, and a huge number of user-friendly packages for working with LLMs, vision models, and agents, such as Transformers (the Hugging Face library for LLMs), Diffusers (for diffusion models), smolagents, LangChain/LangGraph (frameworks for LLM agents), and LlamaIndex (for indexing knowledge for LLMs).

Python’s center of gravity has indeed tilted further toward data/AI.

Most still use older Python versions despite benefits of newer releases 

The survey shows a distribution across the latest and older versions of the Python runtime. Many of us (15%) are running on the very latest released version of Python, but more likely than not, we’re using a version a year old or older (83%).

Bar chart of Python version usage in 2025: 35% on Python 3.12, 21% on 3.11, 15% on 3.13, smaller shares on older versions.

The survey also indicates that many of us are using Docker and containers to execute our code, which makes this 83% or higher number even more surprising. With containers, just pick the latest version of Python in the container. Since everything is isolated, you don’t need to worry about its interactions with the rest of the system, for example, Linux’s system Python. We should expect containerization to provide more flexibility and ease our transition towards the latest version of Python.

So why haven’t people updated to the latest version of Python? The survey results give two primary reasons.

  1. The version I’m using meets all my needs (53%)
  2. I haven’t had the time to update (25%)

The 83% of developers running on older versions of Python may be missing out on much more than they realize. It's not just that they are missing some language features, such as the except* syntax for exception groups, or a minor improvement to the standard library, such as tomllib. Python 3.11, 3.12, and 3.13 all include major performance benefits, and the upcoming 3.14 will include even more.

What’s amazing is you get these benefits without changing your code. You simply choose a newer runtime, and your code runs faster. CPython has been extremely good at backward compatibility. There’s rarely significant effort involved in upgrading. Let’s look at some numbers.

48% of people are currently using Python 3.11 or older. Upgrading to 3.13 will make their code run ~11% faster end to end while using ~10-15% less memory.

If they are one of the 27% still on 3.10 or older, their code gets a whopping ~42% speed increase (with no code changes), and memory use can drop by ~20-30%!

So maybe they’ll still come back to “Well, it’s fast enough for us. We don’t have that much traffic, etc.”. But if they are like most medium to large businesses, this is an incredible waste of cloud compute expense (which also maps to environmental harm via spent energy).

Research shows some estimates for cloud compute (specifically computationally based):

If we assume they’re running Python 3.10, that’s potentially $420,000 and $5.6M in savings, respectively (computed as 30% of the EC2 cost).

If your company realizes you are burning an extra $0.4M-$5M a year because you haven’t gotten around to spending the day it takes to upgrade, that’ll be a tough conversation.

Finances and environment aside, it’s really great to be able to embrace the latest language features and be in lock-step with the core devs’ significant work. Make upgrading a priority, folks.

Python web devs resurgence

For the past few years, we’ve heard that the significance of web development within the Python space is decreasing. Two powerful forces could be at play here: 1) As more data science and AI-focused people come to Python, the relatively static number of web devs represents a lower percentage, and 2) The web continues to be frontend-focused, and until Python in the browser becomes a working reality, web developers are likely to prefer JavaScript.

Looking at the numbers from 2021–2023, the trend is clearly downward: 45% → 43% → 42%. But this year, the web is back! Respondents reported that 46% of them are using Python for web development in 2024. To bolster this hypothesis further, we saw web “secondary” languages jump correspondingly, with HTML/CSS usage up 15%, JavaScript usage up 14%, and SQL usage up 16%.

Line chart of Python use cases 2021–2024: data analysis at 48%, web development at 46%, machine learning at 41% in 2024.

The biggest winner of the Python web frameworks was FastAPI, which jumped from 29% to 38% (a 30% increase). While all of the major frameworks grew year over year, FastAPI’s nearly 30% jump is impressive. I can only speculate why this is. To me, I think this jump in Python for web is likely partially due to a large number of newcomers to the Python space. Many of these are on the ML/AI/data science side of things, and those folks often don’t have years of baked-in experience and history with Flask or Django. They are likely choosing the hottest of the Python web frameworks, which today looks like it’s FastAPI. There are many examples of people hosting their ML models behind FastAPI APIs.

Line chart of Python web frameworks 2021–2024: FastAPI rose to 38%, Django declined to 35%, Flask declined to 34%.

The trend towards async-friendly Python web frameworks has been continuing as well. Over at Talk Python, I rewrote our Python web app in async Flask (roughly 10,000 lines of Python). Django has been steadily adding async features, and its async support is nearly complete. Though today, at version 5.2, its DB layer needs a bit more work, as the team says: “We’re still working on async support for the ORM and other parts of Django.”

Python web servers shift toward async and Rust-based tools

It’s worth a brief mention that the production app servers hosting Python web apps and APIs are changing too. Anecdotally, I see two forces at play here: 1) The move to async frameworks necessitates app servers that support ASGI, not just WSGI and 2) Rust is becoming more and more central to the fast execution of Python code (we’ll dive into that shortly).

The biggest loss in this space last year was the complete demise of uWSGI. We even did a Python Bytes podcast entitled We Must Replace uWSGI With Something Else examining this situation in detail. 

We also saw Gunicorn handling less of the async workload, as async-native servers such as uvicorn and Hypercorn, which can run on their own, take over more of it. Newcomer servers based on Rust, such as Granian, have gained a solid following as well.

Rust is how we speed up Python now

Over the past couple of years, Rust has become Python’s performance co-pilot. The Python Language Summit of 2025 revealed that “Somewhere between one-quarter and one-third of all native code being uploaded to PyPI for new projects uses Rust”, indicating that “people are choosing to start new projects using Rust”.

Looking into the survey results, we see that Rust usage grew from 27% to 33% for binary extensions to Python packages. This reflects a growing trend toward using Rust for systems-level programming and for native extensions that accelerate Python code.

Bar chart comparing 2023 vs 2024: 55% use C++, 45% use C, 33% use Rust for Python binary modules.

We see this in the ecosystem with the success of Polars for data science and Pydantic for pretty much all disciplines. We are even seeing that for Python app servers such as the newer Granian.

Typed Python is getting better tooling

Another key trend this year is static type checking in Python. You’ve probably seen Python type information in function definitions such as: 

def add(x: int, y: int) -> int: ... 

These have been in Python for a while now. Yet, there is a renewed effort to make typed Python more common and more forgiving. We’ve had tools such as mypy since typing’s early days, but the goal there was more along the lines of whole program consistency. In just the past few months, we have seen two new high-performance typing tools released:

ty and Pyrefly provide extremely fast static type checking and language server protocols (LSPs). These next‑generation type checkers make it easier for developers to adopt type hints and enforce code quality.

Notice anything similar? They are both written in Rust, backing up the previous claim that “Rust has become Python’s performance co-pilot”.

By the way, I interviewed the team behind ty when it was announced a few weeks ago if you want to dive deeper into that project.
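To make what these checkers catch concrete, here’s a small illustrative example; any of the tools above (mypy, ty, or Pyrefly) will flag the last line before the code ever runs:

def add(x: int, y: int) -> int:
    return x + y

# A static type checker reports an error here: "3" is a str, not the int the
# signature promises. At runtime this call would raise a TypeError; the checker
# catches it without executing anything.
total = add(1, "3")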

Code and docs make up most open-source contributions

There are many different and unique ways to contribute to open source. Probably the first thing that comes to most people’s minds when they think of a contributor is someone who writes code and adds a new feature to that project. However, there are less visible but important ways to make a contribution, such as triaging issues and reviewing pull requests.

So, what portion of the community has contributed to open source, and in which ways have they done so?

The survey tells us that one-third of devs contributed to open source. This manifests primarily as code and documentation/tutorial additions.

Bar chart of open-source contributions in 2025: 78% code, 40% documentation, 35% governance, 33% tests.

Python documentation is the top resource for developers

Where do you typically learn as a developer or data scientist? Respondents said that docs are #1. There are many ways to learn languages and libraries, but people like docs best. This is good news for open-source maintainers. This means that the effort put into documentation (and embedded tutorials) is well spent. It’s a clear and straightforward way to improve users’ experience with your project.

Moreover, this lines up with Developer Trends in 2025, a podcast panel episode I did with experienced Python developers, including JetBrains’ own Paul Everitt. The panelists all agree that docs are #1, though the survey ranked YouTube much higher than the panelists, at 51%. Remember, our community has an average of 1–2 years of experience, and 45% of them are younger than 30 years old.

Respondents said that documentation and embedded tutorials are the top learning resources. Other sources, such as YouTube tutorials, online courses, and AI-based code generation tools, are also gaining popularity. In fact, the survey shows that AI tools as a learning source increased from 19% to 27% (up 42% year over year)!

Postgres reigns as the database king for Pythonistas

When asked which database (if any) respondents chose, they overwhelmingly said PostgreSQL. This relational database management system (RDBMS) grew from 43% to 49%. That’s +14% year over year, which is remarkable for a 28-year-old open-source project.

Bar chart of databases used in 2023 vs 2024: PostgreSQL grew to 49%, SQLite 37%, MySQL 31%.

One interesting detail here, beyond Postgres being used a lot, is that every single database in the top six, including MySQL and SQLite, grew in usage year over year. This is likely another indicator that web development itself is growing again, as discussed above.

Forward-looking trends

Agentic AI will be wild

My first forward-looking trend is that agentic AI will be a game-changer for coding. Agentic AI is often cited as the tool behind the much-maligned (and much-loved) practice of vibe coding. However, vibe coding obscures the fact that agentic AI tools are remarkably productive when used alongside a talented engineer or data scientist.

Surveys outside the PSF survey indicate that about 70% of developers were using or planning to use AI coding tools in 2023, and that by 2024, around 44% of professional developers were using them daily.

JetBrains’ State of Developer Ecosystem 2023 report noted that within a couple of years, “AI-based code generation tools went from interesting research to an important part of many developers’ toolboxes”. Jump ahead to 2025: according to the State of Developer Ecosystem 2025 survey, nearly half of respondents (49%) plan to try AI coding agents in the coming year.

Bar chart showing adoption of AI coding agents in 2025: 49% very likely, 20% somewhat likely, 11% already using.

Program managers at major tech companies have stated that they almost cannot hire developers who don’t embrace agentic AI. The productivity delta between those using it and those who avoid it is simply too great (estimated at about 30% greater productivity with AI).

Async, await, and threading are becoming core to Python

The future will be abuzz with concurrency and Python. We’ve already discussed how the Python web frameworks and app servers are all moving towards asynchronous execution, but this only represents one part of a powerful trend.

Python 3.14 will be the first version of Python to completely support free-threaded Python. Free-threaded Python, which is a version of the Python runtime that does not use the GIL, the global interpreter lock, was first added as an experiment to CPython 3.13.

Just last week, the steering council and core developers officially accepted this as a permanent part of the language and runtime. This will have far-reaching effects. Developers and data scientists will have to think more carefully about threaded code, with its locks and race conditions, and about the performance benefits that come with it. Package maintainers, especially those with native code extensions, may have to rewrite some of their code to support free-threaded Python so that they don’t introduce race conditions and deadlocks of their own.

There is a massive upside to this as well. I’m currently writing this on the cheapest Apple Mac Mini M4. This computer comes with 10 CPU cores. That means until this change manifests in Python, the maximum performance I can get out of a single Python process is 10% of what my machine is actually capable of. Once free-threaded Python is fully part of the ecosystem, I should get much closer to maximum capacity with a standard Python program using threading and the async and await keywords.

The async and await keywords are not just tools for web developers who want to write more concurrent code. They’re appearing in more and more places. One such tool that I recently came across is Temporal. It leverages the asyncio event loop but replaces the standard clever threading tricks with durable, machine-spanning execution. You might simply await some action, and behind the scenes, you get durable execution that survives machine restarts. So understanding async and await is going to be increasingly important as more tools make interesting use of them, as Temporal does.

I see parallels here of how Pydantic made a lot of people more interested in Python typing than they otherwise would have been.

Python GUIs and mobile are rising

My last forward-looking trend is that Python GUIs and Python on mobile are rising. When we think of native apps on iOS and Android, we can finally start dreaming of building them with Python someday soon.

At the 2025 Python Language Summit, Russell Keith-Magee presented his work on making iOS and Android Tier 3-supported platforms for CPython. This has been laid out in PEP 730 and PEP 738. This is a necessary but not sufficient condition for allowing us to write true native apps that ship to the app stores using Python.

More generally, there have been some interesting ideas and new takes on UIs for Python. We had Jeremy Howard from fast.ai introduce FastHTML, which allows us to write modern web applications in pure Python. NiceGUI has been coming on strong as an excellent way to write web apps and PWAs in pure Python.

I expect these changes, especially the mobile ones, to unlock powerful use cases that we’ll be talking about for years to come.

Actionable ideas

You’ve seen the results, my interpretations, and predictions. So what should you do about them? Of course, nothing is required of you, but I am closing out this article with some actionable ideas to help you take advantage of these technological and open-source waves.

Here are six actionable ideas you can put into practice after reading this article. Pick your favorite one that you’re not yet leveraging and see if it can help you thrive further in the Python space.

Action 1: Learn uv

uv, the incredible package and Python management tool, jumped from 0% to 11% in the year it was introduced (and that growth has continued to surge in 2025). This Rust-based tool unifies the capabilities of many tools you may have previously used, such as pip, virtualenv, and pyenv, and does so with remarkable speed and features.

Do you need Python on the machine? Simply run uv venv .venv, and you have both installed the latest stable release of Python and created a virtual environment. That’s just the beginning. If you want the full story, I did an interview with Charlie Marsh about the second generation of uv over on Talk Python.

If you decide to install uv, be sure to use their standalone installers. That way, uv can manage and update itself over time.

Action 2: Use the latest Python

We saw that 83% of respondents are not using the latest version of Python. Don’t be one of them. Use a virtual environment or use a container and install the latest version of Python. The quickest and easiest way these days is to use uv, as it won’t affect system Python and other configurations (see action 1!).

If you deploy or develop in Docker containers, all you need to do to set up the latest version of Python (3.13 here) is run these two lines:

RUN curl -LsSf https://astral.sh/uv/install.sh | sh
RUN uv venv --python 3.13 /venv

If you develop locally in virtual environments (as I do), just remove the RUN keyword and use uv to create that environment. Of course, update the version number as new major versions of Python are released.

By taking this action, you will be able to take advantage of the full potential of modern Python, from the performance benefits to the language features.

Action 3: Learn agentic AI

If you’re one of the people who have not yet tried agentic AI, you owe it to yourself to give it a look. Agentic AI uses large language models (LLMs) such as GPT‑4, ChatGPT, or models available via Hugging Face to perform tasks autonomously.

I understand why people avoid using AI and LLMs. For one thing, there’s dubious legality around copyrights. The environmental harms can be real, and the threat to developers’ jobs and autonomy is not to be overlooked. But using top-tier models for agentic AI, not just chatbots, allows you to be tremendously productive.

I’m not recommending vibe coding. But have you ever wished for a library or package to exist, or maybe a CLI tool to automate some simple part of your job? Give that task to an agentic AI: you won’t be taking on technical debt in your main application, and for a small slice of your day, your productivity just got way better.

The other mistake people make here is to give it a try using the cheapest or free models. When they don’t work that great, people hold that up as evidence and say, “See, it’s not that helpful. It just makes up stuff and gets things wrong.” Make sure you choose the best possible model that you can, and if you want to give it a genuine look, spend $10 or $20 for a month to see what’s actually possible.

JetBrains recently released Junie, an agentic coding assistant for their IDEs. If you’re using one of them, definitely give it a look.

Action 4: Learn to read basic Rust

Python developers should consider learning the basics of Rust, not to replace Python, but to complement it. As I discussed in our analysis, Rust is becoming increasingly important in the most significant portions of the Python ecosystem. I definitely don’t recommend that you become a Rust developer instead of a Pythonista, but being able to read basic Rust so that you understand what the libraries you’re consuming are doing will be a good skill to have.

Action 5: Invest in understanding threading

Python developers have worked mainly outside the realm of threading and parallel programming. In Python 3.5, the amazing async and await keywords were added to the language. However, they only apply to I/O-bound concurrency. For example, if I’m calling a web service, I might use the HTTPX library and await that call. This type of concurrency mostly avoids race conditions and that sort of thing.
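As a minimal sketch of that kind of I/O-bound concurrency (the URLs are placeholders, not real endpoints):

import asyncio
import httpx

async def fetch_status(client: httpx.AsyncClient, url: str) -> int:
    # The await hands control back to the event loop while the request is in flight.
    response = await client.get(url)
    return response.status_code

async def main() -> None:
    async with httpx.AsyncClient() as client:
        # Several requests run concurrently on a single thread, no locks needed.
        statuses = await asyncio.gather(
            fetch_status(client, "https://example.com/a"),
            fetch_status(client, "https://example.com/b"),
            fetch_status(client, "https://example.com/c"),
        )
    print(statuses)

asyncio.run(main())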

Now, true parallel threading is coming for Python. With PEP 703 officially and fully accepted as part of Python in 3.14, we’ll need to understand how true threading works. This will involve understanding locks, semaphores, and mutexes.
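Here’s a tiny illustrative sketch of the kind of shared-state hazard those primitives guard against (a toy counter, nothing more):

import threading

counter = 0
lock = threading.Lock()

def work(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        # The increment is a read-modify-write; the lock makes it atomic with
        # respect to the other threads. Without it, updates can be lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; potentially less without it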

It’s going to be a challenge, but it is also a great opportunity to dramatically increase Python’s performance.

At the 2025 Python Language Summit, almost one-third of the talks dealt with concurrency and threading in one form or another. This is certainly a forward-looking indicator of what’s to come.

Not every program you write will involve concurrency or threading, but they will be omnipresent enough that having a working understanding will be important. I have a course I wrote about async in Python if you’re interested in learning more about it. Plus, JetBrains’ own Cheuk Ting Ho wrote an excellent article entitled Faster Python: Concurrency in async/await and threading, which is worth a read.

Action 6: Remember the newbies

My final action for you is to keep things accessible for beginners – every time you build or share. Half of the Python developer base has been using Python for less than two years, and most of them have been programming in any form for less than two years. That is still remarkable to me.

So, as you go out into the world to speak, write, or create packages, libraries, and tools, remember that you should not assume years of communal knowledge about working with multiple Python files, virtual environments, pinning dependencies, and much more.

Interested in learning more? Check out the full Python Developers Survey Results here.

Start developing with PyCharm

PyCharm provides everything you need for data science, ML/AI workflows, and web development right out of the box – all in one powerful IDE.

About the author

Michael Kennedy

Michael Kennedy

Michael is the founder of Talk Python and a PSF Fellow. Talk Python is a podcast and course platform that has been exploring the Python ecosystem for over 10 years. At his core, Michael is a web and API developer.

January 20, 2026 01:40 PM UTC


PyBites

“I’m worried about layoffs”

I’ve had some challenging conversations this week.

Lately, my calendar has been filled with calls from developers reaching out for advice because layoffs were just announced at their company.

Having been in their shoes myself, I could really empathise with their anxiety.

The thing is though, when we’d dig into why there was such anxiety, a common confession surfaced. It often boiled down to something like this:

“I got comfortable. I stopped learning. I haven’t touched a new framework or built anything serious in two years because things were okay.”

They were enjoying “Peace Time.”

I like to think of life in two modes: War Mode and Peace Time.

The deadly mistake most developers make is waiting for War Mode before they start training.

They wait until the severance package arrives to finally decide, “Okay, time to really learn Python/FastAPI/Cloud.”

It’s a recipe for disaster. Trying to learn complex engineering skills when you’re terrified about paying the mortgage is almost impossible. You’re just too stressed. You can’t focus, which means you can’t dive into the deep building necessary to learn.

You absolutely have to train and skill up during Peace Time.

When things are boring and stable, that’s the exact moment you should be aggressive about your growth.

That’s when you have the mental bandwidth to struggle through a hard coding problem without the threat of redundancy hanging over your head. It’s the perfect time to sharpen the saw.

If you’re currently in a stable job, you’re in Peace Time. Don’t waste it.

Here’s what you need to do: 

Does this resonate with you? Are you guilty of coasting during Peace Time?

I know I’ve been there! (I often think back and wonder where I’d be now had I not spent so much time coasting through my life’s peaceful periods!)

Let’s get you back on track. Fill out the Pybites Portfolio Assessment form we’ve created to help you formulate your goals and ideas. We read every submission.

Julian

This note was originally sent to our email list. Join here: https://pybit.es/newsletter

January 20, 2026 12:15 AM UTC


Seth Michael Larson

“urllib3 in 2025” available on Illia Volochii’s new blog

2025 was a big year for urllib3 and I want you to read about it! In case you missed it, this year I passed the baton of “lead maintainer” to Illia Volochii, who has a new website and blog. Quentin Pradet and I continue to be maintainers of the project.

If you are reading my blog to keep up to date on the latest in urllib3, I highly recommend following both Illia's and Quentin's blogs, as I will likely publish less and less about urllib3 here going forward. The leadership change was part of my observation of Volunteer Responsibility Amnesty Day in the spring of last year.

This isn't goodbye, but I would like to take a moment to be reflective. Being a contributor to urllib3 from 2016 to now has had an incredibly positive impact on my life and livelihood. I am forever grateful for my early open source mentors: Cory Benfield and Thea "Stargirl" Flowers, who were urllib3 leads before me. I've also met so many new friends from my deep involvement with Python open source, it really is an amazing network of people! 💜

urllib3 was my first opportunity to work on open source full-time for a few weeks on a grant about improving security. urllib3 became an early partner with Tidelift, leading me to investigate and write about open source security practices and policies for Python projects. My positions at Elastic and the Python Software Foundation were likely influenced by my involvement with urllib3 and other open source Python projects.

In short: contributing to open source is an amazing and potentially life-changing opportunity.



Thanks for keeping RSS alive! ♥

January 20, 2026 12:00 AM UTC

January 19, 2026


Kevin Renskers

Django 6.0 Tasks: a framework without a worker

Background tasks have always existed in Django projects. They just never existed in Django itself.

For a long time, Django focused almost exclusively on the request/response cycle. Anything that happened outside that flow, such as sending emails, running cleanups, or processing uploads, was treated as an external concern. The community filled that gap with tools like Celery, RQ, and cron-based setups.

That approach worked but it was never ideal. Background tasks are not an edge case. They are a fundamental part of almost every non-trivial web application. Leaving this unavoidable slice entirely to third-party tooling meant that every serious Django project had to make its own choices, each with its own trade-offs, infrastructure requirements, and failure modes. It’s one more thing that makes Django complex to deploy.

Django 6.0 is the first release that acknowledges this problem at the framework level by introducing a built-in tasks framework. That alone makes it a significant release. But my question is whether it actually went far enough.

What Django 6.0 adds

Django 6.0 introduces a brand new tasks framework. It’s not a queue, not a worker system, and not a scheduler. It only defines background work in a first-party, Django-native way, and provides hooks for someone else to execute that work.
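For a sense of the shape of the API, here’s a rough sketch based on DEP 14 and the django-tasks package that inspired it; treat the import path and names as approximations rather than verified Django 6.0 documentation:

from django.tasks import task  # assumed location of the new framework

@task
def send_welcome_email(user_id: int) -> None:
    # Ordinary Django code; the framework defines *what* runs, not *where*.
    ...

# Somewhere in a view or signal handler: hand the work to whatever backend is
# configured in settings (immediate in-process execution by default, or a real
# worker supplied by a third-party package).
send_welcome_email.enqueue(user_id=42)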

As an abstraction, this is clean and sensible. It gives Django a shared language for background execution and removes a long-standing blind spot in the framework. But it also stops there.

Django’s task system only supports one-off execution. There is no notion of scheduling, recurrence, retries, persistence, or guarantees. There is no worker process and no production-ready backend. That limitation would be easier to accept if one-off tasks were the primary use case for background work, but they are not. In real applications, background work is usually time-based, repeatable, and failure-prone. Tasks need to run later, run again, or keep retrying until they succeed.

A missed opportunity

What makes this particularly frustrating is that Django had a clear opportunity to do more.

DEP 14 explicitly talks about a database backend, deferring tasks to run at a specific time in the future, and a new email backend that offloads work to the background. None of that has made it into Django itself yet. Why wasn’t the database worker from django-tasks at least added to Django, or something equivalent? This would have covered a large percentage of real-world use cases with minimal operational complexity.

Instead, we got an abstraction without an implementation.

I understand that building features takes time. What I struggle to understand is why shipping such a limited framework was preferred over waiting longer and delivering a more complete story. You only get to introduce a feature once, and in its current form the tasks framework feels more confusing than helpful for newcomers. The official documentation even acknowledges this incompleteness, yet offers little guidance beyond a link to the Community Ecosystem page. Developers are left guessing whether they are missing an intended setup or whether the feature is simply unfinished.

What Django should focus on next

Currently, with Django 6.0, serious background processing still requires third-party tools for scheduling, retries, delayed execution, monitoring, and scaling workers. That was true before, and it remains true now. Even if one-off fire-and-forget tasks are all you need, you still need to install a third party package to get a database backend and worker.

DEP 14 also explicitly states that the intention is not to build a replacement for Celery or RQ, because “that is a complex and nuanced undertaking”. I think this is a mistake. The vast majority of Django applications need a robust task framework. A database-backed worker that handles delays, retries, and basic scheduling would cover most real-world needs without any of Celery’s operational complexity. Django positions itself as a batteries-included framework, and background tasks are not an advanced feature. They are basic application infrastructure.

Otherwise, what is the point of Django’s Task framework? Let’s assume that it’ll get a production-ready backend and worker soon. What then? It can still only run one-off tasks. As soon as you need to schedule tasks, you still need to reach for a third-party solution. I think it should have a first-party answer for the most common cases, even if it’s complex.

Conclusion

Django 6.0’s task system is an important acknowledgement of a long-standing gap in the framework. It introduces a clean abstraction and finally gives background work a place in Django itself. This is good! But by limiting that abstraction to one-off tasks and leaving execution entirely undefined, Django delivers the least interesting part of the solution.

If I sound disappointed, it’s because I am. I just don’t understand the point of adding such a bare-bones Task framework when the reality is that most real-world projects still need to use third-party packages. But the foundation is there now. I hope that Django builds something on top that can replace django-apscheduler, django-rq, and django-celery. I believe that it can, and that it should.

January 19, 2026 08:00 PM UTC


Talk Python Blog

Announcing Talk Python AI Integrations

We’ve just added two new and exciting features to the Talk Python To Me website to allow deeper and richer integration with AI and LLMs.

  1. A full MCP server at talkpython.fm/api/mcp/docs
  2. An llms.txt summary to guide non-MCP use cases: talkpython.fm/llms.txt

The MCP Server

New to the idea of an MCP server? MCP (Model Context Protocol) servers are lightweight services that expose data and functionality to AI assistants through a standardized interface, allowing models like Claude to query external systems and access real-time information beyond their training data. The Talk Python To Me MCP server acts as a bridge between AI conversations and the podcast’s extensive catalog. This enables you to search episodes, look up guest appearances, retrieve transcripts, and explore course content directly within your AI workflow, making research and content discovery seamless.

January 19, 2026 05:49 PM UTC


Mike Driscoll

New Book: Vibe Coding Video Games with Python

My latest book, Vibe Coding Video Games with Python, is now available as an eBook. The paperback will be coming soon, hopefully by mid-February at the latest. The book is around 183 pages in length and is 6×9” in size.

Vibe Coding Video Games with Python

In this book, you will learn how to use artificial intelligence to create mini-games. You will attempt to recreate the look and feel of various classic video games. The intention is not to violate copyright or anything of the sort, but instead to learn the limitations and the power of AI.

You will simply be exploring whether or not you can use AI to help you create video games. Can you do it with no previous knowledge, as the AI proponents say? Is it really possible to create something just by writing out questions to the ether?

You will use various large language models (LLMs), such as Google Gemini, Grok, Mistral, and CoPilot, to create these games. You will discover the differences and similarities between these tools. You may be surprised to find that some tools give much more context than others.

AI is certainly not a cure-all and is far from perfect. You will quickly discover AI’s limitations and learn some strategies for solving those kinds of issues.

What You’ll Learn

You’ll be creating “clones” of some popular games. However, these games will only be the first level and may or may not be fully functional.

Where to Purchase

You can get Vibe Coding Video Games with Python at the following websites:

The post New Book: Vibe Coding Video Games with Python appeared first on Mouse Vs Python.

January 19, 2026 02:25 PM UTC


Real Python

How to Integrate ChatGPT's API With Python Projects

Python’s openai library provides the tools you need to integrate the ChatGPT API into your Python applications. With it, you can send text prompts to the API and receive AI-generated responses. You can also guide the AI’s behavior with developer role messages and handle both simple text generation and more complex code creation tasks. Here’s an example:

ChatGPT Python API example: Python script output from a ChatGPT API call using openai

After reading this tutorial, you’ll understand how examples like this work under the hood. You’ll learn the fundamentals of using the ChatGPT API from Python and have code examples you can adapt for your own projects.

Get Your Code: Click here to download the free sample code that you’ll use to integrate ChatGPT’s API with Python projects.

Take the Quiz: Test your knowledge with our interactive “How to Integrate ChatGPT's API With Python Projects” quiz. You’ll receive a score upon completion to help you track your learning progress.

Prerequisites

To follow along with this tutorial, you’ll need the following:

Don’t worry if you’re new to working with APIs. This tutorial will guide you through everything you need to know to get started with the ChatGPT API and implement AI features in your applications.

Step 1: Obtain Your API Key and Install the OpenAI Package

Before you can start making calls to the ChatGPT Python API, you need to obtain an API key and install the OpenAI Python library. You’ll start by getting your API key from the OpenAI platform, then install the required package and verify that everything works.

Obtain Your API Key

You can obtain an API key from the OpenAI platform by following these steps:

  1. Navigate to platform.openai.com and sign in to your account or create a new one if you don’t have an account yet.
  2. Click on the settings icon in the top-right corner and select API keys from the left-hand menu.
  3. Click the Create new secret key button to generate a new API key.
  4. In the dialog that appears, give your key a descriptive name like “Python Tutorial Key” to help you identify it later.
  5. For the Project field, select your preferred project.
  6. Under Permissions, select All to give your key full access to the API for development purposes.
  7. Click Create secret key to generate your API key.
  8. Copy the generated key immediately, as you won’t be able to see it again after closing the dialog.

Now that you have your API key, you need to store it securely.

Warning: Never hard-code your API key directly in your Python scripts or commit it to version control. Always use environment variables or secure key management services to keep your credentials safe.

The OpenAI Python library automatically looks for an environment variable named OPENAI_API_KEY when creating a client connection. By setting this variable in your terminal session, you’ll authenticate your API requests without exposing your key in your code.

Set the OPENAI_API_KEY environment variable in your terminal session:

Windows PowerShell
PS> $env:OPENAI_API_KEY="your-api-key-here"
Shell
$ export OPENAI_API_KEY="your-api-key-here"

Replace your-api-key-here with the actual API key you copied from the OpenAI platform.
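As a quick, unofficial preview (once the openai package from the next section is installed), a minimal call might look like the sketch below; the model name is a placeholder, so substitute whichever model you have access to:

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use the model you have access to
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)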

Install the OpenAI Package

With your API key configured, you can now install the OpenAI Python library. The openai package is available on the Python Package Index (PyPI), and you can install it with pip.

Open a terminal or command prompt, create a new virtual environment, and then install the library:

Read the full article at https://realpython.com/chatgpt-api-python/ »



January 19, 2026 02:00 PM UTC

Quiz: How to Integrate ChatGPT's API With Python Projects

In this quiz, you’ll test your understanding of How to Integrate ChatGPT’s API With Python Projects.

By working through this quiz, you’ll revisit how to send prompts with the openai library, guide behavior with developer role messages, and handle text and code outputs. You’ll also see how to integrate AI responses into your Python scripts for practical tasks.



January 19, 2026 12:00 PM UTC


Python Bytes

#466 PSF Lands $1.5 million

Topics covered in this episode:

  1. Better Django management commands with django-click and django-typer
  2. PSF Lands a $1.5 million sponsorship from Anthropic (https://pyfound.blogspot.com)
  3. How uv got so fast (https://nesbitt.io/2025/12/26/how-uv-got-so-fast.html)
  4. PyView Web Framework (https://pyview.rocks)
  5. Extras
  6. Joke

Watch on YouTube: https://www.youtube.com/watch?v=3jaIv4VvmgY

About the show

Sponsored by us! Support our work through our courses at Talk Python Training, The Complete pytest Course, and our Patreon supporters.

Connect with the hosts: Michael (@mkennedy@fosstodon.org / @mkennedy.codes on Bluesky), Brian (@brianokken@fosstodon.org / @brianokken.bsky.social), and the show (@pythonbytes@fosstodon.org / @pythonbytes.fm on Bluesky).

Join us on YouTube at pythonbytes.fm/live to be part of the audience, usually Monday at 11am PT. Older video versions are available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Brian #1: Better Django management commands with django-click and django-typer

By Lacy Henschel. Extend Django manage.py commands for your own project, for things like data operations, API integrations, complex data transformations, and development and debugging. Extending is built into Django, but it looks easier, takes less code, and is more fun with either django-click or django-typer, two projects supported through Django Commons.

Michael #2: PSF Lands a $1.5 million sponsorship from Anthropic

Anthropic is partnering with the Python Software Foundation in a landmark funding commitment to support both security initiatives and the PSF's core work. The funds will enable new automated tools for proactively reviewing all packages uploaded to PyPI, moving beyond the current reactive-only review process. The PSF plans to build a new dataset of known malware for capability analysis, and the investment will sustain programs like the Developer in Residence initiative, community grants, and infrastructure like PyPI.

Brian #3: How uv got so fast

By Andrew Nesbitt. It's not just because "it's written in Rust". Recent-ish standards (PEPs 518 from 2016, 517 from 2017, 621 from 2020, and 658 from 2022) made many uv design decisions possible, and uv drops many backwards-compatibility decisions kept by pip. Dropping functionality speeds things up: "Speed comes from elimination. Every code path you don't have is a code path you don't wait for." Some of what uv does could be implemented in pip; some cannot. Andrew discusses the different speedups and why they could, or could not, also be done in Python. I read this article out of interest, but it gives me lots of ideas for tools that could be written faster just with Python by making design and support decisions that eliminate whole workflows.

Michael #4: PyView Web Framework

PyView brings the Phoenix LiveView paradigm to Python. Michael recently interviewed Larry on Talk Python about it. Build dynamic, real-time web applications using server-rendered HTML. Check out the examples at examples.pyview.rocks (see the Maps demo for some real magic). How does this possibly work? See the LiveView Lifecycle docs.

Extras

Brian: Upgrade Django (upgradedjango.com) has a great discussion of how to upgrade version by version and why you might want to do that instead of just jumping ahead to the latest version, as well as who might want to save time by leapfrogging; it also lists all the versions with their release and end-of-support dates. The first draft of the Lean TDD book is done and available through both pythontest and LeanPub (marked as 80% done because of future drafts planned). Brian is working through a few submitted suggestions, re-reading it himself (the introduction needs to pop more), and deciding how much discussion of AI to incorporate.

Michael: Python: What's Coming in 2026; Python Bytes rewritten in Quart + async (very similar to Talk Python's journey); added a proper MCP server at Talk Python To Me (you don't need a formal MCP framework, by the way); implemented /llms.txt for Talk Python To Me (see talkpython.fm/llms.txt).

Joke: Reverse Superman

January 19, 2026 08:00 AM UTC

January 18, 2026


EuroPython

Humans of EuroPython: Doreen Peace Nangira Wanyama

EuroPython thrives thanks to dedicated volunteers who invest hundreds of hours into each conference. From speaker coordination and fundraising to workshop preparation, their commitment ensures every year surpasses the last.

Below is our latest interview with Doreen Peace Nangira Wanyama. Doreen wore many hats at EuroPython 2025, including being the lead organizer of the Django Girls workshop during the Beginners’ Day, helping in the Financial Aid Team, as well as volunteering on-site.

Thank you for contributing to the conference, Doreen!

Doreen Peace Nangira Wanyama, Django Girls Organizer at EuroPython 2025

EP: What first inspired you to volunteer for EuroPython? 

What inspired me was the diversity and inclusivity of the EuroPython community. I had been following the EuroPython community since 2024, and what stood out for me was how inclusive it was. It was open not only to people from the EU but worldwide. I saw people from Africa getting the stage to speak, and even the opportunity grants were there for everyone. I told myself, wow! I should be part of this community. All I can say is I will still choose EuroPython over and over.

EP: What was your primary role as a volunteer, and what did a typical day look like for you?

I had the opportunity to play two main roles: I was the Django Girls organizer and also part of the Financial Aid organizing team. For Django Girls, I was in charge of putting out the call for coaches and Django Girls mentees. I ensured proper logistics were in place for all attendees and also worked with the communications team to ensure enough social media posts were made about the event. I also worked with coaches to set up the PCs for mentees for the workshop, i.e. Django installation. In the Financial Aid Team, I worked with fellow teammates by putting out the call for finaid grants, reviewing applications, and sending out acknowledgement emails. We prepared visa letters for accepted grant recipients to help with their visa applications. We issued the conference tickets to both accepted online and onsite attendees. After the conference we did reimbursements for each grant recipient and followed up with emails to ensure everyone had been reimbursed.

EP: Did you make any lasting friendships or professional connections through contributing to the conference?

Yes. Contributing to this conference earned me new friends and professional connections. I got to meet and talk to people I would have hardly met otherwise. First of all, when I attended the conference I thought I would be the only database administrator there; well, EuroPython had a surprise for me. I met a fellow DBA from Germany, and we would not stop talking about the importance of Python in our field. I got the opportunity of meeting the DSF president Thibaud Colas for the first time, someone who is down to earth and loves giving back to the community. I also got to meet Daria Linhart, a loving soul, someone who is always ready to help. I remember getting stuck in Czechia when I was looking for my accommodation. Daria used her Czech language skills to speak with my host and voila!

EP: How has volunteering at EuroPython impacted your own career or learning journey?

Volunteering at EuroPython made me realize that people can make you go far. Doing it all alone is possible but doing it as a team makes a big difference. Working with different people during this conference and attending talks made me realize the different areas I need to improve on.  

EP: What's your favorite memory from contributing at EuroPython?

My favourite memory is the daily social events after the conference. Wow! EuroPython made me explore the Czech Republic to the fullest. From the speakers' dinner on the first day to the Django birthday cake we cut, I really had great moments. I also can’t forget the variety of food we were offered. I enjoyed the whole cuisine and can’t wait to experience this again in the next EuroPython.

EP: If you were to invite someone else, what do you think are the top 3 reasons to join the EuroPython organizing team?

A. Freedom of expression — EuroPython is a free and open space. Everyone is allowed to express their views without bias.

B. Learning opportunities — Whether you are a first timer or a seasoned conference organizer, there is always something to learn here. You will learn new ways of doing things.

C. Loving and welcoming community — Want a place that feels like home? The EuroPython community is the place.

EP: Thank you, Doreen!

January 18, 2026 05:07 PM UTC


Eli Bendersky

Compiling Scheme to WebAssembly

One of my oldest open-source projects - Bob - celebrated its 15th birthday a couple of months ago. Bob is a suite of implementations of the Scheme programming language in Python, including an interpreter, a compiler and a VM. Back then I was doing some hacking on CPython internals and was very curious about how CPython-like bytecode VMs work; Bob was an experiment to find out, by implementing one from scratch for R5RS Scheme.

Several months later I added a C++ VM to Bob, as an exercise to learn how such VMs are implemented in a low-level language without all the runtime support Python provides; most importantly, without the built-in GC. The C++ VM in Bob implements its own mark-and-sweep GC.

After many quiet years (with just a sprinkling of cosmetic changes, porting to GitHub, updates to Python 3, etc), I felt the itch to work on Bob again just before the holidays. Specifically, I decided to add another compiler to the suite - this one from Scheme directly to WebAssembly.

The goals of this effort were two-fold:

  1. Experiment with lowering a real, high-level language like Scheme to WebAssembly. Experiments like the recent Let's Build a Compiler compile toy languages that are at the C level (no runtime). Scheme has built-in data structures, lexical closures, garbage collection, etc. It's much more challenging.
  2. Get some hands-on experience with the WASM GC extension [1]. I have several samples of using WASM GC in the wasm-wat-samples repository, but I really wanted to try it for something "real".

Well, it's done now; here's an updated schematic of the Bob project:

Bob project diagram with all the components it includes

The new part is the rightmost vertical path. A WasmCompiler class lowers parsed Scheme expressions all the way down to WebAssembly text, which can then be compiled to a binary and executed using standard WASM tools [2].

Highlights

The most interesting aspect of this project was working with WASM GC to represent Scheme objects. As long as we properly box/wrap all values in refs, the underlying WASM execution environment will take care of the memory management.

For Bob, here's how some key Scheme objects are represented:

;; PAIR holds the car and cdr of a cons cell.
(type $PAIR (struct (field (mut (ref null eq))) (field (mut (ref null eq)))))

;; BOOL represents a Scheme boolean. zero -> false, nonzero -> true.
(type $BOOL (struct (field i32)))

;; SYMBOL represents a Scheme symbol. It holds an offset in linear memory
;; and the length of the symbol name.
(type $SYMBOL (struct (field i32) (field i32)))

$PAIR is of particular interest, as it may contain arbitrary objects in its fields; (ref null eq) means "a nullable reference to something that has identity". ref.test can be used to check - for a given reference - the run-time type of the value it refers to.

You may wonder - what about numeric values? Here WASM has a trick - the i31 type can be used to represent a reference to an integer, but without actually boxing it (one bit is used to distinguish such an object from a real reference). So we don't need a separate type to hold references to numbers.

Also, the $SYMBOL type looks unusual - how is a symbol represented with two numbers? The key to the mystery is that WASM has no built-in support for strings; they have to be implemented manually using offsets into linear memory. The Bob WASM compiler emits the string values of all symbols encountered into linear memory, keeping track of the offset and length of each one; these are the two numbers placed in $SYMBOL. This also makes it fairly easy to implement the string interning feature of Scheme; multiple instances of the same symbol will only be allocated once.

Consider this trivial Scheme snippet:

(write '(10 20 foo bar))

The compiler emits the symbols "foo" and "bar" into linear memory as follows [3]:

(data (i32.const 2048) "foo")
(data (i32.const 2051) "bar")

And looking for one of these addresses in the rest of the emitted code, we'll find:

(struct.new $SYMBOL (i32.const 2051) (i32.const 3))

This is part of the code that constructs the constant cons list representing the argument to write; address 2051 and length 3 identify the symbol bar.

Speaking of write, implementing this builtin was quite interesting. For compatibility with the other Bob implementations in my repository, write needs to be able to print recursive representations of arbitrary Scheme values, including lists, symbols, etc.

Initially I was reluctant to implement all of this functionality by hand in WASM text, but all alternatives ran into challenges:

  1. Deferring this to the host is difficult because the host environment has no access to WASM GC references - they are completely opaque.
  2. Implementing it in another language (maybe C?) and lowering to WASM is also challenging for a similar reason - the other language is unlikely to have a good representation of WASM GC objects.

So I bit the bullet and - with some AI help for the tedious parts - just wrote an implementation of write directly in WASM text; it wasn't really that bad. I import only two functions from the host:

(import "env" "write_char" (func $write_char (param i32)))
(import "env" "write_i32" (func $write_i32 (param i32)))

Though emitting integers directly from WASM isn't hard, I figured this project already has enough code and some host help here would be welcome. For all the rest, only the lowest level write_char is used. For example, here's how booleans are emitted in the canonical Scheme notation (#t and #f):

(func $emit_bool (param $b (ref $BOOL))
    (call $emit (i32.const 35)) ;; '#'
    (if (i32.eqz (struct.get $BOOL 0 (local.get $b)))
        (then (call $emit (i32.const 102))) ;; 'f'
        (else (call $emit (i32.const 116))) ;; 't'
    )
)

Conclusion

This was a really fun project, and I learned quite a bit about realistic code emission to WASM. Feel free to check out the source code of WasmCompiler - it's very well documented. While it's a bit over 1000 LOC in total [4], more than half of that is actually WASM text snippets that implement the builtin types and functions needed by a basic Scheme implementation.


[1]The GC proposal is documented here. It was officially added to the WASM spec in Oct 2023.
[2]In Bob this is currently done with bytecodealliance/wasm-tools for the text-to-binary conversion and Node.js for the execution environment, but this can change in the future. I actually wanted to use Python bindings to wasmtime, but these don't appear to support WASM GC yet.

[3]2048 is just an arbitrary offset the compiler uses as the beginning of the section for symbols in memory. We could also use the multiple memories feature of WASM and dedicate a separate linear memory just for symbols.
[4]To be clear, this is just the WASM compiler class; it uses the Expr representation of Scheme that is created by Bob's parser (and lexer); the code of these other components is shared among all Bob implementations and isn't counted here.

January 18, 2026 06:40 AM UTC

Revisiting "Let's Build a Compiler"

There's an old compiler-building tutorial that has become part of the field's lore: the Let's Build a Compiler series by Jack Crenshaw (published between 1988 and 1995).

I ran into it in 2003 and was very impressed, but it's now 2025 and this tutorial is still being mentioned quite often in Hacker News threads. Why is that? Why does a tutorial from 35 years ago, built in Pascal and emitting Motorola 68000 assembly - technologies that are virtually unknown to the new generation of programmers - hold sway over compiler enthusiasts? I decided to find out.

The tutorial is easily available and readable online, but just re-reading it seemed insufficient. So I decided to meticulously translate the compilers built in it to Python and emit a more modern target - WebAssembly. It was an enjoyable process, and I want to share the outcome and some insights gained along the way.

The result is this code repository. Of particular interest is the TUTORIAL.md file, which describes how each part in the original tutorial is mapped to my code. So if you want to read the original tutorial but play with code you can actually easily try on your own, feel free to follow my path.

A sample

To get a taste of the input language being compiled and the output my compiler generates, here's a sample program in the KISS language designed by Jack Crenshaw:

var X=0

 { sum from 0 to n-1 inclusive, and add to result }
 procedure addseq(n, ref result)
     var i, sum  { 0 initialized }
     while i < n
         sum = sum + i
         i = i + 1
     end
     result = result + sum
 end

 program testprog
 begin
     addseq(11, X)
 end
 .

It's from part 13 of the tutorial, so it showcases procedures along with control constructs like the while loop, and passing parameters both by value and by reference. Here's the WASM text generated by my compiler for part 13:

(module
  (memory 8)
  ;; Linear stack pointer. Used to pass parameters by ref.
  ;; Grows downwards (towards lower addresses).
  (global $__sp (mut i32) (i32.const 65536))

  (global $X (mut i32) (i32.const 0))

  (func $ADDSEQ (param $N i32) (param $RESULT i32)
    (local $I i32)
    (local $SUM i32)
    loop $loop1
      block $breakloop1
        local.get $I
        local.get $N
        i32.lt_s
        i32.eqz
        br_if $breakloop1
        local.get $SUM
        local.get $I
        i32.add
        local.set $SUM
        local.get $I
        i32.const 1
        i32.add
        local.set $I
        br $loop1
      end
    end
    local.get $RESULT
    local.get $RESULT
    i32.load
    local.get $SUM
    i32.add
    i32.store
  )

  (func $main (export "main") (result i32)
    i32.const 11
    global.get $__sp      ;; make space on stack
    i32.const 4
    i32.sub
    global.set $__sp
    global.get $__sp
    global.get $X
    i32.store
    global.get $__sp    ;; push address as parameter
    call $ADDSEQ
    ;; restore parameter X by ref
    global.get $__sp
    i32.load offset=0
    global.set $X
    ;; clean up stack for ref parameters
    global.get $__sp
    i32.const 4
    i32.add
    global.set $__sp
    global.get $X
  )
)

You'll notice that there is some trickiness in the emitted code w.r.t. handling the by-reference parameter (my previous post deals with this issue in more detail). In general, though, the emitted code is inefficient - there is close to 0 optimization applied.

Also, if you're very diligent you'll notice something odd about the global variable X - it seems to be implicitly returned by the generated main function. This is just a testing facility that makes my compiler easy to test. All the compilers are extensively tested - usually by running the generated WASM code [1] and verifying expected results.

Insights - what makes this tutorial so special?

While reading the original tutorial again, I had an opportunity to reminisce on what makes it so effective. Other than the very fluent and conversational writing style of Jack Crenshaw, I think it's a combination of two key factors:

  1. The tutorial builds a recursive-descent parser step by step, rather than giving a long preface on automata and table-based parser generators. When I first encountered it (in 2003), it was taken for granted that if you wanted to write a parser, then lex + yacc were the way to go [2]. Following the development of a simple and clean hand-written parser was a revelation that wholly changed my approach to the subject; subsequently, hand-written recursive-descent parsers have been my go-to approach for almost 20 years now (a tiny Python sketch of this style of parser appears at the end of this section).
  2. Rather than getting stuck in front-end minutiae, the tutorial goes straight to generating working assembly code, from very early on. This was also a breath of fresh air for engineers who grew up with more traditional courses where you spend 90% of the time on parsing, type checking and other semantic analysis and often run entirely out of steam by the time code generation is taught.

To be honest, I don't think either of these are a big problem with modern resources, but back in the day the tutorial clearly hit the right nerve with many people.
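For readers who have never seen one, here's a tiny Python sketch of a hand-written recursive-descent parser in the tutorial's spirit: one small function per grammar rule, evaluating integer expressions as it parses. It's an illustration only, not code from the repository:

import re

class Parser:
    def __init__(self, text: str):
        # Tokenize into integer literals and single-character operators.
        self.tokens = re.findall(r"\d+|[()+\-*/]", text)
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        cur = self.peek()
        if expected is not None and cur != expected:
            raise SyntaxError(f"expected {expected!r}, got {cur!r}")
        self.pos += 1
        return cur

    def expression(self):           # expression := term (('+'|'-') term)*
        value = self.term()
        while self.peek() in ("+", "-"):
            op = self.eat()
            rhs = self.term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term(self):                 # term := factor (('*'|'/') factor)*
        value = self.factor()
        while self.peek() in ("*", "/"):
            op = self.eat()
            rhs = self.factor()
            value = value * rhs if op == "*" else value // rhs
        return value

    def factor(self):               # factor := NUMBER | '(' expression ')'
        if self.peek() == "(":
            self.eat("(")
            value = self.expression()
            self.eat(")")
            return value
        return int(self.eat())

print(Parser("2*(3+4)-5").expression())  # prints 9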

What else does it teach us?

Jack Crenshaw's tutorial takes the syntax-directed translation approach, where code is emitted while parsing, without having to divide the compiler into explicit phases with IRs. As I said above, this is a fantastic approach for getting started, but in the latter parts of the tutorial it starts showing its limitations. Especially once we get to types, it becomes painfully obvious that it would be very nice if we knew the types of expressions before we generate code for them.

I don't know if this played a part in Jack Crenshaw abandoning the tutorial at some point after part 14, but it may very well have. He keeps writing that the emitted code is clearly sub-optimal [3] and can be improved, but IMHO it's just not that easy to improve using the syntax-directed translation strategy. With the benefit of hindsight, I would probably use Part 14 (types) as a turning point - emitting some kind of AST from the parser and then doing simple type checking and analysis on that AST prior to generating code from it.
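
As a rough sketch of what that turning point could look like (my own illustration, not code from the tutorial or the repository): build a tiny AST, infer the type of every expression on it, and only then decide what to emit.

from dataclasses import dataclass

@dataclass
class Num:
    value: int | float

@dataclass
class BinOp:
    op: str
    left: "Num | BinOp"
    right: "Num | BinOp"

def infer_type(node) -> str:
    """Decide each expression's WASM type before any code is generated."""
    if isinstance(node, Num):
        return "f64" if isinstance(node.value, float) else "i32"
    left, right = infer_type(node.left), infer_type(node.right)
    # Widen to f64 when the operands disagree - known up front, so the code
    # generator can insert conversions in the right places.
    return "f64" if "f64" in (left, right) else "i32"

print(infer_type(BinOp("+", Num(1), Num(2.5))))  # f64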

Conclusion

All in all, the original tutorial remains a wonderfully readable introduction to building compilers. This post and the GitHub repository it describes are a modest contribution that aims to improve the experience of folks reading the original tutorial today and not willing to use obsolete technologies. As always, let me know if you run into any issues or have questions!


[1] This is done using the Python bindings to wasmtime.
[2] By the way, gcc switched from YACC to hand-written recursive-descent parsing in the 2004-2006 timeframe, and Clang has been implemented with a recursive-descent parser from the start (2007).
[3] Concretely: when we compile subexpr1 + subexpr2 and the two sides have different types, it would be mighty nice to know that before we actually generate the code for both sub-expressions. But the syntax-directed translation approach just doesn't work that way.

To be clear: it's easy to generate working code; it's just not easy to generate optimal code without some sort of type analysis that's done before code is actually generated.

January 18, 2026 06:40 AM UTC


Armin Ronacher

Agent Psychosis: Are We Going Insane?

You can use Polecats without the Refinery and even without the Witness or Deacon. Just tell the Mayor to shut down the rig and sling work to the polecats with the message that they are to merge to main directly. Or the polecats can submit MRs and then the Mayor can merge them manually. It’s really up to you. The Refineries are useful if you have done a LOT of up-front specification work, and you have huge piles of Beads to churn through with long convoys.

Gas Town Emergency User Manual, Steve Yegge

Many of us got hit by the agent coding addiction. It feels good, we barely sleep, we build amazing things. Every once in a while that interaction involves other humans, and all of a sudden we get a reality check that maybe we overdid it. The most obvious example of this is the massive degradation in the quality of issue reports and pull requests. To a maintainer, many PRs now look like an insult to one's time, but when one pushes back, the other person does not see what they did wrong. They thought they were helping and contributing, and they get agitated when you close it down.

But it’s way worse than that. I see people develop parasocial relationships with their AIs, get heavily addicted to it, and create communities where people reinforce highly unhealthy behavior. How did we get here and what does it do to us?

I will preface this post by saying that I don't want to call anyone out in particular, and that I sometimes notice the tendencies I describe as negative in myself as well. I, too, have thrown some vibeslop up to other people's repositories.

Our Little Dæmons

In His Dark Materials, every human has a dæmon, a companion that is an externally visible manifestation of their soul. It lives alongside as an animal, but it talks, thinks and acts independently. I’m starting to relate our relationship with agents that have memory to those little creatures. We become dependent on them, and separation from them is painful and takes away from our new-found identity. We’re relying on these little companions to validate us and to collaborate with. But it’s not a genuine collaboration like between humans, it’s one that is completely driven by us, and the AI is just there for the ride. We can trick it to reinforce our ideas and impulses. And we act through this AI. Some people who have not programmed before, now wield tremendous powers, but all those powers are gone when their subscription hits a rate limit and their little dæmon goes to sleep.

Then, when we throw up a PR or issue to someone else, that contribution is the result of this pseudo-collaboration with the machine. When I see an AI pull request come in, on my own repositories or on another one, I cannot tell exactly how someone created it, but after a while I can usually tell when it was prompted in a way that is fundamentally different from how I do it. Even so, it takes me minutes to figure this out. I have seen some coding sessions from others, and they are often done with clarity, but using slang that someone has come up with and, most of all, by completely forcing the AI down a path without any real critical thinking. Particularly when you're not familiar with how the systems are supposed to work, giving in to what the machine says and then thinking you understand what is going on creates some really bizarre outcomes at times.

But people create these weird relationships with their AI agent and once you see how some prompt their machines, you realize that it dramatically alters what comes out of it. To get good results you need to provide context, you need to make the tradeoffs, you need to use your knowledge. It’s not just a question of using the context badly, it’s also the way in which people interact with the machine. Sometimes it’s unclear instructions, sometimes it’s weird role-playing and slang, sometimes it’s just swearing and forcing the machine, sometimes it’s a weird ritualistic behavior. Some people just really ram the agent straight towards the most narrow of all paths towards a badly defined goal with little concern about the health of the codebase.

Addicted to Prompts

These dæmon relationships change not just how we work, but what we produce. You can completely give in and let the little dæmon run circles around you. You can reinforce it to run towards ill defined (or even self defined) goals without any supervision.

It’s one thing when newcomers fall into this dopamine loop and produce something. When Peter first got me hooked on Claude, I did not sleep. I spent two months excessively prompting the thing and wasting tokens. I ended up building and building and creating a ton of tools I did not end up using much. “You can just do things” was what was on my mind all the time but it took quite a bit longer to realize that just because you can, you might not want to. It became so easy to build something and in comparison it became much harder to actually use it or polish it. Quite a few of the tools I built I felt really great about, just to realize that I did not actually use them or they did not end up working as I thought they would.

The thing is that the dopamine hit from working with these agents is so very real. I've been there! You feel productive, you feel like everything is amazing, and if you hang out just with people who are into that stuff too, without any checks, you go deeper and deeper into the belief that this all makes perfect sense. You can build entire projects without any real reality check. But it's decoupled from any external validation. For as long as nobody looks under the hood, you're good. But when an outsider first pokes at it, it looks pretty crazy. And damn, some things look amazing. I too was blown away (while at the same time fully expecting it) when Cursor's AI-written web browser landed. It's super impressive that agents were able to bootstrap a browser in a week! But holy crap, I hope nobody ever uses that thing or tries to build an actual browser out of it; at least with this generation of agents, it's still pure slop with little oversight. It's an impressive research and tech demo, not an approach to building software people should use. At least not yet.

There is also another side to this slop loop addiction: token consumption.

Consider how many tokens these loops actually consume. A well-prepared session with good tooling and context can be remarkably token-efficient. For instance, the entire port of MiniJinja to Go took only 2.2 million tokens. But the hands-off approaches—spinning up agents and letting them run wild—burn through tokens at staggering rates. Patterns like Ralph are particularly wasteful: you restart the loop from scratch each time, which means you lose the ability to use cached tokens or reuse context.

We should also remember that current token pricing is almost certainly subsidized. These patterns may not be economically viable for long. And those discounted coding plans we’re all on? They might not last either.

Slop Loop Cults

And then there are things like Beads and Gas Town, Steve Yegge’s agentic coding tools, which are the complete celebration of slop loops. Beads, which is basically some sort of issue tracker for agents, is 240,000 lines of code that … manages markdown files in GitHub repositories. And the code quality is abysmal.

In some circles there appears to be a competition to run as many of these agents in parallel as possible, with almost no quality control. And then agents are used to create documentation artifacts in an attempt to regain some confidence about what is actually going on. Except those documents themselves read like slop.

Looking at Gas Town (and Beads) from the outside, it looks like a Mad Max cult. What are polecats, refineries, mayors, beads, and convoys doing in an agentic coding system? If the maintainer is in the loop, and the whole community is in on this mad ride, then everyone and their dæmons just throw more slop up. As an external observer, the whole project looks like an insane psychosis or a complete mad art project. Except, it's real? Or is it not? Apparently one reason for the slowdown in Gas Town is contention on figuring out the version of Beads, which takes 7 subprocess spawns. Or the doctor command times out completely. Beads keeps growing and growing in complexity, and people who are using it are realizing that it's almost impossible to uninstall. And the two might not even work well together, even though one apparently depends on the other.

I don’t want to pick on Gas Town or these projects, but they are just the most visible examples of this in-group behavior right now. But you can see similar things in some of the AI builder circles on Discord and X where people hype each other up with their creations, without much critical thinking and sanity checking of what happens under the hood.

Asymmetry and the Maintainer's Burden

It takes a minute of prompting and a few more minutes of waiting for code to come out. But honestly reviewing a pull request takes many times longer than that. The asymmetry is completely brutal. Shooting up bad code is rude because you completely disregard the maintainer's time. But everybody else is also creating AI-generated code, and maybe theirs passed the bar of being good. So how can you possibly tell as a maintainer when it all looks the same? And as the person writing the issue or the PR, you felt good about it. Yet what you get back is frustration and rejection.

I’m not sure how we will go ahead here, but it’s pretty clear that in projects that don’t submit themselves to the slop loop, it’s going to be a nightmare to deal with all the AI-generated noise.

Even for projects that are fully AI-generated but are setting some standard for contributions, some folks now prefer actually just getting the prompts over getting the actual code. Because then it’s clearer what the person actually intended. There is more trust in running the agent oneself than having other people do it.

Is Agent Psychosis Real?

Which really makes me wonder: am I missing something here? Is this where we are going? Am I just not ready for this new world? Are we all collectively going insane?

Particularly if you want to opt out of this craziness right now, it’s getting quite hard. Some projects no longer accept human contributions until they have vetted the people completely. Others are starting to require that you submit prompts alongside your code, or just the prompts alone.

I am a maintainer who uses AI myself, and I know others who do. We’re not luddites and we’re definitely not anti-AI. But we’re also frustrated when we encounter AI slop on issue and pull request trackers. Every day brings more PRs that took someone a minute to generate and take an hour to review.

There is a dire need to say no now. But when one does, the contributor is genuinely confused: “Why are you being so negative? I was trying to help.” They were trying to help. Their dæmon told them it was good.

Maybe the answer is that we need better tools — better ways to signal quality, better ways to share context, better ways to make the AI’s involvement visible and reviewable. Maybe the culture will self-correct as people hit walls. Maybe this is just the awkward transition phase before we figure out new norms.

Or maybe some of us are genuinely losing the plot, and we won’t know which camp we’re in until we look back. All I know is that when I watch someone at 3am, running their tenth parallel agent session, telling me they’ve never been more productive — in that moment I don’t see productivity. I see someone who might need to step away from the machine for a bit. And I wonder how often that someone is me.

Two things are both true to me right now: AI agents are amazing and a huge productivity boost. They are also massive slop machines if you turn off your brain and let go completely.

January 18, 2026 12:00 AM UTC

January 16, 2026


PyCon

Building the Future with Python? Apply for Free Booth Space on Startup Row at PyCon US 2026



Consult just about any guide about how to build a tech startup and one of the very first pieces of advice you’ll be given is: Talk to Your Customers. If your target market just so happens to be Python-fluent developers, data scientists, researchers, students, and open-source software enthusiasts, there’s probably no better place than PyCon US to share your startup’s products and services with the Python community.

If you’re a founder of an early-stage startup that’s building something cool with Python and want to apply for free (yes, free) booth space, conference passes, and (optionally) a table at the PyCon US Job Fair for your team at PyCon US 2026 in lovely Long Beach, California this upcoming May, we have some great news for you: Applications for booth space on Startup Row are open, but not for long…

Applications close Friday, January 30, 2026. You’ll hear back with an acceptance decision from us by mid-February, so you’ll have plenty of time to book travel and get your booth materials together in time for the conference.

TL;DR: How/where to Apply. For all the action-oriented types who want to skip the rest of this post and just get to the point, here’s the Startup Row page again (where you can find eligibility criteria, etc.) and a direct link to the application form (for which you’ll need to be logged in or create an account to access). Good luck! We look forward to reviewing your application, and hope to see you at PyCon US 2026.

What Startup Row Companies Receive

Since 2011, organizers of PyCon US have set aside a row of booths for early-stage startups, straightforwardly named Startup Row. The goal is to give early-stage companies access to the best of what PyCon US has to offer.

At no cost to them, Startup Row companies receive two included conference passes, with additional passes available for your team at a discount.
The only catch? If you’re granted a spot on Startup Row, as part of the onboarding process, PyCon US organizers ask for a fully refundable $400 deposit to discourage no-shows. Teams also cover their own transportation, lodging, and booth materials (banners, swag, table coverings, etc.). Startup Row organizers will partner with your team to make sure everything runs smoothly. After the conference, PyCon US refunds deposits to startups that successfully attended.

If your company is building something cool with Python, it’s hard to beat PyCon US for sharing your work and meeting the Python software community. Startup Row is where some companies launch publicly, where others find their earliest customers and contributors, and where attendees can discover exciting, meaningful job opportunities.

What kinds of companies get a spot on Startup Row?

Python is a flexible language and has applications up and down the stack.

Over the years, Startup Row has featured software and hardware companies, consumer and enterprise offerings, open-source and proprietary codebases, and teams from a surprisingly broad range of industries—from familiar categories like developer tools and ML frameworks to foundation model developers and the occasional wonderfully weird idea (think: an e-ink portable typewriter with cloud sync, or an online wedding-planning platform).

Want recent examples? Take a look at the PyCon US blog announcements for the 2025, 2024, 2023, and 2022 batches.

When scoring applications, the selection committee is encouraged to weigh:
  • Market upside: could this be a big business?
  • Problem/solution fit: does the product truly address the stated need?
  • Team strength: does the founding team have the credibility and capability to execute?
  • “X factor”: would appearing on Startup Row materially accelerate outcomes for the company and/or the Python community?
If you can make a credible case for any one of those points, your startup stands a chance of getting featured on Startup Row at PyCon US 2026.

Who do I contact with questions about Startup Row at PyCon US 2026?

For specific Startup Row-related questions between now and the application deadline, reach out to pycon-startups@python.org.

January 16, 2026 02:43 PM UTC


Python Morsels

Self-concatenation

Strings and other sequences can be multiplied by numbers to self-concatenate them.
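
For instance, multiplying a short string or a list by an integer repeats its contents:

>>> "la" * 3
'lalala'
>>> [0, 1] * 2
[0, 1, 0, 1]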

Table of contents

  1. Concatenating strings to numbers doesn't work
  2. Multiplying strings by numbers
  3. Self-concatenation only works with integers
  4. Sequences usually support self-concatenation
  5. Practical uses of self-concatenation
  6. Self-concatenation does not copy
  7. Don't self-concatenate lists of mutable items
  8. When to use self-concatenation

Concatenating strings to numbers doesn't work

You can't use the plus sign (+) between a string and a number in Python:

>>> prefix = "year: "
>>> year = 1999
>>> prefix + year
Traceback (most recent call last):
  File "<python-input-4>", line 1, in <module>
    prefix + year
    ~~~~~~~^~~~~~
TypeError: can only concatenate str (not "int") to str

You can use the plus sign to add two numbers:

>>> year + 1
2000

Or to concatenate two strings:

>>> prefix + str(year)
'year: 1999'

But it doesn't work between strings and numbers.

More on that: Fixing TypeError: can only concatenate str (not "int") to str.

Multiplying strings by numbers

Interestingly, you can multiply a …

Read the full article: https://www.pythonmorsels.com/self-concatenation/

January 16, 2026 02:15 PM UTC


Real Python

The Real Python Podcast – Episode #280: Considering Fast and Slow in Python Programming

How often have you heard about the speed of Python? What's actually being measured, where are the bottlenecks (development time or run time), and which matters more for productivity? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.



January 16, 2026 12:00 PM UTC


Daniel Roy Greenfeld

Writing tools to download everything

Over the years, Audrey and I have accumulated photos across a variety of services. Flickr, SmugMug, and others all have chunks of our memories sitting on their servers. Some of these services we haven't touched in years, others we pay for but rarely use. It was time to bring everything home.

Why Bother?

Two reasons pushed me to finally tackle this.

First, money. Subscriptions add up. Paying for storage on services we barely use felt wasteful. As a backup it felt even more wasteful, because there are services that are cheaper and easier to use for that purpose, like Backblaze.

Second, simplicity. Having photos scattered across multiple services means hunting through different interfaces when looking for a specific memory. Consolidating everything into one place makes our photo library actually usable.

Using Claude to Write a Downloader

I decided to start with SmugMug since that had the largest collection. I could have written this script myself. I've done plenty of API work over the years. But I'm busy, and this felt like a perfect use case for AI assistance.

My approach was straightforward:

  1. Wrote a specification for a SmugMug downloader. I linked to the docs for the service, then told Claude to make a CLI for downloading things off that service. For the CLI I insisted on Typer, but otherwise I didn't specify dependencies (see the sketch after this list).

  2. Told Claude to generate code based on the spec. I provided the specification and let Claude produce a working Python script.

  3. Tested by running the scripts against real data. I started with small batches to verify the downloads worked correctly. Claude got everything right when it came to downloads on the first go, which was impressive.

  4. Adjusted for volume. We had over 5,000 files on SmugMug. Downloading everything at once took longer than I expected. I asked Claude to track files so if the script was interrupted it could resume where it left off. Claude kept messing this up, and after the 5th or 6th attempt I gave up trying to use Claude to write this part.
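
For a rough idea of the kind of Typer skeleton that spec asked for (the command, options, and messages here are invented for illustration; they are not the real tool's interface):

from typing import Optional

import typer

app = typer.Typer(help="Download albums and photos from SmugMug.")

@app.command()
def download(
    output_dir: str = typer.Option("photos", help="Directory to store downloaded albums."),
    album: Optional[str] = typer.Option(None, help="Download only this album (default: all)."),
) -> None:
    """Download photos, preserving the album structure."""
    target = album or "all albums"
    typer.echo(f"Downloading {target} into {output_dir}...")
    # Authentication, album enumeration, and the actual downloads would go here.

if __name__ == "__main__":
    app()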

I Wrote Some Code

I wrote a super simple image ID cache using a plaintext file for storage. It was simple, effective, and worked on the first go. Sometimes it's easier to just write the code yourself than try to get an AI to do it for you.
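
The idea is roughly this (a minimal sketch of the approach, not the actual code; the file and function names are invented):

from pathlib import Path

CACHE_FILE = Path("downloaded_ids.txt")  # hypothetical cache location

def load_downloaded_ids() -> set[str]:
    """Image IDs that earlier runs already fetched."""
    if CACHE_FILE.exists():
        return set(CACHE_FILE.read_text().split())
    return set()

def mark_downloaded(image_id: str) -> None:
    """Record an ID so an interrupted run can skip it next time."""
    with CACHE_FILE.open("a") as f:
        f.write(f"{image_id}\n")

On each run, skip any image whose ID is already in the loaded set and call mark_downloaded after each successful save.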

The SmugMug Downloader

The project is here at SmugMug downloader. It authenticates, enumerates all albums, and downloads every photo while preserving the album structure. Nothing fancy, just practical.

I'll be working on the Flickr downloader soon, following the same pattern. There are a few other services on the list too; I'm scanning our bank statements to see what else we have accounts on that we've let linger for too long.

Was It Worth It?

Absolutely. What would have taken me a day of focused coding took an hour of iterating with Claude. Our photos are off Smugmug and we're canceling a subscription we no longer need. I think this is what they mean by "vibe engineering".

Summary

These are files which, in some cases, we thought we had lost or had forgotten about. So the emotional and financial investment in a vibe-engineered effort was low. If this were something touching our finances or wedding/baby photos, I would have been much more cautious. But for now, this is a fun experiment in using AI to handle the mundane parts of coding so I can focus on more critical tasks.

January 16, 2026 11:22 AM UTC

January 15, 2026


Django Weblog

DSF member of the month - Omar Abou Mrad

For January 2026, we welcome Omar Abou Mrad as our DSF member of the month! ⭐

Omar sitting on a gaming chair

Omar is a helper on the Django Discord server; he has helped and continues to help folks around the world on their Django journey! He is part of the Discord Staff Team. He has been a DSF member since June 2024.

You can learn more about Omar by visiting Omar's website and his GitHub Profile.

Let’s spend some time getting to know Omar better!

Can you tell us a little about yourself? (hobbies, education, etc)

Hello! My name is Omar Abou Mrad, a 47-year-old husband to a beautiful wife and father of three teenage boys. I’m from Lebanon (Middle East), have a Computer Science background, and currently work as a Technical Lead on a day-to-day basis. I’m mostly high on life and quite enthusiastic about technology, sports, food, and much more!

I love learning new things and I love helping people. Most of my friends, acquaintances, and generally people online know me as Xterm.

I already have an idea, but where does your nickname "Xterm" come from?

xterm is simply the terminal emulator for the X Window System. I first encountered it back in the mid-to-late 90s when I started using the Red Hat 2.0 operating system. Things weren't easy to set up back then, and the terminal was where you spent most of your time.

Nevertheless, I had to wait months (or was it years?) on end for the nickname "Xterm" to expire on Freenode back in the mid-2000s, before I snatched it up and registered it.

Alas, I did! Xterm, c'est moi! >:-]

How did you start using Django?

We landed on Django (~1.1) fairly early at work, as we wanted to use Python with an ORM while building websites for different clients. The real challenge came when we took on a project responsible for managing operations, traceability, and reporting at a pipe-manufacturing company.

By that time, most of the team was already well-versed in Django (~1.6), and we went head-on into building one of the most complicated applications we had done to date, everything from the back office to operators’ devices connected to a Django-powered system.

Since then, most of our projects have been built with Django at the core.

We love Django.

What other frameworks do you know, and is there anything you would like to have in Django if you had magical powers?

I've used a multitude of frameworks professionally before Django, primarily in Java (EE, SeamFramework, ...) and .NET (ASP.NET, ASP.NET MVC) as well as sampling different frameworks for educational purposes.

I suppose if I could snap my fingers and get things to exist in Django, it wouldn't be something new so much as official support of:

But since we're finger-snapping things into existence, it would be awesome if every component of Django (core, orm, templates, forms, "all") could be installed separately, in such a way that you could cherry-pick what you want to install, so we could dismiss those pesky (cough) arguments (cough) about Django being bulky.

What projects are you working on now?

I'm involved in numerous projects currently at work, most of which are based on Django, but the one I'm working on right now consists of doing integrations and synchronizations with SAP HANA for different modules, in different applications.

It's quite the challenge, which makes it twice the fun.

Which Django libraries are your favorite (core or 3rd party)?

I would like to mention that I'm extremely thankful for any and all core and 3rd Party libraries out there!

What are the top three things in Django that you like?

In no particular order:

You are helping a lot of folks on the Django Discord; what do you think it takes to be a good helper?

First and foremost, I want to highlight what an excellent staff team we have on the Official Django Discord. While I don’t feel I hold a candle to what the rest of the team does daily, we complement each other very well.

To me, being a good helper means:

DryORM is really appreciated! What motivated you to create the project?

Imagine you're having a discussion with a djangonaut friend or colleague about some data modeling, or answering a question or concern they have, or reviewing some ORM code in a repository on GitHub, or helping someone on IRC, Slack, Discord, or the forums... or you simply want to do a quick ORM experiment without disturbing your current project. The most common way people deal with this is by having a throw-away project that they add models to, generate migrations for, open the shell, run the queries they want, reset the db if needed, copy the models and the shell code into some code-sharing site, then send the link to the recipient. Not to mention needing to store the code they experiment with in either separate scripts or management commands so they can have it as a reference for later.

I loved what DDT gave me with query transparency, I loved experimenting in the shell with shell_plus --print-sql, and I needed to share things online. All of this was cumbersome, and that's when DryORM came into existence, simplifying the entire process into a single code snippet.

The need grew massively when I became a helper on Official Django Discord and noticed we (Staff) could greatly benefit from having this tool not only to assist others, but share knowledge among ourselves. While I never truly wanted to go public with it, I was encouraged by my peers on Discord to share it and since then, they've been extremely supportive and assisted in its evolution.

The unexpected thing, however, was for DryORM to be used in the official code tracker, the forums, and even in GitHub PRs! Ever since, I've decided to put a lot of focus and effort into features that can support Django contributors in their quest to evolve Django.

So here's a shout-out to everyone who uses DryORM!

I believe you are the main maintainer, do you need help on something?

Yes, I am and thank you! I think the application has reached a point where new feature releases will slow down, so it’s entering more of a maintenance phase now, which I can manage.

Hopefully soon we'll have the Discord bot executing ORM snippets :-]

What are your hobbies or what do you do when you’re not working?

Oh wow, not working, what's that like! :-]

Early mornings are usually reserved for weight training.
Followed by a long, full workday.
Then escorting and watching the kids at practice.
Evenings are spent with my wife.
Late nights are either light gaming or some tech-related reading and prototyping.

Weekends look very similar, just with many more kids sports matches!

Is there anything else you’d like to say?

I want to thank everyone who helped make Django what it is today.

If you’re reading this and aren’t yet part of the Discord community, I invite you to join us! You’ll find many like-minded people to discuss your interests with. Whether you’re there to help, get help, or just hang around, it’s a fun place to be.


Thank you for doing the interview, Omar!

January 15, 2026 02:14 PM UTC

January 14, 2026


Mike Driscoll

How to Type Hint a Decorator in Python

Decorators are a concept that can trip up new Python users. You may find this definition helpful: A decorator is a function that takes in another function and adds new functionality to it without modifying the original function.

Functions can be used just like any other data type in Python. A function can be passed to a function or returned from a function, just like a string or integer.
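
For example, here is a quick illustration (not taken from the article's code) of passing a function to another function and returning one from a factory:

def shout(text: str) -> str:
    return text.upper()

def call_twice(func, value):
    # Functions can be passed around like any other value...
    return func(func(value))

print(call_twice(shout, "hi"))  # HI

# ...and returned from other functions, which is exactly what decorators rely on.
def make_repeater(n: int):
    def repeat(text: str) -> str:
        return text * n
    return repeat

triple = make_repeater(3)
print(triple("ha"))  # hahaha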

If you have jumped on the type-hinting bandwagon, you will probably want to add type hints to your decorators. That has been difficult until fairly recently.

Let’s see how to type hint a decorator!

Type Hinting a Decorator the Wrong Way

You might think that you can use a TypeVar to type hint a decorator. You will try that first.

Here’s an example:

from functools import wraps
from typing import Any, Callable, TypeVar


Generic_function = TypeVar("Generic_function", bound=Callable[..., Any])

def info(func: Generic_function) -> Generic_function:
    @wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        print('Function name: ' + func.__name__)
        print('Function docstring: ' + str(func.__doc__))
        result = func(*args, **kwargs)
        return result
    return wrapper

@info
def doubler(number: int) -> int:
    """Doubles the number passed to it"""
    return number * 2

print(doubler(4))

If you run mypy --strict info_decorator.py you will get the following output:

info_decorator.py:14: error: Incompatible return value type (got "_Wrapped[[VarArg(Any), KwArg(Any)], Any, [VarArg(Any), KwArg(Any)], Any]", expected "Generic_function")  [return-value]
Found 1 error in 1 file (checked 1 source file)

That’s a confusing error! Feel free to search for an answer.

The answers that you find will probably vary from just ignoring the function (i.e. not type hinting it at all) to using something called a ParamSpec.

Let’s try that next!

Using a ParamSpec for Type Hinting

The ParamSpec is a class in Python’s typing module. Here’s what the docstring says about ParamSpec:

class ParamSpec(object):
  """ Parameter specification variable.
  
  The preferred way to construct a parameter specification is via the
  dedicated syntax for generic functions, classes, and type aliases,
  where the use of '**' creates a parameter specification::
  
      type IntFunc[**P] = Callable[P, int]
  
  For compatibility with Python 3.11 and earlier, ParamSpec objects
  can also be created as follows::
  
      P = ParamSpec('P')
  
  Parameter specification variables exist primarily for the benefit of
  static type checkers.  They are used to forward the parameter types of
  one callable to another callable, a pattern commonly found in
  higher-order functions and decorators.  They are only valid when used
  in ``Concatenate``, or as the first argument to ``Callable``, or as
  parameters for user-defined Generics. See class Generic for more
  information on generic types.
  
  An example for annotating a decorator::
  
      def add_logging[**P, T](f: Callable[P, T]) -> Callable[P, T]:
          '''A type-safe decorator to add logging to a function.'''
          def inner(*args: P.args, **kwargs: P.kwargs) -> T:
              logging.info(f'{f.__name__} was called')
              return f(*args, **kwargs)
          return inner
  
      @add_logging
      def add_two(x: float, y: float) -> float:
          '''Add two numbers together.'''
          return x + y
  
  Parameter specification variables can be introspected. e.g.::
  
      >>> P = ParamSpec("P")
      >>> P.__name__
      'P'
  
  Note that only parameter specification variables defined in the global
  scope can be pickled.
   """

In short, you use a ParamSpec to construct a parameter specification for a generic function, class, or type alias.

To see what that means in code, you can update the previous decorator to look like this: 

from functools import wraps
from typing import Callable, ParamSpec, TypeVar


P = ParamSpec("P")
R = TypeVar("R")

def info(func: Callable[P, R]) -> Callable[P, R]:
    @wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print('Function name: ' + func.__name__)
        print('Function docstring: ' + str(func.__doc__))
        return func(*args, **kwargs)
    return wrapper

@info
def doubler(number: int) -> int:
    """Doubles the number passed to it"""
    return number * 2

print(doubler(4))

Here, you create a ParamSpec and a TypeVar. You tell the decorator that it takes in a Callable with a generic set of parameters (P), and you use TypeVar (R) to specify a generic return type.

If you run mypy on this updated code, it will pass! Good job!

What About PEP 695?

PEP 695 adds a new wrinkle to type hinting decorators by introducing new type parameter syntax in Python 3.12.

The main thrust of this PEP is to “simplify” the way you specify type parameters within a generic class, function, or type alias.

In a lot of ways, it does clean up the code, as you no longer need to import ParamSpec or TypeVar when using this new syntax. Instead, it feels almost magical.

Here’s the updated code:

from functools import wraps
from typing import Callable


def info[**P, R](func: Callable[P, R]) -> Callable[P, R]:
    @wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print('Function name: ' + func.__name__)
        print('Function docstring: ' + str(func.__doc__))
        return func(*args, **kwargs)
    return wrapper

@info
def doubler(number: int) -> int:
    """Doubles the number passed to it"""
    return number * 2

print(doubler(4))

Notice that at the beginning of the function you have square brackets. That is basically declaring your ParamSpec implicitly. The “R” is again the return type. The rest of the code is the same as before.

When you run mypy against this version of the type hinted decorator, you will see that it passes happily.

Wrapping Up

Type hinting can still be a hairy subject, but the newer the Python version that you use, the better the type hinting capabilities are.

Of course, since Python itself doesn't enforce type hinting, you can just skip all this too. But if your employer likes type hinting, hopefully this article will help you out.

Related Reading

The post How to Type Hint a Decorator in Python appeared first on Mouse Vs Python.

January 14, 2026 05:04 PM UTC