Planet Python
Last update: May 28, 2025 01:42 AM UTC
May 27, 2025
PyCoder’s Weekly
Issue #683: Narwhals, ty, LangChain, and More (May 27, 2025)
#683 – MAY 27, 2025
View in Browser »
Narwhals: Unified DataFrame Functions
Narwhals is a lightweight compatibility layer between DataFrame libraries. You can use it as a common interface to write reproducible and maintainable data science code that supports pandas, Polars, DuckDB, PySpark, PyArrow, and more.
CODECUT.AI • Shared by Marco Gorelli
ty: Astral’s New Type Checker (Formerly Red-Knot)
The folks over at Astral are back with another amazing project: ty, formerly known as Red-Knot, is a new type checker. Talk Python interviews Charlie Marsh and Carl Meyer about this new tool.
KENNEDY ET AL podcast
First Steps With LangChain
Large language models (LLMs) have taken the world by storm. In this step-by-step video course, you’ll learn to use the LangChain library to build LLM-assisted applications.
REAL PYTHON course
Python Jobs
Sr. Software Developer (Python, Healthcare) (USA)
Senior Software Engineer – Quant Investment Platform (LA or Dallas) (Los Angeles, CA, USA)
Causeway Capital Management LLC
Articles & Tutorials
Understanding Random Forest Using Python
A Random Forest is a powerful machine learning algorithm that can be used for classification and regression, is interpretable, and doesn’t require feature scaling. Here’s how to apply it with scikit-learn.
MICHAEL GALARNYK • Shared by Michael Galarnyk
Understanding Python Web Deployment
Trying to deploy server-side Python web applications may be overwhelming. This tutorial breaks down the why of things, rather than the what, and makes recommendations on how to approach the problem.
MIREK DŁUGOSZ
Meta Announces Pyrefly
Pyrefly is an open source Python type checker and IDE extension built in Rust. This post announces its availability, tells you why they built it, and how you can use it.
META
How to Group Data Using Polars .group_by()
Start using Polars .group_by() to make sense of your data. This tutorial shows you how to group, aggregate, and reveal hidden insights with hands-on examples.
REAL PYTHON
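For a rough idea of the pattern the tutorial above covers (a made-up miniature example, not taken from the tutorial itself):

import polars as pl

# Group rows by a key column, then aggregate within each group.
df = pl.DataFrame({"city": ["Oslo", "Oslo", "Bergen"], "temp": [12, 15, 9]})
print(df.group_by("city").agg(pl.col("temp").mean().alias("avg_temp")))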
Nested Loops in Python
Learn how to use nested loops in Python to iterate over multiple sequences and perform repeated actions efficiently in your programs.
REAL PYTHON
Machine Learning With DuckDB and scikit-learn
Learn how to prototype a machine learning workflow using DuckDB for data handling and scikit-learn for modeling.
PETRICA LEUCA
Python: The Documentary
There’s a new documentary in the works on Python and its popularity. This is the official trailer.
YOUTUBE.COM video
The Guide to Hashing I Wish I Had When I Started
Learn the basics of hashing in this beginner-friendly guide. Discover what hashing is, how it works, its key principles, common algorithms, and practical uses for password storage and file integrity. (Code examples aren’t in Python, but most of the explanation is in text).
ANTON ÖDMAN
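The guide's examples aren't in Python, but the same ideas map directly onto the standard library's hashlib; here is a small illustrative sketch (not from the guide):

import hashlib
import os

# File integrity: identical bytes always produce the identical SHA-256 digest.
print(hashlib.sha256(b"example file contents").hexdigest())

# Password storage: prefer a slow, salted key-derivation function over a plain hash.
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 600_000)
print(key.hex())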
Loading Pydantic Models From JSON
Pydantic’s JSON loading uses a huge amount of memory; here’s how to reduce it.
ITAMAR TURNER-TRAURING
Projects & Code
Events
PyCon Italia 2025
May 28 to June 1, 2025
PYCON.IT
PyDelhi User Group Meetup
May 31, 2025
MEETUP.COM
PythOnRio Meetup
May 31, 2025
PYTHON.ORG.BR
Django Girls Medellín
June 1 to June 2, 2025
DJANGOGIRLS.ORG
Melbourne Python Users Group, Australia
June 2, 2025
J.MP
AfroPython Conf 2025
June 7 to June 8, 2025
AFROPYTHONCONF.ORG
Happy Pythoning!
This was PyCoder’s Weekly Issue #683.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Real Python
The LEGB Rule & Understanding Python Scope
The concept of scope rules how variables and names are looked up in your code. It determines the visibility of a variable within the code. The scope of a name or variable depends on the place in your code where you create that variable. The Python scope concept is generally presented using a rule known as the LEGB rule.
The letters in the acronym LEGB stand for Local, Enclosing, Global, and Built-in scopes. This summarizes not only the Python scope levels but also the sequence of steps that Python follows when resolving names in a program.
In this video course, you’ll learn:
- What scopes are and how they work in Python
- Why it’s important to know about Python scope
- What the LEGB rule is and how Python uses it to resolve names
- How to modify the standard behavior of Python scope using global and nonlocal
- What scope-related tools Python offers and how you can use them
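As a minimal sketch of the LEGB rule in action (not taken from the course; the names are made up):

value = "global"              # G: module-level (global) scope

def outer():
    value = "enclosing"       # E: enclosing scope for inner()

    def inner():
        value = "local"       # L: the local scope is checked first
        print(value)          # -> local

    inner()
    print(value)              # -> enclosing

outer()
print(value)                  # -> global
print(len("abc"))             # B: len comes from the built-in scope

def counter():
    count = 0

    def bump():
        nonlocal count        # rebind the enclosing name instead of creating a local one
        count += 1

    bump()
    return count              # -> 1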
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python Morsels
Looping in reverse
Any reversible iterable can be reversed using the built-in reversed function, whereas Python's slicing syntax only works on sequences.
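As a quick illustration of that difference (a sketch, not one of the article's own examples):

colors = ["purple", "blue", "green", "pink", "red"]

# reversed() returns a lazy iterator over any reversible iterable:
for color in reversed(colors):
    print(color)

# It also works on reversible things that aren't sequences, where slicing would fail:
scores = {"purple": 1, "blue": 2, "green": 3}
print(list(reversed(scores)))   # ['green', 'blue', 'purple'] (Python 3.8+)
# scores[::-1] would raise a TypeError because dicts don't support slicing.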
Reversing sequences with slicing
If you're working with a list, a string, or any other sequence in Python, you can reverse that sequence using Python's slicing syntax:
>>> colors = ["purple", "blue", "green", "pink", "red"]
>>> colors[::-1]
['red', 'pink', 'green', 'blue', 'purple']
The syntax looks weird, but it does work.
If we wanted to write a for loop that iterated over our list from the end to the beginning, we could loop over the reversed slice of that list:
>>> for color in colors[::-1]:
...     print("I like", color)
...
I like red
I like pink
I like green
I like blue
I like purple
But this only works for sequences (meaning iterables that can be indexed).
Also, it looks a little bit weird.
The list reverse method
What about the reverse method?
Read the full article: https://www.pythonmorsels.com/looping-in-reverse/
May 26, 2025
Python Insider
Python 3.14.0 beta 2 is here!
Here’s the second 3.14 beta.
https://www.python.org/downloads/release/python-3140b2/
This is a beta preview of Python 3.14
Python 3.14 is still in development. This release, 3.14.0b2, is the second of four planned beta releases.
Beta release previews are intended to give the wider community the opportunity to test new features and bug fixes and to prepare their projects to support the new feature release.
We strongly encourage maintainers of third-party Python projects to test with 3.14 during the beta phase and report issues found to the Python bug tracker as soon as possible. While the release is planned to be feature-complete entering the beta phase, it is possible that features may be modified or, in rare cases, deleted up until the start of the release candidate phase (Tuesday 2025-07-22). Our goal is to have no ABI changes after beta 4 and as few code changes as possible after the first release candidate. To achieve that, it will be extremely important to get as much exposure for 3.14 as possible during the beta phase.
This includes creating pre-release wheels for 3.14, as it helps other projects to do their own testing. However, we recommend that your regular production releases wait until 3.14.0rc1, to avoid the risk of ABI breaks.
Please keep in mind that this is a preview release and its use is not recommended for production environments.
Major new features of the 3.14 series, compared to 3.13
Some of the major new features and changes in Python 3.14 are:
New features
- PEP 649: The evaluation of type annotations is now deferred, improving the semantics of using annotations.
- PEP 750: Template string literals (t-strings) for custom string processing, using the familiar syntax of f-strings.
- PEP 784: A new module compression.zstd providing support for the Zstandard compression algorithm.
- PEP 758: except and except* expressions may now omit the brackets.
- Syntax highlighting in PyREPL, and support for color in unittest, argparse, json and calendar CLIs.
- PEP 768: A zero-overhead external debugger interface for CPython.
- UUID versions 6-8 are now supported by the uuid module, and generation of versions 3-5 and 8 are up to 40% faster.
- PEP 765: Disallow return/break/continue that exit a finally block.
- PEP 741: An improved C API for configuring Python.
- A new type of interpreter. For certain newer compilers, this interpreter provides significantly better performance. Opt-in for now, requires building from source.
- Improved error messages.
- Builtin implementation of HMAC with formally verified code from the HACL* project.
- A new command-line interface to inspect running Python processes using asynchronous tasks.
- The pdb module now supports remote attaching to a running Python process.
(Hey, fellow core developer, if a feature you find important is missing from this list, let Hugo know.)
For more details on the changes to Python 3.14, see What’s new in Python 3.14. The next pre-release of Python 3.14 will be 3.14.0b3, scheduled for 2025-06-17.
Build changes
- PEP 761: Python 3.14 and onwards no longer provides PGP signatures for release artifacts. Instead, Sigstore is recommended for verifiers.
- Official macOS and Windows release binaries include an experimental JIT compiler.
Incompatible changes, removals and new deprecations
- Incompatible changes
- Python removals and deprecations
- C API removals and deprecations
- Overview of all pending deprecations
Python install manager
The installer we offer for Windows is being replaced by our new install manager, which can be installed from the Windows Store or our FTP page. See our documentation for more information. The JSON file available for download below contains the list of all the installable packages available as part of this release, including file URLs and hashes, but is not required to install the latest release. The traditional installer will remain available throughout the 3.14 and 3.15 releases.
More resources
- Online documentation
- PEP 745, 3.14 Release Schedule
- Report bugs at github.com/python/cpython/issues
- Help fund Python and its community
And now for something completely different
In 1897, the State of Indiana almost passed a bill defining π as 3.2.
Of course, it’s not that simple.
Edwin J. Goodwin, M.D., claimed to have come up with a solution to an ancient geometrical problem called squaring the circle, first proposed in Greek mathematics. It involves trying to draw a circle and a square with the same area, using only a compass and a straight edge. It turns out to be impossible because π is transcendental (and this had been proved just 13 years earlier by Ferdinand von Lindemann), but Goodwin fudged things so the value of π was 3.2 (his writings include at least nine different values of π, among them 4, 3.236, 3.232, 3.2325… and even 9.2376…).
Goodwin had copyrighted his proof and offered it to the State of Indiana to use in their educational textbooks without paying royalties, provided they endorsed it. And so Indiana Bill No. 246 was introduced to the House on 18th January 1897. It was not understood and initially referred to the House Committee on Canals, also called the Committee on Swamp Lands. They then referred it to the Committee on Education, who duly recommended on 2nd February that “said bill do pass”. It passed its second reading on the 5th and the education chair moved that they suspend the constitutional rule that required bills to be read on three separate days. This passed 72-0, and the bill itself passed 67-0.
The bill was referred to the Senate on 10th February, had its first reading on the 11th, and was referred to the Committee on Temperance, whose chair on the 12th recommended “that said bill do pass”.
A mathematics professor, Clarence Abiathar Waldo, happened to be in the State Capitol on the day the House passed the bill and walked in during the debate to hear an ex-teacher argue:
The case is perfectly simple. If we pass this bill which establishes a new and correct value for pi, the author offers to our state without cost the use of his discovery and its free publication in our school text books, while everyone else must pay him a royalty.
Waldo ensured the senators were “properly coached”; and on the 12th, during the second reading, after an unsuccessful attempt to amend the bill it was postponed indefinitely. But not before the senators had some fun.
The Indiana News reported on the 13th:
…the bill was brought up and made fun of. The Senators made bad puns about it, ridiculed it and laughed over it. The fun lasted half an hour. Senator Hubbell said that it was not meet for the Senate, which was costing the State $250 a day, to waste its time in such frivolity. He said that in reading the leading newspapers of Chicago and the East, he found that the Indiana State Legislature had laid itself open to ridicule by the action already taken on the bill. He thought consideration of such a proposition was not dignified or worthy of the Senate. He moved the indefinite postponement of the bill, and the motion carried.
Enjoy the new release
Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organisation contributions to the Python Software Foundation.
Regards from Helsinki, still light at 10pm,
Your release team,
Hugo van Kemenade
Ned Deily
Steve Dower
Łukasz Langa
Real Python
Marimo: A Reactive, Reproducible Notebook
Marimo notebooks redefine the notebook experience by offering a reactive environment that addresses the limitations of traditional linear notebooks. With marimo, you can seamlessly reproduce and share content while benefiting from automatic cell updates and a correct execution order. Discover how marimo’s features make it an ideal tool for documenting research and learning activities.
By the end of this tutorial, you’ll understand that:
- Marimo notebooks automatically update dependent cells, ensuring consistent results across your work.
- Reactivity allows marimo to determine the correct running order of cells using a directed acyclic graph (DAG).
- Sandboxing in marimo creates isolated environments for notebooks, preventing package conflicts and ensuring reproducibility.
- You can add interactivity to marimo notebooks using UI elements like sliders and radio buttons.
- Traditional linear notebooks have inherent flaws, such as hidden state issues, that marimo addresses with its reactive design.
Before you can get started with marimo, you’ll need to install it. Fortunately, this is quick and easy to do:
$ python -m pip install marimo
You use pip to install the marimo library on your system. With this done, it’s time to get started, be amazed, and learn all about a different type of notebook.
The best way to approach this tutorial is to use the instructions to complete the various examples and try the exercises yourself. If you want copies of the various notebook files created during the tutorial, you’ll find them in your download bundle. The README.md file provides further details of what’s in your downloads.
Take the Quiz: Test your knowledge with our interactive “Marimo: A Reactive, Reproducible Notebook” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Marimo: A Reactive, Reproducible Notebook
This quiz is a great way to reinforce and build on what you've learned about marimo notebooks. You'll find most of the answers in the tutorial, but you'll need to do some additional research to find some of the answers.
How to Get Started in a Marimo Notebook
A notebook is a file where you can write your programming code, run it, and view the output. You can add formatted text to explain how your code works, display charts to clarify results, and even allow your notebook’s users to try out different scenarios using a customized user interface. Once your notebook is complete, you can save everything in a single file and share your creation with others.
In this section, you’ll learn to use marimo to create a simple notebook to perform a calculation and clearly display its results.
Like many notebooks, marimo notebooks consist of cells. The primary cell types are code cells, where you enter and run your Python code, and Markdown cells, where you enter formatted text to augment the code and its output.
In this first example, you’ll use a marimo notebook and NumPy to solve a pair of simultaneous equations. To do this, you’ll first need to install the NumPy library:
$ python -m pip install numpy
With NumPy installed, you can now create your notebook by typing the following command into your console:
$ marimo edit simultaneous_equations.py
When you run this command, you’ll most likely create a new notebook named simultaneous_equations.py for editing. If you already have a marimo notebook with that name, you’ll open it instead. Either way, your notebook will be ready for you to use within your browser.
Switch to your web browser, and you’ll see your new notebook. It’ll contain a single cell. Hover your mouse over the cell to reveal a range of icons, each with a tooltip that explains its purpose and shows the associated keyboard shortcuts:

Each of the main icons is described in the screenshot above. While most of these are self-explanatory, there are some points you should be aware of:
- The red trash can icon shown here won’t appear immediately in your notebook. This is used to delete a cell and will only appear when you add other cells. You can’t see it yet because all notebooks must have at least one cell. Deleting the last cell is impossible.
- The color of the Run current cell icon is also significant. If this icon is white, as it is in the screenshot, the cell is up to date and doesn’t need to be run. Once you start changing cells, you’ll see their Run icons develop a yellow tinge. This warns you that the cell has become stale, meaning you must run it to update it.
- Finally, notice that the numbers to the left of each cell indicate the line numbers of the code within the cell. Unlike most other notebooks, there are no numbers to indicate the running order of the cells. This is because marimo allows you to add code cells in any order. Marimo can work out the correct cell running order for itself. Even so, placing cells in an illogical order should be avoided.
When you hover your mouse over some of marimo’s icons, you’ll see their associated keyboard shortcuts. Unfortunately, they don’t work correctly in all browsers, so feel free to try them to find out whether they work for you. If they don’t, stick to using your mouse.
Adding Code and Markdown Content
It’s time for you to gain experience creating some content in marimo. By following the walk-through, you’ll get hands-on practice with the basics.
Although it’s confusing the first time you see it, the single cell that contains import marimo as mo is actually a blank cell. This code allows you to work with the marimo API. However, it’s not in the cell unless you type it in manually.
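The full tutorial’s first worked example solves a pair of simultaneous equations with NumPy. As a rough illustration of the kind of code cell involved (the equations here are made up, not the tutorial’s):

import numpy as np

# Hypothetical system:  2x + 3y = 12  and  4x - y = 10
coefficients = np.array([[2, 3], [4, -1]])
constants = np.array([12, 10])

solution = np.linalg.solve(coefficients, constants)
print(solution)  # [3. 2.], i.e. x = 3 and y = 2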
Read the full article at https://realpython.com/marimo-notebook/ »
Quiz: Marimo: A Reactive, Reproducible Notebook
Why not challenge yourself and see how much you know about marimo notebooks?
Working your way through this quiz is a great way to reinforce and build on what you learned in the Marimo: A Reactive, Reproducible Notebook tutorial.
You could try answering the questions without reading the tutorial first, but you’d miss out on a great learning experience!
Brian Okken
Unravelling t-strings with pytest
Brett Cannon recently released a great article explaining how Python 3.14’s new t-strings work.
Here’s the article: Unravelling t-strings.
He built up the functionality of how t-strings work in a way that you can follow along even if you don’t have 3.14.0b1 (where t-strings are introduced), all the way up to the last example.
He walks through:
- Evaluating the Python expression
- Applying specified conversions
- Applying format specs
- Using an Interpolation class to hold details of replacement fields
- Using the Template class to hold parsed data
The end result is very close to an example used in PEP 750.
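As a rough sketch of the finished behaviour on Python 3.14.0b1 or newer (attribute names follow PEP 750; treat this as illustrative rather than a quote from either article):

name = "world"
template = t"Hello {name}!"

# A t-string evaluates to a Template holding static strings plus Interpolation
# objects (expression, value, conversion, format spec), rather than eagerly
# producing a str the way an f-string does.
for part in template:
    if isinstance(part, str):
        print("static text:", part)
    else:
        print("interpolation:", part.expression, "=", part.value)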
May 23, 2025
Real Python
The Real Python Podcast – Episode #250: DjangoCon Europe 2025: Live Recording From Dublin
What goes into making video courses at Real Python? How should you build an installable Django application? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
May 22, 2025
First Institute of Reliable Software
How to separate test data from code: testing with CSV in pytest
Want to test a function with dozens of variants? I'll show you how to organize test cases in a CSV file and connect them to pytest using the pytest-csv-params plugin.
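As a rough sketch of the underlying idea, using only the standard csv module and pytest.mark.parametrize rather than the pytest-csv-params plugin the article covers (the file name and columns are made up):

import csv
from pathlib import Path

import pytest


def load_cases(path):
    # Each CSV row becomes one test case: a, b, expected_sum
    with Path(path).open(newline="") as f:
        return [(int(a), int(b), int(expected)) for a, b, expected in csv.reader(f)]


@pytest.mark.parametrize("a, b, expected", load_cases("test_cases.csv"))
def test_addition(a, b, expected):
    assert a + b == expected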
Mirek Długosz
Understanding Python web deployment
Trying to deploy a server-side Python web application may be overwhelming. Many tutorials will gladly tell you how to run it inside gunicorn and how to put nginx in front of it, but they usually skim over the details of why all this is needed. Why do you need both nginx and gunicorn? Why gunicorn and not Waitress or daphne? What is WhiteNoise and do you still need nginx if you have it? What even is WSGI?
In this article I will share my mental model of distinct pieces and how they fit together. If I badly missed a mark somewhere, please reach out and tell me how wrong I am.
This is one of those topics that really benefits from being read in a non-linear fashion. If you are here to learn, you might want to skim the headings and read the sections in a different order, or re-read selected sections out of order after reading the whole thing.
The main model
Let’s start with a high-level model of handling an HTTP request.

That model follows the basic HTTP request/response cycle. The server passively waits for connections, which are initiated by clients. The client composes and sends a request. The server generates a response and sends it back, at which point the connection is closed. If the client decides it needs anything more, it sends a new HTTP request.
The real world is complicated by HTTP/2, WebSockets and probably other things. HTTP/2 allows the server to group multiple responses when addressing a single request. WebSockets only start with an HTTP request/response before switching to a completely different protocol. Some of these complexities will become relevant later.
Accepting HTTP request
HTTP may seem simple. The client prepares a text message defining a request and sends it to a server. The server prepares a text message constituting a response and sends it back. This works well for text-based content, such as HTML, XML and JSON.
But there is so much more than that. The current iteration of the HTTP standard has a separate document on caching alone, which is about 30 pages long. There are three different compression algorithms, each described in detail in separate documentation. There’s an optional section about partial access to large resources. Then that whole thing may be wrapped in encrypted frames. HTTPS also involves a special procedure for the initial handshake and a fair deal of cryptography. And on top of all that, you need to decide what to do with clients that do not conform to the specification.
Then, it’s somewhat rare for a single physical machine to serve a single domain (website). On one hand, one machine may serve multiple domains, and there must be something that can decide how exactly a specific request directed at one of the managed domains should be handled. On the other hand, modern web-based applications are usually distributed across multiple physical machines, and there must be something that can choose a specific machine to handle a particular incoming request. These days the decision is often made based on the request content, analysis of recent traffic and knowledge of the internal service infrastructure.
Unfortunately, the world out there is a dangerous place. Some invalid requests are actually malicious. What if a client sends hundreds of thousands of requests? What if a client opens a connection and never sends any data? What if a client sends less data than it claims it will? These sorts of questions do not come naturally from reading the specification alone, and answers have been hard won over the years.
As you can imagine, there’s a lot of complexity, depth and lessons learned in the last 30 years of HTTP usage. Most application developers don’t want to deal with any of that - they would rather focus on the simple request/response cycle and only dive deep into other areas as they are relevant to the problems they face. They would happily outsource everything else to some other software.
That other software is usually called an HTTP server, load balancer or router. Some popular programs in that problem space are nginx, traefik and Kubernetes ingress.
It’s worth noting that many web app hosting platforms that focus on ease of use have their own setup and do not give you any choice in that matter. Part of the value proposition of Heroku or Fly.io is that the first stage is taken care of by dedicated professionals, and you can jump straight into step two or even three.
Translating between HTTP and programming language
Thanks to the HTTP server, you have a sequence of bytes that constitutes a valid HTTP request directed at your application. Your job is to produce a sequence of bytes that will constitute a valid HTTP response.
You can write an application that works on these primitives. Python’s built-in HTTP server is basically that. But for most application developers, this is too low level - they don’t want to work with sequences of bytes, they want native objects and a library of helpers that make it easy to parse a request and construct a response. Reading and setting a cookie should take one line of code each.
Translating between a sequence of bytes and some kind of internal programming language objects (and back from objects to a sequence of bytes when processing a response) is the main goal of the second step in the main model. The name and specifics of the approach vary by programming language. Usually that component is called an “application server”, “middleware” or “translation framework”.
“Middleware” is an overloaded term. The concept is also used in traefik, an HTTP server (step number 1), and Django, a Python web application development framework (step number 3). The world is a tangled web, and depending on where you draw the lines and how much you are willing to squint, a lot of things are “in the middle” between some other things.
You can think of an application server as a tool written in some programming language that helps developers working in that language to work with HTTP, by translating between byte sequences and programming language constructs. There are some exceptions, which I’ll cover briefly near the end of the article.
Python - WSGI
In the Python community, application servers are usually designed to run WSGI-compliant applications.
WSGI is an abstract specification, published as PEP-333 and PEP-3333. It’s like a contract - for the application server, it defines what it must provide to the application, and what it should expect back. For the application, it defines what it must provide back to the application server, and what it should expect from it. The idea is that you should be able to run your application on any application server - and application server authors may write code once and support all applications. It’s an example of the adapter design pattern.
Popular application servers are gunicorn, Waitress and bjoern.
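To make the contract concrete, here is a minimal WSGI application (a sketch, not from the article): the application is just a callable that receives the request environment and a start_response callable, and returns an iterable of byte strings. Any WSGI server, such as the ones listed above, can run it.

def application(environ, start_response):
    body = b"Hello from a bare WSGI app!\n"
    start_response(
        "200 OK",
        [("Content-Type", "text/plain"), ("Content-Length", str(len(body)))],
    )
    return [body]

# For example, assuming this file is saved as app.py:
#   gunicorn app:application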
Ruby - Rack
Rack is the main application server for Ruby web applications. These days it is both a concrete implementation of application server software, and de facto specification of how Ruby applications should work with application servers. unicorn and Puma are independent projects capable of hosting “Rack applications”.
Java - Jakarta EE
Jakarta EE is a bit of an overloaded term that may refer to either the Jakarta EE software or the Jakarta EE specification. The Jakarta EE software is a concrete implementation of the Jakarta EE specification. As far as I understand, the software came first, and later the community formalized whatever it was doing into a specification.
So the Jakarta EE software is a bit similar to Rack, as it’s specific software you can use that became a de facto standard. The Jakarta EE specification is a bit similar to WSGI, because it’s just a document - the idea is that you should be able to use any application server with any application, as long as both are compliant with that document.
The most notable application servers are Tomcat and WildFly.
Node.js - built-in
Node.js has an HTTP server and helpers built in. That’s because Node.js is built on top of a browser JavaScript engine, and JavaScript in a browser needs to deal with HTTP requests all the time. Node.js, as the de facto JavaScript backend platform, already does the same thing that an application server does for other languages.
Rust - compiled-in
Rust does not provide HTTP helpers in the standard library, and there is no generally accepted application server specification. Instead, there are multiple HTTP server libraries that solve many of the problems that your application is likely to deal with across the entire model. When you run an application that uses one of these libraries, it binds to a socket and reads and writes byte sequences directly. At runtime, a single Rust application will effectively merge steps number 2 and 3 of the main model.
Popular libraries are Actix Web, axum and Rocket.
Adding value
At this point the HTTP request is in a form suitable for a programming language. This is the step where the majority of web application developers spend most of their time. Everything up to this point was groundwork - things that had to be done, but it makes little difference how exactly they are done. Now we are getting close to the place that differentiates your application from all the other applications in the world.
In practice, many problems at this point are still shared and there is room for outsourcing them. The objects and structures exposed to a programming language may still be relatively low level and a bit awkward to work with. The application likely needs to connect to one of the popular database engines. A large part of the application might be available only to authenticated users, so there is a need for an authentication framework. There are probably various levels of permissions and roles that users might have.
These problems are solved by frameworks or libraries. They differ mainly in how many of these problems they solve, and how opinionated they are in solutions they provide. Some popular frameworks are Django (Python), Flask (Python), FastAPI (Python), Quart (Python), Ruby on Rails (Ruby), Grape (Ruby), Spring (Java), Grails (Java), Apache Wicket (Java), Vaadin (Java), Apache Struts (Java), Meteor (JavaScript) or Next.js (JavaScript). In general, communities of most popular languages came up with at least one web application framework.
Noteworthy exceptions and complications
WSGI and ASGI
ASGI is another specification for Python. It serves the same role as WSGI.
Why two specifications? There are two main reasons. First, WSGI is written around the HTTP request/response loop, and as such is unable to support newer protocols like WebSockets. Second, WSGI supports only synchronous functions, while asynchronous code has been increasingly popular since it was introduced in Python 3.5. Some application developers concluded that the performance gains they receive from asynchronous code are worth more to them than WSGI compatibility. ASGI was created in response to these needs.
From the main model perspective, WSGI and ASGI are functionally the same thing.
Practically speaking, gunicorn is the go-to WSGI application server and uvicorn is the go-to ASGI application server. Django is compatible with both WSGI and ASGI, while Flask users can easily move to Quart, developed by the same team. Some newer frameworks, like FastAPI, are exclusive to ASGI. There is also at least one application server capable of serving either WSGI or ASGI applications.
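For contrast with the WSGI sketch above, a minimal ASGI application looks like this (again a sketch, not from the article): the callable is asynchronous and talks to the server through awaitable receive/send channels instead of a single request/response pair.

async def application(scope, receive, send):
    assert scope["type"] == "http"  # this sketch ignores websocket and lifespan scopes
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello from a bare ASGI app!\n"})

# For example, assuming this file is saved as app.py:
#   uvicorn app:application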
WhiteNoise
WhiteNoise is a Python package for efficient static file serving. Static files are all files not generated dynamically by a Python program, such as stylesheets, frontend scripts, images and fonts.
It can act as a generic WSGI application wrapping around a target WSGI application. You can think of it as being somewhere between steps number 2 and 3 of the main model. However, it also provides facilities to easily integrate with Django, and in that scenario it belongs firmly to step number 3.
The common knowledge in the Python community is that static files should not be served by a Python application, but should instead be handled by an HTTP server at an earlier stage of the request/response cycle. WhiteNoise rejects that notion. It promises high performance when serving static files while making sure they are correctly cached by CDNs, proxies and clients. It takes over one of the responsibilities of an HTTP server.
Given all the things that an HTTP server does, personally I would not be bold enough to skip it and expose a WhiteNoise-enabled application server directly to the world. But it might be worth considering in some special cases, like inside a container image or for an internal company service.
Phusion Passenger
Phusion Passenger is an application server notable for two main things. First, it’s an example of an application server written in a different language than the target application - Phusion Passenger is written in C++ and initially hosted only applications written in Ruby. Second, it’s an example of an application server compatible with multiple programming languages - newer versions can also host Python applications (compatible with WSGI) and Node.js applications.
Granian
Granian is an application server written in Rust, capable of hosting both WSGI and ASGI Python applications. It’s notable because it’s an example of an application server written in a different language than the target application, but also because it’s an application server supporting both Python specifications.
Spring Boot
Spring Boot supports multiple deployment models. It can be built into a package compatible with the Jakarta EE specification, intended to run on an application server. But there is also an option to build it into a standalone HTTP server application that directly responds to incoming HTTP requests. That option is similar to how HTTP servers are compiled into Rust applications.
Django Weblog
Why, in 2025, do we still need a 3rd party app to write a REST API with Django?
The question was put to the president of the DSF this year at FOSDEM, after his talk. And it is clearly a legitimate one!
But… is it True? Do we actually need a 3rd party app to write an API with Django?
In a lot of cases, when you require a complex and full-featured API, I would recommend you do use one. Django REST Framework and Django Ninja being very sound choices with a bunch of nifty things you might need in a bigger project.
But… what if what you need is a simple REST API that does CRUD? Do you really need a 3rd party app to do that?
Let's try not to!
Let's first ask what is a REST API in the context of this article. Let's limit ourselves to building this:
- a URL that answers to GET requests with a list of records of a single model type
- POST-ing to that same URL should create a new record
- a second URL with the primary key of a record tagged to the end. When GET-ing that URL, one should receive only that single record, in a similar format as in the list
- PUT-ing data to that URL should update the record and return that record with updated values
- DELETE-ing to that same URL should delete the record
- everything should be achieved using JSON
Listing records from a model
Chances are you have heard of generic class-based views (CBVs) in Django; the one that comes to mind when it comes to listing records is the built-in django.views.generic.ListView.
ListView extends 2 other classes, BaseListView and MultipleObjectTemplateResponseMixin. Since we want to build an API, we clearly don't need to extend anything template-related. Looking at what BaseListView provides, one can notice the only thing really missing there is a render_to_response method. And this is going to be the case for most of the other base classes.
This sets our starting point!
The type of response we want to render is a json response and Django already provides one. So let's build a JsonViewMixin that looks like this for now:
class JsonViewMixin(View):
    response_class = JsonResponse
    content_type = 'application/json'

    def render_to_response(self, context, **response_kwargs):
        response_kwargs.setdefault("content_type", self.content_type)
        return self.response_class(context, **response_kwargs)
The next thing we have to tackle is that the context returned by BaseListView's get_context_data returns much more data than what we strictly need for this article. Also, the list of records returned is not serialized to json.
Serializers for multiple formats already exist in Django core (see django.core.serializers) but I will go a different route here. There is another way to serialize data in Django that you are likely familiar with but is not labelled as clearly: forms.
Forms are used in regular views to serialize models to simple types, understandable by HTML forms (mostly text) and vice-versa. This is very close to what we need, since json is also mostly text-based.
To start with, using forms as serializers requires creating a new form instance for each record in the list we want to return.
Let's add that to the mixin!
def serialize_many(self, obj_list):
    return [self.serialize_one(obj) for obj in obj_list]

def serialize_one(self, obj):
    form = self.get_form_for_object(obj)
    serialized = form.initial
    serialized['pk'] = obj.pk  # forms strip pk's from their data
    return serialized

def get_form_for_object(self, obj):
    form_class = self.get_form_class()
    kwargs = self.get_form_kwargs()
    kwargs['instance'] = obj
    return form_class(**kwargs)
Why use forms?
ModelForms are a built-in and robust Django tool built around the idea of handling the transition between Model fields and simple (and also JSON-serializable) types (mostly text and numbers). Which is exactly what we want from (de-)serializers in a lot of cases.
If you need to (de-)serialize a custom field type, Django documents creating a custom form field, and this is covered in various places like StackOverflow.
Moving on to our first View
Now that we have a tool to serialize the records list returned by BaseListView, let's write the first version of JsonListView. As I alluded to earlier, we need to strip down what is returned from get_context_data.
class JsonListView(JsonViewMixin, BaseListView):
    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        # wrapping this in a dict for security reasons
        return {
            'results': self.serialize_many(context['object_list'])
        }
This won't work yet because get_form_class, which I used in the JsonViewMixin, is only provided by classes that descend from FormMixin. Since we want this view to handle both listing and creating records, let's go and fix that in the next section!
1 down, 3 to go: Adding records
First things first, let's rebrand JsonListView and make it inherit from BaseCreateView.
class JsonListCreateView(JsonViewMixin, BaseCreateView, BaseListView):
Form creation and validation will be handled automatically by Django!
Almost…
The first concern will be with populating the form with POST data. While Django does this for you when dealing with URL-encoded or multipart form data, it does not (yet) handle JSON-encoded POST content.
But this can be handled by taking advantage of the modularity of Django's generic class-based views and overriding get_form_kwargs.
Let's address this (in a naïve way) within the mixin as it will be applicable to any JSON view:
def get_form_kwargs(self):
    kwargs = super().get_form_kwargs()
    if (
        len(kwargs.get("data", {})) == 0
        and self.request.content_type == "application/json"
        and len(self.request.body)
    ):
        # The request has a JSON body that we did not decode
        kwargs["data"] = json.loads(self.request.body)
    return kwargs
An issue that could arise here is that a JSONDecodeError could be triggered. get_form_kwargs does not return a response, so I don't think it is the right place to handle the exception.
The post method does return a response, so let's wrap the original one with a try/except (still in the mixin):
def post(self, request, *args, **kwargs):
    try:
        return super().post(request, *args, **kwargs)
    except json.decoder.JSONDecodeError as e:
        return self.response_class(
            {"error": f"json decode error: {e.msg}"},
            status=HTTPStatus.UNSUPPORTED_MEDIA_TYPE,
        )
Speaking of returning responses, the BaseCreateView class is built around HTML principles and its form_valid and get methods are both designed to render a form (via get_context_data).
In the case of our REST API, the "create" part of things should not be involved with GET requests.
Furthermore, the reply to an invalid form submission should only comprise an error (status + message) and should not require anything provided by get_context_data.
Still, in regards to form validation, a valid form should not result in a redirect (the behaviour of BaseCreateView) but rather in a 201 response, optionally containing the representation of the created record.
The form handling part is generic enough to put it in the mixin itself. The behaviour of GET is specific to the list/create view though.
Let's write the code accordingly:
class JsonViewMixin(View):
    ...

    def form_invalid(self, form):
        return self.response_class(
            {'errors': form.errors},
            status=HTTPStatus.UNPROCESSABLE_CONTENT
        )

    def form_valid(self, form):
        self.object = form.save()
        context = self.serialize_one(self.object)
        return self.response_class(
            context,
            status=HTTPStatus.CREATED
        )


class JsonListCreateView(JsonViewMixin, BaseCreateView, BaseListView):
    ...

    def get_context_data(self, **kwargs):
        # Explicitly point to BaseListView instead of `super()`
        # to prevent actions taken by
        # BaseCreateView's implementation of `get_context_data`
        context = BaseListView.get_context_data(self, **kwargs)
        return {
            'results': self.serialize_many(context['object_list'])
        }
Halfway there!
That was everything needed to handle the create and list portions of our CRUD REST application. Now we can move on to the read, update, delete part. We'll do that in a second View class as it requires a slightly different URL, one that contains the pk of the resource.
Both read and update functionalities are provided by Django's BaseUpdateView but, as with the create/list view, the major difference in this case will be that we need a much simpler context.
class JsonReadUpdateView(JsonViewMixin, BaseUpdateView):
    def get_context_data(self, **kwargs):
        return self.serialize_one(self.object)
That's it!!!
Well, almost…
BaseUpdateView is wired to answer to POST requests for updating a record, while REST good practices want us to use PUT instead. The fix for this is to raise an error in reply to POST calls while directing PUTs to the parent's post implementation.
def post(self, request, *args, **kwargs):
    # JsonResponse requires a data argument, even for an error response
    return self.response_class(data={}, status=HTTPStatus.METHOD_NOT_ALLOWED)

def put(self, request, *args, **kwargs):
    return super().post(request, *args, **kwargs)
One more fix…
Our mixin implementation returns a 201 on form_valid. In the case of any view which is not creating a record, this should be 200. Here are the necessary changes:
class JsonViewMixin(View):
    form_valid_status = HTTPStatus.OK  # new property

    def form_valid(self, form):
        self.object = form.save()
        context = self.serialize_one(self.object)
        return self.response_class(
            context, status=self.form_valid_status  # use the new property
        )

    ...


class JsonListCreateView(JsonViewMixin, BaseCreateView, BaseListView):
    form_valid_status = HTTPStatus.CREATED  # override in case of creation
Why PUT and not PATCH?
BaseUpdateView builds a form that expects all fields to be filled. Non-present fields would be reset to empty on the existing record for partial updates.
I'll leave it as an exercise to the reader to override that behaviour in case of a PATCH request in order to "pre-fill the form" with existing values, maybe by using the form's initial property… 😉
Finally…
The last bit of logic we have to implement is for deleting objects. Most of the code from Django's BaseDeleteView is related to creating and validating a form for confirming the user's intent to delete the resource. This is usually not the expected behaviour for a REST API, as that part is handled by whatever is calling the API.
Furthermore, it doesn't implement a delete method. In the HTML world of Django's BaseDeleteView, everything is done using GET and POST. So we are (mostly) on our own for this last part.
We can still leverage the get_object implementation provided by BaseUpdateView though.
Here is what implementing the delete operation for our read/update/delete view looks like:
class JsonReadUpdateDeleteView(JsonViewMixin, BaseUpdateView):  # name changed
    ...

    def delete(self, request, *args, **kwargs):
        self.get_object().delete()
        # data is required by JsonResponse
        return self.response_class(data={}, status=HTTPStatus.NO_CONTENT)
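To wire the two views into a project, a URLconf along these lines would do (a hypothetical sketch: Book, BookForm and the URL names are placeholders, not from the article; any model/ModelForm pair works):

# urls.py
from django.urls import path

from .forms import BookForm
from .models import Book
from .views import JsonListCreateView, JsonReadUpdateDeleteView

urlpatterns = [
    path(
        "api/books/",
        JsonListCreateView.as_view(model=Book, form_class=BookForm),
        name="book-list",
    ),
    path(
        "api/books/<int:pk>/",
        JsonReadUpdateDeleteView.as_view(model=Book, form_class=BookForm),
        name="book-detail",
    ),
]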
Conclusion
This implementation is basic and clearly naïve. But it gets the job done!
And this can all be done by leveraging Django-provided tools and mechanisms, mainly using Django's generic CBVs.
Generic class-based views have been built in such a modular fashion that implementing one's own mini REST framework can be done in less than 100 lines of code.
A non-negligible advantage of such an approach is that most libraries written to work with Django's generic CBVs are also likely to work with this implementation.
This rather simple approach can certainly be improved (handling exceptions in delete… anyone?) and is clearly not going to cover everybody's use cases. And it most likely misses handling a bunch of edge cases!
And if you are building a large REST API, I would say you are probably still better off using a 3rd party library but… to me, the answer to the question “Why do you need a 3rd party application to write a simple REST application with Django?” is: "You don't."
If you enjoyed this article, read more from Emma on her blog, where this piece was originally published. Or watch the FOSDEM talk that Emma reacts to:
Thibaud Colas - Shifting DX expectations: keeping Django relevant 😬 | FOSDEM 2025
EuroPython
AI Keynote Reveal, Python Documentary Premiere & Last Chance for Combined Tickets
Hello, Pythonistas! 🐍
Tickets are still available but combined and tutorial tickets are almost sold out! You can get your tickets at europython.eu/tickets
📣 Meet our Keynoters
We are pleased to announce that Nerea Luis will be delivering a keynote at EuroPython about the tangible impact that developers, managers, and other tech sector professionals can expect from the rise of generative AI.
Nerea Luis is an independent consultant and advisor in Artificial Intelligence and cofounder of Lumi Labs. She has a PhD in Computer Science and 10 years of experience in the field, and she has led the direction of projects related to Natural Language Processing and Computer Vision. She has received recognition from Google, the Spanish Royal Family, and Forbes, among others.
💰 Sponsorship
If you’re passionate about supporting EuroPython and helping us make the event accessible to all, consider becoming a sponsor or asking your employer to join us in this effort.
By sponsoring EuroPython, you’re not just backing an event – you’re gaining highly targeted visibility and the chance to present your company or personal brand to one of the largest and most diverse Python communities in Europe and beyond!
We have a few remaining sponsorship slots available. Along with our main packages, there are optional add-ons and extras.
👉 More information at: https://ep2025.europython.eu/sponsorship/sponsor/
👉 Contact us at sponsoring@europython.eu
Speaker Mentorship
First Time Speakers’ Workshop
New to speaking? A seasoned speaker? Somewhere in between?
No worries! The EuroPython First Time Speakers’ Workshop is for everyone! 🐍 You will gain key presentation skills, practical advice, and valuable insights from seasoned speakers.
When: June 4, 2025, at 18:00 CEST on Zoom. Attendance is free, but kindly register: https://forms.gle/T8rc73sbyu3KbLNKA
More details: https://europython.eu/programme/mentorship/
🎥 Documentary on the History of Python
The filmmakers from Cult Repo, formerly known as Honeypot, are working on a documentary about the history of Python and its vibrant community. It features over 20 core developers and takes us on a journey from the first days of Python to the latest developments. We’re thrilled to announce that the documentary will premiere at EuroPython! Stay tuned for more details.
👥Call for Volunteers
Want to experience EuroPython from the inside? We’re looking for on-site volunteers to help us from July 14–20!
Volunteering is a great way to gain hands-on experience, meet new people, and attend the conference in a unique way. Whether you’re great at helping people, love the buzz of registration, want to chair a session, or just enjoy being part of a team — we have a role for you.
👉Apply by: June 1st
👉More info: europython.eu/volunteers
❓What does Guido van Rossum like about EuroPython?
Want to find out? Follow us on YouTube and social media! Over the next few days, we’ll be sharing short conversations with community members about what they’re looking forward to at the next EuroPython.
EuroPython Society
🏰 Call for Venues - EuroPython 2026
Are you a community builder who dreams of bringing EuroPython to your city? The Call for Venues for EuroPython 2026 is now open!
If you want to propose a location on behalf of your community, please fill in the following form: https://forms.gle/ZGQA7WhTW4gc53MD6
📊 Board Report
The EuroPython Society is the volunteer-run non-profit that organizes the EuroPython conference and supports the wider Python community in Europe. To stay transparent and keep everyone informed, the Board shares regular updates on what we’re working on.
The April board report is ready: https://europython-society.org/board-report-for-april-2025/
💞 Community Outreach
PyCon US
This year, the EuroPython Society had a booth at PyCon US for the very first time! 🎉 Curious how it went?
👉 Check out our blog post: https://europython-society.org/europython-society-at-pycon-us-2025/
💞Upcoming Events in the Python Community
- Pycon Italia, Bologna, 28-31 May https://2025.pycon.it/en
- EuroPython, Prague, 14-20 July https://ep2025.europython.eu
- EuroSciPy, Kraków, 18-22 August https://euroscipy.org/2025/
- PyCon Poland, Gliwice, 28-31 August https://pl.pycon.org/2025/en/
- PyCon Greece, Athens, Greece, 29-30 August https://2025.pycon.gr/en/
- PyData Berlin, 01-03 September https://pydata.org/berlin2025
- PyCamp CZ 25 beta, Třeštice, 12-14 September https://pycamp.cz/
- Pycon UK, Manchester, 19-22 September https://2025.pyconuk.org/
- PyCon Estonia, Tallinn, 2-3 October https://pycon.ee/
- PyCon Finland, Jyväskylä, 17 October https://2025.ploneconf.org/pyconfi-2025
- PyCon Sweden, Stockholm, 30-31 October https://pycon.se/
- PyCon France, Lyon, 30 October-2 November https://www.pycon.fr/2025/en/index.html
👋 See You All Next Month
And in the meantime, follow us on social media:
- LinkedIn: https://www.linkedin.com/company/europython/
- X: https://x.com/europython
- Mastodon: https://fosstodon.org/@europython
- BlueSky: https://bsky.app/profile/europython.eu
- YouTube: https://www.youtube.com/@EuroPythonConference
- Instagram: https://www.instagram.com/europython/
EuroPython Society
Board Report for April 2025
In April, our top priority was to kick off discussions about the 2026 venue. We successfully launched the Call for Venues and are now actively working on choosing a new location for next year.
We also helped release the schedule and coordinated all the related activities.
On top of that, we worked with our event manager to confirm a few more details, sign contracts, handle billing, and take care of similar administrative tasks.
Individual reports:
Artur
- Sponsor management
- Various small infrastructure updates, mainly in our static files hosting and the internal discord bot.
- Adding data sync and parsing from pretalx and pretix to the internal discord bot.
- Call for venues 2026 and related calls with communities
- Regular sync with the event manager
- Program: coordination of the documentary premiere with producers
Mia
- Communications: migration to Canva; copywriting for social media & articles
- Call for Venues 2026: kick-off and calls with communities
- Payments
- Design: coordination of design items with the designer; reviews
- Website: PR reviews; page updates
- Grants program: communication with recipients and administrative tasks
- Community outreach: DjangoCon Europe
- Program: coordination of the documentary premiere with producers
- Regular sync with the event manager
- PSF Grant Work
Aris
- Ops team activities, meetings and planning
- Tickets/vouchers documentation and workflow
- Accounting updates
- Payments
Ege
- Promotional material orders for PyCon US: Stickers, Posters, Retractable roll up banner, laminated info sheets
- Website and Programme API updates
- PR reviews
Shekhar
- FinAid: Completed Round 1 of our Financial Aid Programme
- Visa Application Letter: Distribution and Management
Anders
- VAT Report for March
Cyril
- …
Python⇒Speed
Loading Pydantic models from JSON without running out of memory
You have a large JSON file, and you want to load the data into Pydantic. Unfortunately, this uses a lot of memory, to the point where large JSON files are very difficult to read. What to do?
Assuming you’re stuck with JSON, in this article we’ll cover:
- The high memory usage you get with Pydantic’s default JSON loading.
- How to reduce memory usage by switching to another JSON library.
- Going further by switching to dataclasses with slots.
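As a tiny illustration of the last point (a sketch of the general idea, not the article's benchmarked solution; requires Python 3.10+ for slots=True):

from dataclasses import dataclass

# A slotted dataclass stores attributes in fixed slots instead of a per-instance
# __dict__, which can noticeably shrink per-object memory across large collections.
@dataclass(slots=True)
class Point:
    x: float
    y: float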
May 21, 2025
Real Python
Nested Loops in Python
Nested loops in Python allow you to place one loop inside another, enabling you to perform repeated actions over multiple sequences. Understanding nested loops helps you write more efficient code, manage complex data structures, and avoid common pitfalls such as poor readability and performance issues.
By the end of this tutorial, you’ll understand that:
- Nested loops in Python involve placing one loop inside another, enabling iteration over multiple sequences or repeated actions.
- Situations where nested loops are a good idea include handling multidimensional data, generating patterns, and performing repetitive tasks with multiple layers of iteration.
- You can break out of nested loops by using the break statement, which exits the innermost loop when a condition is met (see the short sketch after this list).
- Disadvantages of nested loops include potential performance bottlenecks, poor readability, and variable scoping issues.
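A minimal sketch of that break behaviour (not from the tutorial itself):

# break only exits the innermost loop; the outer loop keeps going.
for row in range(3):
    for col in range(5):
        if col == 2:
            break            # leaves the inner loop only
        print(row, col)
# Prints (0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)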
This tutorial provides practical examples and optimization techniques for using nested loops effectively in your Python programs.
Get Your Code: Click here to download the free sample code that you’ll use to learn about nested loops in Python.
Take the Quiz: Test your knowledge with our interactive “Nested Loops in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Nested Loops in PythonNested loops allow you to perform repeated actions over multiple sequences, but is there more? Test your understanding of nested loops in Python!
Getting Started With Nested Loops in Python
Loops are fundamental building blocks in programming, allowing you to iterate through actions efficiently. In Python, there are two primary types of loops: the for loop and the while loop. Both serve the same purpose—executing a block of code multiple times—but they differ in how they operate and in their use cases:
- A for loop iterates over a sequence, such as a list or a range, and executes a block of code for each item. They’re useful when the number of iterations is known beforehand.
- A while loop runs as long as a specified condition remains true, making it useful when the number of iterations isn’t known in advance.
You create a nested loop by placing one loop inside another. This structure is especially helpful when working with multidimensional data, generating patterns, or handling tasks that involve several layers of repetition.
In a nested loop, the first loop is called the outer loop, and the loop inside is the inner loop. So, for every iteration of the outer loop, the inner loop runs completely before the outer loop moves to the next iteration.
Here’s the basic syntax of a nested loop:
for outer_variable in outer_iterable:
    for inner_variable in inner_iterable:
        <body>
The outer_iterable must be a list, a dictionary, or some other sequence of items that you can iterate over. The same applies to the inner_iterable. The <body> inside the inner loop contains the code that runs once for each for loop step in the inner_iterable. Since the inner loop is nested inside the outer loop, it runs in full for each iteration of the outer loop.
A good analogy for a nested loop is the hour and minute hands of a clock. The hour hand moves slowly around the clock, completing one full revolution every twelve hours. Meanwhile, the minute hand moves at a much faster rate, completing a revolution every hour. While both hands rotate at different speeds, they work together, each completing their own cycle within the same clock.
Here’s how the clock logic looks in Python code:
>>> for hour in range(0, 24):
...     for minute in range(0, 60):
...         print(f"{hour:02d}:{minute:02d}")
...
00:00
00:01
00:02
⋮
23:57
23:58
23:59
As you can see, every time the minute hand completes a cycle, the hour hand moves to the next hour. :02d
is a format specifier that ensures the number is printed as a two-digit-wide integer value.
Now that you’ve been introduced to nested loops, it’s time to explore some practical examples. You’ll become familiar with writing programs using nested loops in the following section.
Exploring Practical Examples of Nested Loops
As you just learned, nested loops have a number of use cases. Here, you’ll have a look at a few examples. These examples are interesting and practical, allowing you to have fun as you explore their syntax and semantics.
Printing Patterns With Nested Loops
Being able to print any pattern of your choosing is a fascinating feat in programming. One way you can achieve this is by understanding how nested loops work. The code snippet below builds a sail pattern using a few simple symbols. While this may not seem particularly exciting, consider it a first step toward creating something spectacular—like a spaceship:
>>> height = 6
>>> sail_patterns = "*#-x+o"
>>> for row in range(height):
...     pattern = ""
...     spacing = " " * (height - row)
...     for symbol in sail_patterns:
...         pattern += symbol * row + spacing
...
...     print(pattern)
...

*     #     -     x     +     o
**    ##    --    xx    ++    oo
***   ###   ---   xxx   +++   ooo
****  ####  ----  xxxx  ++++  oooo
***** ##### ----- xxxxx +++++ ooooo
Here’s what the code does line by line:
Read the full article at https://realpython.com/nested-loops-python/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Nested Loops in Python
In this quiz, you’ll test your understanding of Nested Loops in Python.
Nested loops in Python involve placing one loop inside another, enabling iteration over multiple sequences or repeated actions. They’re particularly useful for handling multidimensional data, generating patterns, and performing repetitive tasks with multiple layers of iteration.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
EuroPython Society
EuroPython Society at PyCon US 2025
This year, for the first time, the EuroPython Society had a booth at PyCon US. 🎉
It was a great chance to meet people from other Python communities, catch up with old friends, and make some new ones. At our booth, we gave out stickers (including a sneak peek of our 2025 ones), talked about our grants programme, and shared what the Society has been up to.
PyCon US was also where something special was announced. The filmmakers from Cult Repo, formerly known as Honeypot, are working on a documentary about the history of Python and its vibrant community. It features over 20 core developers, from the early days of Python to the newest contributors. They showed a trailer during the opening session on the third day and announced that the premiere will happen at EuroPython!
We managed to take a group photo with some of us during the PyLadies auction, which many of us attended. With such a full schedule, it can be tricky to get everyone together, so we were happy to catch that moment.
Outside of the talks and sessions, we also spent some evenings together just hanging out and catching up. These moments are always a highlight of any conference.
We are especially happy to share that our chair, Artur Czepiel, received a community award. So did Laís Carvalho, a former board member, and Raquel Dou, a former chair.
Karolina Surma, a member of Czech PyLadies and the 2023 programme team, also received an Outstanding PyLady Award for her contributions to the PyLadies community in the Czech Republic.
Congratulations to all of you and thank you for all the time and energy you give to the community!
Thank you to the PyCon US organisers for running such a fantastic event and for giving us space to be part of it.
Thank you to our board member Ege Akman for sorting out materials and helping get everything ready for the booth.
And a big thank you to everyone who stopped by to chat. It was a pleasure to meet you. See you next year!
May 20, 2025
PyCoder’s Weekly
Issue #682: pylock.toml, NumPy, t-strings, and More (May 20, 2025)
#682 – MAY 20, 2025
View in Browser »
Going Beyond requirements.txt With pylock.toml
What is the best way to record the Python dependencies for the reproducibility of your projects? What advantages will lock files provide for those projects? This week on the show, we welcome back Python Core Developer Brett Cannon to discuss his journey to bring PEP 751 and the pylock.toml file format to the community.
REAL PYTHON podcast
I Don’t Like NumPy
This opinion piece talks about why NumPy gets difficult fast. Two dimensions to your array? No problem, the calc is mostly self evident. Add a couple more dimensions and it gets messy fast. See also the associated HN discussion, which also includes possible solutions.
DYNOMIGHT
Webinar: Inside Dust’s AI-Powered Vision for the Future of Work
Join us on June 5 for a behind-the-scenes look at how Dust is building the future of work with AI and why Temporal is at the heart of their platform. We’ll explore how Dust is transforming those everyday tasks with intelligent agents that are deeply contextual, resilient, and production-ready →
TEMPORAL sponsor
Unravelling t-strings
PEP 750 introduced t-strings for Python 3.14. These are a template string mechanism similar to f-strings. Although they are in 3.14.0b1, there isn't any documentation yet, so this post explains what they are and how they can be used.
BRETT CANNON
Python Jobs
Senior Software Engineer – Quant Investment Platform (LA or Dallas) (Los Angeles, CA, USA)
Causeway Capital Management LLC
Articles & Tutorials
Working With Missing Data in Polars
In this video course, you’ll learn how to deal with missing data in Polars to ensure it doesn’t interfere with your data analysis. You’ll discover how to check for missing values, update them, and remove them.
REAL PYTHON course
Django Security Best Practices
This guide contains a long list of best practices for securing your Django site. It includes topics such as ensuring HTTPS everywhere, hardening your database, using key built-in security features, and more.
AHMAD SADEDDIN
Read My Lips: Automate Lip Syncing with Wav2Lip
Don’t let your message be misunderstood! Use Wav2Lip for automated and AI-powered lip syncing. Get the code now.
INTEL CORPORATION sponsor
t-strings in Python (PEP 750)
Python 3.14 will include the new t-string concept, an f-string-like mechanism for string templating. Talk Python interviews Paul Everitt, David Peck, and Jim Baker about the upcoming feature.
KENNEDY ET AL podcast
The Database Row That Did and Didn’t Exist
Misty ran into a confounding bug, where loading and saving certain Django ORM objects was failing, but not all of them. Read on to understand what went wrong.
MISTY DE MÉO
The Appearing Built-In
Rodrigo discovered that examining the builtins
module in the REPL changes the size of the builtins
module. Read on to find what causes this oddity.
RODRIGO GIRÃO SERRÃO
What Does @Slot() Do in Qt?
When working with Qt slots and signals in PySide6, you might discover the @Slot decorator. This post tells you what it does and why you might use it.
MARTIN FITZPATRICK
5 Steps to N-Body Simulation
This article shows you how to simulate gravity in an N-body problem. It starts out with a simpler simulation, then adds optimizations as it goes along.
ALVIN NG
PEPs & Co.
A little bit of Python history: where the term “PEP” came from, along with a list of similar types of documents for other languages.
HUGO VAN KEMENADE
The First Year of Free-Threaded Python
A recap of the first year of work on enabling support for the free-threaded build of CPython in community packages.
NATHAN GOLDBAUM
Pandas: Drop Columns by Name in DataFrames
A quick overview of the variety of ways to drop columns from a pandas DataFrame, referencing the columns by name.
ERIK MARSJA
Working on Complex Systems: What I Learned at Google
Explore the key characteristics of complex systems and learn practical patterns for tackling complex problems.
TEIVA HARSANYI
Building a Real-Time Dashboard With Flask and Svelte
Learn how to build a real-time analytics dashboard using Flask and Svelte with server-sent events.
AMIR TADRISI
Projects & Code
Events
Python Leiden User Group
May 22, 2025
PYTHONLEIDEN.NL
Python Sheffield
May 27, 2025
GOOGLE.COM
PyCon Italia 2025
May 28 to June 1, 2025
PYCON.IT
PyKla Monthly Meetup
May 28, 2025
MEETUP.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #682.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Ari Lamstein
Course Review: Build AI Agents with Python
I was slow to jump on the LLM bandwagon.
In some ways this is surprising. Although it was a while ago, I did take several AI courses in grad school. So I am probably in a good position to dive into new developments in the field.
But when the LLM wave started I was busy trying to develop software engineering expertise in Python. I wanted to finish that “project” before moving onto something new. It was only a few months ago, when the PR of my multiyear
module was merged to the censusdis
package (link), that I felt like I could turn my attention elsewhere.
Coincidentally, around that time Kevin Markham created his first course on LLMs. I wrote a review of that course here. In this post I review his followup course: Build AI Agents with Python.
The Problem AI Agents Solve
Something I learned in this course is that LLMs like GPT-4o mini are called “foundational models”. Foundational models have very real limitations. Consider this conversation I just had with GPT-4o mini:
User: What is your cutoff date?
AI: My knowledge cutoff date is October 2021. This means I do not have information on events or developments that occurred after that date. If you have questions based on information available up to then, feel free to ask!
User: Do you have access to the web?
AI: No, I do not have access to the web or the ability to browse the internet. My responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available information up to my knowledge cutoff date in October 2021. If you have any questions within that scope, feel free to ask!
User: What is the latest version of Python?
AI: As of my last knowledge update in October 2021, the latest stable release of Python was Python 3.10, which was released on October 4, 2021. For any updates or newer versions released after that date, I recommend checking the official Python website at https://www.python.org/.
In short, the answers that foundational models give are limited to the data they were trained on. For reference, the latest version of Python is 3.13.3, which was released in April 2025.
Augmenting Foundational Models
You can think of AI Agents as augmenting foundational models with tools, data and prompts that you provide them. This allows them to provide better answers to certain questions.
As an example, Build AI Agents with Python walks you though the process of giving GPT-4o mini the “tool” of web search using Tavily. This resulted in the following conversation:
User: What is your cutoff date?
AI: My knowledge cutoff date is in October 2021. I do not have information on events or developments that occurred after that date. However, I can access real-time information through specific tools. If you have a particular query, feel free to ask!
User: Do you have access to the web?
AI: Yes, I can access real-time information through a specific tool to search the web for current events and updates. If you have a particular topic or question in mind, let me know, and I can look it up for you!
User: What is the latest version of python?
AI: The latest stable version of Python is 3.13.3. You can download it from the official Python website https://www.python.org/downloads/.
I had not heard of Tavily prior to taking this course. It turns out that there is an entire ecosystem of tools around LLMs!
In addition to adding in a tool for web search, the course guides you through integrating custom data sources using Retrieval Augmented Generation (RAG) and building a Q&A system that can interact with a SQL database. For a course that only contains two hours of videos, I felt that it covered a lot.
Like with all online courses, there is an argument that I could have learned all this on my own. But “get your hands dirty with LLMs” had been on my todo list for a while. It was only taking this course that made it happen. If you are looking for a similar push, then you might enjoy this course too.
Wish List
Kevin is gifted at coming up with good examples and clearly explaining his solutions. If there was one change I could make to the course it would be for him to add exercises to it.
Longtime readers might remember that I have created and taught several courses myself. My initial courses did not have exercises. It was only when I read Telling Ain’t Training that I learned how valuable they are for students. That book also presents a framework for creating exercises that I still use today.
Next Steps
After taking an online course, I often want to apply what I learned to a project of my own. In this case, I can now imagine adding an LLM feature to my Covid Demographics Explorer.
The dataset that powers that app is in a single CSV file. I think it would be fun to create a chatbot that has access to that file and uses it to answer questions. That would allow users to ask questions about the dataset using natural language and get answers in natural language.
Real Python
First Steps With LangChain
You’ve likely interacted with large language models (LLMs), like the ones behind OpenAI’s ChatGPT, and experienced their remarkable ability to answer questions, summarize documents, write code, and much more.
While LLMs are remarkable by themselves, with a little programming knowledge, you can leverage libraries like LangChain to create your own LLM-powered applications that can do just about anything.
In this video course, you’ll learn how to:
- Use LangChain to build LLM-powered applications
- Create reusable instructions with prompt templates
- Create and extend LangChain chains
- Debug what happens when a chain executes
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Stéphane Wirtel
From script to version 1.0.0 — 12 years later
🧵 From script to version 1.0.0 — 12 years later
In 2013, I wrote a tiny Python script to help with local SMTP debugging during development. It used smtpd, docopt, and had no ambition other than making my day a little easier. This tiny helper evolved into a CLI tool: dsmtpd. Then, I forgot about it.
🧹 2021: Minor cleanup, no big plans
Eight years later, I dusted it off:
May 19, 2025
Real Python
How to Group Data Using Polars .group_by()
One of the most common tasks you’ll encounter when analyzing Polars data is the need to summarize it. You can use the Polars .group_by()
method to create groupings based on column values. A related summarization technique is aggregation, where you take a sequence of related values and condense them into a single value.
By the end of this tutorial, you’ll understand that:
- You can summarize data using aggregation.
- You can use .filter() to view specific data.
- Using .group_by() allows you to summarize one or more columns of your data.
- Your time series data can be grouped using .group_by_dynamic().
- You can match summarized data with the original data using window functions.
- Pivot tables allow you to group and aggregate rows and columns of data.
In this tutorial, you’ll learn how to group data in several ways using the .group_by()
method in Polars and apply aggregation to each group.
Before you begin, you should be familiar with the basics of working with Polars DataFrames or another DataFrame library like pandas.
Note: If you’re familiar with Polars, you’ll know that in addition to DataFrames, Polars also supports LazyFrames. While this is an efficient tool when working with large datasets, and you’re certainly encouraged to learn how to use them, DataFrames are sufficient for the examples in this tutorial.
In addition, you may find it helpful to use Jupyter Notebook as you work through many of the examples in this tutorial. Alternatively, JupyterLab will enhance your notebook experience, but any Python environment you’re comfortable with will work just fine.
To get started, you’ll need some data. For the central part of this tutorial, you’ll use the student performance dataset freely available from the UC Irvine Machine Learning Repository. The data in its original form comes in two files: student-mat.csv and student-por.csv. These contain data on students taking math and Portuguese language courses.
Both files have been altered to include student identification numbers and the subject indicators M and P, formatted as math.parquet and portuguese.parquet. All data has also been merged into a single course.parquet file. These three files are available in your download bundle, which you can access by clicking the link below:
Get Your Code: Click here to download the free sample code that you’ll use to learn about grouping data with Polars .group_by().
The table below shows the fields used in this tutorial:
| Field Name | Description |
|---|---|
| absences | Number of absences |
| age | Student’s age |
| failures | Number of failures |
| G1, G2, G3 | First, second, and final grade |
| internet | Student has home internet access |
| reason | Reason for taking course |
| school | School attended |
| sex | Student’s gender (M, F) |
| student_id | Student’s enrollment number |
| subject | Subject studied (M, P) |
The dataset is quite extensive, containing over thirty fields. If you want to experiment, the student.txt
file defines every field. This file is also available as part of the tutorial downloads.
To use Polars, you first need to install the Polars library into your Python environment. To install Polars from the command prompt, use the following command:
$ python -m pip install polars
In a Jupyter Notebook, the command is !python -m pip install polars.
With everything set up, it’s time to start using Polars to aggregate and group data.
Get Your Code: Click here to download the free sample code that you’ll use to learn about grouping data with Polars .group_by().
Take the Quiz: Test your knowledge with our interactive “How to Group Data Using Polars .group_by()” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Group Data Using Polars .group_by()
Put your Polars .group_by() and aggregation skills to the test. This quiz goes beyond the tutorial, so you'll need to dig deeper to find all the answers.
Aggregating Data
Whenever you want to aggregate data without grouping it, you apply an aggregation function directly to the column or columns you want to analyze. More often than not, you’ll also want to group your aggregated data. You’ll learn how to do that later—for now, you’ll focus on aggregation.
Learning the Basics
Suppose you wanted to find the highest number of absences among students in the math class. You can do this as follows:
>>> import polars as pl
>>> math_students = pl.read_parquet("math.parquet")
>>> (
...     math_students
...     .select(pl.col("absences").max())
... )
shape: (1, 1)
┌──────────┐
│ absences │
│ --- │
│ i64 │
╞══════════╡
│ 75 │
└──────────┘
After installing the Polars library earlier, you import it into your code with the conventional alias pl to access its functionality. This allows you to use the read_parquet() function to read the contents of math.parquet into a Polars DataFrame.
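The excerpt ends before any grouping happens, but as a rough preview in the same spirit (a sketch, not the tutorial's own code), you can group the merged course.parquet data by subject and aggregate within each group:
>>> course_students = pl.read_parquet("course.parquet")
>>> (
...     course_students
...     .group_by("subject")  # one group per subject: M (math) or P (Portuguese)
...     .agg(pl.col("absences").max())  # maximum absences within each group
... )
This reuses the pl alias imported above and returns one row per subject with the highest number of absences in that group.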
Read the full article at https://realpython.com/polars-groupby/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: How to Group Data Using Polars .group_by()
In this quiz, you’ll test your understanding of the techniques covered in How to Group Data Using Polars .group_by().
By working through the questions, you’ll review your understanding of how to use the data aggregation and grouping techniques provided by Polars.
You’ll need to do some research outside of the tutorial to answer all the questions, so let this challenge take you on a learning journey.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Talk Python to Me
#506: ty: Astral's New Type Checker (Formerly Red-Knot)
The folks over at Astral have made some big-time impacts in the Python space with uv and ruff. They are back with another amazing project named ty. You may have known it as Red-Knot. But it's coming up on release time for the first version and with the release it comes with a new official name: ty. We have Charlie Marsh and Carl Meyer on the show to tell us all about this new project.
Episode sponsors
- Posit: https://talkpython.fm/connect-cloud
- Auth0: https://talkpython.fm/auth0
- Talk Python Courses: https://talkpython.fm/training
Links from the show
- Talk Python's Rock Solid Python: Type Hints & Modern Tools (Pydantic, FastAPI, and More) Course: https://training.talkpython.fm/courses/python-type-hint-course-with-hands-on-examples
- Charlie Marsh on Twitter: https://twitter.com/charliermarsh
- Charlie Marsh on Mastodon: https://hachyderm.io/@charliermarsh
- Carl Meyer: https://twitter.com/carljm/
- ty on GitHub: https://github.com/astral-sh/ty
- A Very Early Play with Astral’s Red Knot Static Type Checker: https://app.daily.dev/posts/a-very-early-play-with-astral-s-red-knot-static-type-checker-wnhcuf0lx
- Will Red Knot be a drop-in replacement for mypy or pyright?: https://github.com/astral-sh/ruff/discussions/15149
- Hacker News Announcement: https://news.ycombinator.com/item?id=42868576
- Early Explorations of Astral’s Red Knot Type Checker: https://pydevtools.com/blog/early-explorations-of-astrals-red-knot-type-checker/
- Astral's Blog: https://astral.sh/blog
- Rust Analyzer Salsa Docs: https://docs.rs/rust-analyzer-salsa/latest/salsa/
- Ruff Open Issues (label: red-knot): https://github.com/astral-sh/ruff/issues?q=is%3Aissue%20state%3Aopen%20label%3Ared-knot
- Ruff Types: https://types.ruff.rs/
- Ruff Docs (Astral): https://docs.astral.sh/ruff/
- uv Repository: https://github.com/astral-sh/uv
- Watch this episode on YouTube: https://www.youtube.com/watch?v=XVwpL_cAvrw
- Episode transcripts: https://talkpython.fm/episodes/transcript/506/ty-astrals-new-type-checker-formerly-red-knot
Stay in touch with us
- Subscribe to Talk Python on YouTube: https://talkpython.fm/youtube
- Talk Python on Bluesky: https://bsky.app/profile/talkpython.fm
- Talk Python on Mastodon: https://fosstodon.org/web/@talkpython
- Michael on Bluesky: https://bsky.app/profile/mkennedy.codes
- Michael on Mastodon: https://fosstodon.org/web/@mkennedy
Python Bytes
#432 How To Fix Your Computer
Topics covered in this episode:
- pre-commit: install with uv
- PEP 773: A Python Installation Manager for Windows (Accepted)
- Changes for Textual
- The Best Programmers I Know
- Extras
- Joke
Watch on YouTube: https://www.youtube.com/watch?v=eNH8L1voK9s
About the show
Sponsored by NordLayer: https://pythonbytes.fm/nordlayer
Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.
Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show). We'll never share it.
Brian #1: pre-commit: install with uv (https://adamj.eu/tech/2025/05/07/pre-commit-install-uv/)
- Adam Johnson
- uv tool works great at keeping tools you use on lots of projects up to date quickly, so why not use it for pre-commit?
- The pre-commit-uv extension (https://pypi.org/project/pre-commit-uv/) will use uv to create virtual environments and install packages for pre-commit. This speeds up initial pre-commit cache creation.
- However, Adam recommends this flavor of using pre-commit because it's just plain easier to install pre-commit and dependencies than the official pre-commit install guide.
- Win-win.
- Side note: No Adam, I'm not going to pronounce uv "uhv", I'll stick with "you vee", even though Astral tells me I'm wrong.
Michael #2: PEP 773: A Python Installation Manager for Windows (Accepted) (https://peps.python.org/pep-0773/)
- Via the PyCoder's newsletter.
- One manager to rule them all: PyManager.
- PEP 773 replaces all existing Windows installers (the "traditional" .exe bundle, per-version Windows Store apps, and the separate py.exe launcher) with a single MSIX app called Python Install Manager (nicknamed PyManager).
- PyManager should be mainstream by CPython 3.15, and the traditional installer disappears no earlier than 3.16 (≈ mid-2027).
- Simple, predictable commands:
  - python launches "the best" runtime already present, or auto-installs the latest CPython if none is found.
  - py is the same launcher as today plus management sub-commands: py install, py uninstall, py list, py exec, py help.
  - Optional python3 and python3.x aliases can be enabled by adding one extra PATH entry.
Michael #3: Changes for Textual (https://textual.textualize.io/blog/2025/05/07/the-future-of-textualize/)
- Bittersweet news: the business experiment ends, but the code lives on.
- Textual began as a hobby project layered on top of Rich, but it has grown into a mature, "makes-the-terminal-do-the-impossible" TUI framework with an active community and standout documentation.
- Despite Textual's technical success, the team couldn't pinpoint a single pain point big enough to sustain a business model, so the company will wind down in the coming weeks.
- The projects themselves aren't going anywhere: they're stable, battle-tested, and will continue under the stewardship of the original author and the broader community.
Brian #4: The Best Programmers I Know (https://endler.dev/2025/best-programmers/)
- Matthias Endler
- "I have met a lot of developers in my life. Lately, I asked myself: 'What does it take to be one of the best? What do they all have in common?'"
- The list:
  - Read the reference
  - Know your tools really well
  - Read the error message
  - Break down problems
  - Don't be afraid to get your hands dirty
  - Always help others
  - Write
  - Never stop learning
  - Status doesn't matter
  - Build a reputation
  - Have patience
  - Never blame the computer
  - Don't be afraid to say "I don't know"
  - Don't guess
  - Keep it simple
- Each topic has a short discussion, so don't just read the bullet points; check out the article.
Extras
Brian:
- I had a great time in Munich last week. I gave a talk at a company event, met with tons of people, and had a great time.
  - The best part was connecting with people from different divisions working on similar problems.
  - I love the idea of internal conferences to get people to self-organize by topic and meet people they wouldn't otherwise, to share ideas.
  - Also got started working on a second book on the plane trip back.
Michael:
- Talk Python Clips (e.g. mullet: https://www.youtube.com/shorts/cARHSv-CwaE)
- Embrace your cloud firewall (example: https://blobs.pythonbytes.fm/cloud-firewall-example.jpg).
- Python 3.14.0 beta 1 is here: https://pythoninsider.blogspot.com/2025/05/python-3140-beta-1-is-here.html
- Congrats to the new PSF Fellows: https://pyfound.blogspot.com/2025/05/announcing-python-software-foundation.html
- Cancelled faster CPython: https://www.linkedin.com/posts/mdboom_its-been-a-tough-couple-of-days-microsofts-activity-7328583333536268289-p4Lp/
- bsky.app/profile/snarky.ca post: https://bsky.app/profile/snarky.ca/post/3lp5w5j5tws2i
Joke: How To Fix Your Computer (https://x.com/PR0GRAMMERHUM0R/status/1559253623397404673)