Planet Python
Last update: June 11, 2025 09:42 PM UTC
June 11, 2025
Talk Python Blog
Deep Dives Complete: 600K Words of Talk Python Insights
It’s been a real journey. I’m thrilled to announce that every single Talk Python To Me episode now has its own deep dive analysis. This short post is a bit of a celebration (and a heads-up!) about this new way to learn from the podcast.
What are deep dives?
A quick review: our deep dives are a quick, detailed way to revisit the best points of an episode without scrubbing through the audio or reading the full transcript. They include additional insights and resources beyond our episode page notes and links, such as “What to Know If You’re New to Python”, “Key Definitions and Terms”, and “Learning Resources.”
The Python Coding Stack
Are Python Dictionaries Ordered Data Structures?
Order the boxes from smallest to largest.
Stand in a queue in the order you arrived at the shop.
You don't need me to define what the word "order" means in either of these instructions above.
In Python, some data structures are ordered. Others aren't.
So, what about dictionaries? Are they ordered?
Some History First
Let's start with Python versions before Python 3.6. The answer is clear and unambiguous: No, dictionaries in Python versions before 3.6 are definitely not ordered.
Python 3.6 was released in 2016. Therefore, we're referring to older versions of Python that are no longer supported. Still, this historical detour is relevant to what's coming later.
Let's use a dictionary to place people who join a queue (or line, for those who use US English) in a shop. A dictionary is unlikely to be the best data structure for a queue of people, but we'll use it for this example (see Code Block #1 in the appendix).
The values associated with each key are empty lists, ready to hold any items that these customers purchase from the shop. But we won't need these lists in this article, so they'll remain empty.
I no longer have Python versions older than 3.6 installed on my computer. However, when you displayed a dictionary in those older versions, you could see the items printed out in any order (see Code Block #2 in the appendix).
You had no guarantee of the order of the items when fetching them one after the other, such as when you display the dictionary or iterate through it.
Dictionaries in Python 3.6 and 3.7 (and Later)
Python 3.6 changed how dictionaries are implemented in the main Python interpreter, CPython. This is the interpreter you're likely to be using, even if you don't know it.
As a result of this change, Python dictionaries now maintain the insertion order of key-value pairs. Therefore, the first item you add to a dictionary will always be the first displayed or yielded in an iteration. The second item you add will always be in second place, and so on. (See Code Block #3 in the appendix.)
This was merely an implementation detail in Python 3.6 that came about because of other changes in how dictionaries are implemented behind the scenes. However, in Python 3.7, this feature was included as part of the Python language specification. Therefore, from Python 3.7 onwards, the order of insertion is guaranteed. You can rely on it!
So, does that mean that Python dictionaries are now ordered data structures? Not so fast…
Dictionaries Preserve the Order of Insertion
Let's compare the dictionary you created with another one that has the same people but in a different order (see Code Block #4 in the appendix). The dictionaries `queue` and `another_queue` contain the same items, the same key-value pairs. But they're not in the same order.
However, Python still treats these dictionaries as equal. The fact that the two dictionaries have the same key-value pairs is sufficient to make these dictionaries equal. The order is not important.
Let's compare this characteristic with the equivalent one for lists by creating two lists (see Code Block #5 in the appendix).
These lists have the same names but in a different order. However, the order of the items is a fundamental characteristic of lists. Therefore, these lists are not considered equal. This feature is part of the definition of all sequences, such as lists, tuples, and strings.
So, even though dictionaries maintain the order of insertion since Python 3.6/3.7, the order is not itself a key characteristic of a dictionary. This is an important distinction between dictionaries and lists (and other sequences).
This is why the Python documentation and other Python resources typically use the phrase "dictionaries preserve the order of insertion" rather than saying that dictionaries are ordered.
Dictionaries are not ordered data structures in the same way sequences are.
How about `collections.OrderedDict`?
There's another mapping derived from dictionaries that you can find in the `collections` module: `OrderedDict`.
This data type existed in Python before the changes to dictionaries in Python 3.6 and 3.7. As its name implies, it's a dictionary that's also ordered. So, is the `OrderedDict` data type redundant now that standard dictionaries preserve the order of insertion?
Let's recreate the `queue` and `another_queue` data structures using `OrderedDict` instead of standard dictionaries (see Code Block #6 in the appendix).
Now, `queue` and `another_queue`, which are `OrderedDict` instances, are no longer equal even though they have the same key-value pairs. In an `OrderedDict`, the order matters. Recall that the order in a standard dictionary, even though it is preserved, doesn't matter: standard dictionaries with the same items but in a different order are still considered equal.
Note that I'm using a standard dictionary to create an `OrderedDict` for simplicity in this example. If you're still using an older version of Python (prior to 3.6), the dictionary will not maintain order, so this code will not work. Use a list of tuples instead, which is also a valid way to initialise an `OrderedDict` in modern versions of Python.
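For reference, here's a minimal sketch (using the same names as this article) of that list-of-tuples initialisation:

```
from collections import OrderedDict

# Initialising from a list of tuples fixes the order of the items,
# even on Python versions where standard dicts don't preserve it.
queue = OrderedDict([("James", []), ("Kate", []), ("Andy", []), ("Isabelle", [])])
```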
There are also other differences between `OrderedDict` and standard dictionaries. Therefore, you may still find a use for `collections.OrderedDict`.
Do you want to join a forum to discuss Python further with other Pythonistas? Upgrade to a paid subscription here on The Python Coding Stack to get exclusive access to The Python Coding Place's members' forum. More Python. More discussions. More fun.
And you'll also be supporting this publication. I put plenty of time and effort into crafting each article. Your support will help me keep this content coming regularly and, importantly, will help keep it free for everyone.
Final Words
Different data structures have different characteristics. That's the point of having a large selection of data structures. There isn't one data structure to rule them all. Different needs require different data structures.
Sequences are ordered. The order of items within a sequence matters. That's why you can use an index to fetch an item based on its position in a sequence. Therefore, it makes sense that sequences with the same items but in a different order are considered different.
However, the defining characteristic of a dictionary is the mapping between a key and its value. You find a value by using its key in a dictionary. The preservation of the insertion order in dictionaries is a nice-to-have feature, but it's not central to how dictionaries work.
PS: You'll need today's material in the next article I'll publish in a few days on The Python Coding Stack.
Photo by Alina Chernii: https://www.pexels.com/photo/people-waiting-and-standing-by-wall-25211989/
Code in this article uses Python 3.13
The code images used in this article are created using Snappify. [Affiliate link]
You can also support this publication by making a one-off contribution of any amount you wish.
For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!
Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.
And you can find out more about me at stephengruppetta.com
Appendix: Code Blocks
Code Block #1
queue = {"James": [], "Kate": [], "Andy": [], "Isabelle": []}
Code Block #2
queue
# Display order was arbitrary before Python 3.6
# {'Kate': [], 'James': [], 'Isabelle': [], 'Andy': []}
Code Block #3
queue = {"James": [], "Kate": [], "Andy": [], "Isabelle": []}
queue
# Starting from Python 3.7, the order is guaranteed
# {'James': [], 'Kate': [], 'Andy': [], 'Isabelle': []}
Code Block #4
queue = {"James": [], "Kate": [], "Andy": [], "Isabelle": []}
another_queue = {"Kate": [], "James": [], "Isabelle": [], "Andy": []}
queue == another_queue
# True
Code Block #5
queue_list = ["James", "Kate", "Andy", "Isabelle"]
another_queue_list = ["Kate", "James", "Isabelle", "Andy"]
queue_list == another_queue_list
# False
Code Block #6
from collections import OrderedDict
queue = OrderedDict({"James": [], "Kate": [], "Andy": [], "Isabelle": []})
another_queue = OrderedDict({"Kate": [], "James": [], "Isabelle": [], "Andy": []})
queue == another_queue
# False
Real Python
Defining Your Own Python Function
A Python function is a named block of code that performs specific tasks and can be reused in other parts of your code. Python has several built-in functions that are always available, and you can also create your own. These are known as user-defined functions.
To define a function in Python, you use the `def` keyword, followed by the function name and an optional list of parameters enclosed in a required pair of parentheses. You can call and reuse a function by using its name, a pair of parentheses, and the necessary arguments.
Learning to define and call functions is a fundamental skill for any Python developer. Functions help organize your code and make it more modular, reusable, and easier to maintain.
By the end of this tutorial, you’ll understand that:
- A Python function is a self-contained block of code designed to perform a specific task, which you can call and reuse in different parts of your code.
- You can define a Python function with the `def` keyword, followed by the function name, parentheses with optional parameters, a colon, and then an indented code block.
- You call a Python function by writing its name followed by parentheses, enclosing any necessary arguments, to execute its code block.
Understanding functions is key to writing organized and efficient Python code. By learning to define and use your own functions, you’ll be able to manage complexity and make your code easier to read.
Get Your Code: Click here to download the free sample code that shows you how to define your own function in Python.
Take the Quiz: Test your knowledge with our interactive “Defining Your Own Python Function” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Defining Your Own Python Function: In this quiz, you'll test your understanding of defining and calling Python functions. You'll revisit the `def` keyword, parameters, arguments, and more.
Getting to Know Functions in Python
In mathematics, a function is a relationship or mapping between one or more inputs and a set of outputs. This concept is typically represented by the following equation:

z = f(x, y)
Here, f() is a function that operates on the variables x and y, and it generates a result that’s assigned to z. You can read this formula as z is a function of x and y. But how does this work in practice? As an example, say that you have a concrete function that looks something like the following:

f(x, y) = x + y
Now, say that x is equal to 4 and y is 2. You can find the value of z by evaluating the function. In other words, you add 4 + 2 to get 6 as a result. That’s it!
Functions are also used in programming. In fact, functions are so fundamental to software development that virtually all modern, mainstream programming languages support them. In programming, a function is a self-contained block of code that encapsulates a specific task under a descriptive name that you can reuse in different places of your code.
Many programming languages have built-in functions, and Python is no exception. For example, Python’s built-in `id()` function takes an object as an argument and returns its unique identifier:
```
>>> language = "Python"
>>> id(language)
4390667816
```
The integer number in this output uniquely identifies the string object you’ve used as an argument. In CPython, this number represents the memory address where the object is stored.
Note how similar this notation is to what you find in mathematics. In this example, `language` is equivalent to x or y, and the pair of parentheses calls the function to run its code, comparable to evaluating a math function. You’ll learn more about calling Python functions in a moment.
Note: To learn more about built-in functions, check out Python’s Built-in Functions: A Complete Exploration.
Similarly, the built-in `len()` function takes a data collection as an argument and returns its length:
```
>>> numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> items = len(numbers)
>>> items
10
```
In this example, the list of numbers has ten values, so `len()` returns `10`. You assign this value to the `items` variable, which is equivalent to z in the mathematical equation you saw before.
Most programming languages—including Python—allow you to define your own functions. When you define a Python function, you decide whether it takes arguments. You’re also responsible for how the function internally computes its result.
Once you’ve defined a function, you can call it from different parts of your code to execute its specific computation or action. When the function finishes running, it returns to the location where you called it.
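To make that concrete, here's a minimal sketch (mine, not from the full tutorial) that defines and calls a function mirroring the z = f(x, y) = x + y example above:

```
def add(x, y):
    """Return the sum of x and y."""
    return x + y

z = add(4, 2)  # call the function with arguments 4 and 2
print(z)       # 6
```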
Read the full article at https://realpython.com/defining-your-own-python-function/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Defining Your Own Python Function
In this quiz, you’ll test your understanding of Defining Your Own Python Function.
You’ll revisit how to define a function with the `def` keyword, specify parameters, pass arguments, and call your functions to make code modular and reusable. You’ll also see how functions help organize and maintain your Python projects.
Ready to demonstrate your skills? Let’s begin!
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python GUIs
6th Edition - Create GUI Applications with Python & Qt, Released — PyQt6 & PySide6 Books updated for 2025 with model view controller architecture, new Python/Qt features and more examples
The 6th edition of my book Create GUI Applications with Python & Qt is now available, for PyQt6 & PySide6.
This update brings the book up to date with the latest changes in PyQt6 & PySide6, and also updates code to make use of newer features in Python. Many of the chapters have been updated and extended with more examples of form layouts, built-in dialogs and architecture, particularly using Model View Controller (MVC) architecture.
You can buy the latest editions below --
- PyQt6 - PyQt6 Book, 6th Edition, Create GUI Applications with Python & Qt6
- PySide6 - PySide6 Book, 6th Edition, Create GUI Applications with Python & Qt6
As always, if you've previously bought a copy of the book you get these updates for free! Just go to your account downloads page and enter the email you used for the purchase.
If you bought the book elsewhere (in paperback or digital) you can register to get these updates too -- just email your receipt to register@pythonguis.com
Enjoy!
June 10, 2025
PyCoder’s Weekly
Issue #685: Polars Data Validation, reversed, Counting Words, and More (June 10, 2025)
#685 – JUNE 10, 2025
View in Browser »
Data Validation Libraries for Polars (2025 Edition)
Given that Polars is so hot right now and that data validation is an important part of a data pipeline, this post explores five Python data validation libraries that support Polars DataFrames. Through comparison and contrast, it identifies which of them are best for specific use cases.
POSIT-DEV.GITHUB.IO • Shared by Richard Iannone
Looping in Reverse
Many iterables can be reversed using the built-in `reversed` function, whereas Python’s slicing syntax only works on sequences. Learn how to reverse your data.
TREY HUNNER
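As a quick sketch of that distinction (mine, not from the linked post):

```
numbers = [1, 2, 3]
print(list(reversed(numbers)))  # [3, 2, 1] -- works on any reversible iterable
print(numbers[::-1])            # [3, 2, 1] -- slicing works on sequences only
```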
Prevent Postgres Slowdowns on Python Apps With This Checklist
Avoid performance regressions in your Python app by staying on top of Postgres maintenance. This monthly checklist outlines what to monitor, how to catch slow queries early, and ways to ensure indexes, autovacuum, and I/O are performing as expected →
PGANALYZE sponsor
Python Project: Build a Word Count Command-Line App
A self-paced coding challenge in which you’ll practice your Python skills by building a clone of the popular word count utility (wc) on Unix.
REAL PYTHON course
Python Jobs
Sr. Software Developer (Python, Healthcare) (USA)
Senior Software Engineer – Quant Investment Platform (LA or Dallas) (Los Angeles, CA, USA)
Causeway Capital Management LLC
Articles & Tutorials
Rodrigo Girão Serrão: Python Training, Itertools, and Idioms
Once you’ve learned the vocabulary and syntax of the Python language, how do you progress into learning the right combinations to put into your code? How can Python’s built-in itertools library enhance your skills? This week on the show, we speak with Rodrigo Girão Serrão about teaching Python through his blog and his passion for the itertools library.
REAL PYTHON podcast
Running `live_server` Tests Last With pytest
You don’t want to go through all your slow tests just to have a fast one fail. Learn how to order your pytest execution so that the faster tests run first.
TIM KAMANIN
Working With INI Files Using configparser
Many programs require configuration, and a popular format is the INI file. Python’s `configparser` library can read these files; learn how to use it.
MIKE DRISCOLL
Thousands Separators
A quick TIL post on how to include thousands separators when converting numbers to strings with an f-string modifier.
RODRIGO GIRÃO SERRÃO
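For reference, the two thousands-separator modifiers look like this (a quick sketch, not taken from the TIL post):

```
print(f"{1234567:,}")  # 1,234,567
print(f"{1234567:_}")  # 1_234_567
```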
Optimizing Django Docker Builds With Astral’s uv
Learn how to speed up and harden your Django Docker builds using Astral’s uv for faster installs, better caching, and reproducible environments.
COGIT8.ORG • Shared by Rob Hudson
How to Find an Absolute Value in Python
Learn how to calculate the Python absolute value with `abs()`, implement the math behind it from scratch, and customize it in your own classes.
REAL PYTHON
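As a taste of what the tutorial covers, here's a minimal sketch (mine) of the built-in and the `__abs__` customization hook:

```
print(abs(-5))      # 5
print(abs(3 + 4j))  # 5.0 (magnitude of a complex number)

class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __abs__(self):  # called by abs()
        return (self.x**2 + self.y**2) ** 0.5

print(abs(Vector(3, 4)))  # 5.0
```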
Quiz: How to Find an Absolute Value in Python
In this quiz, you’ll test your knowledge of calculating absolute values in Python, mastering both built-in functions and common use cases to improve your coding accuracy.
REAL PYTHON
How Local Variables Work in Python Bytecode
To better understand the internals of an interpreter, this article shows you how local variables get stored and how stacks and frames work.
FROM SCRATCH CODE
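You can peek at this yourself with the standard `dis` module; here's a quick sketch (mine, not from the article):

```
import dis

def greet(name):
    message = f"Hello, {name}!"  # 'name' and 'message' are local variables
    return message

dis.dis(greet)  # shows local-variable opcodes such as LOAD_FAST / STORE_FAST
```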
Personal Highlights of PyCon Italy 2025
Rodrigo shares his personal highlights of PyCon Italy 2025. He covers some lightning talks, a Python quiz, community events, and more.
RODRIGO GIRÃO SERRÃO
Events
Weekly Real Python Office Hours Q&A (Virtual)
June 11, 2025
REALPYTHON.COM
Python Sucre Summit
June 14 to June 15, 2025
PYTHON-SUCRE-SUMMIT-2025.VERCEL.APP
PyDelhi User Group Meetup
June 14, 2025
MEETUP.COM
DFW Pythoneers 2nd Saturday Teaching Meeting
June 14, 2025
MEETUP.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #685.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Real Python
Python Continuous Integration and Deployment Using GitHub Actions
Creating software is an achievement worth celebrating. But software is never static. Bugs need to be fixed, features need to be added, and security demands regular updates. In today’s landscape, with agile methodologies dominating, robust DevOps systems are crucial for managing an evolving codebase. That’s where GitHub Actions shine, empowering Python developers to automate workflows and ensure their projects adapt seamlessly to change.
GitHub Actions for Python empowers developers to automate workflows efficiently. This enables teams to maintain software quality while adapting to constant change.
Continuous Integration and Continuous Deployment (CI/CD) systems help produce well-tested, high-quality software and streamline deployment. GitHub Actions makes CI/CD accessible to all, allowing automation and customization of workflows directly in your repository. This free service enables developers to execute their software development processes efficiently, improving productivity and code reliability.
In this video course, you’ll learn how to:
- Use GitHub Actions and workflows
- Automate linting, testing, and deployment of a Python project
- Secure credentials used for automation
- Automate security and dependency updates
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
PyCharm
Faster Python: Concurrency in async/await and threading
If you have been coding with Python for a while, especially if you have been using frameworks and libraries such as FastAPI and discord.py, then you have probably been using `async`/`await` or `asyncio`. You may have heard statements like “multithreading in Python isn’t real”, and you may also know about the famous (or infamous) GIL in Python. Given all this skepticism about multithreading in Python, you might be wondering what the difference between `async`/`await` and multithreading actually is, especially in Python programming. If so, this is the blog post for you!
What is multithreading?
In programming, multithreading refers to the ability of a program to execute multiple sequential tasks (called threads) concurrently. These threads can run on a single processor core or across multiple cores. However, due to the limitation of the Global Interpreter Lock (GIL), multithreading in Python is only processed on a single core. The exception is nogil (also called thread-free) Python, which removes the GIL and will be covered in part 2 of this series. For this blog post, we will assume that the GIL is always present.
What is concurrency?
Concurrency in programming means that the computer is doing more than one thing at a time, or seems to be doing more than one thing at a time, even if the different tasks are executed on a single processor. By managing resources and interactions between different parts of a program, different tasks are allowed to make progress independently and in overlapping time intervals.
Both `asyncio` and `threading` appear concurrent in Python
Loosely speaking, both the `asyncio` and `threading` Python libraries enable the appearance of concurrency. However, your CPUs are not doing multiple things at the exact same time. It just seems like they are.
Imagine you are hosting a multi-course dinner for some guests. Some of the dishes take time to cook, for example, the pie that needs to be baked in the oven or the soup simmering on the stove. While we are waiting for those to cook, we do not just stand around and wait. We do something else in the meantime. This is similar to concurrency in Python. Sometimes your Python program is waiting for something to get done. For example, some input/output (I/O) operations are being handled by the operating system, and during that time the Python program is just waiting. We can then use async to let another part of the program run while it waits.

The difference is who is in charge
If both `asyncio` and `threading` appear concurrent, what is the difference between them? Well, the main difference is a matter of who is in charge of which task is running and when. For `async`/`await`, the approach is sometimes called cooperative concurrency. A coroutine or future gives up its control to another coroutine or future to let others have a go. On the other hand, in `threading`, the operating system's scheduler is in control of which thread is running.
Cooperative concurrency is like a meeting with a microphone being passed around for people to speak. Whoever has the microphone can talk, and when they are done or have nothing else to say, they will pass the microphone to the next person. In contrast, multithreading is a meeting where there is a chairperson who will determine who has the floor at any given time.
Writing concurrent code in Python
Let’s have a look at how concurrency works in Python by writing some example code. We will create a fast food restaurant simulation using both `asyncio` and `threading`.
How `async`/`await` works in Python
The `asyncio` package was introduced in Python 3.4, while the `async` and `await` keywords were introduced in Python 3.5. One of the main things that make `async`/`await` possible is the use of coroutines. Coroutines in Python are actually generators repurposed to be able to pause and pass control back to the main function.
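Here's a minimal sketch (not from the original post) of those generator mechanics: a generator pauses at `yield`, hands control back to its caller, and resumes where it left off.

```
def coroutine_like():
    print("start")
    x = yield  # pause here and hand control back to the caller
    print(f"resumed with {x}")

gen = coroutine_like()
next(gen)  # runs until the first yield: prints "start"
try:
    gen.send("hello")  # resumes the generator: prints "resumed with hello"
except StopIteration:
    pass  # the generator finished after resuming
```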
Now, imagine a burger restaurant where only one staff member is working. The orders are prepared according to a first-in-first-out queue, and no async operations can be performed:
```
import time

def make_burger(order_num):
    print(f"Preparing burger #{order_num}...")
    time.sleep(5)  # time for making the burger
    print(f"Burger made #{order_num}")

def main():
    for i in range(3):
        make_burger(i)

if __name__ == "__main__":
    s = time.perf_counter()
    main()
    elapsed = time.perf_counter() - s
    print(f"Orders completed in {elapsed:0.2f} seconds.")
```
This will take a while to finish:
```
Preparing burger #0...
Burger made #0
Preparing burger #1...
Burger made #1
Preparing burger #2...
Burger made #2
Orders completed in 15.01 seconds.
```
Now, imagine the restaurant brings in more staff, so that it can perform work concurrently:
```
import asyncio
import time

async def make_burger(order_num):
    print(f"Preparing burger #{order_num}...")
    await asyncio.sleep(5)  # time for making the burger
    print(f"Burger made #{order_num}")

async def main():
    order_queue = []
    for i in range(3):
        order_queue.append(make_burger(i))
    await asyncio.gather(*order_queue)

if __name__ == "__main__":
    s = time.perf_counter()
    asyncio.run(main())
    elapsed = time.perf_counter() - s
    print(f"Orders completed in {elapsed:0.2f} seconds.")
```
We see the difference between the two:
```
Preparing burger #0...
Preparing burger #1...
Preparing burger #2...
Burger made #0
Burger made #1
Burger made #2
Orders completed in 5.00 seconds.
```
Using the functions provided by `asyncio`, like `run` and `gather`, and the keywords `async` and `await`, we have created coroutines that can make burgers concurrently.
Now, let’s take a step further and create a more complicated simulation. Imagine we only have two workers, and we can only make two burgers at a time.
```
import asyncio
import time

order_queue = asyncio.Queue()

def take_order():
    for i in range(3):
        order_queue.put_nowait(make_burger(i))

async def make_burger(order_num):
    print(f"Preparing burger #{order_num}...")
    await asyncio.sleep(5)  # time for making the burger
    print(f"Burger made #{order_num}")

class Staff:
    def __init__(self, name):
        self.name = name

    async def working(self):
        while order_queue.qsize() > 0:
            print(f"{self.name} is working...")
            task = await order_queue.get()
            await task
            print(f"{self.name} finished a task...")

async def main():
    staff1 = Staff(name="John")
    staff2 = Staff(name="Jane")
    take_order()
    await asyncio.gather(staff1.working(), staff2.working())

if __name__ == "__main__":
    s = time.perf_counter()
    asyncio.run(main())
    elapsed = time.perf_counter() - s
    print(f"Orders completed in {elapsed:0.2f} seconds.")
```
Here we will use a queue to hold the tasks, and the staff will pick them up.
```
John is working...
Preparing burger #0...
Jane is working...
Preparing burger #1...
Burger made #0
John finished a task...
John is working...
Preparing burger #2...
Burger made #1
Jane finished a task...
Burger made #2
John finished a task...
Orders completed in 10.00 seconds.
```
In this example, we use `asyncio.Queue` to store the tasks, but it will be even more useful when we have multiple types of tasks, as shown in the following example.
```
import asyncio
import time

task_queue = asyncio.Queue()
order_num = 0

async def take_order():
    global order_num
    order_num += 1
    print(f"Order burger and fries for order #{order_num:04d}:")
    burger_num = input("Number of burgers:")
    for i in range(int(burger_num)):
        await task_queue.put(make_burger(f"{order_num:04d}-burger{i:02d}"))
    fries_num = input("Number of fries:")
    for i in range(int(fries_num)):
        await task_queue.put(make_fries(f"{order_num:04d}-fries{i:02d}"))
    print(f"Order #{order_num:04d} queued.")
    await task_queue.put(take_order())

async def make_burger(order_num):
    print(f"Preparing burger #{order_num}...")
    await asyncio.sleep(5)  # time for making the burger
    print(f"Burger made #{order_num}")

async def make_fries(order_num):
    print(f"Preparing fries #{order_num}...")
    await asyncio.sleep(2)  # time for making fries
    print(f"Fries made #{order_num}")

class Staff:
    def __init__(self, name):
        self.name = name

    async def working(self):
        while True:
            if task_queue.qsize() > 0:
                print(f"{self.name} is working...")
                task = await task_queue.get()
                await task
                print(f"{self.name} finish task...")
            else:
                await asyncio.sleep(1)  # rest

async def main():
    task_queue.put_nowait(take_order())
    staff1 = Staff(name="John")
    staff2 = Staff(name="Jane")
    await asyncio.gather(staff1.working(), staff2.working())

if __name__ == "__main__":
    s = time.perf_counter()
    asyncio.run(main())
    elapsed = time.perf_counter() - s
    print(f"Orders completed in {elapsed:0.2f} seconds.")
```
In this example, there are multiple tasks, including making fries, which takes less time, and taking orders, which involves getting input from the user.
Notice that the program stops and waits for the user's input, and even the staff who are not taking the order stop working in the background. This is because the `input` function is not async and therefore is not awaited. Remember, control in async code is only released when something is awaited. To fix that, we can replace:
input("Number of burgers:")
With
await asyncio.to_thread(input, "Number of burgers:")
We do the same for fries (see the code below). Note that the program now runs in an infinite loop. If we need to stop it, we can deliberately crash the program with an invalid input.
```
import asyncio
import time

task_queue = asyncio.Queue()
order_num = 0

async def take_order():
    global order_num
    order_num += 1
    print(f"Order burger and fries for order #{order_num:04d}:")
    burger_num = await asyncio.to_thread(input, "Number of burgers:")
    for i in range(int(burger_num)):
        await task_queue.put(make_burger(f"{order_num:04d}-burger{i:02d}"))
    fries_num = await asyncio.to_thread(input, "Number of fries:")
    for i in range(int(fries_num)):
        await task_queue.put(make_fries(f"{order_num:04d}-fries{i:02d}"))
    print(f"Order #{order_num:04d} queued.")
    await task_queue.put(take_order())

async def make_burger(order_num):
    print(f"Preparing burger #{order_num}...")
    await asyncio.sleep(5)  # time for making the burger
    print(f"Burger made #{order_num}")

async def make_fries(order_num):
    print(f"Preparing fries #{order_num}...")
    await asyncio.sleep(2)  # time for making fries
    print(f"Fries made #{order_num}")

class Staff:
    def __init__(self, name):
        self.name = name

    async def working(self):
        while True:
            if task_queue.qsize() > 0:
                print(f"{self.name} is working...")
                task = await task_queue.get()
                await task
                print(f"{self.name} finish task...")
            else:
                await asyncio.sleep(1)  # rest

async def main():
    task_queue.put_nowait(take_order())
    staff1 = Staff(name="John")
    staff2 = Staff(name="Jane")
    await asyncio.gather(staff1.working(), staff2.working())

if __name__ == "__main__":
    s = time.perf_counter()
    asyncio.run(main())
    elapsed = time.perf_counter() - s
    print(f"Orders completed in {elapsed:0.2f} seconds.")
```
By using `asyncio.to_thread`, we have put the `input` function into a separate thread (see this reference). Do note, however, that this trick only unblocks I/O-bound tasks if the Python GIL is present.
If you run the code above, you may also see that the standard I/O in the terminal gets scrambled. The user I/O and the record of what is happening should be kept separate. We can put the record into a log to inspect later.
```
import asyncio
import logging
import time

logger = logging.getLogger(__name__)
logging.basicConfig(filename='pyburger.log', level=logging.INFO)

task_queue = asyncio.Queue()
order_num = 0
closing = False

async def take_order():
    global order_num, closing
    try:
        order_num += 1
        logger.info(f"Taking Order #{order_num:04d}...")
        print(f"Order burger and fries for order #{order_num:04d}:")
        burger_num = await asyncio.to_thread(input, "Number of burgers:")
        for i in range(int(burger_num)):
            await task_queue.put(make_burger(f"{order_num:04d}-burger{i:02d}"))
        fries_num = await asyncio.to_thread(input, "Number of fries:")
        for i in range(int(fries_num)):
            await task_queue.put(make_fries(f"{order_num:04d}-fries{i:02d}"))
        logger.info(f"Order #{order_num:04d} queued.")
        print(f"Order #{order_num:04d} queued, please wait.")
        await task_queue.put(take_order())
    except ValueError:
        print("Goodbye!")
        logger.info("Closing down... stop taking orders and finish all tasks.")
        closing = True

async def make_burger(order_num):
    logger.info(f"Preparing burger #{order_num}...")
    await asyncio.sleep(5)  # time for making the burger
    logger.info(f"Burger made #{order_num}")

async def make_fries(order_num):
    logger.info(f"Preparing fries #{order_num}...")
    await asyncio.sleep(2)  # time for making fries
    logger.info(f"Fries made #{order_num}")

class Staff:
    def __init__(self, name):
        self.name = name

    async def working(self):
        while True:
            if task_queue.qsize() > 0:
                logger.info(f"{self.name} is working...")
                task = await task_queue.get()
                await task
                task_queue.task_done()
                logger.info(f"{self.name} finish task.")
            elif closing:
                return
            else:
                await asyncio.sleep(1)  # rest

async def main():
    global task_queue
    task_queue.put_nowait(take_order())
    staff1 = Staff(name="John")
    staff2 = Staff(name="Jane")
    print("Welcome to Pyburger!")
    logger.info("Ready for business!")
    await asyncio.gather(staff1.working(), staff2.working())
    logger.info("All tasks finished. Closing now.")

if __name__ == "__main__":
    s = time.perf_counter()
    asyncio.run(main())
    elapsed = time.perf_counter() - s
    logger.info(f"Orders completed in {elapsed:0.2f} seconds.")
```
In this final code block, we have logged the simulation information in `pyburger.log` and reserved the terminal for messages to customers. We also catch invalid input during the ordering process and switch a `closing` flag to `True` if the input is invalid, assuming the user wants to quit. Once the `closing` flag is set to `True`, the worker will `return`, ending the coroutine's infinite `while` loop.
How does `threading` work in Python?
In the example above, we put an I/O-bound task into another thread. You may wonder if we can put all tasks into separate threads and let them run concurrently. Let's try using `threading` instead of `asyncio`.
Consider the code we have as shown below, where we create burgers concurrently with no limitation put in place:
```
import asyncio
import time

async def make_burger(order_num):
    print(f"Preparing burger #{order_num}...")
    await asyncio.sleep(5)  # time for making the burger
    print(f"Burger made #{order_num}")

async def main():
    order_queue = []
    for i in range(3):
        order_queue.append(make_burger(i))
    await asyncio.gather(*order_queue)

if __name__ == "__main__":
    s = time.perf_counter()
    asyncio.run(main())
    elapsed = time.perf_counter() - s
    print(f"Orders completed in {elapsed:0.2f} seconds.")
```

Instead of creating async coroutines to make the burgers, we can just send functions down different threads like this:

```
import threading
import time

def make_burger(order_num):
    print(f"Preparing burger #{order_num}...")
    time.sleep(5)  # time for making the burger
    print(f"Burger made #{order_num}")

def main():
    order_queue = []
    for i in range(3):
        task = threading.Thread(target=make_burger, args=(i,))
        order_queue.append(task)
        task.start()
    for task in order_queue:
        task.join()

if __name__ == "__main__":
    s = time.perf_counter()
    main()
    elapsed = time.perf_counter() - s
    print(f"Orders completed in {elapsed:0.2f} seconds.")
```
In the first `for` loop in `main`, tasks are created in different threads and get a kickstart. The second `for` loop makes sure all the burgers are made before the program moves on (that is, before it returns to `main`).
It is more complicated when we have only two staff members. Each staff member is represented by a thread, and the threads take tasks from a normal list where they are all stored.
```
import threading
import time

order_queue = []

def take_order():
    for i in range(3):
        order_queue.append(make_burger(i))

def make_burger(order_num):
    def making_burger():
        print(f"Preparing burger #{order_num}...")
        time.sleep(5)  # time for making the burger
        print(f"Burger made #{order_num}")
    return making_burger

def working():
    while len(order_queue) > 0:
        print(f"{threading.current_thread().name} is working...")
        task = order_queue.pop(0)
        task()
        print(f"{threading.current_thread().name} finish task...")

def main():
    take_order()
    staff1 = threading.Thread(target=working, name="John")
    staff1.start()
    staff2 = threading.Thread(target=working, name="Jane")
    staff2.start()
    staff1.join()
    staff2.join()

if __name__ == "__main__":
    s = time.perf_counter()
    main()
    elapsed = time.perf_counter() - s
    print(f"Orders completed in {elapsed:0.2f} seconds.")
```
When you run the code above, an error may occur in one of the threads, saying that it is trying to get a task from an empty list. You may wonder why this is the case, since we have a condition in the `while` loop that lets it continue only if the `order_queue` is not empty. Nevertheless, we still get an error because we have encountered a race condition.
Race conditions
Race conditions can occur when multiple threads attempt to access the same resource or data at the same time and cause problems in the system. The timing and order of when the resource is accessed are important to the program logic, and unpredictable timing or the interleaving of multiple threads accessing and modifying shared data can cause errors.
To solve the race condition in our program, we will deploy a lock on the queue:
```
queue_lock = threading.Lock()
```
In `working`, we need to make sure we have exclusive access to the queue when checking its length and getting tasks from it. While we hold the lock, other threads cannot access the queue:
```
def working():
    while True:
        with queue_lock:
            if len(order_queue) == 0:
                return
            else:
                task = order_queue.pop(0)
        print(f"{threading.current_thread().name} is working...")
        task()
        print(f"{threading.current_thread().name} finish task...")
```

Based on what we have learned so far, we can complete our final code with threading like this:

```
import logging
import threading
import time

logger = logging.getLogger(__name__)
logging.basicConfig(filename="pyburger_threads.log", level=logging.INFO)

queue_lock = threading.Lock()
task_queue = []
order_num = 0
closing = False

def take_order():
    global order_num, closing
    try:
        order_num += 1
        logger.info(f"Taking Order #{order_num:04d}...")
        print(f"Order burger and fries for order #{order_num:04d}:")
        burger_num = input("Number of burgers:")
        for i in range(int(burger_num)):
            with queue_lock:
                task_queue.append(make_burger(f"{order_num:04d}-burger{i:02d}"))
        fries_num = input("Number of fries:")
        for i in range(int(fries_num)):
            with queue_lock:
                task_queue.append(make_fries(f"{order_num:04d}-fries{i:02d}"))
        logger.info(f"Order #{order_num:04d} queued.")
        print(f"Order #{order_num:04d} queued, please wait.")
        with queue_lock:
            task_queue.append(take_order)
    except ValueError:
        print("Goodbye!")
        logger.info("Closing down... stop taking orders and finish all tasks.")
        closing = True

def make_burger(order_num):
    def making_burger():
        logger.info(f"Preparing burger #{order_num}...")
        time.sleep(5)  # time for making the burger
        logger.info(f"Burger made #{order_num}")
    return making_burger

def make_fries(order_num):
    def making_fries():
        logger.info(f"Preparing fries #{order_num}...")
        time.sleep(2)  # time for making fries
        logger.info(f"Fries made #{order_num}")
    return making_fries

def working():
    while True:
        with queue_lock:
            if len(task_queue) == 0:
                if closing:
                    return
                else:
                    task = None
            else:
                task = task_queue.pop(0)
        if task:
            logger.info(f"{threading.current_thread().name} is working...")
            task()
            logger.info(f"{threading.current_thread().name} finish task...")
        else:
            time.sleep(1)  # rest

def main():
    print("Welcome to Pyburger!")
    logger.info("Ready for business!")
    task_queue.append(take_order)
    staff1 = threading.Thread(target=working, name="John")
    staff1.start()
    staff2 = threading.Thread(target=working, name="Jane")
    staff2.start()
    staff1.join()
    staff2.join()
    logger.info("All tasks finished. Closing now.")

if __name__ == "__main__":
    s = time.perf_counter()
    main()
    elapsed = time.perf_counter() - s
    logger.info(f"Orders completed in {elapsed:0.2f} seconds.")
```
If you compare the two code snippets using `asyncio` and `threading`, they should produce similar results. You may wonder which one is better and why you should choose one over the other.
Practically, writing `asyncio` code is easier than multithreading because we don't have to take care of potential race conditions and deadlocks ourselves. Control is passed between coroutines at well-defined await points by default, so no locks are needed. However, Python threads do have the potential to run in parallel, just not most of the time with the GIL in place. We can revisit this when we talk about nogil (thread-free) Python in the next blog post.
Benefiting from concurrency
Why do we want to use concurrency in programming? There’s one main reason: speed. Like we have illustrated above, tasks can be completed faster if we can cut down the waiting time. There are different types of waiting in computing, and for each one, we tend to use different methods to save time.
I/O-bound tasks
A task or program is considered input/output (I/O) bound when its execution speed is primarily limited by the speed of I/O operations, such as reading from a file or network, or waiting for user input. I/O operations are generally slower than other CPU operations, and therefore, tasks that involve lots of them can take significantly more time. Typical examples of these tasks include reading data from a database, handling web requests, or working with large files.
Using `async`/`await` concurrency can help optimize the waiting time during I/O-bound tasks by unblocking the processing sequence and letting other tasks be taken care of while waiting.
Concurrency with `async`/`await` is beneficial in many Python applications, such as web applications that involve a lot of communication with databases and handling of web requests. GUIs (graphical user interfaces) can also benefit from `async`/`await` concurrency by allowing background tasks to be performed while the user is interacting with the application.
CPU-bound tasks
A task or program is considered CPU-bound when its execution speed is primarily limited by the speed of the CPU. Typical examples include image or video processing, like resizing or editing, and complex mathematical calculations, such as matrix multiplication or training machine learning models.
Contrary to I/O-bound tasks, CPU-bound tasks can rarely be optimised by using `async`/`await` concurrency, as the CPU is already busy working on the tasks. If you have more than one CPU core in your machine, or if you can offload some of these tasks to one or more GPUs, then CPU-bound tasks can be finished faster by creating more threads and performing multiprocessing. Multiprocessing can optimise how these CPUs and GPUs are used, which is also why many machine learning and AI models these days are trained on multiple GPUs.
This, however, is tough to perform with pure Python code, as Python itself is designed to provide abstract layers so users do not have to control the lower-level computation processes. Moreover, Python’s GIL limits the sharing of Python resources across multiple threads on your computer. Recently, Python 3.13 made it possible to remove the GIL, allowing for true multithreading. We will discuss the GIL, and the ability to go without it, in the next blog post.
Sometimes, none of the methods we mentioned above can speed up CPU-bound tasks sufficiently. When that is the case, the CPU-bound tasks may need to be broken into smaller ones so that they can be performed simultaneously over multiple threads, multiple processors, or even multiple machines. This is parallel processing, and you may have to rewrite your code completely to implement it. In Python, the `multiprocessing` package offers both local and remote concurrency, which can be used to work around the limitation of the GIL. We will also look at some examples of that in the next blog post.
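For a taste of what that looks like, here is a minimal sketch (not from this post) that uses `multiprocessing.Pool` to spread a CPU-bound function across processes, each with its own interpreter and GIL:

```
import multiprocessing
import time

def crunch(n):
    # A CPU-bound task: pure computation with no I/O waiting.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    s = time.perf_counter()
    with multiprocessing.Pool() as pool:
        # Each task runs in its own process, so they can use separate cores.
        results = pool.map(crunch, [10_000_000] * 4)
    print(f"Computed {len(results)} results in {time.perf_counter() - s:0.2f} seconds.")
```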
Debugging concurrent code in PyCharm
Debugging async or concurrent code can be hard, as the program is not executed in sequence, meaning it is hard to see where and when the code is being executed. Many developers use `print` to help trace the flow of the code, but this approach is not recommended: it is very clumsy, using it to investigate a complex program, like a concurrent one, isn't easy, and it is messy to tidy up afterwards.
Many IDEs provide debuggers, which are great for inspecting variables and the flow of the program. Debuggers also provide a clear stack trace across multiple threads. Let's see how we can track the `task_queue` of our example restaurant simulation in PyCharm.
First, we will put down some breakpoints in our code. You can do that by clicking the line number of the line where you want the debugger to pause. The line number will turn into a red dot, indicating that a breakpoint is set there. We will put breakpoints at lines 23, 27, and 65, where the `task_queue` is changed in different threads.


Then we can run the program in debug mode by clicking the little bug icon in the top right.

After clicking on the icon, the Debug window will open up. The program will run until it hits the first breakpoint highlighted in the code.

Here we see the `John` thread is trying to pick up the task, and line 65 is highlighted. At this point, the highlighted line has not been executed yet. This is useful when we want to inspect the variables before entering the breakpoint.
Let’s inspect what’s in the `task_queue`. You can do so simply by starting to type in the Debug window, as shown below.

Select or type in “task_queue”, and then press Enter. You will see that the `take_order` task is in the queue.

Now, let’s execute the breakpoint by clicking the Step in button, as shown below.

After pressing that and inspecting the Special Variables window that pops up, we see that the task variable is now `take_order` in the `John` thread.

When querying the `task_queue` again, we see that the list is now empty.

Now let’s click the Resume Program button and let the program run.

When the program hits the user input part, PyCharm will bring us to the Console window so we can provide the input. Let’s say we want two burgers. Type “2” and press Enter.

Now we hit the second breakpoint. If we click on Threads & Variables to go back to that window, we’ll see that `burger_num` is two, as we entered.

Now let’s step into the breakpoint and inspect the `task_queue`, just like we did before. We see that one `make_burger` task has been added.

We let the program run again, and if we step into the breakpoint when it stops, we see that `Jane` is picking up the task.

You can inspect the rest of the code yourself. When you are done, simply press the red Stop button at the top of the window.

With the debugger in PyCharm, you can follow the execution of your program across different threads and inspect different variables very easily.
Conclusion
Now we have learned the basics of concurrency in Python, and I hope you will be able to master it with practice. In the next blog post, we will have a look at the Python GIL, the role it plays, and what changes when it is absent.
PyCharm provides powerful tools for working with concurrent Python code. As demonstrated in this blog post, the debugger allows the step-by-step inspection of both async and threaded code, helping you track the execution flow, monitor shared resources, and detect issues. With intuitive breakpoints, real-time variable views, seamless console integration for user input, and robust logging support, PyCharm makes it easier to write, test, and debug applications with confidence and clarity.
Django Weblog
Django bugfix releases issued: 5.2.3, 5.1.11, and 4.2.23
Following the June 4, 2025 security release, the Django team is issuing releases for Django 5.2.3, Django 5.1.11, and Django 4.2.23 to complete mitigation for CVE-2025-48432: Potential log injection via unescaped request path (full description).
These follow-up releases migrate remaining response logging paths to a safer logging implementation, ensuring that all untrusted input is properly escaped before being written to logs. This update does not introduce a new CVE but strengthens the original fix.
We encourage all users of Django to upgrade as soon as possible.
Affected supported versions
- Django main
- Django 5.2
- Django 5.1
- Django 4.2
Resolution
Patches to resolve the issue have been applied to Django's main, 5.2, 5.1, and 4.2 branches. The patches may be obtained from the following changesets.
CVE-2025-48432: Potential log injection via unescaped request path
- On the main branch
- On the 5.2 branch
- On the 5.1 branch
- On the 4.2 branch
The following releases have been issued
- Django 5.2.3 (download Django 5.2.3 | 5.2.3 checksums)
- Django 5.1.11 (download Django 5.1.11 | 5.1.11 checksums)
- Django 4.2.23 (download Django 4.2.23 | 4.2.23 checksums)
The PGP key ID used for this release is: 3955B19851EA96EF
Armin Ronacher
GenAI Criticism and Moral Quandaries
I've received quite a bit of feedback on the last thing I wrote about AI, particularly around the idea that I'm too quick to brush aside criticism. Given that Glyph — who I respect a lot — wrote a lengthy piece on why he's largely opting out of AI, with some thoughtfully articulated criticism, I thought it would be a good opportunity to respond.
Focusing on Code
For this discussion, I'm focusing on AI as a tool for generating text and code — not images, video, or music. My perspective is that there’s a clear difference between utilitarian outputs (code, simple text) and creative outputs that are meant to evoke emotion (art, music, well-articulated writing, etc.). For example, when I get an email from a real estate broker, I expect clear information, not art. Similarly, when I add something to a virtual shopping cart, I don’t care how artistic the code is that makes it work. In fact, even today, without AI, I'd better not know.
So, like Glyph, I want to focus on code.
Quality of Output and Adoption
If you read my earlier post, you probably picked up that I see a lot of potential in AI. That hasn't always been my stance, and I intend to remain critical, but right now I'm quite positive about its usefulness. That is in stark contrast to Glyph's experience.
He writes:
My experiences of genAI are all extremely bad, but that is barely even anecdata. Their experiences are neutral-to-positive. Little scientific data exists. How to resolve this?
I can't judge Glyph's experiences, and I don't want to speculate about why they differ from mine. I've certainly had my own frustrations with AI tools.
The difference, I think, is that I've learned over time how to use these tools more effectively, and that's led to better results. For me, it's not just “neutral-to-positive” — it's been astonishingly positive. As I write this, my agent is fixing code in another window for me. I recorded a video of it fixing issues in a library if you want to see what this looks like.
Glyph also argues that adoption is being forced by management and that people would not voluntarily use it:
Despite this plethora of negative experiences, executives are aggressively mandating the use of AI. It looks like without such mandates, most people will not bother to use such tools, so the executives will need muscular policies to enforce its use.
This doesn't match what I've seen. In my experience, people are adopting AI on their own, often before their companies are even aware.
Even at Sentry, the adoption of AI happened through employees before the company even put money behind it. In fact, my memory is that we only realized how widespread adoption had become when a striking number of AI invoices showed up in IC expenses. This was entirely ground-up. Some of my non-techy friends even have to hide their AI usage from their employers, because some companies try to prevent the adoption of AI, yet they pay for it out of pocket to help them with their work. Some of them even pay for the expensive ChatGPT subscription!
Yes, there are companies like Shopify that put AI on their banners and are mandating this, but there are probably many more companies where AI is taking hold through quiet, grassroots adoption.
Enjoying Programming
Glyph makes the point that LLMs reduce programming to code review, which is not the enjoyable part. For me, code review is a fact of life and part of the job. That's just what we do as programmers. I don't do it because I want the person who wrote the code to grow and become a better programmer; I do it because I want code to be merged. That does not mean I don't care about the career opportunities or skills of the other person. I do! But that's an effort all of its own. Sometimes it takes place in a code review; most of the time, however, it happens in a one-on-one setting. The reality is that we're often not in the mindset of wanting personal growth when receiving review comments either.
Now, I admit that I currently do a lot more code review than programming, but I also find it quite enjoyable. On the one hand, the novelty of a machine programming hasn't worn off yet; on the other hand, it's a very patient recipient of feedback and change requests. You just tell it stuff; you don't spend much time thinking about how the other person is going to respond, or whether it's a good idea to nitpick a small thing and put extra load on them. It's quite freeing, really, and it does feel different to me than a regular code review.
So is programming still enjoyable if I don't hit the keys? For me, yes. I still write code, just less of it, and it doesn't diminish the satisfaction at all. I'm still in control, and the quality still depends on the effort I put into guiding the tool.
Energy, Climate and Stealing
Glyph doesn't talk much about the economics and the climate impact, but he does mention them. My stance on this is rather simple: margins will erode, there will be a lot of competition, we will all pay for the inference necessary, and someone will make money. Energy usage will go up, but we need more energy even without AI as we're electrifying our cars. AI might change this trajectory slightly, but we had a climate problem before all of this, and we will have more or less the same climate problem until we shift towards more renewable energy. In fact, this new increased energy consumption might actually do us a great service here. Solar is already the cheapest energy solution [1] on the market, and if we need more, that's quite likely the source we will build more of. Particularly now that the cost of energy storage is also going down quickly.
As for copyright and “stealing”: I've always felt that copyright terms are too long, scraping is beneficial, and sharing knowledge is a net positive for humanity. That's what drew me to Open Source in the first place. Glyph argues that scrapers are more aggressive now, but I'm not sure that is actually true; I think there are just more of them. We got so used to a handful of search engines doing most of the scraping, which lowered the cost of it for everyone. I tend to think that more competition is good here, and we might just have to accept it for a little while.
Educational Impact
I addressed this in my previous article, but I believe LLMs have significant potential to improve learning. Glyph disagrees, partly because of concerns about cheating and the worry that AI will make it worse:
LLMs are making academic cheating incredibly rampant. […] For learning, genAI is a forklift at the gym. […] But it was within those inefficiencies and the inconveniences of the academic experience that real learning was, against all odds, still happening in schools.
I disagree strongly here. This is where I have the most first-hand experience, considering time spent with AI. Since the early days of ChatGPT, I've used LLMs extensively for learning. That's because I'm not great at learning from books, and I have found LLMs to make the process much more enjoyable and helpful to me.
To give you an idea of how useful this can be, here is an excellent prompt that Darin Gordon shared for getting a GPT to act as a teacher of algorithms using the Socratic method: socratic_fp_learning.md. It works remarkably well even if you simplify it. I used it to explain to my son how hash tables work, and I modified the prompt to help him understand entropy. It's surprisingly effective.
Now, that does not do much about the cheating part. But surely in a situation where students cheat, it was never about learning in the first place; it was about passing a test. That has little to do with learning and much to do with performance assessment. When you feel the need to cheat, you probably didn't learn the material properly to begin with. AI might just make these pre-existing problems more visible, and even Glyph acknowledges that.
AI may complicate things for educators in the near term, but it can also offer real improvements. Either way, education needs reform to adapt to present realities.
Fatigue and Surrender
Glyph concludes by sharing that the pace of change is overwhelming him and that opting out feels like the only sane response. I understand that. The pace of AI advancement can make anyone feel like they're falling behind, and I, too, feel that way sometimes.
I offer a different view: just assume AI will win out and that we will see agents. Then the particular path that takes us to that future matters less. Many of the things currently competing for people's attention are going to look different in a few years, or might not exist any longer. I initially used GitHub Copilot, then moved to Cursor, and now mostly to Claude Code; maybe I'll be back with Cursor's background agents in a month. First there was v0, then there was Lovable; who knows what there will be in a year. But the direction for me is pretty clear: it's heading toward me working together with the machine. I find that thought very calming, and it takes out the stress. Taking a positive view gives you a kind of excited acceptance of the future.
In Closing
I really don't want to dismiss anyone's concerns. I just feel that, for me, the utility of these tools has become obvious enough that I don't feel the need to argue or justify my choices anymore.
[1] https://en.wikipedia.org/wiki/Cost_of_electricity_by_source
June 09, 2025
Ari Lamstein
Video: Covid Demographics Explorer v2
I just put together a video walkthrough of my latest blog post.
Since the post was pretty detailed and technical, I thought a video could make the content more accessible.
I’d love for you to check it out and let me know what you think!
PS: If you find the video helpful, please give it a “like” on YouTube! More visibility means more people discovering the Covid Demographics Explorer, and your support can make a real difference.
Django Weblog
DSF calls for applicants for a Django Fellow
The Django Software Foundation is announcing a call for Django Fellow applications. A Django Fellow is a contractor, paid by the Django Software Foundation, who dedicates time to maintain the Django framework.
The Fellowship program was started in 2014 as a way to dedicate high-quality and consistent resources to the maintenance of Django. The Django Software Foundation currently supports two Fellows, Natalia Bidart and Sarah Boyce, and has approved funding for a new full-time Fellow. This position will initially be for a period of one year, but may be extended depending on fundraising levels.
Beyond keeping Django running, a Fellow is a representative of Django itself. They embody the welcoming culture of Django and help the community advance the framework. Fellows are often called upon to speak at Django conferences and events.
They also usually lead Django sprints at conferences and other gatherings. Hence, a Django Fellow often engages in both informal and formal mentorship.
Responsibilities
Fellow duties include (but are not limited to):
- Monitoring security reports and ensuring security issues are acknowledged and responded to promptly
- Fixing release blockers and helping to backport fixes for release blockers and security issues
- Ensuring timely releases, including acting as release manager for a new version of Django
- Triaging tickets on Trac
- Reviewing and merging pull requests
- Answering contributor questions on the Forum
- Helping new Django contributors land patches and learn our philosophy
Requirements
A Django fellow reviews a very large amount of Django contributions. This requires knowledge in every aspect of web development that the Django framework touches. This turns out to be an intimidatingly-large list of technical topics, many of which are listed below. It’s not our expectation that you come into the job knowing everything on this list! We hope you’ll have solid experience in a few of these topics, particularly some of the “core” technologies important to Django (Python, relational databases, HTTP). But we fully expect that you’ll learn most of this on the job. A willingness to learn, and a demonstrated history of doing so, is more important than comprehensive knowledge.
The technical topics you can expect to work on include (but are not limited to):
- SQL and Databases: SQLite, MySQL, Postgres, Oracle
- Technical Documentation
- JavaScript
- CSS
- Semantic HTML
- Accessibility
- UI/UX design (Web and CLI)
- Python async
- Python features (and versions), compatibility matrix, etc.
- Everything around HTTP
- Security best practices
There are also:
- Complex processes that need to be adhered to
- Multiple discussions which need opinions and direction
- Requirements for both formal and informal mentorship
And required professional skills such as:
- Conflict resolution
- Time management and prioritization expertise
- Ability to focus for short periods of time and handle substantial context switches
- Self-awareness to recognize their own limits and reach out for help
- Relationship-building and coordination with Django teams, working groups, and potentially external parties.
- Tenacity, patience, compassion, and empathy
Therefore, a Django Fellow needs the skills and knowledge of a senior generalist engineer with extensive experience in Python and Django. Open source experience, especially contributing to Django, is a big plus.
Being a Django contributor isn't a prerequisite for this position; we can help get you up to speed. We'll consider applications from anyone with a proven history of working with either the Django community or another similar open-source community. While no particular geographical location is required, we have a slight preference for timezones between around UTC-8 and UTC+3 to allow working hours to overlap with the current Fellows'.
If you're interested in applying for the position, please email us at fellowship-committee@djangoproject.com describing why you would be a good fit along with details of your relevant experience and community involvement. Lastly, please include at least one recommendation.
The current hourly rate for a fellow is $82.26 USD.
Applicants will be evaluated based on the following criteria:
- Details of Django and/or other open-source contributions
- Details of community support in general
- Understanding of the position
- Clarity, formality, and precision of communications
- Strength of recommendation(s)
Applications will be open until midnight AoE, 1 July, 2025, with the expectation that the successful candidate will start around August 1, 2025.
Real Python
Python Hits the Big Screen and Other Python News for June 2025
A newly announced documentary brings Python’s history and culture to the screen, offering a rare behind-the-scenes look at the people and philosophies that shaped it. Meanwhile, new releases and PEPs continue to drive the evolution in packaging and language design.
Conferences also continue to foster inclusion, learning, and connection. With new leadership at the PSF and a slate of impactful updates, the Python community is clearly energized and looking ahead.
Let’s dive into the biggest developments shaping Python this month.
Python Documentary Trailer Released
CultRepo, formerly known as Honeypot, has unveiled the trailer for its upcoming feature-length documentary Python: The Documentary, set to premiere on YouTube later this year. Known for high-quality open-source origin stories like those of Vue.js, React, and Node.js, CultRepo is returning to form with a long-awaited tribute to our beloved programming language.
The documentary promises a deep dive into Python’s cultural and technical journey, featuring interviews with key contributors including Guido van Rossum, Mariatta Wijaya, Brett Cannon, and many others who’ve shaped Python’s legacy.
The trailer generated buzz across social media and the Python community after debuting at PyCon US. In the first 15 hours alone, it racked up over 35,000 YouTube views.
Python 3.14.0 Beta Feature Freeze Begins
After a busy alpha cycle, Python 3.14 has entered beta with the release of versions 3.14.0b1 and 3.14.0b2. This milestone marks the feature freeze, shifting the development focus to bug fixes, polish, and documentation ahead of the final release in October 2025.
Python 3.14 is shaping up to be a feature-packed release. Highlights include:
- Template strings for safer string processing, allowing expressions like t"Hello {name}" to capture string components and interpolated values as structured data before formatting. This allows for custom processing and helps prevent injection attacks in templating scenarios (see the sketches after this list).
- Deferred annotations as the new default, meaning type hints are evaluated lazily when accessed rather than at definition time. This reduces import overhead and allows forward references without using quotes.
- Sigstore replacing PGP for release verification, offering a modern, keyless signing approach that leverages certificate transparency and removes the complexity of key management that has long hindered PGP adoption.
- Enhanced JIT compiler with continued improvements to the copy-and-patch technique introduced in Python 3.13, providing performance changes ranging from 10% slower to 20% faster depending on workload.
- Safe external debugger interface enabling pdb to attach to running Python processes by process ID, allowing real-time debugging of live applications without stopping or restarting them.
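If you want to experiment on a 3.14 beta, here is a minimal sketch of template strings, assuming the PEP 750 API as accepted (Template and Interpolation live in string.templatelib; details could still shift before the final release). The render_html_safe helper is a hypothetical name for this sketch:

import html
from string.templatelib import Template, Interpolation

def render_html_safe(template: Template) -> str:
    # Iterating a Template yields literal strings and Interpolation
    # objects interleaved; escape each interpolated value before joining.
    parts = []
    for item in template:
        if isinstance(item, Interpolation):
            parts.append(html.escape(str(item.value)))
        else:
            parts.append(item)
    return "".join(parts)

name = "<script>alert('hi')</script>"
print(render_html_safe(t"Hello {name}"))  # the markup is escaped, not rendered

And a similarly hedged sketch of deferred annotations: under PEP 649, the forward reference below needs no quotes because the annotation is only evaluated when something asks for it (the annotationlib module comes from PEP 749 and is new in 3.14):

import annotationlib

def make_node(parent: Node) -> Node:  # Node doesn't exist yet; no error here
    ...

class Node:
    ...

# Annotations resolve lazily, at access time, by which point Node is defined:
print(annotationlib.get_annotations(make_node))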
The timeline remains on track for the final release in October 2025, with additional beta releases expected throughout the summer. Users can expect improved startup performance, better type checking capabilities, and enhanced security verification processes.
You can preview the upcoming changes by installing the beta release in an isolated environment, and help improve Python 3.14 by reporting bugs or compatibility issues.
Three Accepted PEPs Tackle Typing, Installation, and Compression
Read the full article at https://realpython.com/python-news-june-2025/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python Bytes
#435 Stop with .folders in my ~/
Topics covered in this episode:
- platformdirs: https://pypi.org/project/platformdirs/?featured_on=pythonbytes
- poethepoet - “Poe the Poet is a batteries included task runner that works well with poetry or with uv.”: https://poethepoet.natn.io/index.html?featured_on=pythonbytes
- Python Pandas Ditches NumPy for Speedier PyArrow: https://thenewstack.io/python-pandas-ditches-numpy-for-speedier-pyarrow/?featured_on=pythonbytes
- pointblank: Data validation made beautiful and powerful: https://posit-dev.github.io/pointblank/?featured_on=pythonbytes
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=noyERa6SccQ

About the show

Sponsored by us! Support our work through:
- Our courses at Talk Python Training: https://training.talkpython.fm/
- The Complete pytest Course: https://courses.pythontest.com/p/the-complete-pytest-course
- Patreon Supporters: https://www.patreon.com/pythonbytes

Connect with the hosts:
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social (bsky)
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions are available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Michael #1: platformdirs (https://pypi.org/project/platformdirs/?featured_on=pythonbytes)
- A small Python module for determining appropriate platform-specific dirs, e.g. a "user data dir".
- Why the community moved on from appdirs to platformdirs. At AppDirs: "Note: This project has been officially deprecated. You may want to check out pypi.org/project/platformdirs/ which is a more active fork of appdirs. Thanks to everyone who has used appdirs. Shout out to ActiveState for the time they gave their employees to work on this over the years."
- Better than AppDirs:
  - Works today, works tomorrow: new Python releases sometimes change low-level APIs (win32com, pathlib, Apple sandbox rules). platformdirs tracks those changes so your code keeps running.
  - First-class typing: no more types-appdirs stubs; editors autocomplete paths as Path objects.
  - Richer directory set: if you need a user's Downloads folder or a per-session runtime dir, there's a helper for it.
  - Cleaner internals: rewritten to use pathlib, caching, and extensive test coverage; all platforms are exercised in CI.
  - Community stewardship: the project lives in the PyPA orbit and gets security/compatibility patches quickly.
- A usage sketch appears after these show notes.

Brian #2: poethepoet (https://poethepoet.natn.io/index.html?featured_on=pythonbytes)
- “Poe the Poet is a batteries included task runner that works well with poetry or with uv.”
- From Bob Belderbos.
- Tasks are easy to define and live in pyproject.toml: https://poethepoet.natn.io/tasks/index.html?featured_on=pythonbytes

Michael #3: Python Pandas Ditches NumPy for Speedier PyArrow (https://thenewstack.io/python-pandas-ditches-numpy-for-speedier-pyarrow/?featured_on=pythonbytes)
- Pandas 3.0 will significantly boost performance by replacing NumPy with PyArrow as its default engine, enabling faster loading and reading of columnar data (a sketch of opting in today appears after these show notes).
- Recently talked with Reuven Lerner about this on Talk Python too: https://talkpython.fm/episodes/show/503/the-pyarrow-revolution
- In the next version, v3.0, PyArrow will be a required dependency (https://pandas.pydata.org/pdeps/0010-required-pyarrow-dependency.html), with pyarrow.string being the default type inferred for string data.
- PyArrow is 10 times faster.
- PyArrow (https://arrow.apache.org/docs/python/index.html) offers columnar storage, which eliminates all that computational back and forth that comes with NumPy.
- PyArrow paves the way for running Pandas, by default, in Copy-on-Write mode (https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html), which improves memory usage and performance.

Brian #4: pointblank: Data validation made beautiful and powerful (https://posit-dev.github.io/pointblank/?featured_on=pythonbytes)
- “With its … chainable API, you can … validate your data against comprehensive quality checks …”

Extras

Brian:
- Ruff rules: https://docs.astral.sh/ruff/rules/
- Ruff users, what rules are you using and what are you ignoring? https://old.reddit.com/r/Python/comments/1kttfst/ruff_users_what_rules_are_using_and_what_are_you/
- Python 3.14.0b2 (https://www.python.org/downloads/release/python-3140b2/) - did we already cover this?
- Transferring your Mastodon account to another server (https://fedi.tips/transferring-your-mastodon-account-to-another-server/), in case anyone was thinking about doing that (https://coreysnipes.com/thoughts-on-fosstodon.html)
- I'm trying out Fathom Analytics (https://usefathom.com) for privacy-friendly analytics

Michael:
- Polars for Power Users: Transform Your Data Analysis Game Course: https://training.talkpython.fm/courses/polars-for-power-users

Joke: Does your dog bite? https://x.com/PR0GRAMMERHUM0R/status/1915465792684015991
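As promised above, a usage sketch for platformdirs. This is hedged: "MyApp" and "MyCompany" are placeholder names, and the printed paths vary by OS:

from pathlib import Path
from platformdirs import user_data_dir, user_cache_dir  # pip install platformdirs

# Placeholder app and author names for this sketch
data_dir = Path(user_data_dir("MyApp", "MyCompany"))
cache_dir = Path(user_cache_dir("MyApp", "MyCompany"))

data_dir.mkdir(parents=True, exist_ok=True)
print(data_dir)   # e.g. ~/.local/share/MyApp on Linux
print(cache_dir)  # e.g. ~/Library/Caches/MyApp on macOS

And for the Pandas/PyArrow item: pandas 3.0 is not out yet, but you can already opt into Arrow-backed dtypes with pandas 2.x. A minimal sketch, assuming pyarrow is installed and "data.csv" is a placeholder file:

import pandas as pd  # pip install pandas pyarrow

# Opt in to the pyarrow parser engine and Arrow-backed dtypes (pandas >= 2.0)
df = pd.read_csv("data.csv", engine="pyarrow", dtype_backend="pyarrow")
print(df.dtypes)  # text columns show Arrow-backed dtypes instead of object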
June 08, 2025
ListenData
How to Use Web Search in ChatGPT API
In this tutorial, we will explore how to use web search in the OpenAI API.
Installation step: please make sure to install the openai library using the command pip install openai.
from openai import OpenAI

client = OpenAI(api_key="sk-xxxxxxxxx")  # Replace with your actual API key

response = client.responses.create(
    model="gpt-4.1",
    tools=[{"type": "web_search_preview"}],
    input="Apple (AAPL) most recent stock price"
)
print(response.output_text)
As of the latest available data (June 7, 2025), Apple Inc. (AAPL) stock is trading at $203.92 per share, reflecting an increase of $3.30 (approximately 1.64%) from the previous close.
In OpenAI's latest models, the search_context_size setting controls how much information the tool gathers from the web to answer your question. A higher setting gives better answers but is slower and costs more, while a lower setting is faster and cheaper but might be less accurate. Possible values are high, medium, or low.
from openai import OpenAI

client = OpenAI(api_key="sk-xxxxxxxxx")  # Replace with your actual API key

response = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "web_search_preview",
        "search_context_size": "high",
    }],
    input="Which team won the latest FIFA World Cup?"
)
print(response.output_text)
You can improve the relevance of search results by providing approximate geographic details such as country, city, region or timezone. For example, use a two-letter country code like GB for the United Kingdom or free-form text for cities and regions like London. You may also specify the user's timezone using IANA format such as Europe/London.
from openai import OpenAI

client = OpenAI(api_key="sk-xxxxxxxxx")  # Use your actual API key

response = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "web_search_preview",
        "user_location": {
            "type": "approximate",
            "country": "GB",             # ISO 2-letter country code
            "city": "London",            # Free text for city
            "region": "London",          # Free text for region/state
            "timezone": "Europe/London"  # IANA timezone (optional)
        }
    }],
    input="What are the top-rated places to eat near Buckingham Palace?",
)
print(response.output_text)
You can use the following code to get the URL, title and location of the cited sources.
# Citations
response = client.responses.create(
    model="gpt-4.1",
    tools=[{"type": "web_search_preview"}],
    input="most recent news from New York?"
)

annotations = response.output[1].content[0].annotations
print("Annotations:", annotations)

print("Annotations List:")
print("-" * 80)
for i, annotation in enumerate(annotations, 1):
    print(f"Annotation {i}:")
    print(f"  Title: {annotation.title}")
    print(f"  URL: {annotation.url}")
    print(f"  Type: {annotation.type}")
    print(f"  Start Index: {annotation.start_index}")
    print(f"  End Index: {annotation.end_index}")
    print("-" * 80)
An alternative way to use web search is to integrate Google's Custom Search API with ChatGPT.
Google's Custom Search API provides real-time search results. Refer to the steps below to get an API key from the Google Developers Console and create a custom search engine.
To read this article in full, please click here
June 06, 2025
Real Python
The Real Python Podcast – Episode #252: Rodrigo Girão Serrão: Python Training, itertools, and Idioms
Once you've learned the vocabulary and syntax of the Python language, how do you progress into learning the right combinations to put into your code? How can Python's built-in itertools library enhance your skills? This week on the show, we speak with Rodrigo Girão Serrão about teaching Python through his blog and his passion for the itertools library.
Talk Python to Me
#508: Program Your Own Computer with Python
If you've heard the phrase "Automate the boring things" for Python, this episode starts with that idea and takes it to another level. We have Glyph back on the podcast to talk about "Programming YOUR computer with Python." We dive into a bunch of tools and frameworks, and especially spend some time on integrating with existing platform APIs (e.g. macOS's BrowserKit and Windows' COM APIs) to build desktop apps in Python that make you happier and more productive. Let's dive in!

Episode sponsors:
- Posit: https://talkpython.fm/workbench
- Agntcy: https://talkpython.fm/agntcy
- Talk Python Courses: https://talkpython.fm/training

Links from the show:
- Glyph on Mastodon: @glyph@mastodon.social
- Glyph on GitHub: https://github.com/glyph
- Glyph's conference talk: https://www.youtube.com/watch?v=LceLUPdIzRs
- Notify Py: https://ms7m.github.io/notify-py/
- Rumps: https://github.com/jaredks/rumps
- QuickMacHotkey: https://pypi.org/project/quickmachotkey/
- QuickMacApp: https://pypi.org/project/quickmacapp/
- LM Studio: https://lmstudio.ai/
- Coolify: https://www.coolify.io/
- PyWin32: https://pypi.org/project/pywin32/
- WinRT: https://pypi.org/project/winrt/
- PyObjC: https://pypi.org/project/pyobjc/
- PyObjC Documentation: https://pyobjc.readthedocs.io/en/latest/
- Watch this episode on YouTube: https://www.youtube.com/watch?v=KuGWQeo_vws
- Episode transcripts: https://talkpython.fm/episodes/transcript/508/program-your-own-computer-with-python

Stay in touch with us:
- Subscribe to Talk Python on YouTube: https://talkpython.fm/youtube
- Talk Python on Bluesky: @talkpython.fm at bsky.app
- Talk Python on Mastodon: @talkpython@fosstodon.org
- Michael on Bluesky: @mkennedy.codes at bsky.app
- Michael on Mastodon: @mkennedy@fosstodon.org
eGenix.com
Python Meeting Düsseldorf - 2025-06-18
The following announcement is for a regional user group meeting in Düsseldorf, Germany; it has been translated from the original German.
Announcement
The next Python Meeting Düsseldorf will take place on:
18.06.2025, 6:00 PM
Room 1, 2nd floor, Bürgerhaus Stadtteilzentrum Bilk
Düsseldorfer Arcaden, Bachstr. 145, 40217 Düsseldorf
Program
Talks registered so far:
- Klaus Bremer: Using Python's argparse
- Jochen Wersdörfer: MCP Server - Connect LLMs to your data
- Detlef Lannert: WeasyPrint - Print from HTML/CSS to PDF
- Marc-André Lemburg: DuckLake - Rethinking Lakehouse architectures
Start Time and Location
We will meet at 6:00 PM at the Bürgerhaus in the Düsseldorfer Arcaden.
The Bürgerhaus shares its entrance with the swimming pool and is located
next to the entrance of the underground parking garage of the Düsseldorfer Arcaden.
A large "Schwimm' in Bilk" logo hangs above the entrance. Once through the door,
turn immediately left to the two elevators and ride up to the 2nd floor. The
entrance to Room 1 is directly on the left as you exit the elevator.
>>> Entrance in Google Street View
Introduction
The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.
Our PyDDF YouTube channel, where we publish videos of the talks after each meeting, offers a good overview of past talks. The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf.
Format
The Python Meeting Düsseldorf uses a mix of (lightning) talks and open discussion.
Talks can be registered in advance or brought in spontaneously during the meeting. A projector with HDMI and Full HD resolution is available. To register a (lightning) talk, simply send an informal email to info@pyddf.de
Participation Fee
The Python Meeting Düsseldorf is organized by Python users for Python users.
The meeting room, projector, and drinks all cost money, so we ask participants to contribute EUR 10.00 (incl. 19% VAT). Pupils and students pay EUR 5.00 (incl. 19% VAT).
We ask all participants to bring the amount in cash.
Registration
Since the rented room can only accommodate 25 people, we ask that you register in advance.
Please register for the meeting via Meetup.
Further Information
Further information is available on the meeting's website:
https://pyddf.de/
Have fun!
Marc-Andre Lemburg, eGenix.com
June 05, 2025
Mike Driscoll
Parsing XML Comments with Python
The Extensible Markup Language (XML) is a general-purpose markup format. Some companies use XML as a data serialization format or for configuration. Recently, I needed to learn how to uncomment some lines in an XML file to enable some settings that were defined in it.
Fortunately, Python’s xml module provides a way to do this. Let’s find out how!
AI Answers Might Not Work
When I used Google to search for an answer to this question: “How to edit comments in XML with Python”, Google Gemini piped up with this answer:
import xml.etree.ElementTree as ET

xml_file = "PATH_TO_XML"
tree = ET.parse(xml_file)
root = tree.getroot()

for element in root.iter():
    if isinstance(element.tag, str) and element.tag.startswith('{'):
        continue
    if element.tag == ET.Comment:
        text = element.text.strip()
        print(text)
Unfortunately, this code does not work. But it is close.
If you look through StackOverflow and similar forums, you will discover that you need a custom parser. Here’s how to create one:
import xml.etree.ElementTree as ET

xml_file = r"PATH_TO_XML"
parser = ET.XMLParser(target=ET.TreeBuilder(insert_comments=True))
tree = ET.parse(xml_file, parser)
root = tree.getroot()

for element in root.iter():
    if isinstance(element.tag, str) and element.tag.startswith('{'):
        continue
    if element.tag == ET.Comment:
        text = element.text.strip()
        print(text)
The key point here is to create an ET.XMLParser whose TreeBuilder target has insert_comments set to True. The default parser discards comments entirely; with this target, they appear in the tree as ET.Comment elements, and the code works.
Note that this example just prints out the commented text. You would need to do something like this to grab the commented text and reinsert it as a valid XML element:
for element in root.iter():
    if isinstance(element.tag, str) and element.tag.startswith('{'):
        continue
    if element.tag == ET.Comment:
        text = element.text.strip()
        if "COMMENTED CODE SUBSTRING" in text:
            # Assumes the comment's text holds the element markup without
            # its surrounding angle brackets
            new_element = ET.fromstring(f"<{text}>")
            # Insert the uncommented text as a new XML element
            root.insert(list(root).index(element), new_element)
            # Remove the element that was commented out originally
            root.remove(element)

# Make indentation work for the output
ET.indent(tree, space="\t", level=0)
with open(xml_file, "wb") as f:
    tree.write(f)
Here, you loop over each element or tag in the XML. You check if the element is a comment type. If it is, you check for the substring you are looking for in the comment’s text. When you find the substring, you extract the entire string from the comment, create a new element, insert it as a regular element, and remove the comment.
Wrapping Up
XML is a handy format, and Python includes several different methods of working with XML in its xml module. Several third-party XML modules, such as lxml, are also great alternatives. If you work with XML, hopefully you will find this article helpful.
Have fun and happy coding!
The post Parsing XML Comments with Python appeared first on Mouse Vs Python.
Glyph Lefkowitz
I Think I’m Done Thinking About genAI For Now
The Problem
Like many other self-styled thinky programmer guys, I like to imagine myself as a sort of Holmesian genius, making trenchant observations, collecting them, and then synergizing them into brilliant deductions with the keen application of my powerful mind.
However, several years ago, I had an epiphany in my self-concept. I finally understood that, to the extent that I am usefully clever, it is less in a Holmesian idiom, and more, shall we say, Monkesque.
For those unfamiliar with either of the respective franchises:
- Holmes is a towering intellect honed by years of training, who catalogues intentional, systematic observations and deduces logical, factual conclusions from those observations.
- Monk, on the other hand, while also a reasonably intelligent guy, is highly neurotic, wracked by unresolved trauma and profound grief. As both a consulting job and a coping mechanism, he makes a habit of erratically wandering into crime scenes, and, driven by a carefully managed jenga tower of mental illnesses, leverages his dual inabilities to solve crimes. First, he is unable to filter out apparently inconsequential details, building up a mental rat’s nest of trivia about the problem; second, he is unable to let go of any minor incongruity, obsessively ruminating on the collection of facts until they all make sense in a consistent timeline.
Perhaps surprisingly, this tendency serves both this fictional wretch of a detective, and myself, reasonably well. I find annoying incongruities in abstractions and I fidget and fiddle with them until I end up building something that a lot of people like, or perhaps something that a smaller number of people get really excited about. At worst, at least I eventually understand what’s going on. This is a self-soothing activity but it turns out that, managed properly, it can very effectively soothe others as well.
All that brings us to today’s topic, which is an incongruity I cannot smooth out or fit into a logical framework to make sense. I am, somewhat reluctantly, a genAI skeptic. However, I am, even more reluctantly, exposed to genAI Discourse every damn minute of every damn day. It is relentless, inescapable, and exhausting.
This preamble about personality should hopefully help you, dear reader, to understand how I usually address problematical ideas by thinking and thinking and fidgeting with them until I manage to write some words — or perhaps a new open source package — that logically orders the ideas around it in a way which allows my brain to calm down and let it go, and how that process is important to me.
In this particular instance, however, genAI has defeated me. I cannot make it make sense, but I need to stop thinking about it anyway. It is too much and I need to give up.
My goal with this post is not to convince anyone of anything in particular — and we’ll get to why that is a bit later — but rather:
- to set out my current understanding in one place, including all the various negative feelings which are still bothering me, so I can stop repeating it elsewhere,
- to explain why I cannot build a case that I think should be particularly convincing to anyone else, particularly to someone who actively disagrees with me,
- in so doing, to illustrate why I think the discourse is so fractious and unresolvable, and finally
- to give myself, and hopefully by proxy to give others in the same situation, permission to just peace out of this nightmare quagmire corner of the noosphere.
But first, just because I can’t prove that my interlocutors are Wrong On The Internet, doesn’t mean I won’t explain why I feel like they are wrong.
The Anti-Antis
Most recently, at time of writing, there have been a spate of “the genAI discourse is bad” articles, almost exclusively written from the perspective of, not boosters exactly, but pragmatically minded (albeit concerned) genAI users, wishing for the skeptics to be more pointed and accurate in our critiques. This is anti-anti-genAI content.
I am not going to link to any of these, because, as part of their self-fulfilling prophecy about the “genAI discourse”, they’re also all bad.
Mostly, however, they had very little worthwhile to respond to because they were straw-manning their erstwhile interlocutors. They are all getting annoyed at “bad genAI criticism” while failing to engage with — and often failing to even mention — most of the actual substance of any serious genAI criticism. At least, any of the criticism that I’ve personally read.
I understand wanting to avoid a callout or Gish-gallop culture and just express your own ideas. So, I understand that they didn’t link directly to particular sources or go point-by-point on anyone else’s writing. Obviously I get it, since that’s exactly what this post is doing too.
But if you’re going to talk about how bad the genAI conversation is, without even mentioning huge categories of problem like “climate impact” or “disinformation”1 even once, I honestly don’t know what conversation you’re even talking about. This is peak “make up a guy to get mad at” behavior, which is especially confusing in this circumstance, because there’s an absolutely huge crowd of actual people that you could already be mad at.
The people writing these pieces have historically seemed very thoughtful to me. Some of them I know personally. It is worrying to me that their critical thinking skills appear to have substantially degraded specifically after spending a bunch of time intensely using this technology which I believe has a scary risk of degrading one’s critical thinking skills. Correlation is not causation or whatever, and sure, from a rhetorical perspective this is “post hoc ergo propter hoc” and maybe a little “ad hominem” for good measure, but correlation can still be concerning.
Yet, I cannot effectively respond to these folks, because they are making a practical argument that I cannot, despite my best efforts, find compelling evidence to refute categorically. My experiences of genAI are all extremely bad, but that is barely even anecdata. Their experiences are neutral-to-positive. Little scientific data exists. How to resolve this?2
The Aesthetics
As I begin to state my own position, let me lead with this: my factual analysis of genAI is hopelessly negatively biased. I find the vast majority of the aesthetic properties of genAI to be intensely unpleasant.
I have been trying very hard to correct for this bias, to try to pay attention to the facts and to have a clear-eyed view of these systems’ capabilities. But the feelings are visceral, and the effort to compensate is tiring. It is, in fact, the desire to stop making this particular kind of effort that has me writing up this piece and trying to take an intentional break from the subject, despite its intense relevance.
When I say its “aesthetic qualities” are unpleasant, I don’t just mean the aesthetic elements of output of genAIs themselves. The aesthetic quality of genAI writing, visual design, animation and so on, while mostly atrocious, is also highly variable. There are cherry-picked examples which look… fine. Maybe even good. For years now, there have been, famously, literally award-winning aesthetic outputs of genAI3.
While I am ideologically predisposed to see any “good” genAI art as accruing the benefits of either a survivorship bias from thousands of terrible outputs or simple plagiarism rather than its own inherent quality, I cannot deny that in many cases it is “good”.
However, I am not just talking about the product, but the process; the aesthetic experience of interfacing with the genAI system itself, rather than the aesthetic experience of the outputs of that system.
I am not a visual artist and I am not really a writer4, particularly not a writer of fiction or anything else whose experience is primarily aesthetic. So I will speak directly to the experience of software development.
I have seen very few successful examples of using genAI to produce whole, working systems. There are no shortage of highly public miserable failures, particularly from the vendors of these systems themselves, where the outputs are confused, self-contradictory, full of subtle errors and generally unusable. While few studies exist, it sure looks like this is an automated way of producing a Net Negative Productivity Programmer, throwing out chaff to slow down the rest of the team.5
Juxtapose this with my aforementioned psychological motivations, to wit, I want to have everything in the computer be orderly and make sense, I’m sure most of you would have no trouble imagining that sitting through this sort of practice would make me extremely unhappy.
Despite this plethora of negative experiences, executives are aggressively mandating the use of AI6. It looks like without such mandates, most people will not bother to use such tools, so the executives will need muscular policies to enforce its use.7
Being forced to sit and argue with a robot while it struggles and fails to produce a working output, while you have to rewrite the code at the end anyway, is incredibly demoralizing. This is the kind of activity that activates every single major cause of burnout at once.
But, at least in that scenario, the thing ultimately doesn’t work, so there’s a hope that after a very stressful six month pilot program, you can go to management with a pile of meticulously collected evidence, and shut the whole thing down.
I am inclined to believe that, in fact, it doesn’t work well enough to be used this way, and that we are going to see a big crash. But that is not the most aesthetically distressing thing. The most distressing thing is that maybe it does work; if not well enough to actually do the work, at least ambiguously enough to fool the executives long-term.
This project, in particular, stood out to me as an example. Its author, a self-professed “AI skeptic” who “thought LLMs were glorified Markov chain generators that didn’t actually understand code and couldn’t produce anything novel”, did a green-field project to test this hypothesis.
Now, this particular project is not totally inconsistent with a world in which LLMs cannot produce anything novel. One could imagine that, out in the world of open source, perhaps there is enough “OAuth provider written in TypeScript” blended up into the slurry of “borrowed8” training data that the minor constraint of “make it work on Cloudflare Workers” is a small tweak9. It is not fully dispositive of the question of the viability of “genAI coding”.
But it is a data point related to that question, and thus it did make me contend with what might happen if it were actually a fully demonstrative example. I reviewed the commit history, as the author suggested. For the sake of argument, I tried to ask myself if I would like working this way. Just for clarity on this question, I wanted to suspend judgement about everything else; assuming:
- the model could be created with ethically, legally, voluntarily sourced training data
- its usage involved consent from labor rather than authoritarian mandates
- sensible levels of energy expenditure, with minimal CO2 impact
- it is substantially more efficient to work this way than to just write the code yourself
and so on, and so on… would I like to use this magic robot that could mostly just emit working code for me? Would I use it if it were free, in all senses of the word?
No. I absolutely would not.
I found the experience of reading this commit history and imagining myself using such a tool — without exaggeration — nauseating.
Unlike many programmers, I love code review. I find that it is one of the best parts of the process of programming. I can help people learn, and develop their skills, and learn from them, and appreciate the decisions they made, develop an impression of a fellow programmer’s style. It’s a great way to build a mutual theory of mind.
Of course, it can still be really annoying; people make mistakes, often can’t see things I find obvious, and in particular when you’re reviewing a lot of code from a lot of different people, you often end up having to repeat explanations of the same mistakes. So I can see why many programmers, particularly those more introverted than I am, hate it.
But, ultimately, when I review their code and work hard to provide clear and actionable feedback, people learn and grow and it’s worth that investment in inconvenience.
The process of coding with an “agentic” LLM appears to be the process of carefully distilling all the worst parts of code review, and removing and discarding all of its benefits.
The lazy, dumb, lying robot asshole keeps making the same mistakes over and over again, never improving, never genuinely reacting, always obsequiously pretending to take your feedback on board.
Even when it “does” actually “understand” and manages to load your instructions into its context window, 200K tokens later it will slide cleanly out of its memory and you will have to say it again.
All the while, it is attempting to trick you. It gets most things right, but it consistently makes mistakes in the places that you are least likely to notice. In places where a person wouldn’t make a mistake. Your brain keeps trying to develop a theory of mind to predict its behavior but there’s no mind there, so it always behaves infuriatingly randomly.
I don’t think I am the only one who feels this way.
The Affordances
Whatever our environments afford, we tend to do more of. Whatever they resist, we tend to do less of. So in a world where we were all writing all of our code and emails and blog posts and texts to each other with LLMs, what do they afford that existing tools do not?
As a weirdo who enjoys code review, I also enjoy process engineering. The central question of almost all process engineering is to continuously ask: how shall we shape our tools, to better shape ourselves?
LLMs are an affordance for producing more text, faster. How is that going to shape us?
Again, arguing in the alternative here: assume the text is free from errors and hallucinations and whatever, that it’s all correct and fit for purpose. That means it reduces the pain of circumstances where you have to repeat yourself. Less pain! Sounds great; I don’t like pain.
Every codebase has places where you need boilerplate. Every organization has defects in its information architecture that require repetition of certain information rather than a link back to the authoritative source of truth. Often, these problems persist for a very long time, because it is difficult to overcome the institutional inertia required to make real progress rather than going along with the status quo. But this is often where the highest-value projects can be found. Where there’s muck, there’s brass.
The process-engineering function of an LLM, therefore, is to prevent fundamental problems from ever getting fixed, to reward the rapid-fire overwhelm of infrastructure teams with an immediate, catastrophic cascade of legacy code that is now much harder to delete than it is to write.
There is a scene in Game of Thrones where Khal Drogo kills himself. He does so by replacing a stinging, burning, therapeutic antiseptic wound dressing with some cool, soothing mud. The mud felt nice, addressed the immediate pain, removed the discomfort of the antiseptic, and immediately gave him a lethal infection.
The pleasing feeling of immediate progress when one prompts an LLM to solve some problem feels like cool mud on my brain.
The Economics
We are in the middle of a mania around this technology. As I have written about before, I believe the mania will end. There will then be a crash, and a “winter”. But, as I may not have stressed sufficiently, this crash will be the biggest of its kind — so big, that it is arguably not of a kind at all. The level of investment in these technologies is bananas and the possibility that the investors will recoup their investment seems close to zero. Meanwhile, that cost keeps going up, and up, and up.
Others have reported on this in detail10, and I will not reiterate that all here, but in addition to being a looming and scary industry-wide (if we are lucky; more likely it’s probably “world-wide”) economic threat, it is also going to drive some panicked behavior from management.
Panicky behavior from management stressed that their idea is not panning out is, famously, the cause of much human misery. I expect that even the “good” scenario, where some profit is ultimately achieved, will still involve mass layoffs rocking the industry, panicked re-hiring, and the destruction of large amounts of wealth.
It feels bad to think about this.
The Energy Usage
For a long time I believed that the energy impact was overstated. I am even on record, about a year ago, saying I didn’t think the energy usage was a big deal. I think I was wrong about that.
It initially seemed like it was letting regular old data centers off the hook. But recently I have learned that, while the numbers are incomplete because the vendors aren’t sharing information, they’re also extremely bad.11
I think there’s probably a version of this technology that isn’t a climate emergency nightmare, but that’s not the version that the general public has access to today.
The Educational Impact
LLMs are making academic cheating incredibly rampant.12
Not only is it so common as to be nearly universal, it’s also extremely harmful to learning.13
For learning, genAI is a forklift at the gym.
To some extent, LLMs are simply revealing a structural rot within education and academia that has been building for decades if not centuries. But it was within those inefficiencies and the inconveniences of the academic experience that real learning was, against all odds, still happening in schools.
LLMs produce a frictionless, streamlined process where students can effortlessly glide through the entire credential, learning nothing. Once again, they dull the pain without regard to its cause.
This is not good.
The Invasion of Privacy
This is obviously only a problem with the big cloud models, but then, the big cloud models are the only ones that people actually use. If you are having conversations about anything private with ChatGPT, you are sending all of that private information directly to Sam Altman, to do with as he wishes.
Even if you don’t think he is a particularly bad guy, maybe he won’t even create the privacy nightmare on purpose. Maybe he will be forced to do so as a result of some bizarre kafkaesque accident.14
Imagine the scenario, for example, where a woman is tracking her cycle and uploading the logs to ChatGPT so she can chat with it about a health concern. Except, surprise, you don’t have to imagine, you can just search for it, as I have personally, organically, seen three separate women on YouTube, at least one of whom lives in Texas, not only do this on camera but recommend doing this to their audiences.
Citation links withheld on this particular claim for hopefully obvious reasons.
I assure you that I am neither particularly interested in menstrual products nor genAI content, and if I am seeing this more than once, it is probably a distressingly large trend.
The Stealing
The training data for LLMs is stolen. I don’t mean like “pirated” in the sense where someone illicitly shares a copy they obtained legitimately; I mean their scrapers are ignoring both norms15 and laws16 to obtain copies under false pretenses, destroying other people’s infrastructure17.
The Fatigue
I have provided references to numerous articles outlining rhetorical and sometimes data-driven cases for the existence of certain properties and consequences of genAI tools. But I can’t prove any of these properties, either at a point in time or as a durable ongoing problem.
The LLMs themselves are simply too large to model with the usual kind of heuristics one would use to think about software. I’d sooner be able to predict the physics of dice in a casino than a 2 trillion parameter neural network. They resist scientific understanding, not just because of their size and complexity, but because unlike a natural phenomenon (which could of course be considerably larger and more complex) they resist experimentation.
The first form of genAI resistance to experiment is that every discussion is a motte-and-bailey. If I use a free model and get a bad result, I’m told it’s because I should have used the paid model. If I get a bad result with ChatGPT, I should have used Claude. If I get a bad result with a chatbot, I need to start using an agentic tool. If an agentic tool deletes my hard drive by putting os.system("rm -rf ~/") into sitecustomize.py, then I guess I should have built my own MCP integration with a completely novel, heretofore never even considered security sandbox or something?
What configuration, exactly, would let me make a categorical claim about these things? What specific methodological approach should I stick to, to get reliably adequate prompts?
For the record though, if the idea of the free models is that they are going to be provocative demonstrations of the impressive capabilities of the commercial models, and the results are consistently dogshit, I am finding it increasingly hard to care how much better the paid ones are supposed to be, especially since the “better”-ness cannot really be quantified in any meaningful way.
The motte-and-bailey doesn’t stop there though. It’s a war on all fronts. Concerned about energy usage? That’s OK, you can use a local model. Concerned about infringement? That’s okay, somewhere, somebody, maybe, has figured out how to train models consensually18. Worried about the politics of enriching the richest monsters in the world? Don’t worry, you can always download an “open source” model from Hugging Face. It doesn’t matter that many of these properties are mutually exclusive and attempting to fix one breaks two others; there’s always an answer, the field is so abuzz with so many people trying to pull in so many directions at once that it is legitimately difficult to understand what’s going on.
Even here though, I can see that characterizing everything this way is unfair to a hypothetical sort of person. If there is someone working at one of these thousands of AI companies that have been springing up like toadstools after a rain, and they really are solving one of these extremely difficult problems, how can I handwave that away? We need people working on problems, that’s like, the whole point of having an economy. And I really don’t like shitting on other people’s earnest efforts, so I try not to dismiss whole fields. Given how AI has gotten into everything, in a way that e.g. cryptocurrency never did, painting with that broad a brush inevitably ends up tarring a bunch of stuff that isn’t even really AI at all.
The second form of genAI resistance to experiment is the inherent obfuscation of productization. The models themselves are already complicated enough, but the products that are built around the models are evolving extremely rapidly. ChatGPT is not just a “model”, and with the rapid19 deployment of Model Context Protocol tools, the edges of all these things will blur even further. Every LLM is now just an enormous unbounded soup of arbitrary software doing arbitrary whatever. How could I possibly get my arms around that to understand it?
The Challenge
I have woefully little experience with these tools.
I’ve tried them out a little bit, and almost every single time the result has been a disaster that has not made me curious to push further. Yet, I keep hearing from all over the industry that I should.
To some extent, I feel like the motte-and-bailey characterization above is fair; if the technology itself can really do real software development, it ought to be able to do it in multiple modalities, and there’s nothing anyone can articulate to me about GPT-4o which puts it in a fundamentally different class than GPT-3.5.
But, also, I consistently hear that the subjective experience of using the premium versions of the tools is actually good, and the free ones are actually bad.
I keep struggling to find ways to try them “the right way”, the way that people I know and otherwise respect claim to be using them, but I haven’t managed to do so in any meaningful way yet.
I do not want to be using the cloud versions of these models with their potentially hideous energy demands; I’d like to use a local model. But there is obviously not a nicely composed way to use local models like this.
Since there are apparently zero models with ethically-sourced training data, and litigation is ongoing [20] to determine the legal relationships of training data and outputs, even if I can be comfortable with some level of plagiarism on a project, I don’t feel that I can introduce the existential legal risk into other people’s infrastructure, so I would need to make a new project.
Others have differing opinions of course, including some within my dependency chain, which does worry me, but I still don’t feel like I can freely contribute further to the problem; it’s going to be bad enough to unwind any impact upstream. Even just for my own sake, I don’t want to make it worse.
This especially presents a problem because I have way too much stuff going on already. A new project is not practical.
Finally, even if I did manage to satisfy all of my quirky [21] constraints, would this experiment really be worth anything? The models and tools that people are raving about are the big, expensive, harmful ones. If I proved to myself yet again that a small model with bad tools was unpleasant to use, I wouldn’t really be addressing my opponents’ views.
I’m stuck.
The Surrender
I am writing this piece to make my peace with giving up on this topic, at least for a while. While I do idly hope that some folks might find bits of it convincing, and perhaps find ways to be more mindful with their own usage of genAI tools, and consider the harm they may be causing, that’s not actually the goal. And that is not the goal because it is just so much goddamn work to prove.
Here, I must return to my philosophical hobbyhorse of sprachspiel. In this case, specifically to use it as an analytical tool, not just to understand what I am trying to say, but what the purpose for my speech is.
The concept of sprachspiel is most frequently deployed to describe the goal of the language game being played, but in game theory, that’s only half the story. Speech — particularly rigorously justified speech — has a cost, as well as a benefit. I can make shit up pretty easily, but if I want to do anything remotely like scientific or academic rigor, that cost can be astronomical. In the case of developing an abstract understanding of LLMs, the cost is just too high.
So what is my goal, then? To be King Canute, standing astride the shore of “tech”, whatever that is, commanding the LLM tide not to rise? This is a multi-trillion dollar juggernaut.
Even the rump, loser, also-ran fragment of it has the power to literally suffocate us in our homes [22] if they so choose, completely insulated from any consequence. If the power curve starts there, imagine what the winners in this industry are going to be capable of, irrespective of the technology they’re building - just with the resources they have to hand. Am I going to write a blog post that can rival their propaganda apparatus? Doubtful.
Instead, I will just have to concede that maybe I’m wrong. I don’t have the skill, or the knowledge, or the energy, to demonstrate with any level of rigor that LLMs are generally, in fact, hot garbage. Intellectually, I will have to acknowledge that maybe the boosters are right. Maybe it’ll be OK.
Maybe the carbon emissions aren’t so bad. Maybe everybody is keeping them secret in ways that they don’t for other types of datacenter for perfectly legitimate reasons. Maybe the tools really can write novel and correct code, and with a little more tweaking, it won’t be so difficult to get them to do it. Maybe by the time they become a mandatory condition of access to developer tools, they won’t be miserable.
Sure, I even sincerely agree, intellectual property really has been a pretty bad idea from the beginning. Maybe it’s OK that we’ve made an exception to those rules. The rules were stupid anyway, so what does it matter if we let a few billionaires break them? Really, everybody should be able to break them (although of course, regular people can’t, because we can’t afford the lawyers to fight off the MPAA and RIAA, but that’s a problem with the legal system, not tech).
I come not to praise “AI skepticism”, but to bury it.
Maybe it really is all going to be fine. Perhaps I am simply catastrophizing; I have been known to do that from time to time. I can even sort of believe it, in my head. Still, even after writing all this out, I can’t quite manage to believe it in the pit of my stomach.
Unfortunately, that feeling is not something that you, or I, can argue with.
Acknowledgments
Thank you to my patrons. Normally, I would say, “who are supporting my writing on this blog”, but in the case of this piece, I feel more like I should apologize to them for this than to thank them; these thoughts have been preventing me from thinking more productive, useful things that I actually have relevant skill and expertise in; this felt more like a creative blockage that I just needed to expel than a deliberately written article. If you like what you’ve read here and you’d like to read more of it, well, too bad; I am sincerely determined to stop writing about this topic. But, if you’d like to read more stuff like other things I have written, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!
1. And yes, disinformation is still an issue even if you’re “just” using it for coding. Even sidestepping the practical matter that technology is inherently political, validation and propagation of poor technique is a form of disinformation.
2. I can’t resolve it, that’s the whole tragedy here, but I guess we have to pretend I will to maintain narrative momentum here.
3. The story in Creative Bloq, or the NYT, if you must.
4. Although it’s not for lack of trying; Jesus, look at the word count on this.
5. These are sometimes referred to as “10x” programmers, because they make everyone around them 10x slower.
6. Douglas B. Laney at Forbes, Viral Shopify CEO Manifesto Says AI Now Mandatory For All Employees.
7. The National CIO Review, AI Mandates, Minimal Use: Closing the Workplace Readiness Gap.
8. Matt O’Brien at the AP, Reddit sues AI company Anthropic for allegedly ‘scraping’ user comments to train chatbot Claude.
9. Using the usual tricks to find plagiarism, like searching for literal transcriptions of snippets of training data, did not pull up anything when I tried, but then, that’s not how LLMs work these days, is it? If it didn’t obfuscate the plagiarism it wouldn’t be a very good plagiarism-obfuscator.
10. David Gerard at Pivot to AI, “Microsoft and AI: spending billions to make millions”; Edward Zitron at Where’s Your Ed At, “The Era Of The Business Idiot”; both sobering reads.
11. James O’Donnell and Casey Crownhart at the MIT Technology Review, We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
12. Lucas Ropek at Gizmodo, AI Cheating Is So Out of Hand In America’s Schools That the Blue Books Are Coming Back.
13. James D. Walsh at the New York Magazine Intelligencer, Everyone Is Cheating Their Way Through College.
14. Ashley Belanger at Ars Technica, OpenAI slams court order to save all ChatGPT logs, including deleted chats.
15. Ashley Belanger at Ars Technica, AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt.
16. Blake Brittain at Reuters, Judge in Meta case warns AI could ‘obliterate’ market for original works.
17. Xkeeper, TCRF has been getting DDoSed.
18. Kate Knibbs at Wired, Here’s Proof You Can Train an AI Model Without Slurping Copyrighted Content.
19. And, I should note, extremely irresponsible.
20. Porter Anderson at Publishing Perspectives, Meta AI Lawsuit: US Publishers File Amicus Brief.
21. It feels bizarre to characterize what feel like baseline ethical concerns this way, but the fact remains that within the “genAI community”, this places me into a tiny and obscure minority.
22. Ariel Wittenberg for Politico, ‘How come I can’t breathe?’: Musk’s data company draws a backlash in Memphis.
Wingware
Wing Python IDE Version 11 - June 5, 2025
Wing Python IDE version 11 is now available. It improves the AI assisted development UI and adds support for Claude, Grok, Gemini, OpenAI, Perplexity, Mistral, Deepseek, Ollama, and other OpenAI API compatible AI providers. Wing 11 also adds package management with uv, improves Python code analysis, updates the UI localizations, improves diff/merge, adds easier custom key binding assignment, and much more.
Most of these improvements are available only in Wing Pro. Compare Products for details.
Downloads
Wing 10 and earlier versions are not affected by installation of Wing 11 and may be installed and used independently. However, project files for Wing 10 and earlier are converted when opened by Wing 11 and should be saved under a new name, since Wing 11 projects cannot be opened by older versions of Wing.
New in Wing 11
Improved AI Assisted Development
Wing 11 improves the user interface for AI assisted development by introducing two separate tools: AI Coder and AI Chat. AI Coder can be used to write, redesign, or extend code in the current editor. AI Chat can be used to ask about code or to iterate on a design or new code without directly modifying the code in an editor.
Wing 11's AI assisted development features now support not just OpenAI but also Claude, Grok, Gemini, Perplexity, Mistral, Deepseek, and any other OpenAI completions API compatible AI provider.
This release also improves setting up the context for an AI request, so that both automatically and manually selected and described context items may be paired with it. AI request contexts can now be stored, optionally shared across all projects, and used independently with different AI features.
AI requests can now also be stored in the current project or shared with all projects, and Wing comes preconfigured with a set of commonly used requests. In addition to changing code in the current editor, stored requests may create a new untitled file or run instead in AI Chat. Wing 11 also introduces options for changing code within an editor, including replacing code, commenting out code, or starting a diff/merge session to either accept or reject changes.
Wing 11 also supports using AI to generate commit messages based on the changes being committed to a revision control system.
You can now also configure multiple AI providers for easier access to different models.
For details see AI Assisted Development under Wing Manual in Wing 11's Help menu.
Package Management with uv
Wing Pro 11 adds support for the uv package manager in the New Project dialog and the Packages tool.
For details see Project Manager > Creating Projects > Creating Python Environments and Package Manager > Package Management with uv under Wing Manual in Wing 11's Help menu.
Improved Python Code Analysis
Wing 11 improves code analysis of literals such as dicts and sets, parametrized type aliases, typing.Self, type variables on the def or class line that declares them, generic classes with [...], and __all__ in *.pyi files.
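For context, here is a small standalone sketch (ours, not Wingware's; it assumes Python 3.12+ for the PEP 695 syntax) showing the kinds of constructs the improved analysis now covers:

from typing import Self

# Parametrized type alias (PEP 695)
type Pair[T] = tuple[T, T]

# Generic class with [...] and a type variable declared on the class line
class Stack[T]:
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> Self:
        # typing.Self keeps the return type accurate in subclasses
        self._items.append(item)
        return self

    def pop(self) -> T:
        return self._items.pop()

# Type variable declared on the def line
def first[T](pair: Pair[T]) -> T:
    return pair[0]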
Updated Localizations
Wing 11 updates the German, French, and Russian localizations, and introduces a new experimental AI-generated Spanish localization. The Spanish localization and the new AI-generated strings in the French and Russian localizations may be accessed with the new User Interface > Include AI Translated Strings preference.
Improved diff/merge
Wing Pro 11 adds floating buttons directly between the editors to make navigating differences and merging easier, allows undoing previously merged changes, and does a better job managing scratch buffers, scroll locking, and sizing of merged ranges.
For details see Difference and Merge under Wing Manual in Wing 11's Help menu.
Other Minor Features and Improvements
Wing 11 also improves the custom key binding assignment user interface, adds a Files > Auto-Save Files When Wing Loses Focus preference, warns immediately when opening a project with an invalid Python Executable configuration, allows clearing recent menus, expands the set of available special environment variables for project configuration, and makes a number of other bug fixes and usability improvements.
Changes and Incompatibilities
Since Wing 11 replaced the AI tool with AI Coder and AI Chat, and AI configuration is completely different than in Wing 10, you will need to reconfigure your AI integration manually in Wing 11. This is done with Manage AI Providers in the AI menu. After adding the first provider configuration, Wing will set that provider as the default. You can switch between providers with Switch to Provider in the AI menu.
If you have questions, please don't hesitate to contact us at support@wingware.com.
Stéphane Wirtel
What I’m Making During My “Vacation” (originally in French: « Ce que je fabrique pendant mes “vacances” »)
For once, I’m writing this post in French. After all, it’s my mother tongue, and I feel like sharing what I’ve been doing lately with a bit more spontaneity. Since the end of April, I’ve been on forced leave. My contract with the European Investment Bank came to an end, and after two and a half years of intense, fascinating, and highly formative work on financial and investment topics, I found myself with… time.
June 04, 2025
Real Python
How to Find an Absolute Value in Python
Learn how to work with absolute values in Python using the built-in abs() function for numbers, arrays, and custom objects. This tutorial shows you how to implement the absolute value function from scratch, use abs() with numbers, and customize its behavior for data types like NumPy arrays and pandas Series.
By the end of this tutorial, you’ll understand that:
- You can implement the absolute value function in Python using conditional statements or mathematical operations (see the sketch after this list).
- Python’s built-in abs() function efficiently handles integers, floating-point numbers, complex numbers, and more.
- NumPy and pandas extend the abs() function to work directly with arrays and Series.
- You can customize the behavior of abs() for your own data types by implementing the .__abs__() method.
- The abs() function can process fractions and decimals from Python’s standard library.
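To make those bullets concrete, here is a minimal sketch (ours, not the tutorial's downloadable code) showing a from-scratch implementation, the built-in abs() across several types, and a custom .__abs__() method:

import math
from fractions import Fraction

def absolute_value(x):
    # Conditional approach: flip the sign of negative inputs
    return -x if x < 0 else x

def absolute_value_math(x):
    # Mathematical approach: square the number, then take the square root
    return math.sqrt(x ** 2)

class Temperature:
    def __init__(self, degrees):
        self.degrees = degrees

    def __abs__(self):
        # Called by the built-in abs()
        return Temperature(absolute_value(self.degrees))

print(absolute_value(-12))                 # 12
print(absolute_value_math(-12))            # 12.0
print(abs(-12), abs(-12.5), abs(3 - 4j))   # 12 12.5 5.0
print(abs(Fraction(-1, 2)))                # 1/2
print(abs(Temperature(-12)).degrees)       # 12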
Don’t worry if your mathematical knowledge of the absolute value function is a little rusty. You’ll begin by refreshing your memory before diving deeper into Python code. That said, feel free to skip the next section and jump right into the nitty-gritty details that follow.
Get Your Code: Click here to download the free sample code that you’ll use to find absolute values in Python.
Take the Quiz: Test your knowledge with our interactive “How to Find an Absolute Value in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz: How to Find an Absolute Value in Python. In this quiz, you'll test your knowledge of calculating absolute values in Python, mastering both built-in functions and common use cases to improve your coding accuracy.
Defining the Absolute Value
The absolute value lets you determine the size or magnitude of an object, such as a number or a vector, regardless of its direction. Real numbers can have one of two directions when you ignore zero: they can be either positive or negative. On the other hand, complex numbers and vectors can have many more directions.
Note: When you take the absolute value of a number, you lose information about its sign or, more generally, its direction.
Consider a temperature measurement as an example. If the thermometer reads -12°C, then you can say it’s twelve degrees Celsius below freezing. Notice how you decomposed the temperature in the last sentence into a magnitude, twelve, and a sign. The phrase below freezing means the same as below zero degrees Celsius. The temperature’s size or absolute value is identical to the absolute value of the much warmer +12°C.
Using mathematical notation, you can define the absolute value of 𝑥 as a piecewise function, which behaves differently depending on the range of input values. A common symbol for absolute value consists of two vertical lines:
|x| = \begin{cases} x & \text{if } x \ge 0 \\ -x & \text{if } x < 0 \end{cases}
This function returns values greater than or equal to zero without alteration. On the other hand, values smaller than zero have their sign flipped from a minus to a plus. Algebraically, this is equivalent to taking the square root of a number squared:
|x| = \sqrt{x^2}
When you square a real number, you always get a positive result, even if the number that you started with was negative. For example, the square of -12 and the square of 12 have the same value, equal to 144. Later, when you compute the square root of 144, you’ll only get 12 without the minus sign.
Geometrically, you can think of an absolute value as the distance from the origin, which is zero on a number line in the case of the temperature reading from before:
[Figure: a number line with the origin at 0°C, showing −12°C and +12°C both at a distance of 12 from zero]
To calculate this distance, you can subtract the origin from the temperature reading (-12°C - 0°C = -12°C) or the other way around (0°C - (-12°C) = +12°C), and then drop the sign of the result. Subtracting zero doesn’t make much difference here, but the reference point may sometimes be shifted. That’s the case for vectors bound to a fixed point in space, which becomes their origin.
Vectors, just like numbers, convey information about the direction and the magnitude of a physical quantity, but in more than one dimension. For example, you can express the velocity of a falling snowflake as a three-dimensional vector:
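(The original article displays the vector as an image; the values below are purely hypothetical stand-ins, not the article's numbers.)

>>> velocity = [0.5, -1.2, -3.4]  # hypothetical (x, y, z) components in m/s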
This vector indicates the snowflake’s current position relative to the origin of the coordinate system. It also shows the snowflake’s direction and pace of motion through the space. The longer the vector, the greater the magnitude of the snowflake’s speed. As long as the coordinates of the vector’s initial and terminal points are expressed in meters, calculating its length will get you the snowflake’s speed measured in meters per unit of time.
Note: There are two ways to look at a vector. A bound vector is an ordered pair of fixed points in space, whereas a free vector only tells you about the displacement of the coordinates from point A to point B without revealing their absolute locations. Consider the following code snippet as an example:
>>> A = [1, 2, 3]
>>> B = [3, 2, 1]
>>> bound_vector = [A, B]
>>> bound_vector
[[1, 2, 3], [3, 2, 1]]
>>> free_vector = [b - a for a, b in zip(A, B)]
>>> free_vector
[2, 0, -2]
A bound vector wraps both points, providing quite a bit of information. In contrast, a free vector only represents the shift from A to B. You can calculate a free vector by subtracting the initial point, A, from the terminal one, B. One way to do so is by iterating over the consecutive pairs of coordinates with a list comprehension.
A free vector is essentially a bound vector translated to the origin of the coordinate system, so it begins at zero.
The length of a vector, also known as its magnitude, is the distance between its initial and terminal points, 𝐴 and 𝐵, which you can calculate using the Euclidean norm:
|\vec{AB}| = \sqrt{\sum_{i=1}^{n} \left(B_i - A_i\right)^2}
This formula calculates the length of the 𝑛-dimensional vector 𝐴𝐵, by summing the squares of the differences between the coordinates of points 𝐴 and 𝐵 in each dimension indexed by 𝑖. For a free vector, the initial point, 𝐴, becomes the origin of the coordinate system—or zero—which simplifies the formula, as you only need to square the coordinates of your vector.
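In code, that formula might look like the following (reusing points A and B from the earlier snippet; math.dist has been in the standard library since Python 3.8):

>>> import math
>>> A = [1, 2, 3]
>>> B = [3, 2, 1]
>>> math.sqrt(sum((b - a) ** 2 for a, b in zip(A, B)))
2.8284271247461903
>>> math.dist(A, B)  # the same Euclidean distance, built in
2.8284271247461903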
Read the full article at https://realpython.com/python-absolute-value/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: How to Find an Absolute Value in Python
In this quiz, you’ll test your understanding of How to Find an Absolute Value in Python.
By working through this quiz, you’ll revisit key concepts such as how to use Python’s built-in functions to compute absolute values, apply them in mathematical operations, and handle different data types effectively.
Django Weblog
Django security releases issued: 5.2.2, 5.1.10, and 4.2.22
In accordance with our security release policy, the Django team is issuing releases for Django 5.2.2, Django 5.1.10, and Django 4.2.22. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.
CVE-2025-48432: Potential log injection via unescaped request path
Internal HTTP response logging used request.path directly, allowing control characters (e.g. newlines or ANSI escape sequences) to be written unescaped into logs. This could enable log injection or forgery, letting attackers manipulate log appearance or structure, especially in logs processed by external systems or viewed in terminals.
Although this does not directly impact Django's security model, it poses risks when logs are consumed or interpreted by other tools. To fix this, the internal django.utils.log.log_response() function now escapes all positional formatting arguments using a safe encoding.
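To illustrate the class of problem (a rough, hypothetical sketch, not Django's actual patch), an attacker-controlled path containing control characters can forge extra log lines unless it is escaped first:

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("example.request")  # hypothetical logger name

def log_response_unsafe(path):
    # Vulnerable pattern: a path like "/x\nINFO:example.request:forged entry"
    # would inject a second, fake log line
    logger.info("Bad Request: %s", path)

def log_response_safe(path):
    # Illustrative fix: escape control characters (newlines, ANSI codes)
    # before formatting, similar in spirit to the patched log_response()
    escaped = path.encode("unicode_escape").decode("ascii")
    logger.info("Bad Request: %s", escaped)

log_response_safe("/admin\nINFO:forged entry\x1b[31m")
# Emits a single line with the escapes shown literally:
# INFO:example.request:Bad Request: /admin\nINFO:forged entry\x1b[31m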
Thanks to Seokchan Yoon (https://ch4n3.kr/) for the report.
This issue has severity "low" according to the Django security policy.
Affected supported versions
- Django main
- Django 5.2
- Django 5.1
- Django 4.2
Resolution
Patches to resolve the issue have been applied to Django's main, 5.2, 5.1, and 4.2 branches. The patches may be obtained from the following changesets.
CVE-2025-48432: Potential log injection via unescaped request path
- On the main branch
- On the 5.2 branch
- On the 5.1 branch
- On the 4.2 branch
The following releases have been issued
- Django 5.2.2 (download Django 5.2.2 | 5.2.2 checksums)
- Django 5.1.10 (download Django 5.1.10 | 5.1.10 checksums)
- Django 4.2.22 (download Django 4.2.22 | 4.2.22 checksums)
The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E
General notes regarding security reporting
As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.