Planet Python
Last update: April 11, 2026 04:43 PM UTC
April 11, 2026
Rodrigo Girão Serrão
Personal highlights of PyCon Lithuania 2026
In this article I share my personal highlights of PyCon Lithuania 2026.
Shout out to the organisers and volunteers
This was my second time at PyCon Lithuania and, for the second time in a row, I leave with the impression that everything was very well organised and smooth. Maybe the organisers and volunteers were stressed out all the time — organising a conference is never easy — but everything looked under control all the time and well thought-through.
Thank you for an amazing experience!
And by the way, congratulations on 15 years of PyCon Lithuania. To celebrate, they even served a gigantic cake during the first networking event. The cake was at least 80cm by 30cm:
The PyCon Lithuania cake.
I'll be honest with you: I didn't expect the cake to be good. The quality of food tends to degrade when it's cooked at a large scale... But even the taste was great and the cake had three coloured layers in yellow, green, and red.
Social activities
The organisers prepared two networking events, a speakers' dinner, and three city tours (one per evening) for speakers. There was always something for you to do.
The city tour is a brilliant idea and I wonder why more conferences don't do it:
- Participants get to know a bit more of the city that's hosting the conference.
- Participants get the chance to talk to each other in a relaxed and informal environment.
- Hiring a tour guide is typically fairly cheap, especially when compared to organising a full-blown social event in a dedicated venue and with dedicated catering.
I took the city tour the last time I was at PyCon Lithuania, and taking it again was not a mistake.
The conference organisers even made sure that the city tour ended close to the location of the speakers' dinner and that the tour ended at the same time as the dinner started. Another small detail that was carefully planned.
The atmosphere of the restaurant was very pleasant and the staff there was helpful and kind, so we had a wonderful night. At some point, at our table, we noticed that the folks at the other two tables were projecting something on a big screen. There was a large curtain that partially separated our table from the other two, so we took some time to realise that an impromptu Python quiz was about to take place.
I'm (way too) competitive and immediately got up to play. After six questions, which included learning about the existence of the web framework Falcon and correctly reordering the first four sentences of the Zen of Python, I was crowned the winner:
The final score for the quiz.
The top three players got a free spin on the PyCon Lithuania wheel of fortune.
Egg hunt and swag
On each day of the conference there was an egg hunt running throughout the full day. You'd get stamps by talking to sponsors, which is a fun way of getting more people to talk...
Armin Ronacher
The Center Has a Bias
Whenever a new technology shows up, the conversation quickly splits into camps. There are the people who reject it outright, and there are the people who seem to adopt it with religious enthusiasm. For more than a year now, no topic has been more polarising than AI coding agents.
What I keep noticing is that a lot of the criticism directed at these tools is perfectly legitimate, but it often comes from people without a meaningful amount of direct experience with them. They are not necessarily wrong. In fact, many of them cite studies, polls and all kinds of sources that themselves spent time investigating and surveying. And quite legitimately they identified real issues: the output can be bad, the security implications are scary, the economics are strange and potentially unsustainable, there is an environmental impact, the social consequences are unclear, and the hype is exhausting.
But there is something important missing from that criticism when it comes from a position of non-use: it is too abstract.
There is a difference between saying “this looks flawed in principle” and saying “I used this enough to understand where it breaks, where it helps, and how it changes my work.” The second type of criticism is expensive. It costs time, frustration, and a genuine willingness to engage.
The enthusiast camp consists of true believers. These are the people who have adopted the technology despite its shortcomings, sometimes even because they enjoy wrestling with them. They have already decided that the tool is worth fitting into their lives, so they naturally end up forgiving a lot. They might not even recognize the flaws because for them the benefits or excitement have already won.
But what does the center look like? I consider myself to be part of the center: cautiously excited, but also not without criticism. By my observation though that center is not neutral in the way people imagine it to be. Its bias is not towards endorsement so much as towards engagement, because the middle ground between rejecting a technology outright and embracing it fully is usually occupied by people willing to explore it seriously enough to judge it.
Bias on Both Sides
The groups of people in discussions about new technology are oddly composed, because one side has paid the cost of direct experience and the other has not, or not to the same degree. That alone creates an asymmetry.
Take coding agents as an example. If you do not use them, or at least not for productive work, you can still criticize them on many grounds. You can say they generate sloppy code, that they lower your skills, etc. But if you have not actually spent serious time with them, then your view of their practical reality is going to be inherited from somewhere else. You will know them through screenshots, anecdotes, the most annoying users on Twitter, conference talks, company slogans, and whatever filtered back from the people who did use them. That is not nothing, but it is not the same as contact.
The problem is not that such criticism is worthless. The problem is that people often mistake non-use for neutrality. It is not. A serious opinion on a new language, framework, device, or way of working usually has some minimum buy-in. You have to cross a threshold of use before your criticism becomes grounded in the thing itself rather than in its reputation.
That threshold is inconvenient. It asks you to spend time on something that may not pay off, and to risk finding yourself at least partially won over. It is a lot to ask of people. But because that threshold exists, the measured middle is rarely populated by people who are perfectly indifferent to change. It is populated by people who were willing to move toward it enough in order to evaluate it properly.
Simultaneously, it’s important to remember that usage does not automatically create wisdom. The enthusiastic adopter might have their own distortions. They may enjoy the novelty, feel a need to justify the time they invested, or overgeneralize from the niche where the technology works wonderfully. They may simply like progress and want to be associated with it.
This is particularly visible with AI. There are clearly people who have decided that the future is here, all objections are temporary, and every workflow must now be rebuilt around agents. What makes AI weirder is that it’s such a massive shift in capabilities that has triggered a tremendous injection of money, and a meaningful number of adopters have bet their future on that technology.
So if one pole is uninformed abstraction and the other is overcommitted enthusiasm, then surely the center must sit right in the middle between them?
Engagement Is Not Endorsement
The center, I would argue, naturally needs to lean towards engagement. The reason is simple: a genuinely measured opinion on a new technology requires real engagement with it.
You do not get an informed view by trying something for 15 minutes, getting annoyed once, and returning to your previous tools. You also do not get it by admiring demos, listening to podcasts, or discussing on social media. You have to use it enough to get past both the first disappointment and the honeymoon phase. With AI tools, it seems, true understanding is a matter not of hours but of weeks of investment.
That means the people in the center are selected from a particular group: people who were willing to give the thing a fair chance without yet assuming it deserved a permanent place in their lives.
That willingness is already a bias towards curiosity and experimentation. It makes the center look more like adopters in behavior, because exploration requires use, but it does not make the center identical to enthusiasts in judgment.
This matters because from the perspective of the outright rejecter, all of these people can look the same. If someone spent serious time with coding agents, found them useful in some areas, harmful in others, and came away with a nuanced view, they may still be thrown into the same bucket as the person who thinks agents can do no wrong.
But those are not the same position at all. It’s important to recognize that engagement with those tools does not automatically imply endorsement or at the very least not blanket endorsement.
The Center Looks Suspicious
This is why discussions about new technology, and AI in particular, feel so polarized. The actual center is hard to see because it does not appear visually centered. From the outside, serious exploration can look a lot like adoption.
If you map opinions onto a line, you might imagine the middle as the point equally distant from rejection and enthusiasm. But in practice that is not how it works. The middle is shifted toward the side of the people who have actually interacted with the technology enough to say something concrete about it. That does not mean the middle has accepted the adopter’s conclusion. It means the middle has adopted some of the adopter’s behavior, because investigation requires contact.
That creates a strange effect because the people with the most grounded criticism are often also adopters. I would argue some of the best criticism of coding agents right now comes from people who use them extensively. Take Mario: he created a coding agent, yet is also one of the most vocal voices of criticism in the space. These folks can tell you in detail how the tools fail: where they waste time, where they regress code quality, where they need carefully designed tooling, where they only work well in some ecosystems, and where the whole thing falls apart.
But because those people kept using the tools long enough to learn those lessons, they can appear compromised to outsiders. And worse: if they continue to use them, contribute thoughts and criticism back, they are increasingly thrown in with the same people who are devoid of any criticism.
Failure Is Possible
This line of thinking could be seen as an inherent “pro-innovation bias”. That would be wrong, as plenty of technology deserves resistance. Many people are right to resist, and sometimes the people who never gave a technology a chance saw problems earlier than everyone else. Crypto is a good reminder: plenty of projects looked every bit as exciting as coding agents do now, and still collapsed when the economics no longer worked.
What matters here is a narrower point. The center is not biased towards novelty so much as towards contact with the thing that creates potential change. The middle ground is not between use and non-use, but between refusal and commitment, and the people in the center will often look more like adopters than skeptics, not because they have already made up their minds, but because getting an informed view requires exploration.
If you want to criticize a new thing well, you first have to get close enough to dislike it for the right reasons. And for some technologies, you also have to hang around long enough to understand what, exactly, deserves criticism.
April 10, 2026
Talk Python to Me
#544: Wheel Next + Packaging PEPs
When you pip install a package with compiled code, the wheel you get is built for CPU features from 2009. Want newer optimizations like AVX2? Your installer has no way to ask for them. GPU support? You're on your own configuring special index URLs. The result is fat binaries, nearly gigabyte-sized wheels, and install pages that read like puzzle books. A coalition from NVIDIA, Astral, and Quansight has been working on Wheel Next: a set of PEPs that let packages declare what hardware they need and let installers like uv pick the right build automatically. Just uv pip install torch and it works. I sit down with Jonathan Dekhtiar from NVIDIA, Ralf Gommers from Quansight and the NumPy and SciPy teams, and Charlie Marsh, founder of Astral and creator of uv, to dig into all of it.
Links from the show:
- Charlie Marsh: https://github.com/charliermarsh
- Ralf Gommers: https://github.com/rgommers
- Jonathan Dekhtiar: https://github.com/DEKHTIARJonathan
- NumPy CPU dispatcher: https://numpy.org/doc/stable/reference/simd/how-it-works.html
- NumPy SIMD build options: https://numpy.org/doc/stable/reference/simd/build-options.html
- Red Hat RHEL: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
- Red Hat RHEL AI: https://www.redhat.com/en/products/ai
- Red Hat's WheelNext Community Summit presentation: https://wheelnext.dev/summits/2025_03/assets/WheelNext%20Community%20Summit%20-%2006%20-%20Red%20Hat.pdf
- CUDA Toolkit releases: https://developer.nvidia.com/cuda/toolkit
- GPU packaging "requires a PEP" discussion: https://discuss.python.org/t/pep-proposal-platform-aware-gpu-packaging-and-installation-for-python/91910
- WheelNext: https://wheelnext.dev/
- WheelNext on GitHub: https://github.com/wheelnext
- PEP 817: https://peps.python.org/pep-0817/
- PEP 825 (wheel variants package format, split from PEP 817): https://discuss.python.org/t/pep-825-wheel-variants-package-format-split-from-pep-817/106196
- uv: https://docs.astral.sh/uv/
- A variant-enabled build of uv: https://astral.sh/blog/wheel-variants
- pyx: https://astral.sh/blog/introducing-pyx
- pypackaging-native: https://pypackaging-native.github.io
- PEP 784: https://peps.python.org/pep-0784/
- Watch this episode on YouTube: https://www.youtube.com/watch?v=761htncGZpU
- Episode #544 deep-dive: https://talkpython.fm/episodes/show/544/wheel-next-packaging-peps#takeaways-anchor
- Episode transcripts: https://talkpython.fm/episodes/transcript/544/wheel-next-packaging-peps
PyCharm
How (Not) to Learn Python
While listening to Mark Smith’s inspirational talk for Python Unplugged on PyTV about How to Learn Python, what caught my attention was that Mark suggested turning off some of PyCharm’s AI features to help you learn Python more effectively.
As a PyCharm user myself, I’ve found the AI-powered features beneficial in my day-to-day work; however, I never considered that I could turn certain features on or off to customize my experience. This can be done from the settings menu under Editor | General | Code Completion | Inline.
While we are at it, let’s have a look at these features and investigate in more detail why they are great for professional developers but may not be ideal for learners.
Local full line code completion suggestions
JetBrains AI credits are not consumed when you use local line completion. The completion prediction is performed using a built-in local deep learning model. To use this feature, make sure the box for Enable inline completion using language models is checked, and choose either Local or Cloud and local in the options. To see what the local model produces on its own, we will look at the predictions when only Local is selected.
When it’s selected, you see that the only code completion available out of the box in PyCharm is for Python. To make suggestions available for CSS or HTML, you need to download additional models.
When you are writing code, you will see suggestions pop up in grey with a hint for you to use Tab to complete the line.
After completing that line, you can press Enter to go to the next one, where there may be a new suggestion that you can again use Tab to complete. As you see, this can be very convenient for developers in their daily coding, as it saves time that would otherwise be spent typing obvious lines of code that follow the flow naturally.
However, for beginners, mindlessly hitting Tab and letting the model complete lines may discourage them from learning how to use the functions correctly. An alternative is to use the hint provided by PyCharm to help you choose an appropriate method from the available list, determine which parameters are needed, check the documentation if necessary, and write the code yourself. Here is what the hint looks like when code completion is turned off:
Cloud-based completion suggestions
Let’s have a look at cloud-based completion in contrast to local completion. When using cloud-based completion, next-edit suggestions are also available (which we will look at in more detail in the next section).
Cloud-based completion comes with support for multiple languages by default, and you can switch it on or off for each language individually.
Cloud-based completion provides more functionality than local model completion, but you need a JetBrains AI subscription to use it.
You may also connect to a third-party AI provider for your cloud-based completion. Since this support is still in Beta in PyCharm 2026.1, it is highly recommended to keep your JetBrains AI subscription active as a backup to ensure all features are available.
After switching to cloud-based completion, one of the differences I noticed was that it is better at multiple-line completion, which can be more convenient. However, I have also encountered situations where the completion provided too much for me, and I had to jump in to make my own modifications after accepting the suggestions.
For learners of Python, again, you may want to disable this functionality, or else commit to auditing all the suggestions in detail yourself. In addition to the danger of relying too heavily on code completion, which removes opportunities to learn, cloud code completion poses another risk for learners. Because larger suggestions require active review from the developer, learners may not be equipped to fully audit the wholesale suggestions they are accepting. Disabling this feature for learners not only encourages learning, but it can also help prevent mistakes.
Next edit suggestions
In addition to cloud-based completion, JetBrains AI Pro, Ultimate, and Enterprise users are able to take advantage of next edit suggestions.
When they are enabled, every time you make changes to your code, for example, renaming a variable, you will be given suggestions about other places that need to be changed.
And when you press Tab, the changes will be made automatically. You can also customize this behavior so you can see previews of the changes and jump continuously to the next edit until no more are suggested.
This is, no doubt, a very handy feature. It can help you avoid some careless mistakes, like forgetting to refactor your code when you make changes. However, for learners, thinking about what needs to be done is a valuable thought exercise, and using this feature can deprive them of some good learning opportunities.
Conclusion
PyCharm offers a lot of useful features to smooth out your day-to-day development workflow. However, these features may be too powerful, and even too convenient, for those who have just started working with Python and need to learn by making mistakes. It is good to use AI features to improve our work, but we also need to double-check the results and make sure that we want what the AI suggests.
To learn more about how to level up your Python skills, I highly recommend watching Mark’s talk on PyTV and checking out all the AI features that JetBrains AI has to offer. I hope you will find the perfect way to integrate them into your work while remaining ready to turn them off when you plan to learn something new.
Ahmed Bouchefra
Build Your Own AI Meme Matcher: A Beginner's Guide to Computer Vision with Python
Have you ever wondered how Snapchat filters know exactly where your eyes and mouth are? Or how your phone can unlock just by looking at your face? The magic behind this is called Computer Vision, a field of Artificial Intelligence that allows computers to “see” and understand digital images.
Today, we are going to build something incredibly fun using Computer Vision: a Real-Time Meme Matcher.
Point your webcam at yourself, make a shocked face, and watch as the app instantly matches you with the “Overly Attached Girlfriend” meme. Smile and raise your hand, and Leonardo DiCaprio raises a glass right back at you.
But this isn’t just a fun project. We are going to build this using Object-Oriented Programming (OOP). OOP is a professional coding style that makes your code clean, organized, and easy to upgrade. By the end of this tutorial, you will have a working AI app and a solid understanding of how professional software is structured.
Let’s dive in!
Prerequisites
Before we start coding, make sure you have the following ready:
- Python 3.11 or newer installed on your computer.
- A working webcam.
- A folder named assets in your project directory containing a few popular meme images (like success_kid.jpg, disaster_girl.jpg, etc.).
You will also need to install a few Python libraries. Open your terminal or command prompt and run:
pip install mediapipe opencv-python numpy
The Theory: How Does It Work?
Before we look at the code, let’s understand the two main concepts powering our application: Computer Vision (Facial Landmarks) and Object-Oriented Programming.
1. Facial Landmarks (How the AI “Sees” You)
We are using a Google library called MediaPipe. When you feed an image to MediaPipe, it places a virtual “mesh” of 478 invisible dots (called landmarks) over your face.
To figure out your expression, we use simple math. For example, how do we know if your mouth is open in surprise?
We measure the vertical distance between the dot on your top lip and the dot on your bottom lip.
If the distance is large, your mouth is open! We do the same for your eyes and eyebrows to calculate “scores” for surprise, smiling, or concern.
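As a tiny illustration of that measurement (the coordinates below are made-up example values; MediaPipe reports landmark positions as normalised x/y values between 0 and 1):
import numpy as np

# Made-up example positions for the upper-lip and lower-lip landmarks.
top_lip = np.array([0.50, 0.62])
bottom_lip = np.array([0.50, 0.70])

# The straight-line distance between them: a large gap means the mouth is open.
mouth_openness = np.linalg.norm(top_lip - bottom_lip)
print(round(mouth_openness, 2))  # 0.08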
2. Object-Oriented Programming (OOP)
Instead of writing one massive, confusing block of code, OOP allows us to break our program into separate components called Classes.
Think of a Class as a blueprint.
For our Meme Matcher, we will create three distinct classes, each with a “Single Responsibility” (a golden rule of coding):
- ExpressionAnalyzer (The Brain): Handles the AI math and MediaPipe.
- MemeLibrary (The Database): Loads the images and compares the user's face to the memes.
- MemeMatcherApp (The UI): Opens the webcam and draws the pictures on the screen.
Step 1: Building the Brain
Let’s start by creating the class that does all the heavy lifting. Create a file named meme_matcher.py and import the necessary tools. Then, we will define our first class.
import cv2
import numpy as np
import mediapipe as mp
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor
import pickle
import os
import subprocess
class ExpressionAnalyzer:
    """
    The ExpressionAnalyzer class acts as the 'Brain' of our project.
    It encapsulates (hides away) the complex MediaPipe machine learning logic.
    """
    # Class Variables: Landmark indices for eyes, eyebrows, and mouth
    LEFT_EYE_UPPER = [159, 145, 158]
    LEFT_EYE_LOWER = [23, 27, 133]
    RIGHT_EYE_UPPER = [386, 374, 385]
    RIGHT_EYE_LOWER = [253, 257, 362]
    LEFT_EYEBROW = [70, 63, 105, 66, 107]
    RIGHT_EYEBROW = [300, 293, 334, 296, 336]
    MOUTH_OUTER = [61, 291, 39, 181, 0, 17, 269, 405]
    MOUTH_INNER = [78, 308, 95, 88]
    NOSE_TIP = 4

    def __init__(self, frame_skip: int = 2):
        self.last_features = None
        self.frame_counter = 0
        self.frame_skip = frame_skip
        # Download the required AI models automatically
        self.face_model_path = self._download_model(
            "face_landmarker.task",
            "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task"
        )
        self.hand_model_path = self._download_model(
            "hand_landmarker.task",
            "https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task"
        )
        # Initialize MediaPipe objects for both video and images
        self.face_mesh_video = self._init_face_landmarker(video_mode=True)
        self.hand_detector_video = self._init_hand_landmarker(video_mode=True)
        self.face_mesh_image = self._init_face_landmarker(video_mode=False)
        self.hand_detector_image = self._init_hand_landmarker(video_mode=False)
Understanding the Brain
In the code above, we define lists of numbers like LEFT_EYE_UPPER. These are the exact dot numbers (out of the 478) that outline the eye.
The __init__ method is a special function called a constructor.
Whenever we create an ExpressionAnalyzer, this code runs automatically to set everything up.
It downloads the MediaPipe AI models from Google’s servers and loads them into memory so they are ready to process faces.
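The article does not show the bodies of the _download_model, _init_face_landmarker, and _init_hand_landmarker helpers that the constructor calls. Here is one possible sketch of them using MediaPipe's Tasks API and urllib for the download; the option values and download approach are my assumptions rather than the author's exact code, and these methods would live inside the ExpressionAnalyzer class alongside the methods below.
    # A possible sketch of the helper methods called above (not shown in the
    # original article); the details are assumptions, not the author's code.
    def _download_model(self, filename: str, url: str) -> str:
        """Downloads a .task model file once and returns its local path."""
        import urllib.request
        if not os.path.exists(filename):
            urllib.request.urlretrieve(url, filename)
        return filename

    def _init_face_landmarker(self, video_mode: bool):
        """Creates a MediaPipe FaceLandmarker for video streams or single images."""
        options = mp.tasks.vision.FaceLandmarkerOptions(
            base_options=mp.tasks.BaseOptions(model_asset_path=self.face_model_path),
            running_mode=(mp.tasks.vision.RunningMode.VIDEO if video_mode
                          else mp.tasks.vision.RunningMode.IMAGE),
            num_faces=1,
        )
        return mp.tasks.vision.FaceLandmarker.create_from_options(options)

    def _init_hand_landmarker(self, video_mode: bool):
        """Creates a MediaPipe HandLandmarker for video streams or single images."""
        options = mp.tasks.vision.HandLandmarkerOptions(
            base_options=mp.tasks.BaseOptions(model_asset_path=self.hand_model_path),
            running_mode=(mp.tasks.vision.RunningMode.VIDEO if video_mode
                          else mp.tasks.vision.RunningMode.IMAGE),
            num_hands=2,
        )
        return mp.tasks.vision.HandLandmarker.create_from_options(options)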
Next, we add the logic to extract features:
# ... (Add this inside the ExpressionAnalyzer class) ...

    def extract_features(self, image: np.ndarray, is_static: bool = False) -> dict:
        """Analyzes an image and returns facial/hand features as a dictionary."""
        face_landmarker = self.face_mesh_image if is_static else self.face_mesh_video
        hand_landmarker = self.hand_detector_image if is_static else self.hand_detector_video
        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb)
        if is_static:
            face_res = face_landmarker.detect(mp_image)
            hand_res = hand_landmarker.detect(mp_image)
        else:
            self.frame_counter += 1
            if self.frame_counter % self.frame_skip != 0:
                return getattr(self, "last_features", None)
            face_res = face_landmarker.detect_for_video(mp_image, self.frame_counter)
            hand_res = hand_landmarker.detect_for_video(mp_image, self.frame_counter)
        if not face_res.face_landmarks:
            return None
        landmarks = face_res.face_landmarks[0]
        landmark_array = np.array([[l.x, l.y] for l in landmarks])
        # Calculate the mathematical features
        features = self._compute_features(landmark_array, hand_res)
        self.last_features = features
        return features

    def _compute_features(self, landmark_array: np.ndarray, hand_res) -> dict:
        """Helper function to calculate Eye Aspect Ratio (How open the eye is)"""
        def ear(upper, lower):
            vert = np.linalg.norm(landmark_array[upper] - landmark_array[lower], axis=1).mean()
            horiz = np.linalg.norm(landmark_array[upper[0]] - landmark_array[upper[-1]])
            return vert / (horiz + 1e-6)

        left_ear = ear(self.LEFT_EYE_UPPER, self.LEFT_EYE_LOWER)
        right_ear = ear(self.RIGHT_EYE_UPPER, self.RIGHT_EYE_LOWER)
        avg_ear = (left_ear + right_ear) / 2.0
        # Mouth calculations
        mouth_top, mouth_bottom = landmark_array[13], landmark_array[14]
        mouth_height = np.linalg.norm(mouth_top - mouth_bottom)
        mouth_left, mouth_right = landmark_array[61], landmark_array[291]
        mouth_width = np.linalg.norm(mouth_left - mouth_right)
        mouth_ar = mouth_height / (mouth_width + 1e-6)
        # Eyebrow calculations
        left_brow_y = landmark_array[self.LEFT_EYEBROW][:, 1].mean()
        right_brow_y = landmark_array[self.RIGHT_EYEBROW][:, 1].mean()
        left_eye_center = landmark_array[self.LEFT_EYE_UPPER + self.LEFT_EYE_LOWER][:, 1].mean()
        right_eye_center = landmark_array[self.RIGHT_EYE_UPPER + self.RIGHT_EYE_LOWER][:, 1].mean()
        avg_brow_h = ((left_eye_center - left_brow_y) + (right_eye_center - right_brow_y)) / 2.0
        # Check for hands
        num_hands = len(hand_res.hand_landmarks) if hand_res.hand_landmarks else 0
        hand_raised = 1.0 if num_hands > 0 else 0.0
        return {
            'eye_openness': avg_ear,
            'mouth_openness': mouth_ar,
            'eyebrow_height': avg_brow_h,
            'hand_raised': hand_raised,
            'surprise_score': avg_ear * avg_brow_h * mouth_ar,
            'smile_score': (1.0 - mouth_ar),
        }
This section might look heavily mathematical, but it’s just measuring distances!
For instance, mouth_height calculates the distance from the top lip to the bottom lip.
We bundle all these measurements into a neat little package (a Python dictionary) and return it.
Step 2: Building the Database
Now that our brain can understand expressions, we need a library to hold our memes.
class MemeLibrary:
    """
    Acts as a database for our memes.
    It 'has-a' relationship with ExpressionAnalyzer (Dependency Injection).
    """
    CACHE_FILE = "meme_features_cache.pkl"

    def __init__(self, analyzer: ExpressionAnalyzer, assets_folder: str = "assets", meme_height: int = 480):
        self.analyzer = analyzer
        self.assets_folder = assets_folder
        self.meme_height = meme_height
        self.memes = []
        self.meme_features = []
        self.feature_keys = ['surprise_score', 'smile_score', 'hand_raised', 'eye_openness', 'mouth_openness', 'eyebrow_height']
        self.feature_weights = np.array([25, 20, 25, 20, 25, 20])
        self.feature_factors = np.array([10, 10, 15, 5, 5, 5])
        self.load_memes()

    def load_memes(self):
        """Loads memes from disk or a cache file to save time."""
        if os.path.exists(self.CACHE_FILE):
            with open(self.CACHE_FILE, "rb") as f:
                self.memes, self.meme_features = pickle.load(f)
            return
        assets_path = Path(self.assets_folder)
        image_files = list(assets_path.glob("*.jpg")) + list(assets_path.glob("*.png"))
        # Analyze multiple memes at the same time
        with ThreadPoolExecutor() as executor:
            results = list(executor.map(self._process_single_meme, image_files))
        for r in results:
            if r:
                meme, features = r
                self.memes.append(meme)
                self.meme_features.append(features)
        with open(self.CACHE_FILE, "wb") as f:
            pickle.dump((self.memes, self.meme_features), f)

    def _process_single_meme(self, img_file: Path) -> tuple:
        img = cv2.imread(str(img_file))
        if img is None: return None
        h, w = img.shape[:2]
        scale = self.meme_height / h
        img_resized = cv2.resize(img, (int(w * scale), self.meme_height))
        features = self.analyzer.extract_features(img_resized, is_static=True)
        if features is None: return None
        return {'image': img_resized, 'name': img_file.stem.replace('_', ' ').title(), 'path': str(img_file)}, features

    def compute_similarity(self, features1: dict, features2: dict) -> float:
        """Mathematical formula to compare two dictionaries of facial features."""
        if features1 is None or features2 is None: return 0.0
        vec1 = np.array([features1.get(k, 0) for k in self.feature_keys])
        vec2 = np.array([features2.get(k, 0) for k in self.feature_keys])
        diff = np.abs(vec1 - vec2)
        similarity = np.exp(-diff * self.feature_factors)
        return float(np.sum(self.feature_weights * similarity))

    def find_best_match(self, user_features: dict) -> tuple:
        if user_features is None or not self.memes: return None, 0.0
        scores = np.array([self.compute_similarity(user_features, mf) for mf in self.meme_features])
        if len(scores) == 0: return None, 0.0
        best_idx = int(np.argmax(scores))
        return self.memes[best_idx], scores[best_idx]
The Magic of Dependency Injection
Did you notice how the __init__ method takes analyzer: ExpressionAnalyzer as an argument?
This is a concept called Dependency Injection.
Instead of the Library trying to build its own AI model, we just hand it the Brain we already built. This keeps our code completely separate and organized!
The find_best_match function is where the matching happens.
It takes the dictionary of your face (how wide your eyes are, etc.) and compares it to the dictionaries of all the memes.
The meme with the closest numbers wins!
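To make the flow concrete, here is a small usage sketch that wires the two classes together and matches a single photo; the assets folder and the me.jpg file name are placeholders for this example:
# Wire the Brain into the Library and match one still photo.
analyzer = ExpressionAnalyzer()
library = MemeLibrary(analyzer=analyzer, assets_folder="assets")

photo = cv2.imread("me.jpg")  # placeholder test image
features = analyzer.extract_features(photo, is_static=True)
best_meme, score = library.find_best_match(features)
if best_meme:
    print(f"Best match: {best_meme['name']} (score {score:.1f})")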
Step 3: Building the App Controller
With our AI brain and meme database built, it’s time to bring them to life! We need an application class to turn on your webcam, capture the video, and draw the results on your screen.
class MemeMatcherApp:
    """
    The main Application class.
    It initializes the other classes and contains the main while loop.
    """
    def __init__(self, assets_folder="assets"):
        self.analyzer = ExpressionAnalyzer()
        self.library = MemeLibrary(analyzer=self.analyzer, assets_folder=assets_folder)

    def run(self):
        cap = cv2.VideoCapture(0)
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
        print("\n🎥 Camera started! Press 'q' to quit\n")
        while cap.isOpened():
            ret, frame = cap.read()
            if not ret: break
            frame = cv2.flip(frame, 1)  # Mirror effect
            # 1. Ask the Analyzer to look at the webcam frame
            user_features = self.analyzer.extract_features(frame)
            # 2. Ask the Library to find the best matching meme
            best_meme, score = self.library.find_best_match(user_features)
            # 3. Handle the User Interface (Displaying the result)
            h, w = frame.shape[:2]
            if best_meme:
                meme_img = best_meme['image']
                meme_h, meme_w = meme_img.shape[:2]
                scale = h / meme_h
                new_w = int(meme_w * scale)
                meme_resized = cv2.resize(meme_img, (new_w, h))
                display = np.zeros((h, w + new_w, 3), dtype=np.uint8)
                display[:, :w] = frame
                display[:, w:w + new_w] = meme_resized
                # Draw UI Text boxes
                cv2.rectangle(display, (5, 5), (200, 45), (0, 0, 0), -1)
                cv2.putText(display, "YOU", (10, 35), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
                cv2.rectangle(display, (w + 5, 5), (w + new_w - 5, 75), (0, 0, 0), -1)
                cv2.putText(display, best_meme['name'], (w + 10, 35), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2)
            else:
                display = frame
                cv2.putText(display, "No face detected!", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
            cv2.imshow("Meme Matcher - Press Q to quit", display)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()
The Infinite Loop
The core of any video application is a while loop.
The application reads one picture from your webcam, asks the ExpressionAnalyzer for the features, asks the MemeLibrary for a match, glues the webcam picture and the meme picture together side-by-side using NumPy, and displays it.
Then, it repeats this instantly for the next frame!
Step 4: Putting it All Together
Finally, we just need to start the application. At the very bottom of your file, add the entry point:
if __name__ == "__main__":
print("Meme Matcher Starting...\n")
# Create the application object and run it
app = MemeMatcherApp(assets_folder="assets")
app.run()
Conclusion
Congratulations! You have just built a complex Artificial Intelligence application using advanced Computer Vision techniques.
More importantly, you built it the right way. By structuring your code using Object-Oriented Programming, your project is scalable. Want to add a Graphical User Interface (GUI) with buttons later? You don’t have to touch the math inside the Brain or the Database; you only have to modify the App class.
To see the real magic, download a few distinct meme images, put them in an assets folder next to your script, and run it.
Try raising your eyebrows, opening your mouth wide, or throwing up a peace sign.
Happy coding!
Check out all our books that you can read for free from this page https://10xdev.blog/library
Real Python
The Real Python Podcast – Episode #290: Advice on Managing Projects & Making Python Classes Friendly
What goes into managing a major project? What techniques can you employ for a project that's in crisis? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.
Quiz: Exploring Protocols in Python
In this quiz, you’ll test your understanding of Exploring Protocols in Python.
The questions review Python protocols, how they define required methods and attributes, and how static type checkers use them. You’ll also explore structural subtyping, generic protocols, and subprotocols.
This quiz helps you confirm the concepts covered in the course and shows you where to focus further study. If you want to review the material, the course covers these topics in depth at the link above.
April 09, 2026
Rodrigo Girão Serrão
Who wants to be a millionaire: iterables edition
Play this short quiz to test your Python knowledge!
At PyCon Lithuania 2026 I did a lightning talk where I presented a “Who wants to be a millionaire?” Python quiz, themed around iterables. There's a whole performance during the lightning talk, which was recorded and will eventually be linked from here. This article includes only the four questions, the options presented, and a basic system that lets you check whether you got them right.
Question 1
This is an easy one to get you started. It makes more sense if you watch the performance of the lightning talk.
What is the output of the following Python program?
print("Hello, world!")
- Hello, world!
- Hello world!
- Hello world
- Hello world!!
Question 2
What is the output of the following Python program?
squares = (x ** 2 for x in range(3))
print(type(squares))
- <class 'generator'>
- <class 'gen_expr'>
- <class 'list'>
- <class 'tuple'>
Question 3
This was a reference to the talk I'd given earlier today, where I talked about tee, the only object in itertools that is not an iterable.
Out of the 20, how many objects in itertools are iterables?
- 19
- 20
- 1
- 0
Question 4
What is the output of the following Python program?
from itertools import *
print(sum(chain.from_iterable(chain(*next(
islice(permutations(islice(batched(pairwise(
count()),5),3,9)),15,None)))))
- 1800
- 0
- 🇱🇹❤️🐍
- SyntaxError
uv skills for coding agents
This article shares two skills you can add to your coding agents so they use uv workflows.
I have fully adopted uv into my workflows and most of the time I want my coding agents to use uv workflows as well, like when running any Python code or managing and running scripts that may or may not have dependencies.
To make this more convenient for me, I created two SKILL.md files for two of the most common workflows that the coding agents get wrong on the first few tries:
- python-via-uv: this skill tells the agent that it should use uv whenever it wants to run any piece of Python code, be it one-liners or scripts. This is relevant because I don't even have the command python/python3 in the shell path, so whenever the LLM tries running something with python ..., it fails.
- uv-script-workflow: this skill is specifically for when the agent wants to create and run a script. It instructs the LLM to initialise the script with uv init --script ... and then tells it about the relevant commands to manage the script dependencies (illustrated in the sketch after this list).
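For context, here is roughly the command sequence that the uv-script-workflow skill points the agent at; the script name hello.py and the requests dependency are just placeholders for this illustration:
uv init --script hello.py          # create the script with inline metadata
uv add --script hello.py requests  # record a dependency in the script's metadata
uv run hello.py                    # run it in an ephemeral environment with its dependencies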
The two skills also add a note about sandboxing, since uv's default cache directory will be outside your sandbox. When that's the case, the agent is already instructed to use a valid temporary location for the uv cache.
Installing a skill usually just means dropping a Markdown file in the correct folder, but you should check the documentation for the tools you use.
Here are the two skills for you to download:
I also included the skills verbatim here, for your convenience:
Skill for python-via-uv
---
name: python-via-uv
description: Enforce Python execution through `uv` instead of direct interpreter calls. Use when Codex needs to run Python scripts, modules, one-liners, tools, test runners, or package commands in a workspace and should avoid invoking `python` or `python3` directly.
---
# Python Via Uv
Use `uv` for every Python command.
Do not run `python`.
Do not run `python3`.
Do not suggest `python` or `python3` in instructions unless the user explicitly requires them and the constraint must be called out as a conflict.
## Execution Rules
When sandboxed, set `UV_CACHE_DIR` to a temporary directory the agent can write to before running `uv` commands.
Prefer these patterns:
- Run a script: `UV_CACHE_DIR=/tmp/uv-cache uv run path/to/script.py`
- Run a module: `UV_CACHE_DIR=/tmp/uv-cache uv run -m package.module`
- Run a one-liner: `UV_CACHE_DIR=/tmp/uv-cache uv run python -c "print('hello')"`
- Run a tool exposed by dependencies: `UV_CACHE_DIR=/tmp/uv-cache uv run tool-name`
- Add a dependency for an ad hoc command: `UV_CACHE_DIR=/tmp/uv-cache uv run --with <package> python -c "..."`
## Notes
Using `python` inside `uv run ...` is acceptable because `uv` is still the entrypoint controlling interpreter selection and environment setup.
If the workspace already defines a project-specific temporary cache directory, prefer that over `/tmp/uv-cache`.
If a command example or existing documentation uses `python` or `python3` directly, translate it to the closest `uv` form before executing it....
Real Python
Quiz: Reading Input and Writing Output in Python
In this quiz, you’ll test your understanding of Reading Input and Writing Output in Python.
By working through this quiz, you’ll revisit taking keyboard input with input(), showing results with print(), formatting output, and handling basic input types.
This quiz helps you practice building simple interactive scripts and reinforces best practices for clear console input and output.
James Bennett
Let’s talk about LLMs
Everybody seems to agree we’re in the middle of something, though what, exactly, seems to be up for debate. It might be an unprecedented revolution in productivity and capabilities, perhaps even the precursor to a technological “singularity” beyond which it’s impossible to guess what the world might look like. It might be just another vaporware hype cycle that will blow over. It might be a dot-com-style bubble that will lead to a big crash but still leave us with something useful (the way the dot-com bubble drove mass adoption of the web). It might be none of those things.
Many thousands of words have already been spent arguing variations of these positions. So of course today I’m going to throw a few thousand more words at it, because that’s what blogs are for. At least all the ones you’ll read here were written by me (and you can pry my em-dashes from my cold, dead hands).
Terminology, and picking a lane
But first, a couple quick notes:
I’m going to be using the terms “LLM” and “LLMs” almost exclusively in this post, because I think the precision is useful. “AI” is a vague and overloaded term, and it’s too easy to get bogged down in equivocations and debates about what exactly someone means by “AI”. And virtually everything that’s contentious right now about programming and “AI” is really traceable specifically to the advent of large language models. I suppose a slightly higher level of precision might come from saying “GPT” instead, but OpenAI keeps trying to claim that one as their own exclusive term, which is a different sort of unwelcome baggage. So “LLMs” it is.
And when I talk about “LLM coding”, I mean use of an LLM to generate code in some programming language. I use this as an umbrella term for all such usage, whether done under human supervision or not, whether used as the sole producer of code (with no human-generated code at all) or not, etc.
I’m also going to try to limit my comments here to things directly related to technology and to programming as a profession, because that’s what I know (I have a degree in philosophy, so I’m qualified to comment on some other aspects of LLMs, but I’m deliberately staying away from them in this post because I find a lot of those debates tedious and literally sophomoric, as in reminding me of things I was reading and discussing when I was a sophomore).
If you’re using an LLM in some other field, well, I probably don’t know that field well enough to usefully comment on it. Having seen some truly hot takes from people who didn’t follow this principle, I’ve thought several times that we really need some sort of cute portmanteau of “LLM” and “Gell-Mann Amnesia” for the way a lot of LLM-related discourse seems to be people expecting LLMs to take over every job and field except their own.
No silver bullet
A few years ago I wrote about Fred Brooks’ No Silver Bullet, and said I think it may have been the best thing Brooks ever wrote. If you’ve never read No Silver Bullet, I strongly recommend you do so, and I recommend you read the whole thing for yourself (rather than just a summary of it).
No Silver Bullet was published at a time when computing hardware was advancing at an incredible rate, but our ability to build software was not even close to keeping up. And so Brooks made a bold prediction about software:
There is no single development, in either technology or management technique, which by itself promises even a single order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.
To support this he looked at sources of difficulty in software development, and assigned them to two broad categories (emphasis as in the original):
Following Aristotle, I divide them into essence—the difficulties inherent in the nature of the software—and accidents—those difficulties that today attend its production but that are not inherent.
A classic example is memory management: some programming languages require the programmer to manually allocate, keep track of, and free memory, which is a source of difficulty. And this is accidental difficulty, because there’s nothing which inherently requires it; plenty of other programming languages have automatic memory management.
But other sources of difficulty are different, and seem to be inherent to software development itself. Here’s one of the ways Brooks summarizes it (emphasis matches what’s in my copy of No Silver Bullet):
The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions. This essence is abstract, in that the conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed.
I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared to the conceptual errors in most systems.
If this is true, building software will always be hard. There is inherently no silver bullet.
And to drive the point home, he also explains the diminishing returns of only addressing accidental difficulty:
How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.
This is a straightforward mathematical argument. If its two empirical premises—that the accidental/essential distinction is real and that the accidental difficulty remaining today does not represent 90%+ of total—are true, then the conclusion which rules out an order-of-magnitude gain from reducing accidental difficulty follows automatically.
I think most programmers believe the first premise, at least implicitly, and once the first premise is accepted it becomes very difficult to argue against the second. In fact, I’d personally go further than the minimum required for Brooks’ argument. His math holds up as long as accidental difficulty doesn’t reach that 90%+ mark, since anything lower makes a 10x improvement from eliminating accidental difficulty impossible. But I suspect accidental difficulty, today, is a vastly smaller proportion of the total than that. In a lot of mature domains of programming I’d be surprised if there’s even a doubling of productivity still available from a complete elimination of remaining accidental difficulty.
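To make the arithmetic concrete (my illustration, not Brooks'): if a fraction f of total effort is accidental and you could eliminate it entirely, the overall speedup is capped at 1 / (1 - f), so an order-of-magnitude gain needs f to be at least 0.9.
# A quick check of the bound: removing an "accidental" share f of the work
# caps the end-to-end speedup at 1 / (1 - f).
for f in (0.5, 0.83, 0.9):
    print(f"accidental share {f:.0%} -> max speedup {1 / (1 - f):.1f}x")
# 50% -> 2.0x, 83% -> 5.9x, 90% -> 10.0x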
There’s also a section in No Silver Bullet about potential “hopes for the silver” which addresses “AI”, though what Brooks considered to be “AI” (and there is a tangent about clarifying exactly what the term means) was significantly different from what’s promoted today as “AI”. The most apt comparison to LLMs in No Silver Bullet is actually not the discussion of “AI”, it’s the discussion of automatic programming, which has meant a lot of different things over the years, but was defined by Brooks at the time as “the generation of a program for solving a problem from a statement of the problem specifications”. That’s pretty much the task for which LLMs are currently promoted to programmers.
But Brooks quotes David Parnas on the topic: “automatic programming always has been a euphemism for programming with a higher-level language than was presently available to the programmer.” And Brooks did not believe higher-level languages on their own could be a silver bullet. As he put it in a discussion of the Ada language:
It is, after all, just another high-level language, and the biggest payoff from such languages came from the first transition, up from the accidental complexities of the machine into the more abstract statement of step-by-step solutions. Once those accidents have been removed, the remaining ones are smaller, and the payoff from their removal will surely be less.
Many people are currently promoting LLMs as a revolutionary step forward for software development, but are doing so based almost exclusively on claims about LLMs’ ability to generate code at high speed. The No Silver Bullet argument poses a problem for these claims, since it sets a limit on how much we can gain from merely generating code more quickly.
In chapter 2 of The Mythical Man-Month, Brooks suggested as a scheduling guideline that five-sixths (83%) of time on a “software task” would be spent on things other than coding, which puts a pretty low cap on productivity gains from speeding up just the coding. And even if we assume LLMs reduce coding time to zero, and go with the more generous No Silver Bullet formulation which merely predicts no order-of-magnitude gain from a single development, that’s still less than the gain Brooks himself believed could come from hiring good human programmers. From chapter 3 of The Mythical Man-Month:
Programming managers have long recognized wide productivity variations between good programmers and poor ones. But the actual measured magnitudes have astounded all of us. In one of their studies, Sackman, Erikson, and Grant were measuring performances of a group of experienced programmers. Within just this group the ratios between best and worst performances averaged about 10:1 on productivity measurements and an amazing 5:1 on program speed and space measurements!
(although I’m personally skeptical of the “10x programmer” concept, the software industry overall does seem to accept it as true)
Anecdote time: much of what I’ve done over my career as a professional programmer is building database-backed web applications and services, and I don’t see much of a gain from LLMs. I suppose it looks impressive, if you’re not familiar with this field of programming, to auto-generate the skeleton of an entire application and the basic create/retrieve/update/delete HTTP handlers from no more than a description of the data you want to work with. But that capability predates LLMs: Rails’ scaffolding, for example, could do it twenty years ago.
And not just raw code generation, but also the abstractions available to work with, have progressed to the point where I basically never feel like the raw speed of production of code is holding me back. Just as Fred Brooks would have predicted, the majority of my time is spent elsewhere: talking to people who want new software (or who want existing software to be changed); finding out what it is they want and need; coming up with an initial specification; breaking it down into appropriately-sized pieces for programmers (maybe me, maybe someone else) to work on; testing the first prototype and getting feedback; preparing the next iteration; reviewing or asking for review, etc. I haven’t personally tracked whether it matches Brooks’ five-sixths estimate, but I wouldn’t be at all surprised if it did.
Given all that, just having an LLM churn out code faster than I would have myself is not going to offer me an order of magnitude improvement, or anything like it. Or as a recent popular blog post by the CEO of Tailscale put it:
AI’s direct impact on this problem is minimal. Okay, so Claude can code it in 3 minutes instead of 30? That’s super, Claude, great work.
Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself. Little of value was gained.
More simply: throwing more patches into the review queue, when the review queue still drains at the same rate as before, is not a recipe for increased velocity. Real software development involves not just a review queue but all the other steps and processes I outlined above, and more, and having an LLM generate code more quickly does not increase the speed or capacity of all those other things.
So as someone who accepts Brooks’ argument in No Silver Bullet, I am committed to believe on theoretical grounds that LLMs cannot offer “even a single order-of-magnitude improvement … in productivity, in reliability, in simplicity”. And my own experience matches up with that prediction.
Practice makes (im)perfect
But enough theory. What about the actual, empirical reality of LLM coding?
Every fan of LLMs for coding has an anecdote about their revolutionary qualities, but the non-anecdotal data points we have are a lot more mixed. For example, several times now I’ve been linked to and asked to read the DORA report on the “State of AI-assisted Software Development”. And initially it certainly seems like it’s declaring the effects of LLMs are settled, in favor of the LLMs. From its executive summary (page 3):
[T]he central question for technology leaders is no longer if they should adopt AI, but how to realize its value.
And elsewhere it makes claims like (page 34) “AI is the new normal in software development”.
But then, going back to the executive summary, things start sounding less uniformly positive:
The research reveals a critical truth: AI’s primary role in software development is that of an amplifier. It magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones.
And then (still on page 3):
The greatest returns on AI investment come not from the tools themselves, but from a strategic focus on the underlying organizational system: the quality of the internal platform, the clarity of workflows, and the alignment of teams. Without this foundation, AI creates localized pockets of productivity that are often lost to downstream chaos.
Continuing on to page 4:
AI adoption now improves software delivery throughput, a key shift from last year. However, it still increases delivery instability. This suggests that while teams are adapting for speed, their underlying systems have not yet evolved to safely manage AI-accelerated development.
“Delivery instability” is defined (page 13) in terms of two factors:
- Change fail rate: “The ratio of deployments that require immediate intervention following a deployment.”
- Rework rate: “The ratio of deployments that are unplanned but happen as a result of an incident in production.”
Later parts of the report get into more detail on this. Page 38 charts the increase in delivery instability, for example. And elsewhere in the section containing that chart, there’s a discussion of whether increases in throughput (defined by DORA as a combination of lead time for changes, deployment frequency, and failed deployment recovery time) are enough to offset or otherwise make up for this increase in instability (page 41, emphasis added by me):
Some might argue that instability is an acceptable trade-off for the gains in development throughput that AI-assisted development enables.
The reasoning is that the volume and speed of AI-assisted delivery could blunt the detrimental effects of instability, perhaps by enabling such rapid bug fixes and updates that the negative impact on the end-user is minimized.
However, when we look beyond pure software delivery metrics, this argument does not hold up. To assess this claim, we checked whether AI adoption weakens the harms of instability on our outcomes which have been hurt historically by instability.
We found no evidence of such a moderating effect. On the contrary, instability still has significant detrimental effects on crucial outcomes like product performance and burnout, which can ultimately negate any perceived gains in throughput.
And the chart on page 38 appears to show the increase in instability as quite a bit larger than the increase in throughput, in any case.
Curiously, that chart also claims a significant increase in “code quality”, and other parts of the report (page 30, for example) claim a significant increase in “productivity”, alongside the significant increase in delivery instability, which seems like it ought to be a contradiction. As far as I can tell, DORA’s source for both “productivity” and “code quality” is perceived impact as self-reported by survey respondents. Other studies and reports have designed less subjective and more quantitative ways to measure these things. For example, this much-discussed study on adoption of the Cursor LLM coding tool used the results of static analysis of the code to measure quality and complexity. And self-reported productivity impacts, in particular, ought to be a deeply suspect measure. From (to pick one relevant example) the METR early-2025 study (emphasis added by me):
This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.
LLM coding advocates have often criticized this particular study’s finding of slower development for being based on older generations of LLMs (more on that argument in a bit), but as far as I’m aware nobody’s been able to seriously rebut the finding that developers are not very effective at self-estimating their productivity. So to see DORA relying on self-estimated productivity is disappointing.
The DORA report goes on to provide a seven-part “AI capabilities model” for organizations (begins on page 49), which consists of recommendations like: strong version control practices, working in small batches, quality internal platforms, user-centric focus… all of which feel like they should be table stakes for any successful organization regardless of whether they also happen to be using LLMs.
Suppose, for sake of a silly example, that someone told you a new technology is revolutionizing surgery, but the gains are not uniformly distributed, and the best overall outcomes are seen in surgical teams where in addition to using the new thing, team members also wash their hands prior to operating. That’s not as extreme a comparison as it might sound: the sorts of practices recommended for maximizing LLM-related gains in the DORA report, and in many other similar whitepapers and reports and studies, are or ought to be as fundamental to software development as hand-washing is to surgery. The Joel Test was recommending quite a few of these practices a quarter-century ago, the Agile Manifesto implied several of them, and even back then they weren’t really new; if you dig into the literature on effective software development you can find variations of much of the DORA advice going all the way back to the 1970s and even earlier.
For a more recent data point, I’ve seen a lot of people talking about and linking me to CircleCI’s 2026 “State of Software Delivery” which, like the DORA report, claims an uneven distribution of benefits from LLM adoption, and even says (page 8) “the majority of teams saw little to no increase in overall throughput”. The CircleCI report also raises a worrying point that echoes the increase in “delivery instability” seen in the DORA report (CircleCI executive summary, page 3):
Key stability indicators show that AI-driven changes are breaking more often and taking teams longer to fix, making validation and integration the primary bottleneck.
CircleCI further reports (page 11) that, year-over-year, they see a 13% increase in recovery time for a broken main branch, and a 25% increase for broken feature branches. And (page 12) they also say failures are increasing:
[S]uccess rates on the main branch fell to their lowest level in over 5 years, to 70.8%. In other words, attempts at merging changes into production code bases now fail 30% of the time.
For comparison, their own recommended benchmark of success for main branches is 90%.
The cost of these increasing failures and the increasing time to resolve them is quantified (emphasis matches the report, page 14):
For a team pushing 5 changes to the main branch per day, going from a 90% success rate to 70% is the difference between one showstopping breakage every two days to 1.5 every single day (a 3x increase).
At just 60 minutes recovery time per failure, you’re looking at an additional 250 hours in debugging and blocked deployments every year. And that’s at a relatively modest scale. Teams pushing 500 changes per day would lose the equivalent of 12 full-time engineers.
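The arithmetic there is easy to check for yourself; here is a quick back-of-the-envelope script (mine, not the report's), assuming roughly 250 working days in a year:

changes_per_day = 5
recovery_hours_per_failure = 1.0   # "just 60 minutes recovery time per failure"
working_days_per_year = 250        # my assumption for "per year"

failures_at_90 = changes_per_day * (1 - 0.90)   # 0.5 per day: one every two days
failures_at_70 = changes_per_day * (1 - 0.70)   # 1.5 per day: a 3x increase

extra_failures_per_day = failures_at_70 - failures_at_90   # 1.0 extra per day
extra_hours_per_year = (extra_failures_per_day
                        * recovery_hours_per_failure
                        * working_days_per_year)
print(extra_hours_per_year)   # 250.0 additional hours of recovery work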
The usual response to reports like these is to claim they’re based on people using older LLMs, and the models coming out now are the truly revolutionary ones, which won’t have any of those problems. For example, this is the main argument that’s been leveled against the METR study I mentioned above. But that argument was flimsy to begin with (since it’s rarely accompanied by the kind of evidence needed to back up the claim), and its repeated usage is self-discrediting: if the people claiming “this time is the world-changing revolutionary leap, for sure” were wrong all the prior times they said that (as they have to have been, since if any prior time had actually been the revolutionary leap they wouldn’t need to say this time will be), why should anyone believe them this time?
Also, I’ve read a lot of studies and reports on LLM coding, and these sorts of findings—uneven or inconsistent impact, quality/stability declines, etc.—seem to be remarkably stable, across large numbers of teams using a variety of different models and different versions of those models, over an extended period of time (DORA does have a bit of a messy situation with contradictory claims that “code quality” is increasing while “delivery instability” is increasing even more, but as noted above that seems to be a methodological problem). The two I’ve quoted most extensively in this post (the DORA and CircleCI reports) were chosen specifically because they’re often recommended to me by advocates of LLM coding, and seem to be reasonably pro-LLM in their stances.
The other expected response to these findings is a claim that it’s not necessarily older models but older workflows which have been obsoleted, that the state of the art is no longer to just prompt an LLM and accept its output directly, but rather involves one LLM (or LLM-powered agent) generating code while one or more layers of “adversarial” ones review and fix up the code and also review each other’s reviews and responses and fixes, thus introducing a mechanism by which the LLM(s) will automatically improve the quality of the output.
I’m unaware of rigorous studies on these approaches (yet), but several well-publicized early examples do not inspire confidence. I’ll pick on Cloudflare here since they’ve been prominent advocates for using LLMs in this fashion. In their LLM rebuild of Next.js:
We wired up AI agents for code review too. When a PR was opened, an agent reviewed it. When review comments came back, another agent addressed them. The feedback loop was mostly automated.
But their public release of it, vetted through this process and, apparently, some amount of human review on top, was initially unable to run even the basic default Next.js application, and also was apparently riddled with security issues. From one disclosure post (emphasis added by me):
AI is now very good at getting a system to the point where it looks complete.
One specific problem cited was that the LLM rebuild simply did not pull in all the original tests, and therefore could miss security-critical cases those tests were checking. From the same disclosure post:
The process was feature-first: decide which viNext features existed, then port the corresponding Next.js tests. That is a sensible way to move quickly. It gives you broad happy-path coverage.
But it does not guarantee that you bring over the ugly regression tests, missing-export cases, and fail-open behavior checks that mature frameworks accumulate over years.
So middleware could look “covered” while the one test that proves it fails safely never made it over.
For example, Next.js has a dedicated test directory (test/e2e/app-dir/proxy-missing-export/) that validates what happens when middleware files lack required exports. That test was never ported because middleware was already considered “covered” by other tests.
On the whole, that post is somewhat optimistic, but considering that the Next.js rebuild was carried out by presumably knowledgeable people who presumably were following good modern practices and prompting good modern LLMs to perform a type of task those LLMs are supposed to be extremely good at—a language and framework well-represented in training data, well-documented, with a large existing test suite written in the target language to assist automated verification—I have a hard time being that optimistic.
And though I haven’t personally read through the recent alleged leak of the Claude Code source, I’ve read some commentary and analysis from people who have, and again it seems like a team that should be as well-positioned as anyone to take maximum advantage of the allegedly revolutionary capabilities of LLM coding isn’t managing to do so.
So the consistent theme here, in the studies and reports and in more recent public examples, is that being able to generate code much more quickly than before, even in 2026 with modern LLMs and modern practices, is still no guarantee of being able to deliver software much more quickly than before. As the CircleCI report puts it (page 3):
The data points to a clear conclusion: success in the AI era is no longer determined by how fast code can be written. The decisive factor is the ability to validate, integrate, and recover at scale.
And if that sounds like the kind of thing Fred Brooks used to say, that’s because it is the kind of thing Fred Brooks used to say. Raw speed of generating code is not and was not the bottleneck in software development, and speeding that up or even reducing the time to generate code to effectively zero does not have the effect of making all the other parts of software development go away or go faster.
So at this point it seems clear to me that in practice as well as in theory LLM coding does not represent a silver bullet, and it seems highly unlikely to transform into one at any point in the near future.
On being left behind
When expressing skepticism about LLM coding, a common response is that not adopting it, or even just delaying slightly in adopting it, will inevitably result in being “left behind”, or even stronger effects (for example, words like “obliterated” have been used, more than once, by acquaintances of mine who really ought to know better). LLMs are the future, it’s going to happen whether you like it or not, so get with the program before it’s too late!
I said I’ll stick to the technical mode here, but I’ll just mention in passing that the “it’s going to happen whether you like it or not” framing is something I’ve encountered a lot and found to be pretty disturbing and off-putting, and not at all conducive to changing my mind. And milder forms like “It’s undeniable that…” are rhetorically suspect. The burden of proof ought to be on the person making the claim that LLMs truly are revolutionary, but framing like this tries to implicitly shift that burden and is a rare example of literally begging the question: it assumes as given the conclusion (LLMs are in fact revolutionary) that it needs to prove.
Meanwhile, I see two possible outcomes:
- The skeptical position wins. LLM coding tools do not achieve revolutionary silver-bullet status. Perhaps they become another tool in the toolbox, like TDD or pair programming, where some people and companies are really into them. Perhaps they become just another feature of IDEs, providing functionality like boilerplate generators to bootstrap a new project (if your favorite library/framework doesn’t provide its own bootstrap anyway).
- The skeptical position loses. LLM coding tools do achieve true revolutionary silver-bullet status or beyond (consistently delivering one or more orders of magnitude improvement in software development productivity), and truly become a mandatory part of every working programmer’s tools and workflows, taking over all or nearly all generation of code.
In the first case, delayed adoption has no downside unless someone happens to be working at one of the companies that decide to mandate LLM use. And they can always pick it up at that point, if they don’t mind or if they don’t feel like looking for a new job.
As to the second case: based on what I’ve argued above about the status and prospects of LLMs up to now, I obviously think that continuing the type of progress in models and practices that’s been seen to date does not offer any viable path to a silver bullet. Which means a truly revolutionary breakthrough will have to be something sufficiently different from the current state of the art that it will necessarily invalidate many (or perhaps even all) prior LLM-based workflows in addition to invalidating non-LLM-based workflows.
And even if that doesn’t result in a completely clean-slate starting point with everyone equal—even if experience with older LLM workflows is still an advantage in the post-silver-bullet world—I don’t think it can ever be the sort of insurmountable advantage it’s often assumed to be. For one thing, even with vastly higher average productivity, there likely would not be sufficient people with sufficient pre-existing LLM experience to fill the vastly expanded demand for software that would result (this is why a lot of LLM advocates, across many fields, spend so much time talking about the Jevons paradox). For another, any true silver-bullet breakthrough would have to attack and reduce the essential difficulty of building software, rather than the accidental difficulty. Let us return once again to Brooks:
I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation.
Much of the skill required of human LLM users today consists of exactly this: specifying and designing the software as a “conceptual construct”, albeit in specific ways that can be placed into an LLM’s context window in order to have it generate code. In any true silver-bullet world, much or all of that skillset would have to be rendered obsolete, which significantly reduces the penalty for late adoption if and when the silver bullet is finally achieved.
Power to the people?
Aside from impact on professional programmers and professional software-development teams, another claim often made in favor of LLM coding is that it will democratize access to software development. With LLM coding tools, people who aren’t experienced professional programmers can produce software that solves problems they face in their day-to-day jobs and lives. Surely that’s a huge societal benefit, right? And it’s tons of fun, too!
Setting aside that the New York Times piece linked above was written by someone who is an experienced professional, I’m not convinced of this use case either.
Mostly I think this is a situation where you can’t have it both ways. It seems to be widely agreed among advocates of LLM coding that it’s a skill which requires significant understanding, practice, and experience before one is able to produce consistent useful results (this is the basis of the “adopt now or be left behind” claim dealt with in the previous section); strong prior knowledge of how to design and build good software is also generally recommended or assumed. But that’s very much at odds with the democratized-software claim: that someone with no prior programming knowledge or experience will simply pick up an LLM, ask it in plain non-technical natural language to build something, and receive a sufficiently functional result.
I think the most likely result is that a non-technical user will receive something that’s obviously not fit for purpose, since they won’t have the necessary knowledge to prompt the LLM effectively. They won’t know how to set up directories of Markdown files containing instructions and skill definitions and architectural information for their problem. They won’t have practice at writing technical specifications (whether for other humans or for LLMs) to describe what they want in sufficient detail. They won’t know how to design and architect good software. They won’t know how to orchestrate multiple LLMs or LLM-powered agents to adversarially review each other. In short, they won’t have any of the skills that are supposed to be vital for successful LLM coding use.
There’s also the possibility that “natural” human language alone will never be sufficient to specify programs, even to much more advanced LLMs or other future “AI” systems, due to inherent ambiguity and lack of precision. In that case, some type of specialized formal language for specifying programs would always be necessary. Edsger W. Dijkstra, for example, took this position and famously derided what he called “the foolishness of ‘natural language programming’”, which is worth reading for some classic Dijkstra-isms like:
When all is said and told, the “naturalness” with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious.
Another possible outcome for LLM coding by non-programmers is the often-mentioned analogy to 3D printing, which also was hyped up as a great democratizer that would let anyone design and make anything, but never delivered on that promise and, at the individual level, became a niche hobby for the small number of enthusiasts who were willing and able to put in the time, money, and effort to get moderately good at it.
But the nightmare result is that non-programmer LLM users will receive something that seems to work, and only reveals its shortcomings much later on. Given how often I see it argued that LLMs will democratize coding and write utility programs for people working in fields where privacy and confidentiality are both vital and legally mandated, I’m terrified by that potential failure mode. And I think one of the worst possible things that could happen for advocates of LLM adoption is to have the news full of stories of well-meaning non-technical people who had their lives ruined by, say, accidentally enabling a data breach with their LLM-coded helper programs, or even “just” turning loose a subtly-incorrect financial model on their business. So even if I were an advocate of LLM coding, I’d be very wary of pushing it to non-programmers.
But ultimately, the only situation in which LLMs could meaningfully democratize access to software development is one where they achieve a true silver bullet, by significantly reducing or removing essential difficulty from the software development process. And as noted above, LLM advocates seem to believe that even in the silver-bullet situation there would still be such a gap between those with pre-existing LLM usage skills and those without, that those without could never meaningfully catch up. Although I happen to disagree with that belief, it remains the case that advocates can’t have it both ways: either LLM coding will be an exclusive club for those who built up the necessary skills, XOR it will be a great democratizer and do away with the need for those skills.
Takeaways
I’m already over 6,000 words in this post, and though I could easily write many more, I should probably wrap it up.
If I had to summarize my position on LLM coding in one sentence, it would be “Please go read No Silver Bullet”. I think Brooks’ argument there is both theoretically correct and validated by empirical results, and sets some pretty strong limits on the impact LLM coding, or any other tool or technique which solely or primarily attacks accidental difficulty, can have.
Of course, limits on what we can do or gain aren’t necessarily the end of the world. Many of the foundations of computer science, from On Computable Numbers to Rice’s theorem and beyond, place inflexible limits on what we can do, but we still write software nonetheless, and we still work to advance the state of our art. So the No Silver Bullet argument is not the same as arguing that LLMs are necessarily useless, or that no gains can possibly be realized from them. But it is an argument that any gains we do realize are likely going to be incremental and evolutionary, rather than the world-changing revolution many people seem to be expecting.
Correspondingly, I think there is not a huge downside, right now, to slow or delayed adoption of LLM coding. Very few organizations have the strong fundamentals needed to absorb even a relatively moderate, incremental increase in the amount of code they generate, which I suspect is why so many studies and reports find mixed results and lots of broken CI pipelines. Not only is there no silver bullet, there especially is no quick or magical gain to be had from rushing to adopt LLM coding without first working on those fundamentals. In fact, the evidence we have says you’re more likely to hurt than help your productivity by doing so.
I also don’t think LLMs are going to meaningfully democratize coding any time soon; even if they become indispensable tools for programmers, they are likely to continue requiring users to “think like a programmer” when specifying and prompting. We would be much better served by teaching many more people how to think rigorously and reason about abstractions (and they would be much better served, too) than we would by just plopping them as-is in front of LLMs.
As for what you should be doing instead of rushing to adopt LLM coding out of fear that you’ll be left behind: I think you should be listening to what all those whitepapers and reports and studies are actually telling you, and working on fundamentals. You should be adopting and perfecting solid foundational software development practices like version control, comprehensive test suites, continuous integration, meaningful documentation, fast feedback cycles, iterative development, focus on users, small batches of work… things that have been known and proven for decades, but are still far too rare in actual real-world software shops.
If the skeptical position is wrong and it turns out LLMs truly become indispensable coding tools in the long term, well, the available literature says you’ll be set up to take the greatest possible advantage of them. And if it turns out they don’t, you’ll still be in much better shape than you were, and you’ll have an advantage over everyone who chased after wild promises of huge productivity gains by ordering their teams to just chew through tokens and generate code without working on fundamentals, and who likely wrecked their development processes by doing so.
Or as Fred Brooks put it:
The first step toward the management of disease was replacement of demon theories and humours theories by the germ theory. That very step, the beginning of hope, in itself dashed all hopes of magical solutions. It told workers that progress would be made stepwise, at great effort, and that a persistent, unremitting care would have to be paid to a discipline of cleanliness. So it is with software engineering today.
April 08, 2026
Real Python
Dictionaries in Python
Python dictionaries are a powerful built-in data type that allows you to store key-value pairs for efficient data retrieval and manipulation. Learning about them is essential for developers who want to process data efficiently. In this tutorial, you’ll explore how to create dictionaries using literals and the dict() constructor, as well as how to use Python’s operators and built-in functions to manipulate them.
By learning about Python dictionaries, you’ll be able to access values through key lookups and modify dictionary content using various methods. This knowledge will help you in data processing, configuration management, and dealing with JSON and CSV data.
By the end of this tutorial, you’ll understand that:
- A dictionary in Python is a mutable collection of key-value pairs that allows for efficient data retrieval using unique keys.
- Both dict() and {} can create dictionaries in Python. Use {} for concise syntax and dict() for dynamic creation from iterable objects.
- dict() is a class used to create dictionaries. However, it's commonly called a built-in function in Python.
- .__dict__ is a special attribute in Python that holds an object's writable attributes in a dictionary.
- Python dict is implemented as a hashmap, which allows for fast key lookups.
To get the most out of this tutorial, you should be familiar with basic Python syntax and concepts such as variables, loops, and built-in functions. Some experience with basic Python data types will also be helpful.
Get Your Code: Click here to download the free sample code that you’ll use to learn about dictionaries in Python.
Take the Quiz: Test your knowledge with our interactive “Dictionaries in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Dictionaries in Python
Test your knowledge of Python's dict data type: how to create, access, and modify key-value pairs using built-in methods and operators.
Getting Started With Python Dictionaries
Dictionaries are one of Python’s most important and useful built-in data types. They provide a mutable collection of key-value pairs that lets you efficiently access and mutate values through their corresponding keys:
>>> config = {
... "color": "green",
... "width": 42,
... "height": 100,
... "font": "Courier",
... }
>>> # Access a value through its key
>>> config["color"]
'green'
>>> # Update a value
>>> config["font"] = "Helvetica"
>>> config
{
'color': 'green',
'width': 42,
'height': 100,
'font': 'Helvetica'
}
A Python dictionary consists of a collection of key-value pairs, where each key corresponds to its associated value. In this example, "color" is a key, and "green" is the associated value.
Dictionaries are a fundamental part of Python. You’ll find them behind core concepts like scopes and namespaces as seen with the built-in functions globals() and locals():
>>> globals()
{
'__name__': '__main__',
'__doc__': None,
'__package__': None,
...
}
The globals() function returns a dictionary containing key-value pairs that map names to objects that live in your current global scope.
Python also uses dictionaries to support the internal implementation of classes. Consider the following demo class:
>>> class Number:
... def __init__(self, value):
... self.value = value
...
>>> Number(42).__dict__
{'value': 42}
The .__dict__ special attribute is a dictionary that maps attribute names to their corresponding values in Python classes and objects. This implementation makes attribute and method lookup fast and efficient in object-oriented code.
You can use dictionaries to approach many programming tasks in your Python code. They come in handy when processing CSV and JSON files, working with databases, loading configuration files, and more.
Python’s dictionaries have the following characteristics:
- Mutable: The dictionary values can be updated in place.
- Dynamic: Dictionaries can grow and shrink as needed.
- Efficient: They’re implemented as hash tables, which allows for fast key lookup.
- Ordered: Starting with Python 3.7, dictionaries keep their items in the same order they were inserted.
The keys of a dictionary have a couple of restrictions. They need to be:
- Hashable: This means that you can’t use unhashable objects like lists as dictionary keys.
- Unique: This means that your dictionaries won’t have duplicate keys.
In contrast, the values in a dictionary aren’t restricted. They can be of any Python type, including other dictionaries, which makes it possible to have nested dictionaries.
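For example, a dictionary is a perfectly valid value, which is how nested dictionaries work, while an unhashable object like a list can't be used as a key:

>>> # A dictionary as a value: nested dictionaries
>>> profile = {"name": "Ada", "skills": {"math": "expert", "python": "advanced"}}
>>> profile["skills"]["python"]
'advanced'
>>> # An unhashable object as a key fails
>>> {["a", "list"]: "value"}
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'list'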
Dictionaries are collections of pairs, so you can't insert a key without its corresponding value, or vice versa: keys and values always go in together.
Note: In some situations, you may want to add keys to a dictionary without deciding what the associated value should be. In those cases, you can use the .setdefault() method to create keys with a default or placeholder value.
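For example, .setdefault() only creates the key when it's missing and leaves existing values untouched:

>>> inventory = {"apples": 3}
>>> inventory.setdefault("bananas", 0)  # missing key: created with the default
0
>>> inventory.setdefault("apples", 0)   # existing key: value is left alone
3
>>> inventory
{'apples': 3, 'bananas': 0}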
Read the full article at https://realpython.com/python-dicts/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Implementing the Factory Method Pattern in Python
In this quiz, you’ll test your understanding of Factory Method Pattern.
This quiz guides you through the Factory Method pattern: how it separates object creation from use, the roles of clients and products, when to apply it, and how to implement flexible, maintainable Python classes.
Test your ability to spot opportunities for the pattern and build reusable, decoupled object creation solutions.
Armin Ronacher
Mario and Earendil
Today I’m very happy to share that Mario Zechner is joining Earendil.
First things first: I think you should read Mario’s post. This is his news more than it is ours, and he tells his side of it better than I could. What I want to do here is add a more personal note about why this matters so much to me, how the last months led us here, and why I am so excited to have him on board.
Last year changed the way many of us thought about software. It certainly changed the way I did. I spent much of 2025 building, probing, and questioning how to build software and, more broadly, what I want to do. If you are a regular reader of this blog, you were along for the ride. I wrote a lot, experimented a lot, and tried to get a better sense for what these systems can actually do and what kinds of companies make sense to build around them. There was, and continues to be, a lot of excitement in the air, but also a lot of noise. It has become clear to me that it’s not a question of whether AI systems can be useful but what kind of software and human-machine interactions we want to bring into the world with them.
That is one of the reasons I have been so drawn to Mario’s work and approaches.
Pi is, in my opinion, one of the most thoughtful coding agents and agent infrastructure libraries in this space. Not because it is trying to be the loudest or the fastest, but because it is clearly built by someone who cares deeply about software quality, taste, extensibility, and design. In a moment where much of the industry is racing to ship ever more quickly, often at the cost of coherence and craft, Mario kept insisting on making something solid. That matters to me a great deal.
I have known Mario for a long time, and one of the things I admire most about him is that he does not confuse velocity with progress. He has a strong sense for what good tools should feel like. He cares about details. He cares about whether something is well made. And he cares about building in a way that can last. Mario has been running Pi in a rather unusual way. He exerts back-pressure on the issue tracker and the pull requests through OSS vacations and other means.
The last year has also made something else clearer to me: these systems are not only exciting, they are also capable of producing a great deal of damage. Sometimes that damage is obvious; sometimes it looks like low-grade degradation everywhere at once. More slop, more noise, more disingenuous emails in my inbox. There is a version of this future that makes people more distracted, more alienated, and less careful with one another.
That is not a future I want to help build.
At Earendil, Colin and I have been trying to think very carefully about what a different path might look like. That is a big part of what led us to Lefos.
Lefos is our attempt to build a machine entity that is more thoughtful and more deliberate by design. Not an agent whose main purpose is to make everything a little more efficient so that we can produce even more forgettable output, but one that can help people communicate with more care, more clarity, and joy.
Good software should not aim to optimize every minute of your life, but should create room for better and more joyful experiences, better relationships, and better ways of relating to one another. Especially in communication and software engineering, I think we should be aiming for more thought rather than more throughput. We should want tools that help people be more considerate, more present, and more human. If all we do is use these systems to accelerate the production of slop, we will have missed the opportunity entirely.
This is also why Mario joining Earendil feels so meaningful to me. Pi and Lefos come from different starting points. There was a year of distance collaboration, but they are animated by a similar instinct: that quality matters, that design matters, and that trust is earned through care rather than captured through hype.
I am very happy that Pi is coming along for the ride. Colin and I care a lot about it, and we want to be good stewards of it. It has already played an important role in our own work over the last months, and I continue to believe it is one of the best foundations for building capable agents. We will have more to say soon about how we think about Pi’s future and its relationship to Lefos, but the short version is simple: we want Pi to continue to exist as a high-quality, open, extensible piece of software, and we want to invest in making that future real. As for our thoughts on Pi’s license, read more here and our company post here.
April 07, 2026
PyCoder’s Weekly
Issue #729: NumPy Music, Ollama, Iterables, and More (April 7, 2026)
#729 – APRIL 7, 2026
View in Browser »
NumPy as Synth Engine
Kenneth has “recorded” a song in a Python script. The catch? No sampling, no recording, no pre-recorded sound. Everything was done through generating wave functions in NumPy. Learn how to become a mathematical musician.
KENNETH REITZ
How to Use Ollama to Run Large Language Models Locally
Learn how to use Ollama to run large language models locally. Install it, pull models, and start chatting from your terminal without needing API keys.
REAL PYTHON
Ship AI Agents With Accurate, Fresh Web Search Data
Stop building scrapers just to feed your AI app web search data. SerpApi returns structured JSON from Google and 100+ search engines via a simple GET request. No proxy management, no CAPTCHAs. Power product research, price tracking, or agentic search in minutes. Used by Shopify, NVIDIA, and Uber →
SERPAPI sponsor
Indexable Iterables
Learn how objects are automatically iterable if you implement integer indexing.
RODRIGO GIRÃO SERRÃO
Claude Code for Python Developers (Live Course)
“This is one of the best training sessions I’ve joined in the last year across multiple platforms.” Two-day course where you build a complete Python project with an AI agent in your terminal. Next session April 11–12.
REAL PYTHON
Articles & Tutorials
Fire and Forget at Textual
In this follow-up to a previous article (Fire and forget (or never) with Python’s asyncio), Michael discusses a similar article by Will McGugan as it relates to Textual. He found the problematic pattern in over 500K GitHub files.
MICHAEL KENNEDY
pixi: One Package Manager for Python Libraries
uv is great for pure Python projects, but it can’t install compiled system libraries like GDAL or CUDA. pixi fills that gap by managing both PyPI and conda-forge packages in one tool, with fast resolution, automatic lockfiles, and project-level environments.
CODECUT.AI • Shared by Khuyen Tran
Dignified Python: Pytest for Agent-Generated Code
Learn how to define clear pytest patterns for agent-generated tests: separate fast unit vs integration, use fakes, constrain generation, and avoid brittle patterns to keep tests reliable and maintainable →
DAGSTER LABS sponsor
Learning Rust Made Me a Better Python Developer
Bob thinks that learning Rust made him a better Python developer. Not because Rust is better, but because it made him think differently about how he has been writing Python. The compiler forced him to confront things he’d been ignoring.
BOB BELDERBOS • Shared by Bob Belderbos
Django bulk_update Memory Issue
Recently, Anže had to write a Django migration to update hundreds of thousands of database objects. With some paper-napkin math he calculated it could fit in memory, but that turned out not to be the case. Read on to find out why.
ANŽE'S BLOG
Catching Up With the Python Typing Council
Talk Python interviews Carl Meyer, Jelle Zijlstra, and Rebecca Chen, three members of the Python Typing Council. They talk about how the typing system is governed and just how much is the right amount of type hinting in your code.
TALK PYTHON podcast
Python 3.3: The Version That Quietly Rewired Everything
yield from, venv, and namespace packages are three features from Python 3.3 that looked minor when they came out in 2012, but turned out to be the scaffolding modern Python is built on.
TUREK SENTURK
Incident Report: LiteLLM/Telnyx Supply-Chain Attacks
This post from the PyPI blog outlines two recent supply chain attacks, how they were different, and how you can protect yourself from future incidents.
PYPI.ORG
Python Classes: The Power of Object-Oriented Programming
Learn how to define and use Python classes to implement object-oriented programming. Dive into attributes, methods, inheritance, and more.
REAL PYTHON
Timesliced Reservoir Sampling for Profilers
Reservoir sampling lets you pick a sample from an unlimited stream of events; learn how it works, and a new variant useful for profilers.
ITAMAR TURNER-TRAURING
Adding Python to PATH
Learn how to add Python to your PATH environment variable on Windows, macOS, and Linux so you can run Python from the command line.
REAL PYTHON course
Projects & Code
OracleTrace: Visualize Function Flows
GITHUB.COM/KAYKCAPUTO • Shared by Kayk Aparecido de Paula Caputo
pywho: Explain Your Python Environment and Detect Shadows
GITHUB.COM/AHSANSHERAZ • Shared by Ahsan Sheraz
Events
Weekly Real Python Office Hours Q&A (Virtual)
April 8, 2026
REALPYTHON.COM
PyCon Lithuania 2026
April 8 to April 11, 2026
PYCON.LT
Python Atlanta
April 9 to April 10, 2026
MEETUP.COM
DFW Pythoneers 2nd Saturday Teaching Meeting
April 11, 2026
MEETUP.COM
PyCon DE & PyData 2026
April 14 to April 18, 2026
PYCON.DE
DjangoCon Europe 2026
April 15 to April 20, 2026
DJANGOCON.EU
PyTexas 2026
April 17 to April 20, 2026
PYTEXAS.ORG
Happy Pythoning!
This was PyCoder’s Weekly Issue #729.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Python Engineering at Microsoft
Write SQL Your Way: Dual Parameter Style Benefits in mssql-python
Reviewed by: Sumit Sarabhai
If you’ve been writing SQL in Python, you already know the debate: positional parameters (?) or named parameters (%(name)s)? Some developers swear by the conciseness of positional. Others prefer the clarity of named. With mssql-python, you no longer need to choose – we support both.
We’ve added dual parameter style support to mssql-python, enabling both qmark and pyformat parameter styles in Python applications that interact with SQL Server and Azure SQL. This feature is especially useful if you’re building complex queries, dynamically assembling filters, or migrating existing code that already uses named parameters with other DBAPI drivers.
Try it here
You can install the driver with pip install mssql-python.
Calling all Python + SQL developers! We invite the community to try out mssql-python and help us shape the future of high-performance SQL Server connectivity in Python.
What Are Parameter Styles?
The DB-API 2.0 specification (PEP 249) defines several ways to pass parameters to SQL queries. The two most popular are:
- qmark – Positional ? placeholders with a tuple/list of values.
- pyformat – Named %(name)s placeholders with a dictionary of values.
# qmark style
cursor.execute("SELECT * FROM users WHERE id = ? AND status = ?", (42, "active"))
# pyformat style
cursor.execute("SELECT * FROM users WHERE id = %(id)s AND status = %(status)s",
{"id": 42, "status": "active"})
Business Requirement
Previously, mssql-python only supported qmark. It works fine for simple queries, but as parameters multiply, tracking their order becomes error-prone:
# Which ? corresponds to which value?
cursor.execute(
"UPDATE users SET name=?, email=?, age=? WHERE id=? AND status=?",
(name, email, age, user_id, status)
)
Mix up the order and it’s easy to introduce subtle, hard-to-spot bugs.
Why Named Parameters?
- Self-documenting queries – No more guessing which ? maps to what:
# qmark — 6 parameters, which is which?
cursor.execute(
    """INSERT INTO employees
       (first_name, last_name, email, department, salary, hire_date)
       VALUES (?, ?, ?, ?, ?, ?)""",
    ("Jane", "Doe", "jane.doe@company.com", "Engineering", 95000, "2025-03-01")
)

# pyformat — every value is labeled
cursor.execute(
    """INSERT INTO employees
       (first_name, last_name, email, department, salary, hire_date)
       VALUES (%(first_name)s, %(last_name)s, %(email)s,
               %(dept)s, %(salary)s, %(hire_date)s)""",
    {"first_name": "Jane", "last_name": "Doe", "email": "jane.doe@company.com",
     "dept": "Engineering", "salary": 95000, "hire_date": "2025-03-01"}
)
- Parameter reuse – Use the same value multiple times without repeating it:
# Audit log: record who made the change and when
cursor.execute(
    """UPDATE orders
       SET status = %(new_status)s,
           modified_by = %(user)s,
           approved_by = %(user)s,
           modified_at = %(now)s,
           approved_at = %(now)s
       WHERE order_id = %(order_id)s""",
    {"new_status": "approved", "user": "admin@company.com",
     "now": datetime.now(), "order_id": 5042}
)
# 3 unique values, used 5 times — no duplication needed
- Dynamic query building – Add filters without tracking parameter positions:
def search_orders(customer=None, status=None, min_total=None, date_from=None):
query_parts = ["SELECT * FROM orders WHERE 1=1"]
params = {}
if customer:
query_parts.append("AND customer_id = %(customer)s")
params["customer"] = customer
if status:
query_parts.append("AND status = %(status)s")
params["status"] = status
if min_total is not None:
query_parts.append("AND total >= %(min_total)s")
params["min_total"] = min_total
if date_from:
query_parts.append("AND order_date >= %(date_from)s")
params["date_from"] = date_from
query_parts.append("ORDER BY order_date DESC")
cursor.execute(" ".join(query_parts), params)
return cursor.fetchall()
# Callers use only the filters they need
recent_big_orders = search_orders(min_total=500, date_from="2025-01-01")
pending_for_alice = search_orders(customer=42, status="pending")
- Dictionary Reuse Across Queries
The same parameter dictionary can drive multiple queries:
report_params = {"region": "West", "year": 2025, "status": "active"}
# Summary count
cursor.execute(
"""SELECT COUNT(*) FROM customers
WHERE region = %(region)s AND status = %(status)s""",
report_params
)
total = cursor.fetchone()[0]
# Revenue breakdown
cursor.execute(
"""SELECT department, SUM(revenue)
FROM sales
WHERE region = %(region)s AND fiscal_year = %(year)s
GROUP BY department
ORDER BY SUM(revenue) DESC""",
report_params
)
breakdown = cursor.fetchall()
# Top performers
cursor.execute(
"""SELECT name, revenue
FROM sales_reps
WHERE region = %(region)s AND fiscal_year = %(year)s AND status = %(status)s
ORDER BY revenue DESC""",
report_params
)
top_reps = cursor.fetchall()
# Same dict, three different queries — change the filters once, all queries update
The Solution: Automatic Detection
mssql-python now detects which style you’re using based on the parameter type:
- tuple/list → qmark (?)
- dict → pyformat (%(name)s)
No configuration needed. Existing qmark code requires zero changes.
from mssql_python import connect
# qmark - works exactly as before
cursor.execute("SELECT * FROM users WHERE id = ?", (42,))
# pyformat - just pass a dict!
cursor.execute("SELECT * FROM users WHERE id = %(id)s", {"id": 42})
How It Works
When you pass a dict to execute(), the driver:
- Scans the SQL for %(name)s placeholders (context-aware – skips string literals, comments, and bracketed identifiers).
- Validates that every placeholder has a matching key in the dict.
- Builds a positional tuple in placeholder order (duplicating values for reused parameters).
- Replaces each %(name)s with ? and sends the rewritten query to ODBC.
# User code
cursor.execute(
    "WHERE status = %(status)s AND country = %(country)s",
    {"status": "active", "country": "USA"}
)

# What the ODBC layer receives
#   SQLBindParameter(1, "active")
#   SQLBindParameter(2, "USA")
#   SQLExecute("WHERE status = ? AND country = ?")
The ODBC layer always works with positional ? placeholders. The pyformat conversion is purely a developer-facing convenience with zero overhead to database communication.
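For intuition, here is a heavily simplified sketch of that rewrite step. This is an illustration only, not the driver's actual code; a real implementation also has to skip string literals, comments, and bracketed identifiers, which this sketch ignores:

import re

PLACEHOLDER = re.compile(r"%\((\w+)\)s")

def to_qmark(sql, params):
    names = PLACEHOLDER.findall(sql)                  # placeholders, in order
    missing = [name for name in names if name not in params]
    if missing:
        raise KeyError(f"Missing required parameter(s): {missing}")
    values = tuple(params[name] for name in names)    # duplicates reused values
    return PLACEHOLDER.sub("?", sql), values

sql, values = to_qmark(
    "SELECT * FROM users WHERE id = %(id)s AND status = %(status)s",
    {"id": 42, "status": "active"},
)
print(sql)     # SELECT * FROM users WHERE id = ? AND status = ?
print(values)  # (42, 'active')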
Clear Error Messages
Mismatched styles or missing parameters produce actionable errors – not cryptic database exceptions:
cursor.execute("WHERE id = %(id)s AND name = %(name)s", {"id": 42})
# KeyError: Missing required parameter(s): 'name'.
cursor.execute("WHERE id = ?", {"id": 42})
# TypeError: query uses positional placeholders (?), but dict was provided.
cursor.execute("WHERE id = %(id)s", (42,))
# TypeError: query uses named placeholders (%(name)s), but tuple was provided.
Real-World Examples
Example 1: Web Application
def add_user(name, email):
with connect(connection_string) as conn:
with conn.cursor() as cursor:
cursor.execute(
"INSERT INTO users (name, email) VALUES (%(name)s, %(email)s)",
{"name": name, "email": email}
)
Example 2: Batch Operations
cursor.executemany(
"INSERT INTO users (name, age) VALUES (%(name)s, %(age)s)",
[{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]
)
Example 3: Financial Transactions
def transfer_funds(from_acct, to_acct, amount):
with connect(connection_string) as conn:
with conn.cursor() as cursor:
cursor.execute(
"UPDATE accounts SET balance = balance - %(amount)s WHERE id = %(id)s",
{"amount": amount, "id": from_acct}
)
cursor.execute(
"UPDATE accounts SET balance = balance + %(amount)s WHERE id = %(id)s",
{"amount": amount, "id": to_acct}
)
# Automatic commit on success, rollback on failure
Things to Keep in Mind
- Don’t mix styles in one query. Use either ? or %(name)s, not both. The driver determines which style you’re using from the parameter type (tuple vs dict), not from the SQL text. If placeholders don’t match the parameter type, you’ll get a clear TypeError explaining the mismatch. If both placeholder types appear in the SQL, only one set gets substituted, leading to parameter count mismatches at execution time.
# Mixing styles - fails at execution time
cursor.execute(
    "SELECT * FROM users WHERE id = ? AND name = %(name)s",
    {"name": "Alice"}  # Driver finds %(name)s but also sees an unmatched ?
)
# ODBC error: parameter count mismatch (2 placeholders, 1 value)

# Pick one style and use it consistently
cursor.execute(
    "SELECT * FROM users WHERE id = %(id)s AND name = %(name)s",
    {"id": 42, "name": "Alice"}
)
- Extra dict keys are OK. Unused parameters are silently ignored; this is by design, to enable parameter dictionary reuse across different queries.
- SQL injection safe. Both styles use ODBC parameter binding under the hood. Values are never interpolated into the SQL string; they are always safely bound by the driver.
- Literal % in SQL. Use %% to escape if you need a literal %(…)s pattern in your query text.
cursor.execute(
"SELECT * FROM users WHERE name LIKE %(pattern)s",
{"pattern": "%alice%"} # The % inside the VALUE is fine
)
# But if you need a literal %(...)s in SQL text itself, use %%
cursor.execute(
"SELECT '%%(example)s' AS literal WHERE id = %(id)s",
{"id": 42}
)
- mssql_python.paramstyle reports “pyformat”. The DB-API 2.0 spec only allows a single value for this module-level constant. We set it to pyformat because it’s the more expressive style and the one we recommend for new code. But qmark is fully supported at runtime: the driver accepts both styles transparently based on whether you pass a tuple or a dict. Think of paramstyle = “pyformat” as the advertised default, not a limitation.
Compatibility at a Glance
| Feature | qmark (?) | pyformat (%(name)s) |
| --- | --- | --- |
| cursor.execute() | ✅ | ✅ |
| cursor.executemany() | ✅ | ✅ |
| connection.execute() | ✅ | ✅ |
| Parameter reuse | ✅ | ✅ |
| Stored procedures | ✅ | ✅ |
| All SQL data types | ✅ | ✅ |
| Backward compatible with qmark paramstyle | ✅ | N/A (new) |
Takeaway
Use ? for quick, simple queries. Use %(name)s for complex, multi-parameter queries where clarity and reuse matter. You don’t have to pick a side – use whichever fits the situation. The driver handles the rest.
Whether you’re building dynamic queries, or simply want more readable SQL, dual paramstyle support makes mssql-python work the way you already think.
Try It and Share Your Feedback!
We invite you to:
- Check out the mssql-python driver and integrate it into your projects.
- Share your thoughts: Open issues, suggest features, and contribute to the project.
- Join the conversation: GitHub Discussions | SQL Server Tech Community.
Use Python Driver with Free Azure SQL Database
You can use the Python Driver with the free version of Azure SQL Database!
Deploy Azure SQL Database for free
Deploy Azure SQL Managed Instance for free
Perfect for testing, development, or learning scenarios without incurring costs.
The post Write SQL Your Way: Dual Parameter Style Benefits in mssql-python appeared first on Microsoft for Python Developers Blog.
Django Weblog
Django security releases issued: 6.0.4, 5.2.13, and 4.2.30
In accordance with our security release policy, the Django team is issuing releases for Django 6.0.4, Django 5.2.13, and Django 4.2.30. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.
Django 4.2 has reached the end of extended support
Note that with this release, Django 4.2 has reached the end of extended support. All Django 4.2 users are encouraged to upgrade to Django 5.2 or later to continue receiving fixes for security issues.
See the downloads page for a table of supported versions and the future release schedule.
CVE-2026-3902: ASGI header spoofing via underscore/hyphen conflation
ASGIRequest normalizes header names following WSGI conventions, mapping hyphens to underscores. As a result, even in configurations where reverse proxies carefully strip security-sensitive headers named with hyphens, such a header could be spoofed by supplying a header named with underscores.
Under WSGI, it is the responsibility of the server or proxy to avoid ambiguous mappings. (Django's runserver was patched in CVE-2015-0219.) But under ASGI, there is not the same uniform expectation, even if many proxies protect against this under default configuration (including nginx via underscores_in_headers off;).
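To see the ambiguity concretely, consider the CGI-style key that Django's request objects derive from a header name (the header name below is made up for illustration, and the snippet is not code from the patch):

# Both spellings collapse to the same CGI-style key, so a proxy that strips
# only the hyphenated form still lets the underscore variant through.
def cgi_style_key(header_name):
    return "HTTP_" + header_name.upper().replace("-", "_")

print(cgi_style_key("X-Internal-Auth"))  # HTTP_X_INTERNAL_AUTH
print(cgi_style_key("X_Internal_Auth"))  # HTTP_X_INTERNAL_AUTH  (same key)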
Headers containing underscores are now ignored by ASGIRequest, matching the behavior of Daphne, the reference server for ASGI.
This issue has severity "low" according to the Django Security Policy.
Thanks to Tarek Nakkouch for the report.
CVE-2026-4277: Privilege abuse in GenericInlineModelAdmin
Add permissions on inline model instances were not validated on submission of forged POST data in GenericInlineModelAdmin.
This issue has severity "low" according to the Django Security Policy.
Thanks to N05ec@LZU-DSLab for the report.
CVE-2026-4292: Privilege abuse in ModelAdmin.list_editable
Admin changelist forms using ModelAdmin.list_editable incorrectly allowed new instances to be created via forged POST data.
This issue has severity "low" according to the Django Security Policy.
CVE-2026-33033: Potential denial-of-service vulnerability in MultiPartParser via base64-encoded file upload
When using django.http.multipartparser.MultiPartParser, multipart uploads with Content-Transfer-Encoding: base64 that include excessive whitespace may trigger repeated memory copying, potentially degrading performance.
This issue has severity "moderate" according to the Django Security Policy.
Thanks to Seokchan Yoon for the report.
CVE-2026-33034: Potential denial-of-service vulnerability in ASGI requests via memory upload limit bypass
ASGI requests with a missing or understated Content-Length header could bypass the DATA_UPLOAD_MAX_MEMORY_SIZE limit when reading HttpRequest.body, potentially loading an unbounded request body into memory and causing service degradation.
This issue has severity "low" according to the Django Security Policy.
Thanks to Superior for the report.
Affected supported versions
- Django main
- Django 6.0
- Django 5.2
- Django 4.2
Resolution
Patches to resolve the issue have been applied to Django's main, 6.0, 5.2, and 4.2 branches. The patches may be obtained from the following changesets.
CVE-2026-3902: ASGI header spoofing via underscore/hyphen conflation
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2026-4277: Privilege abuse in GenericInlineModelAdmin
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2026-4292: Privilege abuse in ModelAdmin.list_editable
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2026-33033: Potential denial-of-service vulnerability in MultiPartParser via base64-encoded file upload
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2026-33034: Potential denial-of-service vulnerability in ASGI requests via memory upload limit bypass
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
The following releases have been issued
- Django 6.0.4 (download Django 6.0.4 | 6.0.4 checksums)
- Django 5.2.13 (download Django 5.2.13 | 5.2.13 checksums)
- Django 4.2.30 (download Django 4.2.30 | 4.2.30 checksums)
The PGP key ID used for this release is Jacob Walls: 131403F4D16D8DC7
General notes regarding security reporting
As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.
Real Python
Using Loguru to Simplify Python Logging
Logging is a vital programming practice that helps you track, understand, and debug your application’s behavior. Loguru is a Python library that provides simpler, more intuitive logging compared to Python’s built-in logging module.
Good logging gives you insights into your program’s execution, helps you diagnose issues, and provides valuable information about your application’s health in production. Without proper logging, you risk missing critical errors, spending countless hours debugging blind spots, and potentially undermining your project’s overall stability.
By the end of this video course, you’ll understand that:
- Logging in Python can be simple and intuitive with the right tools.
- Using Loguru lets you start logging immediately without complex configuration.
- You can customize log formats and send logs to multiple destinations like files, the standard error stream, or external services.
- Loguru provides powerful debugging capabilities that make troubleshooting easier.
- Loguru supports structured logging with JSON formatting for modern applications.
After watching this course, you’ll be able to quickly implement better logging in your Python applications. You’ll spend less time wrestling with logging configuration and more time using logs effectively to debug issues. This will help you build production-ready applications that are easier to troubleshoot when problems occur.
To get the most from this course, you should be familiar with Python concepts like functions, decorators, and context managers. You might also find it helpful to have some experience with Python’s built-in logging module, though this isn’t required.
Don’t worry if you’re new to logging in Python. This course will guide you through everything you need to know to get started with Loguru and implement effective logging in your applications.
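As a rough taste of the points above, here is a minimal sketch using Loguru's pre-configured logger (the file name, rotation size, and messages are illustrative, not taken from the course):
```
from loguru import logger

# The pre-configured logger works out of the box: no handlers or formatters to set up.
logger.info("Application started")
logger.warning("Something looks off")

# Add a file sink with rotation; serialize=True writes structured JSON lines.
logger.add("app.log", rotation="10 MB", serialize=True)

# Bind extra context that travels with each log record.
logger.bind(user="demo").info("User logged in")
```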
Django Weblog
Could you host DjangoCon Europe 2027? Call for organizers
We are looking for the next group of organizers to own and lead the 2027 DjangoCon Europe conference. Could your town's football stadium, theatre, cinema, city hall, circus tent or a private island host this wonderful community event?
DjangoCon Europe is a major pillar of the Django community, as people from across the world meet and share. Many qualities make it a unique event: Unconventional and conventional venues, creative happenings, a feast of talks and a dedication to inclusion and diversity.
Hosting a DjangoCon is an ambitious undertaking. It's hard work, but each year it has been successfully run by a team of community volunteers, not all of whom have had previous experience - more important is enthusiasm, organizational skills, the ability to plan and manage budgets, time and people - and plenty of time to invest in the project.
For 2027, rest assured that we will be there to answer questions and put you in touch with previous organizers through the brand new DSF Events Support Working Group (a reboot of the previous DjangoCon Europe Support Working Group).
Step 1: Submit your expression of interest
If you're considering organizing DjangoCon Europe (🙌 great!), fill in our DjangoCon Europe 2027 expression of interest form with your contact details. No need to fill in all the information at this stage if you don't have it all already, we'll reach out and help you figure it out.
Express your interest in organizing
Step 2: We're here to help!
We've set up a DjangoCon Europe support working group of previous organizers that you can reach out to with questions about organizing and running a DjangoCon Europe.
The group will be in touch with everyone submitting the expression of interest form, or you can reach out to them directly: events-support@djangoproject.com
We'd love to hear from you as soon as possible, so your proposal can be finalized and sent to the DSF board by June 1st 2026.
Step 3: Submitting the proposal
The more detailed and complete your final proposal is, the better. Basic details include:
- Organizing committee members: You probably won't have a full team yet; naming just some core team members is enough.
- The legal entity that is intended to run the conference: Even if the entity does not exist yet, please share how you are planning to set it up.
- Dates: See "What dates are possible in 2027?" below. We must avoid conflicts with major holidays, EuroPython, DjangoCon US, and PyCon US.
- Venue(s), including size, number of possible attendees, pictures, accessibility concerns, catering, etc.
- Transport links and accommodation: Can your venue be reached by international travelers?
- Budgets and ticket prices: Talk to the DjangoCon Europe Support group to get help with this, including information on past event budgets.
We also like to see:
- Timelines
- Pictures
- Plans for online participation, and other ways to make the event more inclusive and reduce its environmental footprint
- Draft agreements with providers
- Alternatives you have considered
Have a look at our proposed (draft, feedback welcome) DjangoCon Europe 2027 Licensing Agreement for the fine print on contractual requirements and involvement of the Django Software Foundation.
Submit your completed proposal by June 1st 2026 via our DjangoCon Europe 2027 expression of interest form, this time filling in as many fields as possible. We look forward to reviewing great proposals that continue the excellence the whole community associates with DjangoCon Europe.
Q&A
Can I organize a conference alone?
We strongly recommend that a team of people submit an application.
I/we don't have a legal entity yet, is that a problem?
Depending on your jurisdiction, this is usually not a problem. But please share your plans about the entity you will use or form in your application.
Do I/we need experience with organizing conferences?
The support group is here to help you succeed. From experience, we know that many core groups of 2-3 people have been able to run a DjangoCon with guidance from previous organizers and help from volunteers.
What is required in order to announce an event?
Ultimately, a contract with the venue confirming the dates is crucial, since announcing a conference prompts people to block out their calendars, book holidays, and buy transport and accommodation. This, however, only becomes relevant after the DSF board has concluded the application process. Naturally, the application itself cannot contain any guarantees, but it's good to check concrete dates with your venues to make sure they are actually open and currently available before suggesting those dates in the application.
Do we have to do everything ourselves?
No. You will definitely be offered lots of help by the community. Typically, conference organizers will divide responsibilities into different teams, making it possible for more volunteers to join. Local organizers are free to choose which areas they want to invite the community to help out with, and a call will go out through a blog post announcement on djangoproject.com and social media.
What kind of support can we expect from the Django Software Foundation?
The DSF regularly provides grant funding to DjangoCon organizers, to the extent of $6,000 in recent editions. We also offer support via specific working groups:
- The dedicated DjangoCon Europe support working group.
- The social media working group can help you promote the event.
- The Code of Conduct working group works with all event organizers.
In addition, a lot of Individual Members of the DSF regularly volunteer at community events. If your team members aren't Individual Members themselves, we can reach out to the membership on your behalf to find volunteers.
What dates are possible in 2027?
For 2027, DjangoCon Europe should happen between January 4th and April 26th, or June 3rd and June 27th. This is to avoid the following community events' provisional dates:
- PyCon US 2027: May 2027
- EuroPython 2027: July 2027
- DjangoCon US 2027: September - October 2027
- DjangoCon Africa 2027: August - September 2027
We also want to avoid the following holidays:
- New Year's Day: Friday 1st January 2027
- Chinese New Year: Saturday 6th February 2027
- Eid Al-Fitr: Tuesday 9th March 2027
- Easter: Sunday 28th March 2027
- Passover: Wednesday 21st - Thursday 29th April 2027
- Eid Al-Adha: Monday 17th - Thursday 20th May 2027
- Rosh Hashanah: Saturday 2nd - Monday 4th October 2027
- Yom Kippur: Monday 11th - Tuesday 12th October 2027
What cities or countries are possible?
Any city in Europe. This can be a city or country where DjangoCon Europe has happened in the past (Athens, Vigo, Edinburgh, Porto, Copenhagen, Heidelberg, Florence, Budapest, Cardiff, Toulon, Warsaw, Zurich, Amsterdam, Berlin), or a new locale.
References
Past calls
- Interested in organizing DjangoCon Europe 2016? | Weblog | Django
- Could you host DjangoCon Europe 2017? | Weblog | Django
- DjangoCon Europe 2019 - where will it be? | Weblog | Django
- Could you host DjangoCon Europe 2023? | Weblog | Django
- Last Chance for a DjangoCon Europe 2023 | Weblog | Django
- Want to host DjangoCon Europe 2024? | Weblog | Django
- DjangoCon Europe 2025 Call for Proposals | Weblog | Django
- Last call for DjangoCon Europe 2025 organizers | Weblog | Django
- Could you host DjangoCon Europe 2026? Call for organizers | Weblog | Django
Real Python
Quiz: Building a Python GUI Application With Tkinter
In this quiz, you’ll test your understanding of Building a Python GUI Application With Tkinter.
Test your Tkinter knowledge by identifying core widgets, managing layouts, handling text with Entry and Text widgets, and connecting buttons to Python functions.
This quiz also covers event loops, widget sizing, and file dialogs, helping you solidify the essentials for building interactive, cross-platform Python GUI apps.
Quiz: Using Loguru to Simplify Python Logging
In this quiz, you’ll test your understanding of Using Loguru to Simplify Python Logging.
By working through this quiz, you’ll revisit key concepts like the pre-configured logger, log levels, format placeholders, adding context with .bind() and .contextualize(), and saving logs to files.
PyCharm
How to Train Your First TensorFlow Model in PyCharm
This is a guest post from Iulia Feroli, founder of the Back To Engineering community on YouTube.
TensorFlow is a powerful open-source framework for building machine learning and deep learning systems. At its core, it works with tensors (a.k.a. multi-dimensional arrays) and provides high-level libraries (like Keras) that make it easy to transform raw data into models you can train, evaluate, and deploy.
TensorFlow helps you handle the full pipeline: loading and preprocessing data, assembling models from layers and activations, training with optimizers and loss functions, and exporting for serving or even running on edge devices (including lightweight TensorFlow Lite models on Raspberry Pi and other microcontrollers).
If you want to build data-driven applications, prototype neural networks, or ship models to production or to devices, learning TensorFlow gives you a consistent, well-supported toolkit to go from idea to deployment.
If you’re brand new to TensorFlow, start by watching the short overview video, where I explain tensors, neural networks, and layers, show why TensorFlow is great for taking you from data to model to deployment, and illustrate all of this with a LEGO-style piece-sorting example.
In this blog post, I’ll walk you through a first, stripped-down TensorFlow implementation notebook so we can get started with some practical experience. You can also watch the walkthrough video to follow along.
We’ll be exploring a very simple use case today: load the Fashion MNIST dataset, build two very simple Keras models, train and compare them, then dig into visualizations (predictions, confidence bars, confusion matrix). I kept the code minimal and readable so you can focus on the ideas – and you’ll see how PyCharm helps along the way.
Training TensorFlow models step by step
Getting started in PyCharm
We’ll be leveraging PyCharm’s native Notebook integration to build out our project. This way, we can inspect each step of the pipeline and use some supporting visualization along the way. We’ll create a new project and generate a virtual environment to manage our dependencies.
If you’re running the code from the attached repo, you can install directly from the requirements file. If you wish to expand this example with additional visualizations for further models, you can easily add more packages to your requirements as you go by using the PyCharm package manager helpers for installing and upgrading.
Load Fashion MNIST and inspect the data
Fashion MNIST is a great starter because the images are small (28×28 pixels), visually meaningful, and easy to interpret. They represent various garment types as pixelated black-and-white images, and provide the relevant labels for a well-contained classification task. We can first take a look at our data sample by printing some of these images with various matplotlib functions:
```
import matplotlib.pyplot as plt

# Show the first ten training images with their label names.
fig, axes = plt.subplots(2, 5, figsize=(10, 4))
for i, ax in enumerate(axes.flat):
    ax.imshow(x_train[i], cmap='gray')
    ax.set_title(class_names[y_train[i]])
    ax.axis('off')
plt.show()
```
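The plotting snippet above relies on x_train, y_train, and class_names defined in an earlier notebook cell. A minimal sketch of that setup, assuming the standard Keras loader and the usual ten Fashion MNIST label names, would look roughly like this:
```
import tensorflow as tf

# Load the dataset and scale pixel values to the 0-1 range.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Human-readable names for the ten integer labels.
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```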
Two simple models (a quick experiment)
```
from tensorflow.keras import layers, models

# Baseline: a single hidden layer.
model1 = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Variant: one extra hidden layer.
model2 = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])
```
Compile and train your first model
From here, we can compile and train our first TensorFlow model(s). With PyCharm’s code completion features and documentation access, you can get instant suggestions for building out these simple code blocks.
For a first try at TensorFlow, this allows us to spin up a working model with just a few presses of Tab in our IDE. We’re using the recommended standard optimizer and loss function, and we’re tracking for accuracy. We can choose to build multiple models by playing around with the number or type of layers, along with the other parameters.
```
# Same optimizer, loss, and metric for both models.
model1.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
model1.fit(x_train, y_train, epochs=10)

model2.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
model2.fit(x_train, y_train, epochs=15)
```
Evaluate and compare your TensorFlow model performance
```
loss1, accuracy1 = model1.evaluate(x_test, y_test)
print(f'Accuracy of model1: {accuracy1:.2f}')
loss2, accuracy2 = model2.evaluate(x_test, y_test)
print(f'Accuracy of model2: {accuracy2:.2f}')
```
Once the models are trained (and you can see the epochs progressing visually as each cell is run), we can immediately evaluate the performance of the models.
In my experiment, model1 sits around ~0.88 accuracy, and while model2 is a little higher than that, it took 50% longer to train. That’s the kind of trade‑off you should be thinking about: Is a tiny accuracy gain worth the additional compute and complexity?
We can dive further into the results of the model run by generating a DataFrame instance of our new prediction dataset. Here we can also leverage built-in functions like `describe` to quickly get some initial statistical impressions:
```
import pandas as pd

predictions = model1.predict(x_test)
df_pred = pd.DataFrame(predictions, columns=class_names)
df_pred.describe()
```
However, the most useful statistics will compare our model’s prediction with the ground truth “real” labels of our dataset. We can also break this down by item category:
```
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix, classification_report

y_pred = model1.predict(x_test).argmax(axis=1)
cm = confusion_matrix(y_test, y_pred)

plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=class_names, yticklabels=class_names)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.title('Confusion Matrix')
plt.show()

print('Classification report:')
print(classification_report(y_test, y_pred, target_names=class_names))
```
From here, we can notice that the accuracy differs quite a bit by type of garment. A possible interpretation of this is that trousers are quite a distinct type of clothing from, say, t-shirts and shirts, which can be more commonly confused.
This is, of course, the type of nuance that, as humans, we can pick up by looking at the images, but the model only has access to a matrix of pixel values. The data does seem, however, to confirm our intuition. We can further build a more comprehensive visualization to test this hypothesis.
```
import numpy as np
import matplotlib.pyplot as plt

# Pick the first 8 misclassified test images.
y_pred = predictions.argmax(axis=1)
wrong_idx = np.where(y_pred != y_test)[0][:8]
n = len(wrong_idx)

fig, axes = plt.subplots(n, 2, figsize=(10, 2.2 * n), constrained_layout=True)
for row, idx in enumerate(wrong_idx):
    p = predictions[idx]
    pred = int(np.argmax(p))
    true = int(y_test[idx])
    axes[row, 0].imshow(x_test[idx], cmap="gray")
    axes[row, 0].axis("off")
    axes[row, 0].set_title(
        f"WRONG P:{class_names[pred]} ({p[pred]:.2f}) T:{class_names[true]}",
        color="red",
        fontsize=10
    )
    bars = axes[row, 1].bar(range(len(class_names)), p, color="lightgray")
    bars[pred].set_color("red")
    axes[row, 1].set_ylim(0, 1)
    axes[row, 1].set_xticks(range(len(class_names)))
    axes[row, 1].set_xticklabels(class_names, rotation=90, fontsize=8)
    axes[row, 1].set_ylabel("conf", fontsize=9)
plt.show()
```
This generates a view where we can explore how confident the model was in each prediction: by looking at the weight given to each class, we can see where there was doubt (multiple classes with relatively high weight) versus where the model was certain (a single dominant guess). These examples further confirm our intuition: top-type garments appear to be the ones the model most commonly confuses.
Conclusion
And there we have it! We were able to set up and train our first model and already derive some data science insights from our data and model results. Using some of PyCharm's functionality at this point can speed up the experimentation process by providing documentation access and code completion directly in the cells. We can even use AI Assistant to help generate some of the graphs we’ll need to further evaluate the TensorFlow model performance and investigate our results.
You can try out this notebook yourself, or better yet, try to generate it with these same tools for a more hands-on learning experience.
Where to go next
This notebook is a minimal, teachable starting point. Here are some practical next steps to try afterwards:
- Replace the dense baseline with a small CNN (Conv2D → MaxPooling → Dense).
- Add dropout or batch normalization to reduce overfitting.
- Apply data augmentation (random shifts/rotations) to improve generalization.
- Use callbacks like EarlyStopping and ModelCheckpoint so training is efficient and you keep the best weights.
- Export a SavedModel for server use or convert to TensorFlow Lite for edge devices (Raspberry Pi, microcontrollers).
Frequently asked questions
When should I use TensorFlow?
TensorFlow is best used when building machine learning or deep learning models that need to scale, go into production, or run across different environments (cloud, mobile, edge devices).
TensorFlow is particularly well-suited for large-scale models and neural networks, including scenarios where you need strong deployment support (TensorFlow Serving, TensorFlow Lite). For research prototypes, TensorFlow is viable, but it’s more commonplace to use lightweight frameworks for easier experimentation.
Can TensorFlow run on a GPU?
Yes, TensorFlow can run on GPUs and TPUs. Additionally, using a GPU can significantly speed up training, especially for deep learning models with large datasets. The best part is, TensorFlow will automatically use an available GPU if it’s properly configured.
What is loss in TensorFlow?
Loss (computed by a loss function) is a numerical value that measures how far a model’s predictions are from the actual target values. A few common examples include (both are sketched after this list):
- MSE (mean squared error), used in regression tasks.
- Cross-entropy loss, often used in classification tasks.
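As a small, self-contained illustration of these two losses (the numbers are made up for the example, using the standard tf.keras.losses classes):
```
import tensorflow as tf

# Regression: mean squared error between targets and predictions.
mse = tf.keras.losses.MeanSquaredError()
print(float(mse([0.0, 1.0], [0.1, 0.8])))  # small value: predictions are close to the targets

# Classification: sparse categorical cross-entropy on class indices vs. predicted probabilities.
cce = tf.keras.losses.SparseCategoricalCrossentropy()
print(float(cce([1], [[0.05, 0.90, 0.05]])))  # low loss: the correct class got high probability
```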
How many epochs should I use?
There’s no set number of epochs to use, as it depends on your dataset and model. Typical approaches include:
- Starting with a conservative number (10–50 epochs).
- Monitoring validation loss/accuracy and adjusting based on the results you see.
- Using early stopping to halt training once the validation metrics stop improving (see the sketch below).
An epoch is one full pass through your training data. Too few passes lead to underfitting, and too many can cause overfitting. The sweet spot is where your model generalizes best to unseen data.
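For example, a minimal early-stopping setup reusing model1 and the training data from earlier (the patience value and validation split are illustrative) might look like this:
```
from tensorflow.keras.callbacks import EarlyStopping

# Stop once validation loss has not improved for 3 epochs, and keep the best weights seen.
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

model1.fit(
    x_train, y_train,
    validation_split=0.1,  # hold out 10% of the training data for validation
    epochs=50,             # upper bound; training usually stops much earlier
    callbacks=[early_stop],
)
```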
PyCon
Stories from the PyCon US Hotels
Friendships, collaborations, and breakthroughs
The fun, the learning, and the inspiration don't stop when you walk out of the convention center. Some of the most memorable moments from PyCon US happen in the lobby at 10 pm, laughing with someone you only knew as a username until an hour ago; over breakfast, where a casual conversation turns into a collaboration that lasts years; and on the walks to and from the conference. PyCon US hotels have their own lore.
We asked people about their experiences and were overwhelmed by the responses; it turns out that everyone has a story!
"One story stands out to me beyond getting to know each other and sharing ideas. When I was getting ready to give my first PyCon talk in Montreal, Selena Deckelmann offered to help review my slides and listen to me practice. We spent a few hours on the floor of her hotel room prepping while her very young daughter crawled around on the floor and chewed on my PyCon badge since she was teething. It's still one of my favorite PyCon and PyLadies memories.” - Carol Willing, Willing Consulting
“The hotel lobby last year turned into a makeshift meetup after the PyLadies Auction. People were having a great time at the auction and kept the energy going in the lobby afterward. Everyone was there, even those who hadn't attended the auction. Luckily, the hotel also sold my favorite chocolate milk in the lobby, so I got to end my evening drinking milk and chatting with Python friends.” - Cheuk Ting Ho (the PyLady who loves the auction and karaoke)
"In Pittsburgh a couple of years ago I was having breakfast at the hotel, when a guy I didn't know spotted my Python T-shirt and introduced himself. It was his first PyCon and my 21st, and we ended up having breakfast together. I gave him a few tips on enjoying a PyCon, but it turned out he was also a guitarist, so we spent most of breakfast talking about music and playing guitar.” - Naomi Ceder, former board chair and loooong time PyCon goer
"I ran into Trey Hunner during my first PyCon US in the hotel lobby as a PSF employee. He was running a Cabo game. He immediately welcomed me and showed me how to play. (He’s a great teacher, so I won three rounds in a row!) I also met a bunch of lovely people who have been attending PyCon US for years and years, and I learned that there is almost always a Cabo game in the hotel lobby." - Deb Nicholson (PSF Executive Director & resident Cabo shark)
“One of my most memorable hotel lobby moments was a chance encounter with Thomas Wouters. We fell into a natural conversation about his work and his deep, genuine pride in the Python Software Foundation community. He spoke warmly about the people who make the community what it is and what it means to him to be part of it. What I had no idea at the time was that just three days later, he would be called up on stage and announced as a Distinguished Service Award recipient — one of the highest honors the Python Software Foundation gives.” - Abigail Dogbe, PSF Board Member
“Juggling in the hotel lobby turned into an unexpected highlight of the conference. We had started teaching each other — my fault entirely for bringing the juggling balls — when a teenager and his mom wandered through on their way to see Pearl Jam. The kid's eyes lit up the moment he saw us, so I waved them over and started teaching him. Turns out they'd booked that very hotel hoping to cross paths with the band. He was excited about everything, and she was right there with him, every bit as thrilled.” - Ned Batchelder, Python Core Team and Netflix, Software Engineer
And this year, instead of sitting in LA-to-Long Beach traffic, consider staying in the official conference hotel block because there's too much to miss if you're too far away.
Real Talk: Why booking a room via PyCon US matters
If you're planning to attend PyCon US, please consider booking your stay within the official conference hotel block.
When attendees reserve rooms through the block, it helps the conference meet its contractual commitments with the venue, which directly impacts the overall cost of hosting the event.
Strong participation in the hotel block helps PyCon US keep registration prices as low as possible while continuing to invest in programs that support our community, like travel grants, accessibility services, and community events.
When rooms go unfilled in the block, the conference incurs major financial penalties that ultimately make the event more expensive to run for everyone.
By booking in the hotel block, you are giving back and helping keep PyCon US sustainable and affordable for the entire Python community.
PSST! Exclusive swag when you book a room. We can't say more.
Attendees who book within the official hotel block this year will receive a special mystery swag item. We can't tell you what it is. That's why it's called mystery swag. But we can tell you the only way to get it is to book in the official PyCon US hotel block.
Where to stay: official PyCon US 2026 hotel block
All hotels are in Long Beach, within easy reach of the Long Beach Convention Center.
The Westin Long Beach: Spacious rooms and great amenities, and the block still has availability. Book here
Hyatt Regency Long Beach: The conference headquarters hotel, closest to the convention center (just about connected). Book here
Marriott Long Beach Downtown: A solid choice with easy access to the convention center and the waterfront. Book here
Courtyard by Marriott Long Beach Downtown: A comfortable, more affordable option still within the block. Book here
Stéphane Wirtel
The Python book I just wanted to update
Last August, I announced the relaunch of this book with a certain naivety: I had recovered my 2014 PDF, extracted the Markdown with Docling, and assembled a Longform → Pandoc → Typst pipeline. I told myself it would take just a few weeks: update the versions, add a few chapters, and wrap it up.
Eight months later, the scope has tripled, the toolchain has been rewritten, and the way I work has completely changed. It's not what I had planned. It's better.
April 06, 2026
ListenData
How to Use Gemini API in Python
In this tutorial, you will learn how to use Google's Gemini AI model through its API in Python.
Follow the steps below to access the Gemini API and then use it in Python.
- Visit Google AI Studio website.
- Sign in using your Google account.
- Create an API key.
- Install the Google AI Python library for the Gemini API using the command below (a minimal usage sketch follows these steps):
pip install google-genai
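Once the key is created and the library installed, a minimal request might look like this (the model name and prompt are placeholders; replace the API key with your own):
```
from google import genai

# Create a client using the API key from Google AI Studio.
client = genai.Client(api_key="YOUR_API_KEY")

# Ask the model a question; "gemini-2.0-flash" is an example model name.
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Explain list comprehensions in Python in two sentences.",
)
print(response.text)
```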