
Planet Python

Last update: April 14, 2026 04:43 PM UTC

April 14, 2026


Real Python

Vector Databases and Embeddings With ChromaDB

The era of large language models (LLMs) is here, bringing with it rapidly evolving libraries like ChromaDB that help augment LLM applications. You’ve most likely heard of chatbots like OpenAI’s ChatGPT, and perhaps you’ve even experienced their remarkable ability to reason about natural language processing (NLP) problems.

Modern LLMs, while imperfect, can accurately solve a wide range of problems and provide correct answers to many questions. However, due to the limits of their training and the number of text tokens they can process, LLMs aren’t a silver bullet for all tasks.

You wouldn’t expect an LLM to deliver relevant responses about topics that don’t appear in its training data. For example, if you asked ChatGPT to summarize information in confidential company documents, you’d be out of luck. You could show some of these documents to ChatGPT, but there’s a limit to how many documents you can upload before you exceed ChatGPT’s maximum token count. How would you select which documents to show ChatGPT?

To address these limitations and scale your LLM applications, a great option is to use a vector database like ChromaDB. A vector database allows you to store encoded unstructured objects, like text, as lists of numbers that can be compared to one another. For instance, you can find a collection of documents relevant to a question you’d like an LLM to answer.
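To make this concrete, here's a minimal sketch of that workflow using ChromaDB's client API. The collection name and documents are made-up examples, and the sketch assumes Chroma's default embedding function:

Python

import chromadb

# In-memory client; a persistent client would use chromadb.PersistentClient()
client = chromadb.Client()
collection = client.create_collection(name="company_docs")

# Chroma encodes each document into an embedding vector with its
# default embedding function, then stores the vectors for comparison.
collection.add(
    documents=[
        "Our Q3 revenue grew 12% year over year.",
        "The cafeteria menu changes every Monday.",
    ],
    ids=["doc-1", "doc-2"],
)

# Retrieve the stored document whose embedding is closest to the question's
results = collection.query(query_texts=["How did revenue change?"], n_results=1)
print(results["documents"])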

In this video course, you’ll learn about vector databases and embeddings, and how to put them to work with ChromaDB.

After watching, you’ll have the foundational knowledge to use ChromaDB in your NLP or LLM applications. Before watching, you should be comfortable with the basics of Python and high school math.



April 14, 2026 02:00 PM UTC

Quiz: Explore Your Dataset With pandas

In this quiz, you’ll test your understanding of Explore Your Dataset With pandas.

By working through this quiz, you’ll revisit pandas core data structures, reading CSV files, indexing and filtering data, grouping and aggregating results, understanding dtypes, and combining DataFrames.

This quiz helps you apply the core techniques from the course so you can turn a large dataset into clear, reproducible insights.
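As a quick refresher before you take the quiz, here's a minimal sketch of filtering, grouping, and aggregating, using made-up data:

Python

import pandas as pd

df = pd.DataFrame({
    "city": ["Oslo", "Oslo", "Lima"],
    "sales": [120, 80, 95],
})

# Boolean indexing keeps only the rows matching a condition
print(df[df["sales"] > 90])

# Grouping and aggregating: total sales per city
print(df.groupby("city")["sales"].sum())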



April 14, 2026 12:00 PM UTC

Quiz: Altair: Declarative Charts With Python

In this quiz, you’ll test your understanding of Altair: Declarative Charts With Python.

By working through this quiz, you’ll revisit Altair’s core grammar of Data, Mark, and Encode, encoding channels and type shorthands, interactive selections with brushing and linked views, and common limitations to watch out for.
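As a quick refresher, here's a minimal sketch of that Data, Mark, and Encode grammar with made-up data (the :Q shorthand marks a quantitative field):

Python

import altair as alt
import pandas as pd

data = pd.DataFrame({"x": [1, 2, 3], "y": [4, 1, 6]})

# Data -> Mark -> Encode: a line chart with quantitative x and y channels
chart = alt.Chart(data).mark_line().encode(x="x:Q", y="y:Q")
chart.save("chart.html")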



April 14, 2026 12:00 PM UTC

Quiz: Vector Databases and Embeddings With ChromaDB

In this quiz, you’ll test your understanding of Embeddings and Vector Databases With ChromaDB.

By working through this quiz, you’ll revisit key concepts like vectors, cosine similarity, word and text embeddings, ChromaDB collections, metadata filtering, and retrieval-augmented generation (RAG).
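As a refresher on one of those concepts, here's a minimal sketch of cosine similarity between two made-up embedding vectors:

Python

import numpy as np

a = np.array([0.1, 0.9, 0.3])
b = np.array([0.2, 0.8, 0.4])

# Cosine similarity: dot product divided by the product of the magnitudes
similarity = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"{similarity:.3f}")  # values near 1.0 mean the vectors point the same way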



April 14, 2026 12:00 PM UTC


Python Software Foundation

PyCon US 2026: Why we're asking you to think about your hotel reservation

The PyCon US 2026 team has already covered some of the fun, unexpected, and meaningful reasons you’ll want to stay in the PyCon US hotel block. The PSF wants to use our blog to give a different angle, to keep being transparent with you, and share a little bit of real talk on the economics of holding a conference in the US at this moment in time. The short version is, if you’re joining us in Long Beach, please book the official PyCon US hotels through your PyCon US 2026 dashboard, because bookings in our hotel block are critical to the economic viability of the event.

Context on hotel bookings & PyCon US

For many years, PyCon US has relied on hotel booking commissions to help pay for our conference space. This helps us keep event tickets affordable and continue offering Travel Grants to community members who might not otherwise be able to attend PyCon US. Once your event outgrows academic spaces, donated conference rooms, or theatre spaces, working with the hotels is the industry’s standard way to pay for a professional convention center space. You commit to a certain number of hotel nights blocked off at nearby hotels, based on your event’s numbers from previous years, and in return, you get a reduced rental charge at the convention center. If you sell enough rooms, you additionally earn a small percentage of the revenue from those rooms, i.e. a commission. If, on the other hand, you don’t sell enough rooms, you owe damages to the hotels, essentially paying the full rate for the rooms they reserved for your event but didn’t sell.


This system has worked well for the PSF and PyCon US until this year. At the height of the pre-pandemic years, we brought in over $200,000 in hotel commissions. Even last year in Pittsburgh, we fully sold out one hotel and our total commission in 2025 was a healthy $95,909. Unfortunately, this year our hotel bookings are far behind the level they need to avoid damages, let alone earn any commission. We attribute this largely to the sad but understandable decline in willingness of international attendees, as well as some vulnerable domestic attendees, to travel to PyCon US, given the current environment. The bottom line is, if PyCon US hotel booking trends continue at their current pace, the PSF is on track to owe over $200,000 in damages under our hotel contracts.

We are not alone in this. The travel industry has been talking about the slump in foreign visitors to the US for months. The decline in foreign tourism revenue is also making the hotels less interested in being generous with our rates, contracts, and deadlines, since most hotels have seen declines in their bookings all year, not just during our event. Everyone is feeling the squeeze.

Where we’re at now

PyCon US ticket sales are only lagging by a bit. Local attendees buy their tickets later, which is something we anticipate, but this year’s hotel bookings are lagging by a lot compared to last year:

PyCon US ticket sales as of April 10, 2025: 1,565

PyCon US ticket sales as of April 12, 2026: 1,333

Hotel nights sold as of April 10, 2025: 3,155

Hotel nights sold as of April 12, 2026: 2,192

Hotel nights we need to sell by April 20, 2026 to avoid damages: 3,338

Additional hotel nights needed by April 20, 2026 to avoid damages: 1,146

The PSF signed a contract for the Long Beach venue back in July of 2023. At that time we couldn’t have foreseen this current situation, where interest in coming to the US has sharply declined due to increased risk. In response, we have focused on attracting more domestic attendees, and that has been going pretty well, but it hasn’t made up for the macroeconomic and geopolitical impacts on our attendance.

How you can help

We’ll need as many of our attendees as possible to book the official conference hotel before the deadline: the first hotel block closes on April 20th, and the last block closes on April 24th.

Booking the official conference hotel helps us keep PyCon US running and affordable, and it’s also a lot of fun to stay where the action is. If you are planning to join us at PyCon US this year (and we hope you can, because there are a lot of great things happening at the event this year!), then we hope you will consider booking an official conference hotel.

To book in our hotel block, first register for the conference, and then book your room directly from your attendee dashboard. If you need help or would like to reserve a group of rooms, please contact our housing partner Orchid: 1-877-505-0689 or help@orchid.events. Our hotels page has a full list of the four hotel options and their deadlines.

A final note

We want to thank you for your commitment to the community that makes PyCon US the special event it is. We hope to see you there to learn, collaborate, and share lots of fun moments. 

For all those who can’t be at PyCon US this year for whatever reason: you will be sorely missed and we hope to see you at a future edition of the event!

April 14, 2026 10:13 AM UTC


Seth Michael Larson

Add Animal Crossing events to your digital calendar

Animal Forest (“Dōbutsu no Mori” or “どうぶつの森”) was released in Japan for the Nintendo 64 on April 14th, 2001: exactly 25 years ago today! To celebrate this beloved franchise, I have created calendars for each of the “first-generation” Animal Crossing games that you can load into calendar apps like Google Calendar or Apple Calendar to see events from your town.

These calendars include holidays, special events, igloo and summer campers, and more. Additionally, I've created a tool which can generate importable calendars for the birthdays of villagers in your town using data from future titles and star signs from e-Reader cards.

Select which game, region, and language you are interested in, then scan the QR code or copy the URL and import the calendar manually into your calendar application. Note that calendars are only available for valid “game + region + language” combinations, such as “Animal Forest e+ + NTSC-J + Japanese”.
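If you're curious what an importable calendar looks like under the hood, here's a minimal sketch of the iCalendar (.ics) format with a single yearly recurring event. The villager name, date, and UID are made-up examples, not data from the actual tool:

# RFC 5545 requires CRLF line endings; newline="\r\n" handles that on write.
ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//animal-crossing-birthdays//EN
BEGIN:VEVENT
UID:bob-birthday@example.invalid
DTSTART;VALUE=DATE:20260101
RRULE:FREQ=YEARLY
SUMMARY:Bob's birthday
END:VEVENT
END:VCALENDAR
"""

with open("birthdays.ics", "w", newline="\r\n") as f:
    f.write(ICS)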


Continue reading on sethmlarson.dev ...



Thanks for keeping RSS alive! ♄

April 14, 2026 12:00 AM UTC

April 13, 2026


Python Software Foundation

Reflecting on Five Years as the PSF’s First CPython Developer in Residence

After nearly five wonderful years at the Python Software Foundation as the inaugural CPython Developer in Residence, it's time for me to move on. I feel honored and honestly so lucky to have had the opportunity to kick off the program that now includes several wonderful full-time engineers. I'm glad to see the program left in good hands. The vacancy created by my departure will be filled after PyCon US as the PSF is currently focused on delivering a strong event. I'm happy to share that Meta will continue to sponsor the CPython Developer in Residence role at least through mid-2027. The program is safe.

Łukasz with PSF's Security Developer in Residence Seth Larson and PyPI Safety & Security Engineer Mike Fiedler at PyCon US 2025


As a member of the Python Steering Council during Łukasz’s tenure as Developer in Residence, I express my personal gratitude for his dedication to the CPython project and the larger Python community. I know I echo the sentiment of everyone who has served on the Council during his time as DiR. He has defined what it means to be a Developer in Residence: a position that is incredibly important to the smooth operation of the CPython project, in large and small ways, visible and hidden. Our bi-weekly meetings gave the Steering Council a detailed, unique, and invaluable contemporaneous perspective on what’s happening in CPython. Łukasz leaves big shoes to fill, and we wish him all the best in his next endeavor. It’s comforting to know that he will continue to be a Python leader and member of the core team.


-- Barry Warsaw, Python Steering Council member, 2026


In my time as a developer in residence, I personally touched some pretty amazing projects like the transition to GitHub issues from bugs.python.org, the replacement of the mostly manual CLA process with an automated system, the introduction of free threading to Python, and the replacement of the interactive shell in the interpreter. And between the thousands of pull requests I've reviewed or authored, and the many less glamorous tasks like content moderation and keeping the lights on when it comes to core workflow, I've interacted with some amazing individuals. Some of them are core developers now. I've witnessed the full-time paid developer in residence roster at the Python Software Foundation grow from one person to five.


As for me, ever since seeing it for the first time in 2013, I had dreamed about moving permanently to Vancouver BC. This dream is coming true soon. As part of that move, I'm joining Meta as a software engineer on the Python Language Foundation team. In any case, I'm not disappearing from the open-source Python community. I'll be seeing you online and maybe even in person at Python-related conferences.


April 13, 2026 02:01 PM UTC


Real Python

How to Add Features to a Python Project With Codex CLI

After reading this guide, you’ll be able to use Codex CLI to add features to a Python project directly from your terminal. Codex CLI is an AI-powered coding assistant that runs inside your terminal. It understands your project structure, reads your files, and proposes multi-file changes using natural language instructions.

Instead of copying code from a browser or relying on an IDE plugin, you’ll use Codex CLI to implement a real feature in a multi-file Python project directly from your terminal:

Example of Using Codex CLI to Implement a Project Feature

In the following steps, you’ll install and configure Codex CLI, use it to implement a deletion feature in a contact book app, and then refine that feature through iterative prompting.

Take the Quiz: Test your knowledge with our interactive “How to Add Features to a Python Project With Codex CLI” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

How to Add Features to a Python Project With Codex CLI

Test your knowledge of Codex CLI, the AI-powered terminal tool for adding features to Python projects with natural language.

Prerequisites

To follow this guide, you should be familiar with the Python language. You’ll also need an OpenAI account with either a paid ChatGPT subscription or a valid API key, which you’ll connect to Codex CLI once you install it. Additionally, you’ll need to have Node.js installed, since Codex CLI is distributed as an npm package.

To make it easier for you to experiment with Codex CLI, download the RP Contacts project by clicking the link below:

The RP Contacts project is a text-based user interface (TUI) app that lets you manage contacts directly in the terminal through Textual. It’s an adapted version of the project from Real Python’s tutorial Build a Contact Book App With Python, Textual, and SQLite. It differs from the original in that it uses uv to manage the project, and the TUI buttons Delete and Clear All haven’t been implemented—that’s what you’ll use Codex CLI for.

Once you’ve downloaded the project, you want to check that you can run it. As mentioned, the project uses uv for dependency management—you can tell by the uv.lock file in the project root—so make sure you have uv installed. If you don’t have uv yet, follow the official installation instructions.

Once you have uv installed and you’re at the root directory of the project, you can run the project:

Shell
$ uv run rpcontacts

When you run the command rpcontacts through uv for the first time, uv will create a virtual environment, install the dependencies of your project, and start the RP Contacts TUI. If all goes well, you should see a TUI in your terminal with a couple of buttons and an empty contact listing:

Contact Book Planned Main TUI

Once the TUI opens, create some test contacts by using the Add button and filling in the form that pops up. After creating a couple of fake contacts, quit the application by pressing Q.

Finally, you’ll want to initialize a Git repository at the root of your project and commit all your files:

Shell
$ git init .
$ git add .
$ git commit -m "First commit."

Codex CLI will modify your code, and you can’t always tell in advance whether the changes will be good. Versioning your code makes it straightforward to roll back any changes made by LLMs if you don’t like them.

If you want to explore other AI-powered coding tools alongside Codex CLI, Real Python’s Python Coding With AI learning path brings together tutorials and video courses on AI-assisted coding, prompt engineering, and LLM development.

Step 1: Install and Configure Codex CLI

With all the accessory setup out of the way, it’s now time to install Codex CLI. For that, you’ll want to check the official OpenAI documentation to see the most up-to-date installation instructions. As of now, OpenAI recommends using npm:

Read the full article at https://realpython.com/codex-cli/ »



April 13, 2026 02:00 PM UTC


PyCon

How to Build Your PyCon US 2026 Schedule

Six Pathways Through the Talks

Finding your way through three days of world-class Python content

PyCon US 2026 runs May 13–19 in Long Beach, California, and with over 100 talks across five rooms over three days, the schedule can feel like a lot to navigate. The good news: whether you came to go deep on Python performance, level up your security knowledge, get practical Python insights for agentic AI, or finally understand what all the async fuss is about, there's a clear path through the content that's built for you. Register now to get in on the full experience. 

We mapped six attendee pathways through the full talks schedule, each one a curated sequence of sessions that focuses on a core Python topic, plus a bonus tutorial to pair with it. Think of them less as tracks and more as through-lines. Pick the one that matches where you are and what you want to walk away with, then integrate it into your work.

Python Performance: From Memory to Metal

If you want to understand why your Python is slow and what to actually do about it, this is your path. It runs across all three days and takes you from memory profiling fundamentals all the way to CPython internals with one of the core developers who is actually changing the way the runtime works.


Pair it with a tutorial: Start the week with Arthur Pastel and Adrien Cacciaguerra's Wednesday tutorial Python Performance Lab: Sharpening Your Instincts. It's a hands-on lab designed to build the kind of performance intuition that makes everything in this pathway land harder.

Debugging and Observability: Finding What's Wrong and Why

This pathway is for engineers who spend too much time in production fires and want better tools for preventing and diagnosing them. It moves from memory leak storytelling through the brand new profiling and debugging interfaces landing in Python 3.14 and 3.15.


Pair it with a tutorial: Catherine Nelson and Robert Masson's Thursday tutorial Going from Notebooks to Production Code is a natural warm-up: it covers the gap between exploratory code and production systems, which is exactly where most debugging pain lives.

Concurrency and Async: Making Python Do More at Once

The concurrency story in Python is changing faster than it has in years. This pathway traces the thread from hardware-level parallelism through the GIL removal to practical async patterns for the systems people are actually building in 2026.


Pair it with a tutorial: Trey Hunner's Wednesday tutorial Lazy Looping in Practice: Building and Using Generators and Iterators is a perfect primer. Generators and iterators are the building blocks of Python's async model, and Hunner is one of the best teachers in the community at making these concepts click.

AI and Machine Learning: From Inference to Agents

The dedicated Future of AI with Python track runs all day Friday, May 15th, and it's one of the strongest single-day lineups in the schedule. This pathway threads the AI content across the full conference, from hardware fundamentals to production-grade inference.


Pair it with a tutorial: Two tutorials are worth your attention here. Pamela Fox's Wednesday Build Your First MCP Server in Python is the fastest way to understand how agentic systems actually work under the hood — MCP is quickly becoming the standard way to give AI agents access to tools and data. And Isabel Michel's Wednesday Implementing RAG in Python: Build a Retrieval-Augmented Generation System gives you the hands-on foundation underneath most modern LLM applications.

Security: A Full Day Worth Taking Seriously

Saturday, May 16th, is the first-ever dedicated security track at PyCon US, and if security is anywhere near your professional concerns, you should plan to spend most of Saturday in Room 103ABC. Eleven experts. One room. A full day.


Pair it with a tutorial: Paul Zuradzki's Wednesday tutorial, Practical Software Testing with Python, is a strong complement: the discipline of writing tests and the discipline of writing secure code overlap more than most developers realize, and this tutorial gives you the testing foundation that makes security practices easier to implement and verify.

New to Python and Packaging: A First-Timer's Path Through the Conference

Not every pathway is about going deep. This one is for attendees who are newer to Python or who want to level up on tooling, packaging, and writing code that other people can actually use. It runs gently across all three days and ends with a satisfying arc.


Pair it with a tutorial: Mason Egger's Thursday tutorial, Writing Pythonic Code: Features That Make Python Powerful, is the ideal warm-up for this entire pathway. It covers the idioms and language features that separate code that works from code that feels like Python, which is exactly the mindset the rest of this track builds on. Or if you are just getting started with no experience at all, try Python for Absolute Beginners. If you've started and stopped learning to code before, or never got around to starting at all, sign up for this tutorial and start PyCon US on a strong note.

However you come to PyCon US 2026, there's a path through the schedule built for you. The full talks schedule is at us.pycon.org/2026/schedule/talks, the full tutorials schedule is at https://us.pycon.org/2026/schedule/tutorials/, and registration is open now.

We'll see you in Long Beach.


PyCon US 2026 takes place May 13–19 in Long Beach, California. Talks run Friday, May 15th, through Sunday, May 17th.


April 13, 2026 12:21 PM UTC


Real Python

Quiz: Gemini CLI vs Claude Code: Which to Choose for Python Tasks

In this quiz, you’ll test your understanding of Gemini CLI vs Claude Code: Which to Choose for Python Tasks.

By working through this quiz, you’ll revisit key differences between Gemini CLI and Claude Code, including installation requirements, model selection, performance benchmarks, and pricing models.



April 13, 2026 12:00 PM UTC

Quiz: Python Continuous Integration and Deployment Using GitHub Actions

This quiz helps you review the key steps for setting up continuous integration and delivery using GitHub Actions. You’ll practice how to organize workflow files, choose common triggers, and use essential Git and YAML features.

Whether you’re just getting started or brushing up, these questions draw directly from Python Continuous Integration and Deployment Using GitHub Actions. Test your understanding before building your next workflow.



April 13, 2026 12:00 PM UTC

April 12, 2026


Ned Batchelder

Linklint

I wrote a Sphinx extension to eliminate excessive links: linklint. It started as a linter to check and modify .rst files, but it grew into a Sphinx extension that works without changing the source files.

It all started with a topic in the discussion forums: Should not underline links, which argued that the underlining was distracting from the text. Of course we did not remove underlines; they are important for accessibility and for seeing that there are links at all.

But I agreed that there were places in the docs that had too many links. In particular, there are two kinds of link that are excessive.

Linklint is a Sphinx extension that suppresses these two kinds of links during the build process. It examines the doctree (the abstract syntax tree of the documentation) and finds and modifies references matching our criteria for excessiveness. It’s running now in the CPython documentation, where it suppressed 3612 links. Nice.
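For readers who haven't written a Sphinx extension before, here's a minimal sketch of the general pattern: hook into the build, walk the doctree, and replace reference nodes with plain text. The duplicate-target criterion below is a made-up stand-in for illustration; linklint's actual rules differ:

from docutils import nodes

def suppress_links(app, doctree, docname):
    # Walk the resolved doctree and unlink repeated references
    # to the same target. (Illustrative criterion only.)
    seen = set()
    for ref in list(doctree.findall(nodes.reference)):
        target = ref.get("refuri") or ref.get("refid")
        if target in seen:
            # Replace the link node with its plain text content
            ref.replace_self(nodes.Text(ref.astext()))
        else:
            seen.add(target)

def setup(app):
    # Run after references are resolved, without touching the .rst sources
    app.connect("doctree-resolved", suppress_links)
    return {"parallel_read_safe": True}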

I had another idea for a kind of link to suppress: “obvious” references. For example, I don’t think it’s useful to link every instance of “str” to the str() constructor. Is there anyone who needs that link because they don’t know what “str” means? And if they don’t know, is that the right place to take them?

There are three problems with that idea: first, not everyone agrees that “obvious” links should be suppressed at all. Second, even among those who do, people won’t agree on what is obvious. Sure, int and str. But what about list, dict, set? Third, there are some places where a link to str() needs to be kept, like “See str() for details.” Sphinx has a syntax for references to suppress the link, but there’s no syntax to force a link when linklint wants to suppress it.

So linklint doesn’t suppress obvious links. Maybe we can do it in the future once there’s been some more thought about it.

In the meantime, linklint is working to stop many excessive links. It was a small project that turned out much better than I expected when I started on it. A Sphinx extension is a really powerful way to adjust or enhance documentation without causing churn in the .rst source files. Sphinx itself can be complex and mysterious, but with a skilled code reading assistant, I was able to build this utility and improve the documentation.

April 12, 2026 06:22 PM UTC

April 11, 2026


Rodrigo GirĂŁo SerrĂŁo

Personal highlights of PyCon Lithuania 2026

In this article I share my personal highlights of PyCon Lithuania 2026.

Shout out to the organisers and volunteers

This was my second time at PyCon Lithuania and, for the second time in a row, I leave with the impression that everything was very well organised and smooth. Maybe the organisers and volunteers were stressed out all the time — organising a conference is never easy — but everything looked under control all the time and well thought-through.

Thank you for an amazing experience!

And by the way, congratulations on 15 years of PyCon Lithuania. To celebrate, they even served a gigantic cake during the first networking event. The cake was at least 80 cm by 30 cm:

A picture of a large rectangular cake with the PyCon Lithuania logo in the middle: the PyCon Lithuania cake.

I'll be honest with you: I didn't expect the cake to be good. The quality of food tends to degrade when it's cooked at a large scale... But the taste was great, and the cake had three coloured layers in yellow, green, and red.

Social activities

The organisers prepared two networking events, a speakers' dinner, and three city tours (one per evening) for speakers. There was always something for you to do.

The city tour is a brilliant idea, and I wonder why more conferences don't do it.

I had taken the city tour last time I had been at PyCon Lithuania and taking it again was not a mistake. Here's our group at the end of the tour, immediately before the speakers' dinner:

Some PyCon Lithuania speakers smiling at the camera in front of Gediminas's castle, at the end of the city tour.

The conference organisers even made sure that the city tour ended close to the location of the speakers' dinner and that the tour ended at the same time as the dinner started. Another small detail that was carefully planned.

The atmosphere of the restaurant was very pleasant and the staff there was helpful and kind, so we had a wonderful night. At some point, at our table, we noticed that the folks at the other two tables were projecting something on a big screen. There was a large curtain that partially separated our table from the other two, so we took some time to realise that an impromptu Python quiz was about to take place.

I'm (way too) competitive and immediately got up to play. After six questions, which included learning about the existence of the web framework Falcon and correctly reordering the first four sentences of the Zen of Python, I was crowned the winner:

A slanted picture of a blue screen showing the player RGS at the top of the quiz podium: the final score for the quiz.

The top three players got a free spin on the PyCon Lithuania wheel of fortune.

Egg hunt and swag

On each day of the conference there was an egg hunt running...

April 11, 2026 12:23 PM UTC


Armin Ronacher

The Center Has a Bias

Whenever a new technology shows up, the conversation quickly splits into camps. There are the people who reject it outright, and there are the people who seem to adopt it with religious enthusiasm. For more than a year now, no topic has been more polarising than AI coding agents.

What I keep noticing is that a lot of the criticism directed at these tools is perfectly legitimate, but it often comes from people without a meaningful amount of direct experience with them. They are not necessarily wrong. In fact, many of them cite studies, polls and all kinds of sources that themselves spent time investigating and surveying. And quite legitimately they identified real issues: the output can be bad, the security implications are scary, the economics are strange and potentially unsustainable, there is an environmental impact, the social consequences are unclear, and the hype is exhausting.

But there is something important missing from that criticism when it comes from a position of non-use: it is too abstract.

There is a difference between saying “this looks flawed in principle” and saying “I used this enough to understand where it breaks, where it helps, and how it changes my work.” The second type of criticism is expensive. It costs time, frustration, and a genuine willingness to engage.

The enthusiast camp consists of true believers. These are the people who have adopted the technology despite its shortcomings, sometimes even because they enjoy wrestling with them. They have already decided that the tool is worth fitting into their lives, so they naturally end up forgiving a lot. They might not even recognize the flaws because for them the benefits or excitement have already won.

But what does the center look like? I consider myself to be part of the center: cautiously excited, but also not without criticism. In my observation, though, that center is not neutral in the way people imagine it to be. Its bias is not towards endorsement so much as towards engagement, because the middle ground between rejecting a technology outright and embracing it fully is usually occupied by people willing to explore it seriously enough to judge it.

Bias on Both Sides

The groups of people in discussions about new technology are oddly composed because one side has paid the cost of direct experience and the other has not, or not to the same degree. That alone creates an asymmetry.

Take coding agents as an example. If you do not use them, or at least not for productive work, you can still criticize them on many grounds. You can say they generate sloppy code, that they lower your skills, etc. But if you have not actually spent serious time with them, then your view of their practical reality is going to be inherited from somewhere else. You will know them through screenshots, anecdotes, the most annoying users on Twitter, conference talks, company slogans, and whatever filtered back from the people who did use them. That is not nothing, but it is not the same as contact.

The problem is not that such criticism is worthless. The problem is that people often mistake non-use for neutrality. It is not. A serious opinion on a new language, framework, device, or way of working usually has some minimum buy-in. You have to cross a threshold of use before your criticism becomes grounded in the thing itself rather than in its reputation.

That threshold is inconvenient. It asks you to spend time on something that may not pay off, and to risk finding yourself at least partially won over. It is a lot to ask of people. But because that threshold exists, the measured middle is rarely populated by people who are perfectly indifferent to change. It is populated by people who were willing to move toward it enough in order to evaluate it properly.

Simultaneously, it’s important to remember that usage does not automatically create wisdom. The enthusiastic adopter might have their own distortions. They may enjoy the novelty, feel a need to justify the time they invested, or overgeneralize from the niche where the technology works wonderfully. They may simply like progress and want to be associated with it.

This is particularly visible with AI. There are clearly people who have decided that the future is here, all objections are temporary, and every workflow must now be rebuilt around agents. What makes AI weirder is that it’s such a massive shift in capabilities that has triggered a tremendous injection of money, and a meaningful number of adopters have bet their future on that technology.

So if one pole is uninformed abstraction and the other is overcommitted enthusiasm, then surely the center must sit right in the middle between them?

Engagement Is Not Endorsement

The center, I would argue, naturally needs to lean towards engagement. The reason is simple: a genuinely measured opinion on a new technology requires real engagement with it.

You do not get an informed view by trying something for 15 minutes, getting annoyed once, and returning to your previous tools. You also do not get it by admiring demos, listening to podcasts, or discussing on social media. You have to use it enough to get past both the first disappointment and the honeymoon phase. With AI tools, it seems, true understanding is a matter not of hours but of weeks of investment.

That means the people in the center are selected from a particular group: people who were willing to give the thing a fair chance without yet assuming it deserved a permanent place in their lives.

That willingness is already a bias towards curiosity and experimentation which makes the center look more like adopters in behavior, because exploration requires use, but it does not make the center identical to enthusiasts in judgment.

This matters because from the perspective of the outright rejecter, all of these people can look the same. If someone spent serious time with coding agents, found them useful in some areas, harmful in others, and came away with a nuanced view, they may still be thrown into the same bucket as the person who thinks agents can do no wrong.

But those are not the same position at all. It’s important to recognize that engagement with those tools does not automatically imply endorsement or at the very least not blanket endorsement.

The Center Looks Suspicious

This is why discussions about new technology, and AI in particular feel so polarized. The actual center is hard to see because it does not appear visually centered. From the outside, serious exploration can look a lot like adoption.

If you map opinions onto a line, you might imagine the middle as the point equally distant from rejection and enthusiasm. But in practice that is not how it works. The middle is shifted toward the side of the people who have actually interacted with the technology enough to say something concrete about it. That does not mean the middle has accepted the adopter’s conclusion. It means the middle has adopted some of the adopter’s behavior, because investigation requires contact.

That creates a strange effect because the people with the most grounded criticism are often also adopters. I would argue some of the best criticism of coding agents right now comes from people who use them extensively. Take Mario: he created a coding agent, yet is also one of the most vocal voices of criticism in the space. These folks can tell you in detail how they fail and they can tell you where they waste time, where they regress code quality, where they need carefully designed tooling, where they only work well in some ecosystems, and where the whole thing falls apart.

But because those people kept using the tools long enough to learn those lessons, they can appear compromised to outsiders. And worse: if they continue to use them, contribute thoughts and criticism back, they are increasingly thrown in with the same people who are devoid of any criticism.

Failure Is Possible

This line of thinking could be seen as an inherent “pro-innovation bias.” That would be wrong, as plenty of technology deserves resistance. Many people are right to resist, and sometimes the people who never gave a technology a chance saw problems earlier than everyone else. Crypto is a good reminder: plenty of projects looked every bit as exciting as coding agents do now, and still collapsed when the economics no longer worked.

What matters here is a narrower point. The center is not biased towards novelty so much as towards contact with the thing that creates potential change. The middle ground is not between use and non-use, but between refusal and commitment and the people in the center will often look more like adopters than skeptics, not because they have already made up their minds, but because getting an informed view requires exploration.

If you want to criticize a new thing well, you first have to get close enough to dislike it for the right reasons. And for some technologies, you also have to hang around long enough to understand what, exactly, deserves criticism.

April 11, 2026 12:00 AM UTC

April 10, 2026


Talk Python to Me

#544: Wheel Next + Packaging PEPs

When you pip install a package with compiled code, the wheel you get is built for CPU features from 2009. Want newer optimizations like AVX2? Your installer has no way to ask for them. GPU support? You're on your own configuring special index URLs. The result is fat binaries, nearly gigabyte-sized wheels, and install pages that read like puzzle books. A coalition from NVIDIA, Astral, and QuantSight has been working on Wheel Next: a set of PEPs that let packages declare what hardware they need and let installers like uv pick the right build automatically. Just uv pip install torch and it works. I sit down with Jonathan Dekhtiar from NVIDIA, Ralf Gommers from QuantSight and the NumPy and SciPy teams, and Charlie Marsh, founder of Astral and creator of uv, to dig into all of it.

Episode sponsors

Sentry Error Monitoring (code talkpython26): https://talkpython.fm/sentry
Temporal: https://talkpython.fm/temporal
Talk Python Courses: https://talkpython.fm/training

Links from the show

Charlie Marsh: https://github.com/charliermarsh
Ralf Gommers: https://github.com/rgommers
Jonathan Dekhtiar: https://github.com/DEKHTIARJonathan
CPU dispatcher: https://numpy.org/doc/stable/reference/simd/how-it-works.html
Build options: https://numpy.org/doc/stable/reference/simd/build-options.html
Red Hat RHEL: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
Red Hat RHEL AI: https://www.redhat.com/en/products/ai
Red Hat's presentation: https://wheelnext.dev/summits/2025_03/assets/WheelNext%20Community%20Summit%20-%2006%20-%20Red%20Hat.pdf
CUDA release: https://developer.nvidia.com/cuda/toolkit
Requires a PEP: https://discuss.python.org/t/pep-proposal-platform-aware-gpu-packaging-and-installation-for-python/91910
WheelNext: https://wheelnext.dev/
GitHub repo: https://github.com/wheelnext
PEP 817: https://peps.python.org/pep-0817/
PEP 825: https://discuss.python.org/t/pep-825-wheel-variants-package-format-split-from-pep-817/106196
uv: https://docs.astral.sh/uv/
A variant-enabled build of uv: https://astral.sh/blog/wheel-variants
pyx: https://astral.sh/blog/introducing-pyx
pypackaging-native: https://pypackaging-native.github.io
PEP 784: https://peps.python.org/pep-0784/

Watch this episode on YouTube: https://www.youtube.com/watch?v=761htncGZpU
Episode #544 deep-dive: https://talkpython.fm/episodes/show/544/wheel-next-packaging-peps#takeaways-anchor
Episode transcripts: https://talkpython.fm/episodes/transcript/544/wheel-next-packaging-peps

April 10, 2026 04:56 PM UTC


PyCharm

How (Not) to Learn Python

While listening to Mark Smith’s inspirational talk for Python Unplugged on PyTV about How to Learn Python, what caught my attention was that Mark suggested turning off some of PyCharm’s AI features to help you learn Python more effectively.

As a PyCharm user myself, I’ve found the AI-powered features beneficial in my day-to-day work; however, I never considered that I could turn certain features on or off to customize my experience. This can be done from the settings menu under Editor | General | Code Completion | Inline.

While we are at it, let’s have a look at these features and investigate in more detail why they are great for professional developers but may not be ideal for learners.

Local full line code completion suggestions

JetBrains AI credits are not consumed when you use local line completion. The completion prediction is performed using a built-in local deep learning model. To use this feature, make sure the box for Enable inline completion using language models is checked, and choose either Local or Cloud and local in the options. To show what the local model can do on its own, we will look at the predictions when only Local is selected.

When it’s selected, you see that the only code completion available out of the box in PyCharm is for Python. To make suggestions available for CSS or HTML, you need to download additional models.

When you are writing code, you will see suggestions pop up in grey with a hint for you to use Tab to complete the line. 

After completing that line, you can press Enter to go to the next one, where there may be a new suggestion that you can again use Tab to complete. As you can see, this can be very convenient for developers in their daily coding, as it saves time that would otherwise be spent typing obvious lines of code that follow the flow naturally.

However, for beginners, mindlessly hitting Tab and letting the model complete lines may discourage them from learning how to use the functions correctly. An alternative is to use the hint provided by PyCharm to help you choose an appropriate method from the available list, determine which parameters are needed, check the documentation if necessary, and write the code yourself. Here is what the hint looks like when code completion is turned off:

Cloud-based completion suggestions

Let’s have a look at cloud-based completion in contrast to local completion. When using cloud-based completion, next-edit suggestions are also available (which we will look at in more detail in the next section).

Cloud-based completion comes with support for multiple languages by default, and you can switch it on or off for each language individually.

Cloud-based completion provides more functionality than local model completion, but you need a JetBrains AI subscription to use it.

You may also connect to a third-party AI provider for your cloud-based completion. Since this support is still in Beta in PyCharm 2026.1, it is highly recommended to keep your JetBrains AI subscription active as a backup to ensure all features are available.

After switching to cloud-based completion, one of the differences I noticed was that it is better at multiple-line completion, which can be more convenient. However, I have also encountered situations where the completion provided too much for me, and I had to jump in to make my own modifications after accepting the suggestions.

For learners of Python, again, you may want to disable this functionality or have to audit all the suggestions in detail yourself. In addition to the danger of relying too heavily on code completion, which removes opportunities to learn, cloud code completion poses another risk for learners. Because larger suggestions require active review from the developer, learners may not be equipped to fully audit the wholesale suggestions they are accepting. Disabling this feature for learners not only encourages learning, but it can also help prevent mistakes.

Next edit suggestions

In addition to cloud-based completion, JetBrains AI Pro, Ultimate, and Enterprise users are able to take advantage of next edit suggestions.

When they are enabled, every time you make changes to your code, for example, renaming a variable, you will be given suggestions about other places that need to be changed.

And when you press Tab, the changes will be made automatically. You can also customize this behavior so you can see previews of the changes and jump continuously to the next edit until no more are suggested.

This is, no doubt, a very handy feature. It can help you avoid some careless mistakes, like forgetting to refactor your code when you make changes. However, for learners, thinking about what needs to be done is a valuable thought exercise, and using this feature can deprive them of some good learning opportunities.

Conclusion

PyCharm offers a lot of useful features to smooth out your day-to-day development workflow. However, these features may be too powerful, and even too convenient, for those who have just started working with Python and need to learn by making mistakes. It is good to use AI features to improve our work, but we also need to double-check the results and make sure that we want what the AI suggests.

To learn more about how to level up your Python skills, I highly recommend watching Mark’s talk on PyTV and checking out all the AI features that JetBrains AI has to offer. I hope you will find the perfect way to integrate them into your work while remaining ready to turn them off when you plan to learn something new.

April 10, 2026 02:21 PM UTC


Ahmed Bouchefra

Build Your Own AI Meme Matcher: A Beginner's Guide to Computer Vision with Python

Have you ever wondered how Snapchat filters know exactly where your eyes and mouth are? Or how your phone can unlock just by looking at your face? The magic behind this is called Computer Vision, a field of Artificial Intelligence that allows computers to “see” and understand digital images.

Today, we are going to build something incredibly fun using Computer Vision: a Real-Time Meme Matcher.

Point your webcam at yourself, make a shocked face, and watch as the app instantly matches you with the “Overly Attached Girlfriend” meme. Smile and raise your hand, and Leonardo DiCaprio raises a glass right back at you.

But this isn’t just a fun project. We are going to build this using Object-Oriented Programming (OOP). OOP is a professional coding style that makes your code clean, organized, and easy to upgrade. By the end of this tutorial, you will have a working AI app and a solid understanding of how professional software is structured.

Let’s dive in!

Prerequisites

Before we start coding, make sure you have Python installed and a working webcam.

You will also need to install a few Python libraries. Open your terminal or command prompt and run:

pip install mediapipe opencv-python numpy

The Theory: How Does It Work?

Before we look at the code, let’s understand the two main concepts powering our application: Computer Vision (Facial Landmarks) and Object-Oriented Programming.

1. Facial Landmarks (How the AI “Sees” You)

We are using a Google library called MediaPipe. When you feed an image to MediaPipe, it places a virtual “mesh” of 478 invisible dots (called landmarks) over your face.

To figure out your expression, we use simple math. For example, how do we know if your mouth is open in surprise?

We measure the vertical distance between the dot on your top lip and the dot on your bottom lip.

If the distance is large, your mouth is open! We do the same for your eyes and eyebrows to calculate “scores” for surprise, smiling, or concern.
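Here's a tiny sketch of that measurement, using made-up normalized landmark coordinates; the full code later uses the real MediaPipe landmark indices:

import numpy as np

# Hypothetical (x, y) landmark positions, normalized to the 0-1 range
top_lip = np.array([0.50, 0.55])
bottom_lip = np.array([0.50, 0.62])

# Euclidean distance between the two dots: bigger means the mouth is more open
mouth_openness = np.linalg.norm(top_lip - bottom_lip)
print(f"Mouth openness: {mouth_openness:.3f}")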

2. Object-Oriented Programming (OOP)

Instead of writing one massive, confusing block of code, OOP allows us to break our program into separate components called Classes.

Think of a Class as a blueprint.

For our Meme Matcher, we will create three distinct classes, each with a “Single Responsibility” (a golden rule of coding):

  1. ExpressionAnalyzer (The Brain): Handles the AI math and MediaPipe.
  2. MemeLibrary (The Database): Loads the images and compares the user’s face to the memes.
  3. MemeMatcherApp (The UI): Opens the webcam and draws the pictures on the screen.

Step 1: Building the Brain

Let’s start by creating the class that does all the heavy lifting. Create a file named meme_matcher.py and import the necessary tools. Then, we will define our first class.

import cv2
import numpy as np
import mediapipe as mp
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor
import pickle
import os
import subprocess

class ExpressionAnalyzer:
    """
    The ExpressionAnalyzer class acts as the 'Brain' of our project.
    It encapsulates (hides away) the complex MediaPipe machine learning logic.
    """
    
    # Class Variables: Landmark indices for eyes, eyebrows, and mouth
    LEFT_EYE_UPPER = [159, 145, 158]
    LEFT_EYE_LOWER = [23, 27, 133]
    RIGHT_EYE_UPPER = [386, 374, 385]
    RIGHT_EYE_LOWER = [253, 257, 362]
    LEFT_EYEBROW = [70, 63, 105, 66, 107]
    RIGHT_EYEBROW = [300, 293, 334, 296, 336]
    MOUTH_OUTER = [61, 291, 39, 181, 0, 17, 269, 405]
    MOUTH_INNER = [78, 308, 95, 88]
    NOSE_TIP = 4

    def __init__(self, frame_skip: int = 2):
        self.last_features = None  
        self.frame_counter = 0     
        self.frame_skip = frame_skip 

        # Download the required AI models automatically
        self.face_model_path = self._download_model(
            "face_landmarker.task",
            "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task"
        )
        self.hand_model_path = self._download_model(
            "hand_landmarker.task",
            "https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task"
        )

        # Initialize MediaPipe objects for both video and images
        
        self.face_mesh_video = self._init_face_landmarker(video_mode=True)
        self.hand_detector_video = self._init_hand_landmarker(video_mode=True)
        self.face_mesh_image = self._init_face_landmarker(video_mode=False)
        self.hand_detector_image = self._init_hand_landmarker(video_mode=False)

Understanding the Brain

In the code above, we define lists of numbers like LEFT_EYE_UPPER. These are the exact dot numbers (out of the 478) that outline the eye.

The __init__ method is a special function called a constructor. Whenever we create an ExpressionAnalyzer, this code runs automatically to set everything up. It downloads the MediaPipe AI models from Google’s servers and loads them into memory so they are ready to process faces.

Next, we add the logic to extract features:

    # ... (Add this inside the ExpressionAnalyzer class) ...

    def extract_features(self, image: np.ndarray, is_static: bool = False) -> dict:
        """Analyzes an image and returns facial/hand features as a dictionary."""
        
        face_landmarker = self.face_mesh_image if is_static else self.face_mesh_video
        
        hand_landmarker = self.hand_detector_image if is_static else self.hand_detector_video

        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        
        mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb)

        if is_static:
            face_res = face_landmarker.detect(mp_image)
            hand_res = hand_landmarker.detect(mp_image)
        else:
            self.frame_counter += 1
            if self.frame_counter % self.frame_skip != 0:
                return getattr(self, "last_features", None)
            
            face_res = face_landmarker.detect_for_video(mp_image, self.frame_counter)
            hand_res = hand_landmarker.detect_for_video(mp_image, self.frame_counter)

        if not face_res.face_landmarks:
            return None

        landmarks = face_res.face_landmarks[0]
        landmark_array = np.array([[l.x, l.y] for l in landmarks])
        
        # Calculate the mathematical features
        features = self._compute_features(landmark_array, hand_res)
        self.last_features = features
        return features

    def _compute_features(self, landmark_array: np.ndarray, hand_res) -> dict:
        """Helper function to calculate Eye Aspect Ratio (How open the eye is)"""
        
        def ear(upper, lower):
            vert = np.linalg.norm(landmark_array[upper] - landmark_array[lower], axis=1).mean()
            horiz = np.linalg.norm(landmark_array[upper[0]] - landmark_array[upper[-1]])
            return vert / (horiz + 1e-6) 

        left_ear = ear(self.LEFT_EYE_UPPER, self.LEFT_EYE_LOWER)
        right_ear = ear(self.RIGHT_EYE_UPPER, self.RIGHT_EYE_LOWER)
        avg_ear = (left_ear + right_ear) / 2.0

        # Mouth calculations
        
        mouth_top, mouth_bottom = landmark_array[13], landmark_array[14]
        mouth_height = np.linalg.norm(mouth_top - mouth_bottom)
        mouth_left, mouth_right = landmark_array[61], landmark_array[291]
        mouth_width = np.linalg.norm(mouth_left - mouth_right)
        mouth_ar = mouth_height / (mouth_width + 1e-6)

        # Eyebrow calculations
        
        left_brow_y = landmark_array[self.LEFT_EYEBROW][:, 1].mean()
        right_brow_y = landmark_array[self.RIGHT_EYEBROW][:, 1].mean()
        left_eye_center = landmark_array[self.LEFT_EYE_UPPER + self.LEFT_EYE_LOWER][:, 1].mean()
        right_eye_center = landmark_array[self.RIGHT_EYE_UPPER + self.RIGHT_EYE_LOWER][:, 1].mean()
        
        avg_brow_h = ((left_eye_center - left_brow_y) + (right_eye_center - right_brow_y)) / 2.0

        # Check for hands
        
        num_hands = len(hand_res.hand_landmarks) if hand_res.hand_landmarks else 0
        hand_raised = 1.0 if num_hands > 0 else 0.0

        return {
            'eye_openness': avg_ear,
            'mouth_openness': mouth_ar,
            'eyebrow_height': avg_brow_h,
            'hand_raised': hand_raised,
            'surprise_score': avg_ear * avg_brow_h * mouth_ar,
            'smile_score': (1.0 - mouth_ar),
        }

This section might look heavily mathematical, but it’s just measuring distances! For instance, mouth_height calculates the distance from the top lip to the bottom lip. We bundle all these measurements into a neat little package (a Python dictionary) and return it.
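
If you want to see that math in isolation, here's a minimal sketch with made-up landmark coordinates (MediaPipe returns coordinates normalized to the 0-1 range):

import numpy as np

# Hypothetical normalized coordinates for the top and bottom lip landmarks
mouth_top = np.array([0.48, 0.55])
mouth_bottom = np.array([0.48, 0.61])

# The same Euclidean distance that _compute_features calculates
mouth_height = np.linalg.norm(mouth_top - mouth_bottom)
print(round(float(mouth_height), 3))  # 0.06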


Step 2: Building the Database

Now that our brain can understand expressions, we need a library to hold our memes.

class MemeLibrary:
    """
    Acts as a database for our memes. 
    It has a 'has-a' relationship with ExpressionAnalyzer (Dependency Injection).
    """
    
    CACHE_FILE = "meme_features_cache.pkl"

    def __init__(self, analyzer: ExpressionAnalyzer, assets_folder: str = "assets", meme_height: int = 480):
        self.analyzer = analyzer 
        self.assets_folder = assets_folder
        self.meme_height = meme_height

        self.memes = []
        self.meme_features = []

        self.feature_keys = ['surprise_score', 'smile_score', 'hand_raised', 'eye_openness', 'mouth_openness', 'eyebrow_height']
        self.feature_weights = np.array([25, 20, 25, 20, 25, 20])
        self.feature_factors = np.array([10, 10, 15, 5, 5, 5])

        self.load_memes()

    def load_memes(self):
        """Loads memes from disk or a cache file to save time."""
        if os.path.exists(self.CACHE_FILE):
            with open(self.CACHE_FILE, "rb") as f:
                self.memes, self.meme_features = pickle.load(f)
            return

        assets_path = Path(self.assets_folder)
        image_files = list(assets_path.glob("*.jpg")) + list(assets_path.glob("*.png"))

        # Analyze multiple memes at the same time
        with ThreadPoolExecutor() as executor:
            results = list(executor.map(self._process_single_meme, image_files))

        for r in results:
            if r:
                meme, features = r
                self.memes.append(meme)
                self.meme_features.append(features)

        with open(self.CACHE_FILE, "wb") as f:
            pickle.dump((self.memes, self.meme_features), f)

    def _process_single_meme(self, img_file: Path) -> tuple:
        img = cv2.imread(str(img_file))
        if img is None: return None
        
        h, w = img.shape[:2]
        scale = self.meme_height / h
        img_resized = cv2.resize(img, (int(w * scale), self.meme_height))
        
        features = self.analyzer.extract_features(img_resized, is_static=True)
        if features is None: return None
            
        return {'image': img_resized, 'name': img_file.stem.replace('_', ' ').title(), 'path': str(img_file)}, features

    def compute_similarity(self, features1: dict, features2: dict) -> float:
        """Mathematical formula to compare two dictionaries of facial features."""
        if features1 is None or features2 is None: return 0.0
        
        vec1 = np.array([features1.get(k, 0) for k in self.feature_keys])
        vec2 = np.array([features2.get(k, 0) for k in self.feature_keys])
        
        diff = np.abs(vec1 - vec2)
        similarity = np.exp(-diff * self.feature_factors)
        return float(np.sum(self.feature_weights * similarity))

    def find_best_match(self, user_features: dict) -> tuple:
        if user_features is None or not self.memes: return None, 0.0
            
        scores = np.array([self.compute_similarity(user_features, mf) for mf in self.meme_features])
        if len(scores) == 0: return None, 0.0
            
        best_idx = int(np.argmax(scores)) 
        return self.memes[best_idx], scores[best_idx]

The Magic of Dependency Injection

Did you notice how the __init__ method takes analyzer: ExpressionAnalyzer as an argument?

This is a concept called Dependency Injection.

Instead of the Library trying to build its own AI model, we just hand it the Brain we already built. This keeps the two classes decoupled and organized!

The find_best_match function is where the matching happens. It takes the dictionary of your face (how wide your eyes are, etc.) and compares it to the dictionaries of all the memes. The meme with the closest numbers wins!
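
To get a feel for the scoring, here's a standalone sketch of the same exponential-decay comparison that compute_similarity performs, using made-up feature values:

import numpy as np

keys = ['surprise_score', 'smile_score', 'hand_raised',
        'eye_openness', 'mouth_openness', 'eyebrow_height']
weights = np.array([25, 20, 25, 20, 25, 20])
factors = np.array([10, 10, 15, 5, 5, 5])

# Hypothetical features: your face vs. one meme
you = {'surprise_score': 0.30, 'smile_score': 0.80, 'hand_raised': 0.0,
       'eye_openness': 0.25, 'mouth_openness': 0.20, 'eyebrow_height': 0.10}
meme = {'surprise_score': 0.28, 'smile_score': 0.75, 'hand_raised': 0.0,
        'eye_openness': 0.27, 'mouth_openness': 0.22, 'eyebrow_height': 0.12}

vec1 = np.array([you[k] for k in keys])
vec2 = np.array([meme[k] for k in keys])

# Small differences decay gently, large differences decay quickly
score = np.sum(weights * np.exp(-np.abs(vec1 - vec2) * factors))
print(round(float(score), 1))  # ~116 out of a maximum of 135 (the sum of the weights)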


Step 3: Building the App Controller

With our AI brain and meme database built, it’s time to bring them to life! We need an application class to turn on your webcam, capture the video, and draw the results on your screen.

class MemeMatcherApp:
    """
    The main Application class. 
    It initializes the other classes and contains the main while loop.
    """
    
    def __init__(self, assets_folder="assets"):
        self.analyzer = ExpressionAnalyzer()
        self.library = MemeLibrary(analyzer=self.analyzer, assets_folder=assets_folder)

    def run(self):
        cap = cv2.VideoCapture(0)
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

        print("\nđŸŽ„ Camera started! Press 'q' to quit\n")

        while cap.isOpened():
            ret, frame = cap.read()
            if not ret: break
            frame = cv2.flip(frame, 1) # Mirror effect

            # 1. Ask the Analyzer to look at the webcam frame
            user_features = self.analyzer.extract_features(frame)
            
            # 2. Ask the Library to find the best matching meme
            best_meme, score = self.library.find_best_match(user_features)

            # 3. Handle the User Interface (Displaying the result)
            h, w = frame.shape[:2]
            
            if best_meme:
                meme_img = best_meme['image']
                meme_h, meme_w = meme_img.shape[:2]
                
                scale = h / meme_h
                new_w = int(meme_w * scale)
                meme_resized = cv2.resize(meme_img, (new_w, h))

                display = np.zeros((h, w + new_w, 3), dtype=np.uint8)
                display[:, :w] = frame               
                display[:, w:w + new_w] = meme_resized 

                # Draw UI Text boxes
                cv2.rectangle(display, (5, 5), (200, 45), (0, 0, 0), -1)
                cv2.putText(display, "YOU", (10, 35), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
                
                cv2.rectangle(display, (w + 5, 5), (w + new_w - 5, 75), (0, 0, 0), -1)
                cv2.putText(display, best_meme['name'], (w + 10, 35), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2)
            else:
                display = frame
                cv2.putText(display, "No face detected!", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

            cv2.imshow("Meme Matcher - Press Q to quit", display)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break

        cap.release()
        cv2.destroyAllWindows()

The Infinite Loop

The core of any video application is a while loop. The application reads one picture from your webcam, asks the ExpressionAnalyzer for the features, asks the MemeLibrary for a match, glues the webcam picture and the meme picture together side-by-side using NumPy, and displays it. Then, it repeats this instantly for the next frame!
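
The gluing itself is ordinary NumPy array stacking. Here's a minimal stand-in example (blank images in place of the webcam frame and the meme) that's equivalent to the slicing approach used in run():

import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # webcam frame stand-in
meme = np.zeros((480, 360, 3), dtype=np.uint8)   # resized meme stand-in

display = np.hstack([frame, meme])  # side-by-side canvas
print(display.shape)  # (480, 1000, 3)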


Step 4: Putting it All Together

Finally, we just need to start the application. At the very bottom of your file, add the entry point:

if __name__ == "__main__":
    print("Meme Matcher Starting...\n")
    # Create the application object and run it
    app = MemeMatcherApp(assets_folder="assets")
    app.run()

Conclusion

Congratulations! You have just built a complex Artificial Intelligence application using advanced Computer Vision techniques.

More importantly, you built it the right way. By structuring your code using Object-Oriented Programming, your project is scalable. Want to add a Graphical User Interface (GUI) with buttons later? You don’t have to touch the math inside the Brain or the Database; you only have to modify the App class.

To see the real magic, download a few distinct meme images, put them in an assets folder next to your script, and run it. Try raising your eyebrows, opening your mouth wide, or throwing up a peace sign.

Happy coding!

Check out all our books that you can read for free from this page https://10xdev.blog/library

April 10, 2026 12:00 PM UTC


Real Python

The Real Python Podcast – Episode #290: Advice on Managing Projects & Making Python Classes Friendly

What goes into managing a major project? What techniques can you employ for a project that's in crisis? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 10, 2026 12:00 PM UTC

Quiz: Exploring Protocols in Python

In this quiz, you’ll test your understanding of Exploring Protocols in Python.

The questions review Python protocols, how they define required methods and attributes, and how static type checkers use them. You’ll also explore structural subtyping, generic protocols, and subprotocols.

This quiz helps you confirm the concepts covered in the course and shows you where to focus further study. If you want to review the material, the course covers these topics in depth at the link above.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 10, 2026 12:00 PM UTC

April 09, 2026


Rodrigo GirĂŁo SerrĂŁo

Who wants to be a millionaire: iterables edition

Play this short quiz to test your Python knowledge!

At PyCon Lithuania 2026 I did a lightning talk where I presented a “Who wants to be a millionaire?” Python quiz, themed around iterables. There's a whole performance during the lightning talk, which was recorded and will eventually be linked from here. This article includes only the four questions, the options presented, and a basic system that allows you to check whether you got each one right.

Question 1

This is an easy one to get you started. It makes more sense if you watch the performance of the lightning talk.

What is the output of the following Python program?

print("Hello, world!")
  ‱ Hello, world!
  ‱ Hello world!
  ‱ Hello world
  ‱ Hello world!!

Question 2

What is the output of the following Python program?

squares = (x ** 2 for x in range(3))
print(type(squares))
  ‱ <class 'generator'>
  ‱ <class 'gen_expr'>
  ‱ <class 'list'>
  ‱ <class 'tuple'>

Question 3

This was a reference to the talk I'd given earlier that day, where I talked about tee, the only object in itertools that is not an iterable.

Out of the 20 objects in itertools, how many are iterables?

  • 19
  • 20
  • 1
  • 0

Question 4

What is the output of the following Python program?

from itertools import *

print(sum(chain.from_iterable(chain(*next(
islice(permutations(islice(batched(pairwise(
count()),5),3,9)),15,None)))))
  • 1800
  • 0
  • đŸ‡±đŸ‡čâ€ïžđŸ
  • SyntaxError

April 09, 2026 09:17 PM UTC

uv skills for coding agents

This article shares two skills you can add to your coding agents so they use uv workflows.

I have fully adopted uv into my workflows and most of the time I want my coding agents to use uv workflows as well, like when running any Python code or managing and running scripts that may or may not have dependencies.

To make this more convenient for me, I created two SKILL.md files for two of the most common workflows that the coding agents get wrong on the first few tries:

  1. python-via-uv: this skill tells the agent that it should use uv whenever it wants to run any piece of Python code, be it one-liners or scripts. This is relevant because I don't even have the command python/python3 in the shell path, so whenever the LLM tries running something with python ..., it fails.
  2. uv-script-workflow: this skill is specifically for when the agent wants to create and run a script. It instructs the LLM to initialise the script with uv init --script ... and then tells it about the relevant commands to manage the script dependencies.

The two skills also add a note about sandboxing, since uv's default cache directory will be outside your sandbox. When that's the case, the skills instruct the agent to use a valid temporary location for the uv cache.

Installing a skill usually just means dropping a Markdown file in the correct folder, but you should check the documentation for the tools you use.

Here are the two skills for you to download:

  1. Skill for python-via-uv
  2. Skill for uv-script-workflow

I also included the skills verbatim here, for your convenience:

Skill for python-via-uv
---
name: python-via-uv
description: Enforce Python execution through `uv` instead of direct interpreter calls. Use when Codex needs to run Python scripts, modules, one-liners, tools, test runners, or package commands in a workspace and should avoid invoking `python` or `python3` directly.
---

# Python Via Uv

Use `uv` for every Python command.

Do not run `python`.
Do not run `python3`.
Do not suggest `python` or `python3` in instructions unless the user explicitly requires them and the constraint must be called out as a conflict.

## Execution Rules

When sandboxed, set `UV_CACHE_DIR` to a temporary directory the agent can write to before running `uv` commands.

Prefer these patterns:

- Run a script: `UV_CACHE_DIR=/tmp/uv-cache uv run path/to/script.py`
- Run a module: `UV_CACHE_DIR=/tmp/uv-cache uv run -m package.module`
- Run a one-liner: `UV_CACHE_DIR=/tmp/uv-cache uv run python -c "print('hello')"`
- Run a tool exposed by dependencies: `UV_CACHE_DIR=/tmp/uv-cache uv run tool-name`
- Add a dependency for an ad hoc command: `UV_CACHE_DIR=/tmp/uv-cache uv run --with <package> python -c "..."`

## Notes

Using `python` inside `uv run ...` is acceptable because `uv` is still the entrypoint controlling interpreter selection and environment setup.

If the workspace already defines a project-specific temporary cache directory, prefer that over `/tmp/uv-cache`.

If a command example or existing documentation uses `python` or `python3` directly, translate it to the closest `uv` form before executing it....

April 09, 2026 12:19 PM UTC


Real Python

Quiz: Reading Input and Writing Output in Python

In this quiz, you’ll test your understanding of Reading Input and Writing Output in Python.

By working through this quiz, you’ll revisit taking keyboard input with input(), showing results with print(), formatting output, and handling basic input types.

This quiz helps you practice building simple interactive scripts and reinforces best practices for clear console input and output.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 09, 2026 12:00 PM UTC


James Bennett

Let’s talk about LLMs

Everybody seems to agree we’re in the middle of something, though what, exactly, seems to be up for debate. It might be an unprecedented revolution in productivity and capabilities, perhaps even the precursor to a technological “singularity” beyond which it’s impossible to guess what the world might look like. It might be just another vaporware hype cycle that will blow over. It might be a dot-com-style bubble that will lead to a big crash but still leave us with something useful (the way the dot-com bubble drove mass adoption of the web). It might be none of those things.

Many thousands of words have already been spent arguing variations of these positions. So of course today I’m going to throw a few thousand more words at it, because that’s what blogs are for. At least all the ones you’ll read here were written by me (and you can pry my em-dashes from my cold, dead hands).

Terminology, and picking a lane

But first, a couple quick notes:

I’m going to be using the terms “LLM” and “LLMs” almost exclusively in this post, because I think the precision is useful. “AI” is a vague and overloaded term, and it’s too easy to get bogged down in equivocations and debates about what exactly someone means by “AI”. And virtually everything that’s contentious right now about programming and “AI” is really traceable specifically to the advent of large language models. I suppose a slightly higher level of precision might come from saying “GPT” instead, but OpenAI keeps trying to claim that one as their own exclusive term, which is a different sort of unwelcome baggage. So “LLMs” it is.

And when I talk about “LLM coding”, I mean use of an LLM to generate code in some programming language. I use this as an umbrella term for all such usage, whether done under human supervision or not, whether used as the sole producer of code (with no human-generated code at all) or not, etc.

I’m also going to try to limit my comments here to things directly related to technology and to programming as a profession, because that’s what I know (I have a degree in philosophy, so I’m qualified to comment on some other aspects of LLMs, but I’m deliberately staying away from them in this post because I find a lot of those debates tedious and literally sophomoric, as in reminding me of things I was reading and discussing when I was a sophomore).

If you’re using an LLM in some other field, well, I probably don’t know that field well enough to usefully comment on it. Having seen some truly hot takes from people who didn’t follow this principle, I’ve thought several times that we really need some sort of cute portmanteau of “LLM” and “Gell-Mann Amnesia” for the way a lot of LLM-related discourse seems to be people expecting LLMs to take over every job and field except their own.

No silver bullet

A few years ago I wrote about Fred Brooks’ No Silver Bullet, and said I think it may have been the best thing Brooks ever wrote. If you’ve never read No Silver Bullet, I strongly recommend you do so, and I recommend you read the whole thing for yourself (rather than just a summary of it).

No Silver Bullet was published at a time when computing hardware was advancing at an incredible rate, but our ability to build software was not even close to keeping up. And so Brooks made a bold prediction about software:

There is no single development, in either technology or management technique, which by itself promises even a single order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

To support this he looked at sources of difficulty in software development, and assigned them to two broad categories (emphasis as in the original):

Following Aristotle, I divide them into essence—the difficulties inherent in the nature of the software—and accidents—those difficulties that today attend its production but that are not inherent.

A classic example is memory management: some programming languages require the programmer to manually allocate, keep track of, and free memory, which is a source of difficulty. And this is accidental difficulty, because there’s nothing which inherently requires it; plenty of other programming languages have automatic memory management.

But other sources of difficulty are different, and seem to be inherent to software development itself. Here’s one of the ways Brooks summarizes it (emphasis matches what’s in my copy of No Silver Bullet):

The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions. This essence is abstract, in that the conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed.

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared to the conceptual errors in most systems.

If this is true, building software will always be hard. There is inherently no silver bullet.

And to drive the point home, he also explains the diminishing returns of only addressing accidental difficulty:

How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.

This is a straightforward mathematical argument. If its two empirical premises—that the accidental/essential distinction is real and that the accidental difficulty remaining today does not represent 90%+ of the total—are true, then the conclusion which rules out an order-of-magnitude gain from reducing accidental difficulty follows automatically.

I think most programmers believe the first premise, at least implicitly, and once the first premise is accepted it becomes very difficult to argue against the second. In fact, I’d personally go further than the minimum required for Brooks’ argument. His math holds up as long as accidental difficulty doesn’t reach that 90%+ mark, since anything lower makes a 10x improvement from eliminating accidental difficulty impossible. But I suspect accidental difficulty, today, is a vastly smaller proportion of the total than that. In a lot of mature domains of programming I’d be surprised if there’s even a doubling of productivity still available from a complete elimination of remaining accidental difficulty.

There’s also a section in No Silver Bullet about potential “hopes for the silver” which addresses “AI”, though what Brooks considered to be “AI” (and there is a tangent about clarifying exactly what the term means) was significantly different from what’s promoted today as “AI”. The most apt comparison to LLMs in No Silver Bullet is actually not the discussion of “AI”, it’s the discussion of automatic programming, which has meant a lot of different things over the years, but was defined by Brooks at the time as “the generation of a program for solving a problem from a statement of the problem specifications”. That’s pretty much the task for which LLMs are currently promoted to programmers.

But Brooks quotes David Parnas on the topic: “automatic programming always has been a euphemism for programming with a higher-level language than was presently available to the programmer.” And Brooks did not believe higher-level languages on their own could be a silver bullet. As he put it in a discussion of the Ada language:

It is, after all, just another high-level language, and the biggest payoff from such languages came from the first transition, up from the accidental complexities of the machine into the more abstract statement of step-by-step solutions. Once those accidents have been removed, the remaining ones are smaller, and the payoff from their removal will surely be less.

Many people are currently promoting LLMs as a revolutionary step forward for software development, but are doing so based almost exclusively on claims about LLMs’ ability to generate code at high speed. The No Silver Bullet argument poses a problem for these claims, since it sets a limit on how much we can gain from merely generating code more quickly.

In chapter 2 of The Mythical Man-Month, Brooks suggested as a scheduling guideline that five-sixths (83%) of time on a “software task” would be spent on things other than coding, which puts a pretty low cap on productivity gains from speeding up just the coding. And even if we assume LLMs reduce coding time to zero, and go with the more generous No Silver Bullet formulation which merely predicts no order-of-magnitude gain from a single development, that’s still less than the gain Brooks himself believed could come from hiring good human programmers. From chapter 3 of The Mythical Man-Month:

Programming managers have long recognized wide productivity variations between good programmers and poor ones. But the actual measured magnitudes have astounded all of us. In one of their studies, Sackman, Erikson, and Grant were measuring performances of a group of experienced programmers. Within just this group the ratios between best and worst performances averaged about 10:1 on productivity measurements and an amazing 5:1 on program speed and space measurements!

(although I’m personally skeptical of the “10x programmer” concept, the software industry overall does seem to accept it as true)

Anecdote time: much of what I’ve done over my career as a professional programmer is building database-backed web applications and services, and I don’t see much of a gain from LLMs. I suppose it looks impressive, if you’re not familiar with this field of programming, to auto-generate the skeleton of an entire application and the basic create/retrieve/update/delete HTTP handlers from no more than a description of the data you want to work with. But that capability predates LLMs: Rails’ scaffolding, for example, could do it twenty years ago.

And not just raw code generation, but also the abstractions available to work with, have progressed to the point where I basically never feel like the raw speed of production of code is holding me back. Just as Fred Brooks would have predicted, the majority of my time is spent elsewhere: talking to people who want new software (or who want existing software to be changed); finding out what it is they want and need; coming up with an initial specification; breaking it down into appropriately-sized pieces for programmers (maybe me, maybe someone else) to work on; testing the first prototype and getting feedback; preparing the next iteration; reviewing or asking for review, etc. I haven’t personally tracked whether it matches Brooks’ five-sixths estimate, but I wouldn’t be at all surprised if it did.

Given all that, just having an LLM churn out code faster than I would have myself is not going to offer me an order of magnitude improvement, or anything like it. Or as a recent popular blog post by the CEO of Tailscale put it:

AI’s direct impact on this problem is minimal. Okay, so Claude can code it in 3 minutes instead of 30? That’s super, Claude, great work.

Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself. Little of value was gained.

More simply: throwing more patches into the review queue, when the review queue still drains at the same rate as before, is not a recipe for increased velocity. Real software development involves not just a review queue but all the other steps and processes I outlined above, and more, and having an LLM generate code more quickly does not increase the speed or capacity of all those other things.

So as someone who accepts Brooks’ argument in No Silver Bullet, I am committed to believe on theoretical grounds that LLMs cannot offer “even a single order-of-magnitude improvement … in productivity, in reliability, in simplicity”. And my own experience matches up with that prediction.

Practice makes (im)perfect

But enough theory. What about the empirical actual reality of LLM coding?

Every fan of LLMs for coding has an anecdote about their revolutionary qualities, but the non-anecdotal data points we have are a lot more mixed. For example, several times now I’ve been linked to and asked to read the DORA report on the “State of AI-assisted Software Development”. And initially it certainly seems like it’s declaring the effects of LLMs are settled, in favor of the LLMs. From its executive summary (page 3):

[T]he central question for technology leaders is no longer if they should adopt AI, but how to realize its value.

And elsewhere it makes claims like (page 34) “AI is the new normal in software development”.

But then, going back to the executive summary, things start sounding less uniformly positive:

The research reveals a critical truth: AI’s primary role in software development is that of an amplifier. It magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones.

And then (still on page 3):

The greatest returns on AI investment come not from the tools themselves, but from a strategic focus on the underlying organizational system: the quality of the internal platform, the clarity of workflows, and the alignment of teams. Without this foundation, AI creates localized pockets of productivity that are often lost to downstream chaos.

Continuing on to page 4:

AI adoption now improves software delivery throughput, a key shift from last year. However, it still increases delivery instability. This suggests that while teams are adapting for speed, their underlying systems have not yet evolved to safely manage AI-accelerated development.

“Delivery instability” is defined (page 13) in terms of two factors:

  ‱ Change failure rate: how often changes to production fail.
  ‱ Rework rate: how often deployments are unplanned fixes for earlier failures.

Later parts of the report get into more detail on this. Page 38 charts the increase in delivery instability, for example. And elsewhere in the section containing that chart, there’s a discussion of whether increases in throughput (defined by DORA as a combination of lead time for changes, deployment frequency, and failed deployment recovery time) are enough to offset or otherwise make up for this increase in instability (page 41, emphasis added by me):

Some might argue that instability is an acceptable trade-off for the gains in development throughput that AI-assisted development enables.

The reasoning is that the volume and speed of AI-assisted delivery could blunt the detrimental effects of instability, perhaps by enabling such rapid bug fixes and updates that the negative impact on the end-user is minimized.

However, when we look beyond pure software delivery metrics, this argument does not hold up. To assess this claim, we checked whether AI adoption weakens the harms of instability on our outcomes which have been hurt historically by instability.

We found no evidence of such a moderating effect. On the contrary, instability still has significant detrimental effects on crucial outcomes like product performance and burnout, which can ultimately negate any perceived gains in throughput.

And the chart on page 38 appears to show the increase in instability as quite a bit larger than the increase in throughput, in any case.

Curiously, that chart also claims a significant increase in “code quality”, and other parts of the report (page 30, for example) claim a significant increase in “productivity”, alongside the significant increase in delivery instability, which seems like it ought to be a contradiction. As far as I can tell, DORA’s source for both “productivity” and “code quality” is perceived impact as self-reported by survey respondents. Other studies and reports have designed less subjective and more quantitative ways to measure these things. For example, this much-discussed study on adoption of the Cursor LLM coding tool used the results of static analysis of the code to measure quality and complexity. And self-reported productivity impacts, in particular, ought to be a deeply suspect measure. From (to pick one relevant example) the METR early-2025 study (emphasis added by me):

This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.

LLM coding advocates have often criticized this particular study’s finding of slower development for being based on older generations of LLMs (more on that argument in a bit), but as far as I’m aware nobody’s been able to seriously rebut the finding that developers are not very effective at self-estimating their productivity. So to see DORA relying on self-estimated productivity is disappointing.

The DORA report goes on to provide a seven-part “AI capabilities model” for organizations (begins on page 49), which consists of recommendations like: strong version control practices, working in small batches, quality internal platforms, user-centric focus… all of which feel like they should be table stakes for any successful organization regardless of whether they also happen to be using LLMs.

Suppose, for sake of a silly example, that someone told you a new technology is revolutionizing surgery, but the gains are not uniformly distributed, and the best overall outcomes are seen in surgical teams where in addition to using the new thing, team members also wash their hands prior to operating. That’s not as extreme a comparison as it might sound: the sorts of practices recommended for maximizing LLM-related gains in the DORA report, and in many other similar whitepapers and reports and studies, are or ought to be as fundamental to software development as hand-washing is to surgery. The Joel Test was recommending quite a few of these practices a quarter-century ago, the Agile Manifesto implied several of them, and even back then they weren’t really new; if you dig into the literature on effective software development you can find variations of much of the DORA advice going all the way back to the 1970s and even earlier.

For a more recent data point, I’ve seen a lot of people talking about and linking me to CircleCI’s 2026 “State of Software Delivery” which, like the DORA report, claims an uneven distribution of benefits from LLM adoption, and even says (page 8) “the majority of teams saw little to no increase in overall throughput”. The CircleCI report also raises a worrying point that echoes the increase in “delivery instability” seen in the DORA report (CircleCI executive summary, page 3):

Key stability indicators show that AI-driven changes are breaking more often and taking teams longer to fix, making validation and integration the primary bottleneck.

CircleCI further reports (page 11) that, year-over-year, they see a 13% increase in recovery time for a broken main branch, and a 25% increase for broken feature branches. And (page 12) they also say failures are increasing:

[S]uccess rates on the main branch fell to their lowest level in over 5 years, to 70.8%. In other words, attempts at merging changes into production code bases now fail 30% of the time.

For comparison, their own recommended benchmark of success for main branches is 90%.

The cost of these increasing failures and the increasing time to resolve them is quantified (emphasis matches the report, page 14):

For a team pushing 5 changes to the main branch per day, going from a 90% success rate to 70% is the difference between one showstopping breakage every two days to 1.5 every single day (a 3x increase).

At just 60 minutes recovery time per failure, you’re looking at an additional 250 hours in debugging and blocked deployments every year. And that’s at a relatively modest scale. Teams pushing 500 changes per day would lose the equivalent of 12 full-time engineers.

The usual response to reports like these is to claim they’re based on people using older LLMs, and the models coming out now are the truly revolutionary ones, which won’t have any of those problems. For example, this is the main argument that’s been leveled against the METR study I mentioned above. But that argument was flimsy to begin with (since it’s rarely accompanied by the kind of evidence needed to back up the claim), and its repeated usage is self-discrediting: if the people claiming “this time is the world-changing revolutionary leap, for sure” were wrong all the prior times they said that (as they have to have been, since if any prior time had actually been the revolutionary leap they wouldn’t need to say this time will be), why should anyone believe them this time?

Also, I’ve read a lot of studies and reports on LLM coding, and these sorts of findings—uneven or inconsistent impact, quality/stability declines, etc.—seem to be remarkably stable, across large numbers of teams using a variety of different models and different versions of those models, over an extended period of time (DORA does have a bit of a messy situation with contradictory claims that “code quality” is increasing while “delivery instability” is increasing even more, but as noted above that seems to be a methodological problem). The two I’ve quoted most extensively in this post (the DORA and CircleCI reports) were chosen specifically because they’re often recommended to me by advocates of LLM coding, and seem to be reasonably pro-LLM in their stances.

The other expected response to these findings is a claim that it’s not necessarily older models but older workflows which have been obsoleted, that the state of the art is no longer to just prompt an LLM and accept its output directly, but rather involves one LLM (or LLM-powered agent) generating code while one or more layers of “adversarial” ones review and fix up the code and also review each other’s reviews and responses and fixes, thus introducing a mechanism by which the LLM(s) will automatically improve the quality of the output.

I’m unaware of rigorous studies on these approaches (yet), but several well-publicized early examples do not inspire confidence. I’ll pick on Cloudflare here since they’ve been prominent advocates for using LLMs in this fashion. In their LLM rebuild of Next.js:

We wired up AI agents for code review too. When a PR was opened, an agent reviewed it. When review comments came back, another agent addressed them. The feedback loop was mostly automated.

But their public release of it, vetted through this process and, apparently, some amount of human review on top, was initially unable to run even the basic default Next.js application, and also was apparently riddled with security issues. From one disclosure post (emphasis added by me):

AI is now very good at getting a system to the point where it looks complete.

One specific problem cited was that the LLM rebuild simply did not pull in all the original tests, and therefore could miss security-critical cases those tests were checking. From the same disclosure post:

The process was feature-first: decide which viNext features existed, then port the corresponding Next.js tests. That is a sensible way to move quickly. It gives you broad happy-path coverage.

But it does not guarantee that you bring over the ugly regression tests, missing-export cases, and fail-open behavior checks that mature frameworks accumulate over years.

So middleware could look “covered” while the one test that proves it fails safely never made it over.

For example, Next.js has a dedicated test directory (test/e2e/app-dir/proxy-missing-export/) that validates what happens when middleware files lack required exports. That test was never ported because middleware was already considered “covered” by other tests.

On the whole, that post is somewhat optimistic, but considering that the Next.js rebuild was carried out by presumably knowledgeable people who presumably were following good modern practices and prompting good modern LLMs to perform a type of task those LLMs are supposed to be extremely good at—a language and framework well-represented in training data, well-documented, with a large existing test suite written in the target language to assist automated verification—I have a hard time being that optimistic.

And though I haven’t personally read through the recent alleged leak of the Claude Code source, I’ve read some commentary and analysis from people who have, and again it seems like a team that should be as well-positioned as anyone to take maximum advantage of the allegedly revolutionary capabilities of LLM coding isn’t managing to do so.

So the consistent theme here, in the studies and reports and in more recent public examples, is that being able to generate code much more quickly than before, even in 2026 with modern LLMs and modern practices, is still no guarantee of being able to deliver software much more quickly than before. As the CircleCI report puts it (page 3):

The data points to a clear conclusion: success in the AI era is no longer determined by how fast code can be written. The decisive factor is the ability to validate, integrate, and recover at scale.

And if that sounds like the kind of thing Fred Brooks used to say, that’s because it is the kind of thing Fred Brooks used to say. Raw speed of generating code is not and was not the bottleneck in software development, and speeding that up or even reducing the time to generate code to effectively zero does not have the effect of making all the other parts of software development go away or go faster.

So at this point it seems clear to me that in practice as well as in theory LLM coding does not represent a silver bullet, and it seems highly unlikely to transform into one at any point in the near future.

On being left behind

When expressing skepticism about LLM coding, a common response is that not adopting it, or even just delaying slightly in adopting it, will inevitably result in being “left behind”, or even stronger effects (for example, words like “obliterated” have been used, more than once, by acquaintances of mine who really ought to know better). LLMs are the future, it’s going to happen whether you like it or not, so get with the program before it’s too late!

I said I’ll stick to the technical mode here, but I’ll just mention in passing that the “it’s going to happen whether you like it or not” framing is something I’ve encountered a lot and found to be pretty disturbing and off-putting, and not at all conducive to changing my mind. And milder forms like “It’s undeniable that…” are rhetorically suspect. The burden of proof ought to be on the person making the claim that LLMs truly are revolutionary, but framing like this tries to implicitly shift that burden and is a rare example of literally begging the question: it assumes as given the conclusion (LLMs are in fact revolutionary) that it needs to prove.

Meanwhile, I see two possible outcomes:

  1. The skeptical position wins. LLM coding tools do not achieve revolutionary silver-bullet status. Perhaps they become another tool in the toolbox, like TDD or pair programming, where some people and companies are really into them. Perhaps they become just another feature of IDEs, providing functionality like boilerplate generators to bootstrap a new project (if your favorite library/framework doesn’t provide its own bootstrap anyway).
  2. The skeptical position loses. LLM coding tools do achieve true revolutionary silver-bullet status or beyond (consistently delivering one or more orders of magnitude improvement in software development productivity), and truly become a mandatory part of every working programmer’s tools and workflows, taking over all or nearly all generation of code.

In the first case, delayed adoption has no downside unless someone happens to be working at one of the companies that decide to mandate LLM use. And they can always pick it up at that point, if they don’t mind or if they don’t feel like looking for a new job.

As to the second case: based on what I’ve argued above about the status and prospects of LLMs up to now, I obviously think that continuing the type of progress in models and practices that’s been seen to date does not offer any viable path to a silver bullet. Which means a truly revolutionary breakthrough will have to be something sufficiently different from the current state of the art that it will necessarily invalidate many (or perhaps even all) prior LLM-based workflows in addition to invalidating non-LLM-based workflows.

And even if that doesn’t result in a completely clean-slate starting point with everyone equal—even if experience with older LLM workflows is still an advantage in the post-silver-bullet world—I don’t think it can ever be the sort of insurmountable advantage it’s often assumed to be. For one thing, even with vastly higher average productivity, there likely would not be sufficient people with sufficient pre-existing LLM experience to fill the vastly expanded demand for software that would result (this is why a lot of LLM advocates, across many fields, spend so much time talking about the Jevons paradox). For another, any true silver-bullet breakthrough would have to attack and reduce the essential difficulty of building software, rather than the accidental difficulty. Let us return once again to Brooks:

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation.

Much of the skill required of human LLM users today consists of exactly this: specifying and designing the software as a “conceptual construct”, albeit in specific ways that can be placed into an LLM’s context window in order to have it generate code. In any true silver-bullet world, much or all of that skillset would have to be rendered obsolete, which significantly reduces the penalty for late adoption if and when the silver bullet is finally achieved.

Power to the people?

Aside from impact on professional programmers and professional software-development teams, another claim often made in favor of LLM coding is that it will democratize access to software development. With LLM coding tools, people who aren’t experienced professional programmers can produce software that solves problems they face in their day-to-day jobs and lives. Surely that’s a huge societal benefit, right? And it’s tons of fun, too!

Setting aside that the New York Times piece linked above was written by someone who is an experienced professional, I’m not convinced of this use case either.

Mostly I think this is a situation where you can’t have it both ways. It seems to be widely agreed among advocates of LLM coding that it’s a skill which requires significant understanding, practice, and experience before one is able to produce consistent useful results (this is the basis of the “adopt now or be left behind” claim dealt with in the previous section); strong prior knowledge of how to design and build good software is also generally recommended or assumed. But that’s very much at odds with the democratized-software claim: that someone with no prior programming knowledge or experience will simply pick up an LLM, ask it in plain non-technical natural language to build something, and receive a sufficiently functional result.

I think the most likely result is that a non-technical user will receive something that’s obviously not fit for purpose, since they won’t have the necessary knowledge to prompt the LLM effectively. They won’t know how to set up directories of Markdown files containing instructions and skill definitions and architectural information for their problem. They won’t have practice at writing technical specifications (whether for other humans or for LLMs) to describe what they want in sufficient detail. They won’t know how to design and architect good software. They won’t know how to orchestrate multiple LLMs or LLM-powered agents to adversarially review each other. In short, they won’t have any of the skills that are supposed to be vital for successful LLM coding use.

There’s also the possibility that “natural” human language alone will never be sufficient to specify programs, even to much more advanced LLMs or other future “AI” systems, due to inherent ambiguity and lack of precision. In that case, some type of specialized formal language for specifying programs would always be necessary. Edsger W. Dijkstra, for example, took this position and famously derided what he called “the foolishness of ‘natural language programming’”, which is worth reading for some classic Dijkstra-isms like:

When all is said and told, the “naturalness” with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious.

Another possible outcome for LLM coding by non-programmers is the often-mentioned analogy to 3D printing, which also was hyped up as a great democratizer that would let anyone design and make anything, but never delivered on that promise and, at the individual level, became a niche hobby for the small number of enthusiasts who were willing and able to put in the time, money, and effort to get moderately good at it.

But the nightmare result is that non-programmer LLM users will receive something that seems to work, and only reveals its shortcomings much later on. Given how often I see it argued that LLMs will democratize coding and write utility programs for people working in fields where privacy and confidentiality are both vital and legally mandated, I’m terrified by that potential failure mode. And I think one of the worst possible things that could happen for advocates of LLM adoption is to have the news full of stories of well-meaning non-technical people who had their lives ruined by, say, accidentally enabling a data breach with their LLM-coded helper programs, or even “just” turning loose a subtly-incorrect financial model on their business. So even if I were an advocate of LLM coding, I’d be very wary of pushing it to non-programmers.

But ultimately, the only situation in which LLMs could meaningfully democratize access to software development is one where they achieve a true silver bullet, by significantly reducing or removing essential difficulty from the software development process. And as noted above, LLM advocates seem to believe that even in the silver-bullet situation there would still be such a gap between those with pre-existing LLM usage skills and those without, that those without could never meaningfully catch up. Although I happen to disagree with that belief, it remains the case that advocates can’t have it both ways: either LLM coding will be an exclusive club for those who built up the necessary skills, XOR it will be a great democratizer and do away with the need for those skills.

Takeaways

I’m already over 6,000 words in this post, and though I could easily write many more, I should probably wrap it up.

If I had to summarize my position on LLM coding in one sentence, it would be “Please go read No Silver Bullet”. I think Brooks’ argument there is both theoretically correct and validated by empirical results, and sets some pretty strong limits on the impact LLM coding, or any other tool or technique which solely or primarily attacks accidental difficulty, can have.

Of course, limits on what we can do or gain aren’t necessarily the end of the world. Many of the foundations of computer science, from On Computable Numbers to Rice’s theorem and beyond, place inflexible limits on what we can do, but we still write software nonetheless, and we still work to advance the state of our art. So the No Silver Bullet argument is not the same as arguing that LLMs are necessarily useless, or that no gains can possibly be realized from them. But it is an argument that any gains we do realize are likely going to be incremental and evolutionary, rather than the world-changing revolution many people seem to be expecting.

Correspondingly, I think there is not a huge downside, right now, to slow or delayed adoption of LLM coding. Very few organizations have the strong fundamentals needed to absorb even a relatively moderate, incremental increase in the amount of code they generate, which I suspect is why so many studies and reports find mixed results and lots of broken CI pipelines. Not only is there no silver bullet, there especially is no quick or magical gain to be had from rushing to adopt LLM coding without first working on those fundamentals. In fact, the evidence we have says you’re more likely to hurt than help your productivity by doing so.

I also don’t think LLMs are going to meaningfully democratize coding any time soon; even if they become indispensable tools for programmers, they are likely to continue requiring users to “think like a programmer” when specifying and prompting. We would be much better served by teaching many more people how to think rigorously and reason about abstractions (and they would be much better served, too) than we would by just plopping them as-is in front of LLMs.

As for what you should be doing instead of rushing to adopt LLM coding out of fear that you’ll be left behind: I think you should be listening to what all those whitepapers and reports and studies are actually telling you, and working on fundamentals. You should be adopting and perfecting solid foundational software development practices like version control, comprehensive test suites, continuous integration, meaningful documentation, fast feedback cycles, iterative development, focus on users, small batches of work… things that have been known and proven for decades, but are still far too rare in actual real-world software shops.

If the skeptical position is wrong and it turns out LLMs truly become indispensable coding tools in the long term, well, the available literature says you’ll be set up to take the greatest possible advantage of them. And if it turns out they don’t, you’ll still be in much better shape than you were, and you’ll have an advantage over everyone who chased after wild promises of huge productivity gains by ordering their teams to just chew through tokens and generate code without working on fundamentals, and who likely wrecked their development processes by doing so.

Or as Fred Brooks put it:

The first step toward the management of disease was replacement of demon theories and humours theories by the germ theory. That very step, the beginning of hope, in itself dashed all hopes of magical solutions. It told workers that progress would be made stepwise, at great effort, and that a persistent, unremitting care would have to be paid to a discipline of cleanliness. So it is with software engineering today.

April 09, 2026 06:27 AM UTC

April 08, 2026


Real Python

Dictionaries in Python

Python dictionaries are a powerful built-in data type that allows you to store key-value pairs for efficient data retrieval and manipulation. Learning about them is essential for developers who want to process data efficiently. In this tutorial, you’ll explore how to create dictionaries using literals and the dict() constructor, as well as how to use Python’s operators and built-in functions to manipulate them.

By learning about Python dictionaries, you’ll be able to access values through key lookups and modify dictionary content using various methods. This knowledge will help you in data processing, configuration management, and dealing with JSON and CSV data.

By the end of this tutorial, you’ll understand that:

  • A dictionary in Python is a mutable collection of key-value pairs that allows for efficient data retrieval using unique keys.
  • Both dict() and {} can create dictionaries in Python. Use {} for concise syntax and dict() for dynamic creation from iterable objects.
  • dict() is a class used to create dictionaries. However, it’s commonly called a built-in function in Python.
  • .__dict__ is a special attribute in Python that holds an object’s writable attributes in a dictionary.
  • Python dict is implemented as a hash table, which allows for fast key lookups.
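
As a quick preview of the dict() versus {} point, here’s a minimal REPL sketch (the names are made up for illustration) building the same dictionary both ways:

Python
>>> # Literal syntax
>>> inventory = {"apples": 3, "pears": 5}

>>> # dict() builds the same mapping from an iterable of pairs
>>> dict([("apples", 3), ("pears", 5)])
{'apples': 3, 'pears': 5}

>>> inventory == dict([("apples", 3), ("pears", 5)])
True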

To get the most out of this tutorial, you should be familiar with basic Python syntax and concepts such as variables, loops, and built-in functions. Some experience with basic Python data types will also be helpful.

Get Your Code: Click here to download the free sample code that you’ll use to learn about dictionaries in Python.

Take the Quiz: Test your knowledge with our interactive “Dictionaries in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:



Getting Started With Python Dictionaries

Dictionaries are one of Python’s most important and useful built-in data types. They provide a mutable collection of key-value pairs that lets you efficiently access and mutate values through their corresponding keys:

Python
>>> config = {
...     "color": "green",
...     "width": 42,
...     "height": 100,
...     "font": "Courier",
... }

>>> # Access a value through its key
>>> config["color"]
'green'

>>> # Update a value
>>> config["font"] = "Helvetica"
>>> config
{
    'color': 'green',
    'width': 42,
    'height': 100,
    'font': 'Helvetica'
}

A Python dictionary consists of a collection of key-value pairs, where each key corresponds to its associated value. In this example, "color" is a key, and "green" is the associated value.

Dictionaries are a fundamental part of Python. You’ll find them behind core concepts like scopes and namespaces, as you can see with the built-in functions globals() and locals():

Python
>>> globals()
{
    '__name__': '__main__',
    '__doc__': None,
    '__package__': None,
    ...
}

The globals() function returns a dictionary containing key-value pairs that map names to objects that live in your current global scope.

Python also uses dictionaries to support the internal implementation of classes. Consider the following demo class:

Python
>>> class Number:
...     def __init__(self, value):
...         self.value = value
...

>>> Number(42).__dict__
{'value': 42}

The .__dict__ special attribute is a dictionary that maps attribute names to their corresponding values in Python classes and objects. This implementation makes attribute and method lookup fast and efficient in object-oriented code.

You can use dictionaries to approach many programming tasks in your Python code. They come in handy when processing CSV and JSON files, working with databases, loading configuration files, and more.
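
For instance, parsing a JSON document hands you a dictionary directly. Here’s a minimal sketch, assuming a made-up configuration string:

Python
>>> import json

>>> raw = '{"color": "green", "width": 42}'
>>> config = json.loads(raw)
>>> config["width"]
42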

Python’s dictionaries have the following characteristics:

  • Mutable: The dictionary values can be updated in place.
  • Dynamic: Dictionaries can grow and shrink as needed.
  • Efficient: They’re implemented as hash tables, which allows for fast key lookup.
  • Ordered: Starting with Python 3.7, dictionaries keep their items in the same order they were inserted.
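
You can see these characteristics in a short REPL session (the names are made up for illustration):

Python
>>> sizes = {"small": 8, "large": 24}
>>> sizes["small"] = 10     # Mutable: update a value in place
>>> sizes["medium"] = 16    # Dynamic: grows (and shrinks) as needed
>>> sizes                   # Ordered: keys keep their insertion order
{'small': 10, 'large': 24, 'medium': 16}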

The keys of a dictionary have a couple of restrictions. They need to be:

  • Hashable: This means that you can’t use unhashable objects like lists as dictionary keys.
  • Unique: This means that your dictionaries won’t have duplicate keys.

In contrast, the values in a dictionary aren’t restricted. They can be of any Python type, including other dictionaries, which makes it possible to have nested dictionaries.
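
Here’s a quick sketch of both rules in action (hypothetical data):

Python
>>> {[1, 2]: "point"}    # Lists aren't hashable
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'list'

>>> {"a": 1, "a": 2}     # Duplicate keys: the last assignment wins
{'a': 2}

>>> {"user": {"name": "Ada"}}    # Values can be dictionaries, too
{'user': {'name': 'Ada'}}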

Dictionaries are collections of pairs, so you can’t insert a key without its corresponding value, or vice versa: the two always go in together.

Note: In some situations, you may want to add keys to a dictionary without deciding what the associated value should be. In those cases, you can use the .setdefault() method to create keys with a default or placeholder value.
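
For example, here’s a minimal sketch of .setdefault() (the keys are made up for illustration):

Python
>>> settings = {"theme": "dark"}
>>> settings.setdefault("language", "en")    # Missing key: inserted
'en'
>>> settings.setdefault("theme", "light")    # Existing key: unchanged
'dark'
>>> settings
{'theme': 'dark', 'language': 'en'}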

Read the full article at https://realpython.com/python-dicts/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 08, 2026 02:00 PM UTC

Quiz: Implementing the Factory Method Pattern in Python

In this quiz, you’ll test your understanding of Factory Method Pattern.

This quiz guides you through the Factory Method pattern: how it separates object creation from use, the roles of clients and products, when to apply it, and how to implement flexible, maintainable Python classes.

Test your ability to spot opportunities for the pattern and build reusable, decoupled object creation solutions.
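
If you’d like a concrete refresher before taking the quiz, here’s a minimal sketch of the pattern. The serializer example is invented for illustration and isn’t taken from the quiz itself:

Python
import json

class JsonSerializer:
    """Concrete product: renders data as JSON."""
    def serialize(self, data):
        return json.dumps(data)

class CsvSerializer:
    """Concrete product: renders values as one CSV row."""
    def serialize(self, data):
        return ",".join(str(value) for value in data.values())

def get_serializer(fmt):
    """Factory method: clients ask for a product by name instead of
    instantiating a concrete class directly."""
    serializers = {"json": JsonSerializer, "csv": CsvSerializer}
    try:
        return serializers[fmt]()
    except KeyError:
        raise ValueError(f"Unknown format: {fmt}") from None

# Client code depends only on the shared .serialize() interface:
print(get_serializer("json").serialize({"name": "Ada", "age": 36}))

Adding a new format means registering one more class in the factory; the client code that calls .serialize() never changes.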


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 08, 2026 12:00 PM UTC