Planet Python
Last update: April 15, 2026 09:43 PM UTC
April 15, 2026
Django Weblog
Django Has Adopted Contributor Covenant 3
We’re excited to announce that Django has officially adopted Contributor Covenant 3 as our new Code of Conduct! This milestone represents the completion of a careful, community-driven process that began earlier this year.
What We’ve Accomplished
Back in February, we announced our plan to adopt Contributor Covenant 3 through a transparent, multi-step process. Today, we’re proud to share that we’ve completed all three steps:
Step 1 (Completed February 2026): Established a community-driven process for proposing and reviewing changes to our Code of Conduct.
Step 2 (Completed March 2026): Updated our Enforcement Manual, Reporting Guidelines, and FAQs to align with Contributor Covenant 3 and incorporate lessons learned from our working group’s experience.
Step 3 (Completed April 2026): Adopted the Contributor Covenant 3 with Django-specific enhancements.
Why Contributor Covenant 3?
Contributor Covenant 3 represents a significant evolution in community standards, incorporating years of experience from communities around the world. The new version:
- Centers impact over intent, recognizing that even unintentional harm requires accountability and repair
- Emphasizes consent and boundaries, making explicit that community members must respect stated boundaries immediately
- Addresses modern harassment patterns like sea-lioning, coordinated harassment, and microaggressions
- Includes clearer guidance on enforcement, transparency, and accountability
By adopting this widely-used standard, Django joins a global community of projects committed to fostering welcoming, inclusive spaces for everyone.
What’s New in Django’s Code of Conduct
While we’ve adopted Contributor Covenant 3 as our foundation, we’ve also made Django-specific enhancements:
- In-person event guidance: Added requirements and best practices for Code of Conduct points of contact at Django events
- Affiliated programs documentation: Clarified scope and expectations for programs that reference Django’s Code of Conduct
- Bad-faith reporting provisions: Added protections against misuse of the reporting process
- Escalation processes: Established clear procedures for handling disagreements between working groups
- Enhanced transparency: Updated our statistics and reporting to provide better visibility into how we enforce our Code of Conduct
You can view the complete changelog at our Code of Conduct repository.
Community-Driven Process
This adoption represents months of collaborative work. The Code of Conduct Working Group reviewed community feedback, consulted with the DSF Board, and incorporated insights from our enforcement experience. Each step was completed through pull requests that were open for community review and discussion.
We’re grateful to everyone who participated in this process—whether by opening issues, commenting on pull requests, joining forum discussions, or simply taking the time to review and understand the changes.
Where to Find Everything
All of our Code of Conduct documentation is available on both djangoproject.com and our GitHub repository:
- Code of Conduct: djangoproject.com/conduct
- Reporting Guidelines: djangoproject.com/conduct/reporting
- Enforcement Manual: djangoproject.com/conduct/enforcement-manual
- FAQs: djangoproject.com/conduct/faq
- GitHub Repository: github.com/django/code-of-conduct
How You Can Continue to Help
The Code of Conduct is a living document that will continue to evolve with our community’s needs:
- Propose changes: Anyone can open an issue to suggest improvements
- Join discussions: Participate in community conversations on the Django forum, Discord, or DSF Slack
- Report violations: If you experience or witness a Code of Conduct violation, please report it to conduct@djangoproject.com
- Stay informed: Watch the Code of Conduct repository for updates
Thank You
Creating a truly welcoming and inclusive community is ongoing work that requires participation from all of us. Thank you for being part of Django’s community and for your commitment to making it a safe, respectful space where everyone can contribute and thrive.
If you have questions about the new Code of Conduct or our processes, please don’t hesitate to reach out to the Code of Conduct Working Group at conduct@djangoproject.com.
Posted by Dan Ryan on behalf of the Django Code of Conduct Working Group
Real Python
Variables in Python: Usage and Best Practices
In Python, variables are symbolic names that refer to objects or values stored in your computer’s memory. They allow you to assign descriptive names to data, making it easier to manipulate and reuse values throughout your code. You create a Python variable by assigning a value using the syntax variable_name = value.
By the end of this tutorial, you’ll understand that:
- Variables in Python are symbolic names pointing to objects or values in memory.
- You define variables by assigning them a value using the assignment operator.
- Python variables are dynamically typed, allowing type changes through reassignment.
- Python variable names can include letters, digits, and underscores but can’t start with a digit. You should use snake case for multi-word names to improve readability.
- Variables exist in different scopes (global, local, non-local, or built-in), which affects how you can access them.
- You can have an unlimited number of variables in Python, limited only by computer memory.
To get the most out of this tutorial, you should be familiar with Python’s basic data types and have a general understanding of programming concepts like loops and functions.
Don’t worry if you’re just getting started and don’t have all of this knowledge yet. You won’t need it to benefit from working through the early sections of this tutorial.
Get Your Code: Click here to download the free sample code that shows you how to use variables in Python.
Take the Quiz: Test your knowledge with our interactive “Variables in Python: Usage and Best Practices” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Variables in Python: Usage and Best Practices
Test your understanding of Python variables, from creation and naming conventions to dynamic typing, scopes, and type hints.
Getting to Know Variables in Python
In Python, variables are names associated with concrete objects or values stored in your computer’s memory. By associating a variable with a value, you can refer to the value using a descriptive name and reuse it as many times as needed in your code.
Variables behave as if they were the value they refer to. To use variables in your code, you first need to learn how to create them, which is pretty straightforward in Python.
Creating Variables With Assignments
The primary way to create a variable in Python is to assign it a value using the assignment operator and the following syntax:
variable_name = value
In this syntax, you have the variable’s name on the left, then the assignment (=) operator, followed by the value you want to assign to the variable at hand. The value in this construct can be any Python object, including strings, numbers, lists, dictionaries, or even custom objects.
Note: To learn more about assignments, check out Python’s Assignment Operator: Write Robust Assignments.
Here are a few examples of variables:
>>> word = "Python"
>>> number = 42
>>> coefficient = 2.87
>>> fruits = ["apple", "mango", "grape"]
>>> ordinals = {1: "first", 2: "second", 3: "third"}
>>> class SomeCustomClass: pass
>>> instance = SomeCustomClass()
In this code, you’ve defined several variables by assigning values to names. The first five examples include variables that refer to different built-in types. The last example shows that variables can also refer to custom objects like an instance of your SomeCustomClass class.
Setting and Changing a Variable’s Data Type
Apart from a variable’s value, it’s also important to consider the data type of the value. When you think about a variable’s type, you’re considering whether the variable refers to a string, integer, floating-point number, list, tuple, dictionary, custom object, or another data type.
Python is a dynamically typed language, which means that variable types are determined and checked at runtime rather than during compilation. Because of this, you don’t need to specify a variable’s type when you’re creating the variable. Python will infer a variable’s type from the assigned object.
Note: In Python, variables themselves don’t have data types. Instead, the objects that variables reference have types.
For example, consider the following variables:
>>> name = "Jane Doe"
>>> age = 19
>>> subjects = ["Math", "English", "Physics", "Chemistry"]
>>> type(name)
<class 'str'>
>>> type(age)
<class 'int'>
>>> type(subjects)
<class 'list'>
In this example, name refers to the "Jane Doe" value, so the type of name is str. Similarly, age refers to the integer number 19, so its type is int. Finally, subjects refers to a list, so its type is list. Note that you don’t have to explicitly tell Python which type each variable is. Python determines and sets the type by checking the type of the assigned value.
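Because typing is dynamic, you can also change the type associated with a name simply by reassigning it. Here's a minimal sketch (the variable name is just for illustration):

```python
value = "Hello"        # value refers to a str object
print(type(value))     # <class 'str'>

value = 3.14           # rebind the same name to a float object
print(type(value))     # <class 'float'>
```

After the second assignment, the name no longer refers to the string at all; the "type change" is really just the name pointing at a different object.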
Read the full article at https://realpython.com/python-variables/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
PyCon
Introducing the 7 Companies on Startup Row at PyCon US 2026
Each year at PyCon US, Startup Row highlights a select group of early-stage companies building ambitious products with Python at their core. The 2026 cohort reflects a rapidly evolving landscape, where advances in AI, data infrastructure, and developer tooling are reshaping how software is built, deployed, and secured.
This year’s companies aim to solve an evolving set of problems facing independent developers and large-scale organizations alike: securing AI-driven applications, managing multimodal data, orchestrating autonomous agents, automating complex workflows, and extracting insight from increasingly unstructured information. Across these domains, Python continues to serve as a unifying layer: encouraging experimentation, enabling systems built to scale, and connecting open-source innovation with real-world impact.
Startup Row brings these emerging teams into direct conversation with the Python community at PyCon US. Throughout the conference, attendees can meet founders, explore new tools, and see firsthand how these companies are applying Python to solve meaningful problems. For the startups in attendance, it’s an opportunity to share their work, connect with users and collaborators, and contribute back to the ecosystem that helped shape them. Register now to experience Startup Row and much more at PyCon US 2026.
Supporting Startups at PyCon US
There are many ways to support Startup Row companies, during PyCon US and long after the conference wraps:
- Stop by Startup Row: Spend a few minutes with each team, ask what they’re building, and see their products in action.
- Try their tools: Whether it’s an open-source library or a hosted service, hands-on usage (alongside constructive feedback) is one of the most valuable forms of support. If a startup seems compelling, consider a pilot project and become a design partner.
- Share feedback: Early-stage teams benefit enormously from thoughtful questions, real-world use cases, and honest perspectives from the community.
- Contribute to their open source projects: Many Startup Row companies are deeply rooted in open source and welcome bug reports, documentation improvements, and pull requests. Contributions and constructive feedback are always appreciated.
- Help spread the word: If you find something interesting, tell a friend, post about it, or share it with your team. (And if you're posting to social media, consider using tags like #PyConUS and #StartupRow to share the love.)
- Explore opportunities to work together: Many of these companies are hiring, looking for design partners, or open to collaborations; don’t hesitate to ask.
- But, most importantly, be supportive. Building a startup is hard, and every team is learning in real time. Curiosity, patience, and encouragement make a meaningful difference.
Meet Startup Row at PyCon US 2026
We’re excited to introduce the companies selected for Startup Row at PyCon US 2026.
Arcjet
Embedding security directly into application code is fast becoming as indispensable as logging, especially as AI services open new attack surfaces. Arcjet offers a developer‑first platform that lets teams add bot detection, rate limiting, and data‑privacy checks right where the request is processed.
The service ships open‑source JavaScript and Python SDKs that run a WebAssembly module locally before calling Arcjet’s low‑latency decision API, ensuring full application context informs every security verdict. Both SDKs are released under a permissive open‑source license, letting developers integrate the primitives without vendor lock‑in while scaling usage through Arcjet’s tiered SaaS pricing.
The JavaScript SDK alone has earned ≈1.7 k GitHub stars and the combined libraries have attracted over 1,000 developers protecting more than 500 production applications. Arcjet offers a free tier and usage‑based paid plans, mirroring Cloudflare’s model to serve startups and enterprises alike.
Arcjet is rolling out additional security tools and deepening integrations with popular frameworks such as FastAPI and Flask, aiming to broaden adoption across AI‑enabled services. In short, Arcjet aims to be the security‑as‑code layer every modern app ships with.
CapiscIO
As multi‑agent AI systems become the backbone of emerging digital workflows, developers lack a reliable way to verify agent identities and enforce governance. CapiscIO steps into that gap, offering an open‑core trust layer built for the nascent agent economy.
The platform provides cryptographic Trust Badges, policy enforcement, and tamper‑evident chain‑of‑custody wrapped in a Python SDK. Released under Apache 2.0, it ships a CLI, LangChain integration, and an MCP SDK that let agents prove identity without overhauling existing infrastructure.
The capiscio‑core repository on GitHub hosts the open‑source core and SDKs under Apache 2.0, drawing early contributors building agentic pipelines.
Beon de Nood, Founder & CEO, brings two decades of enterprise development experience and a prior successful startup to the table. “AI governance should be practical, not bureaucratic. Organizations need visibility into what they have, confidence in what they deploy, and control over how agents behave in production,” he says.
CapiscIO is continuously adding new extensions, expanding its LangChain and MCP SDKs, and preparing a managed agent‑identity registry for enterprises. In short, CapiscIO aims to be the passport office of the agent economy, handing each autonomous component an unspoofable ID and clear permissions.
Chonkie
The explosion of retrieval‑augmented generation (RAG) is unlocking AI’s ability to reason over ever‑larger knowledge bases. Yet the first step of splitting massive texts into meaningful pieces still lags behind.
Chonkie offers an open‑core suite centered on Memchunk, a Python library with Cython acceleration that delivers up to 160 GB/s throughput and ten chunking strategies under a permissive license. It also ships Catsu, a unified embeddings client for nine providers, and a lightweight ingestion layer; the commercial Chonkie Labs service combines them into a SaaS that monitors the web and synthesises insights.
Co‑founder and CEO Shreyash Nigam, who grew up in India and met his business partner in eighth grade, reflects the team’s open‑source ethos, saying “It’s fun to put a project on GitHub and see a community of developers crowd around it.” That enthusiasm underpins Chonkie’s decision to release its core tooling openly while building a commercial deep‑research service.
Backed by Y Combinator’s Summer 2025 batch, Chonkie plans to grow from four to six engineers and launch the next version of Chonkie Labs later this year, adding real‑time web crawling and multi‑modal summarization. In short, Chonkie aims to be the Google of corporate intelligence.
Pixeltable
Multimodal generative AI is turning simple datasets into sprawling collections of video, images, audio and text, forcing engineers to stitch together ad‑hoc pipelines just to keep data flowing. That complexity has created a new bottleneck for teams trying to move from prototype to production.
The open‑source Python library from Pixeltable offers a declarative table API that lets developers store, query and version multimodal assets side by side while embedding custom Python functions. Built with incremental update capabilities, combined lineage and schema tracking, and a development‑to‑production mirror, the platform also provides orchestration capabilities that keep pipelines reproducible without rewriting code.
The project has earned ≈1.6 k GitHub stars and a growing contributor base, closed a $5.5 million seed round in December 2024, and is already used by early adopters such as Obvio and Variata to streamline computer‑vision workflows.
Co‑founder and CTO Marcel Kornacker, who previously founded Apache Impala and co-founded Apache Parquet, says “Just as relational databases revolutionized web development, Pixeltable is transforming AI application development.”
The company's roadmap centers on launching Pixeltable Cloud, a serverless managed service that will extend the open core with collaborative editing, auto‑scaling storage and built‑in monitoring. In short, Pixeltable aims to be the relational database of multimodal AI data.
SubImage
The sheer complexity of modern multi‑cloud environments turns security visibility into a labyrinth, and SubImage offers a graph‑first view that cuts through the noise.
It builds an infrastructure graph using the open‑source Cartography library (Apache‑2.0, Python), then highlights exploit chains as attack paths and applies AI models to prioritize findings based on ownership and contextual risk.
Cartography, originally developed at Lyft and now a Cloud Native Computing Foundation (CNCF) Sandbox project, has ≈3.7 k GitHub stars and is used by over 70 organizations. SubImage’s managed service already protects security teams at Veriff and Neo4j, and the company closed a $4.2 million seed round in November 2025.
Co‑founder Alex Chantavy, an offensive‑security engineer, says “The most important tool was our internal cloud knowledge graph because it showed us a map of the easiest attack paths … One of the most effective ways to defend an environment is to see it the same way an attacker would.”
The startup is focusing on scaling its managed service and deepening AI integration as it targets larger enterprise customers. In short, SubImage aims to be the map of the cloud for defenders.
Tetrix
Private‑market data pipelines still rely on manual downloads and spreadsheet gymnastics, leaving analysts chasing yesterday’s numbers. Tetrix’s AI investment intelligence platform is part of a wave that brings automation to this lagging workflow.
Built primarily in Python, Tetrix automates document collection from fund portals and other sources, extracts structured data from PDFs and other unstructured sources using tool-using language models, then presents exposures, cash flows, and benchmarks through an interactive dashboard that also accepts natural‑language queries.
The company is growing quickly, doubling revenue quarter over quarter, and has so far maintained an impressive record of zero customer churn. In the coming year or so, Tetrix plans to triple its headcount from fifteen to forty‑five employees.
TimeCopilot
Time‑series forecasting has long been a tangled mix of scripts, dashboards, and domain expertise, and the recent surge in autonomous agents is finally giving it a unified voice. Enter TimeCopilot, an open‑source framework that brings agentic reasoning to the heart of forecasting.
The platform, built in Python under a permissive open‑source license, lets users request forecasts in plain English. It automatically orchestrates more than thirty models from seven families, including Chronos and TimesFM, while weaving large language model reasoning into each prediction. Its declarative API was born from co‑founder Azul Garza‑Ramírez’s economics background and her earlier work on TimeGPT for Nixtla (featured SR'23), evolving from a weekend experiment started nearly seven years ago.
The TimeCopilot/timecopilot repository has amassed roughly 420 stars on GitHub, with the release of OpenClaw marking a notable spike in community interest.
Upcoming plans include a managed SaaS offering with enterprise‑grade scaling and support, the rollout of a benchmarking suite to measure agentic forecast quality, and targeted use cases such as predicting cloud‑compute expenses for AI workloads.
Thank You's and Acknowledgements
Startup Row is a volunteer-driven program, co-led by Jason D. Rowley and Shea Tate-Di Donna (SR'15; Zana, acquired Startups.com), in collaboration with the PyCon US organizing team. Thanks to everyone who makes PyCon US possible.
We also extend a gracious thank-you to all startup founders who submitted applications to Startup Row at PyCon US this year. Thanks again for taking the time to share what you're building. We hope to help out in whatever way we can.
Good luck to everyone, and see you in Long Beach, CA!
Real Python
Quiz: Design and Guidance: Object-Oriented Programming in Python
Test your understanding of the Design and Guidance: Object-Oriented Programming in Python video course.
You’ll revisit single responsibility, open-closed, Liskov substitution, interface segregation, and dependency inversion. You’ll also review when to use classes in Python and alternatives to inheritance like composition and dependency injection.
death and gravity
Learn Python object-oriented programming with Raymond Hettinger
💢 There must be a better way.
Raymond Hettinger is a Python core developer. Even if you haven't heard of him, you've definitely used his work, bangers such as sorted(), enumerate(), collections, itertools, @lru_cache, and many others. Over the years, he's held lots of great talks, some of them on effective object-oriented programming in Python.
The talks in this article had a huge impact on my development as a software engineer, are some of the best I've heard, and are the single most important reason you should not be afraid of inheritance anymore; don't trust me, look at the YouTube comments!
Note
This list is only to whet your appetite – to get the most out of it, watch the talks in full; besides being a great teacher, Raymond is quite the entertainer, too.
Contents
- The Art of Subclassing
- Python's Class Development Toolkit
- Super considered super!
- Object Oriented Programming from scratch (four times)
- Beyond PEP 8 – Best practices for beautiful intelligible code
- Bonus: The Mental Game of Python
The Art of Subclassing #
Subclassing can just be viewed as a technique for code reuse.
The Art of Subclassing (2012) is about use cases, principles, and design patterns for inheritance, with examples from the standard library.
The point is to unlearn the animal examples – instead of the classic hierarchical view where subclasses are specializations of the parent, there's an operational view where:
- classes are dicts of functions
- subclasses point to other dicts to reuse their code
- subclasses decide when to delegate
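This operational view can be made concrete with a toy example (the class names here are made up for illustration):

```python
class Reader:
    def read(self):
        return "raw bytes"

class BufferedReader(Reader):
    def read(self):
        # The subclass decides when to delegate to the parent's code.
        return "buffered: " + super().read()

# A class is (roughly) a dict of functions...
print("read" in Reader.__dict__)    # True
# ...and a subclass points to other dicts to reuse their code.
print(BufferedReader().read())      # buffered: raw bytes
```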
This view brings clarity to other related topics:
- Liskov substitution principle: allow existing code to work with your subclass1
- circle–ellipse problem: the class with the most reusable code should be the parent
- open–closed principle: subclasses should not break base class invariants
Finally, a quote about the standard library:
The best way to become a better Python programmer is to spend some time reading the source code written by great Python programmers.
Sound familiar?
Learn by reading code: Python standard library design decisions explained
Python's Class Development Toolkit #
Each user will stretch your code in different ways.
Python's Class Development Toolkit (2013) is a hands-on exercise: build a single class, encounter common problems users have with it, come up with solutions, repeat.
Use the lean startup methodology to build an advanced circle analytic toolkit with:
- instance and class variables
- instance methods (self refers to you or your children2)
- class methods (use cls for alternative constructors)
- static methods (attach functions to classes for discoverability)
- properties (transparent getters and setters)
- slots (when you have many, many instances)
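Put together, the toolkit might look something like this minimal Circle class (a sketch loosely following the talk; the names and numbers are illustrative):

```python
import math

class Circle:
    """A tiny circle analytic toolkit (illustrative sketch)."""

    __slots__ = ("diameter",)   # saves memory when you have many, many instances
    version = "0.1"             # class variable, shared by all instances

    def __init__(self, radius):
        self.radius = radius    # goes through the property setter below

    @property
    def radius(self):
        """Transparent getter: radius is derived from the stored diameter."""
        return self.diameter / 2

    @radius.setter
    def radius(self, radius):
        self.diameter = radius * 2

    def area(self):
        """Instance method: self may be a Circle or a subclass instance."""
        return math.pi * self.radius ** 2

    @classmethod
    def from_diameter(cls, diameter):
        """Alternative constructor: using cls makes subclasses work too."""
        return cls(diameter / 2)

    @staticmethod
    def angle_to_grade(angle):
        """Attached to the class purely for discoverability."""
        return math.tan(math.radians(angle)) * 100

c = Circle.from_diameter(8)
print(c.radius)            # 4.0
print(round(c.area(), 2))  # 50.27
```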
Super considered super! #
Super considered super! (2015) goes deep into cooperative multiple inheritance, problems you might encounter, and how to fix them.
The main point is that just like how self refers not to you, but to you or your children, super() does not call your ancestors, but your children's ancestors – it may even call a class that isn't defined yet.
This allows you to change the inheritance chain after the fact; examples include a form of dependency injection, overriding parent behavior without changing its code, and an OrderedCounter class based on Counter and OrderedDict.
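The OrderedCounter mentioned above is the canonical example from the collections docs: Counter's own methods call super(), and the MRO routes those calls to OrderedDict rather than to Counter's actual parent, dict.

```python
from collections import Counter, OrderedDict

class OrderedCounter(Counter, OrderedDict):
    """A Counter that remembers the order elements are first seen.

    Counter's code calls super(); the MRO routes those calls to
    OrderedDict, a class Counter's author never knew about.
    """

oc = OrderedCounter("abracadabra")
print(list(oc.items()))  # [('a', 5), ('b', 2), ('r', 2), ('c', 1), ('d', 1)]
```

No method in either parent was written with the other in mind, yet composing them takes two lines; that is the payoff of cooperative multiple inheritance.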
The article by the same name is worth a read too, and has different examples.
Object Oriented Programming from scratch (four times) #
Object Oriented Programming from scratch (four times) (2020) does exactly that, each time giving a new insight into what, how, and why we use objects in Python.
The first part shows OOP emerge naturally from the need for more namespaces, by iteratively improving a script that emulates dictionaries.
The second one covers the history of moving from a huge pile of data and functions to:
- data associated with functions (objects)
- groups of related functions (classes)
- related groups of functions (inheritance)
- using the same name for similar functions (polymorphism)
Sound familiar?
When to use classes in Python? When your functions take the same arguments
When to use classes in Python? When you repeat similar sets of functions
The third part explains the mechanics of objects via ChainMap, tl;dr:
ChainMap(instance_dict, class_dict, parent_class_dict, ...)
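You can demonstrate that model directly with an actual ChainMap (a simplified sketch: descriptors and metaclasses are ignored, and the class names are made up):

```python
from collections import ChainMap

class Base:
    kind = "base"
    shared = "from Base"

class Child(Base):
    kind = "child"          # shadows Base.kind

obj = Child()
obj.name = "instance attribute"

# Simplified model of attribute lookup: the instance dict first,
# then the class dicts along the MRO.
lookup = ChainMap(vars(obj), vars(Child), vars(Base))

print(lookup["name"])    # instance attribute
print(lookup["kind"])    # child
print(lookup["shared"])  # from Base
```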
The fourth part highlights how OOP naturally expresses entities and relationships by looking at the data model of a Twitter clone and the syntax tree of a compiler.
Beyond PEP 8 – Best practices for beautiful intelligible code #
Well factored code looks like business logic.
Beyond PEP 8 – Best practices for beautiful intelligible code (2015) (code) is about how excessive focus on following PEP 8 can lead to code that is beautiful, but bad, since it distracts from the beauty that really matters:
Pythonic: coding beautifully in harmony with the language to get the maximum benefits from Python
Transform a bad API into a good one using an adapter class and stuff like:
- context managers for setup / teardown
- flat modules for simpler imports
- magic methods to make things iterable
- properties instead of getter methods
- custom exceptions for clearer business logic
- a good __repr__() for better debuggability
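A small adapter combining a few of these techniques might look like this (the wrapped legacy API is hypothetical, so the data is stubbed out):

```python
class DatabaseError(Exception):
    """Custom exception: callers catch a domain error, not a low-level one."""

class Connection:
    """Adapter giving a hypothetical clunky legacy API a Pythonic face."""

    def __init__(self, dsn):
        self.dsn = dsn
        self._rows = [("alice",), ("bob",)]  # stand-in for real query results

    def __enter__(self):       # context manager handles setup...
        return self

    def __exit__(self, *exc):  # ...and guaranteed teardown
        self.close()
        return False

    def __iter__(self):        # magic method makes the connection iterable
        if self.closed:
            raise DatabaseError("connection is closed")
        return iter(self._rows)

    def __repr__(self):        # a good repr for better debuggability
        return f"Connection(dsn={self.dsn!r})"

    @property
    def closed(self):          # property instead of an is_closed() getter
        return not self._rows

    def close(self):
        self._rows = []

with Connection("db://example") as conn:
    print(list(conn))   # [('alice',), ('bob',)]
print(conn.closed)      # True
```

Callers never see the messy setup/teardown or the low-level errors; the adapter's surface is just a with block and a for loop.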
And remember:
If you don't transform bad APIs into Pythonic APIs, you're a fool!
Bonus: The Mental Game of Python #
The computer gives us words that do things; what daddy does is make new words to make computers easier to use.3
The Mental Game of Python (2019) is not just about programming, but about problem solving strategies. The most relevant one for OOP is: build classes independently and let inheritance discover itself; this is because:
A lot of real world problems aren't tic-tac-toe problems, where you can see to the end; they are chess problems, where you can't.
For a more meta discussion of this idea, check out Repeat yourself, do more than one thing, and rewrite everything by tef; it should be considered a classic at this point.
Parting Raymond Hettinger quote:
I came here to show you how to chunk, and how to stack one chunk on top of the other, and this is a way to reduce your cognitive load and manage complexity; it is the core of our craft; it's what we're here to do.
Anyway, that's it for now. :)
Who learned something new today? Share it with others!
Want to know when new articles come out? Subscribe here to get new stuff straight to your inbox!
But remember it's a principle, not a law, violations are fine – you may only want substitutability in some places (e.g. contructors are often not substitutable). [return]
...unless you use double underscores. [return]
In my opinion, this quote alone is reason enough to watch the talk. [return]
April 14, 2026
PyCoder’s Weekly
Issue #730: Typing Django, Dictionaries, pandas vs Polars, and More (April 14, 2026)
#730 – APRIL 14, 2026
View in Browser »
Typing Your Django Project in 2026
Django was first released 10 years before Python standardized its type hint syntax. Because of this, it’s not surprising that getting type hints to work in your Django project is not trivial.
ANŽE'S BLOG
Dictionaries in Python
Learn how dictionaries in Python work: create and modify key-value pairs using dict literals, the dict() constructor, built-in methods, and operators.
REAL PYTHON
Secure Your Python MCP Servers With Auth and Access Control
Use the Descope Python MCP SDK to easily secure your MCP server with user auth, consent, OAuth 2.1 with PKCE, MCP client registration, scope-based access at the tool level, and more →
DESCOPE sponsor
pandas vs Polars: Backed by a 10 Million Row Study
A benchmark study of 10M rows comparing Pandas vs. Polars. Explore the architectural shifts, lazy execution, and Rust-based speed of modern data tools.
QUBRICA.COM • Shared by Rakshath
Articles & Tutorials
Switching All My Packages to PyPI Trusted Publishing
Matthias maintains several Python packages including the django-debug-toolbar. To help protect these projects from malicious release uploads, he’s switching to the PyPI Trusted Publishing mechanism. This article explains why and what it protects.
MATTHIAS KESTENHOLZ
Cutting Python Web App Memory Over 31%
Michael reduced Python web app memory by 3.2 GB using async workers, import isolation, the Raw+DC database pattern, and disk caching. The article includes detailed before and after numbers for each technique.
MICHAEL KENNEDY
B2B AI Agent Auth Support
Your users are asking if they can connect their AI agent to your product, but you want to make sure they can do it safely and securely. PropelAuth makes that possible →
PROPELAUTH sponsor
Understanding FSMs by Building One From Scratch
After having worked with the transitions library for a while, Bob wondered how Finite State Machines work under the hood. This article shows you how he built one from scratch, modelling GitHub pull requests.
BOB BELDERBOS • Shared by Bob Belderbos
Python for Java Developers
The article outlines how Java developers can transition to Python by building on their existing object-oriented knowledge while focusing on the key differences between the two languages.
NIKOS VAGGALIS • Shared by Andrew Solomon
Why Aren’t We uv Yet?
Reading articles on the net you’d think that uv was all the things. It is popular but not as much as you’d think. This article looks at the data.
ALEX YANKOV
Using Loguru to Simplify Python Logging
Learn how to use Loguru for simpler Python logging, from zero-config setup and custom formats to file rotation, retention, and adding context.
REAL PYTHON course
SQLite Features You Didn’t Know It Had
SQLite has evolved far beyond a simple embedded database. Explore modern features like JSON, FTS5, window functions, strict tables, and more.
SLICKER.ME
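For a taste, this sketch exercises two of those features from Python’s standard-library sqlite3 module, assuming your bundled SQLite is new enough (window functions need 3.25+, and the JSON functions are compiled in by default since 3.38):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user TEXT, payload TEXT)")
con.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("alice", '{"score": 10}'), ("alice", '{"score": 30}'),
     ("bob", '{"score": 20}')],
)

# JSON: pull a field out of a JSON payload without parsing it in Python.
rows = con.execute(
    "SELECT user, json_extract(payload, '$.score') FROM events ORDER BY rowid"
).fetchall()

# Window function: running total of scores per user.
totals = con.execute(
    """SELECT user,
              SUM(json_extract(payload, '$.score'))
                  OVER (PARTITION BY user ORDER BY rowid) AS running
       FROM events
       ORDER BY user, rowid"""
).fetchall()
```

Here `rows` yields the extracted scores per event, and `totals` gives a per-user running sum without any Python-side bookkeeping.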
Using a ~/.pdbrc File to Customize the Python Debugger
You can customize the Python debugger (PDB) by creating custom aliases within a .pdbrc file in your home directory. Read on to learn how.
TREY HUNNER
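A hypothetical ~/.pdbrc along these lines might look like the following; the `pi` alias is adapted from the example in Python’s own pdb documentation, and `nl` is an illustrative shortcut:

```
# ~/.pdbrc -- read automatically when pdb starts.
# "pi obj" prints every instance attribute of obj:
alias pi for k in %1.__dict__.keys(): print(f"%1.{k} = {%1.__dict__[k]}")
# "nl" steps to the next line, then lists the surrounding code:
alias nl n ;; l
```

Aliases take positional arguments via %1, %2, …, and `;;` chains multiple debugger commands in one alias.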
Python: Introducing Profiling-Explorer
Adam has added another project to his list of profiling tools; this one examines data from Python’s built-in profilers.
ADAM JOHNSON
Projects & Code
great-docs: Documentation Site Generator for Python Packages
GITHUB.COM/POSIT-DEV • Shared by Richard Iannone
rsloop: An Event Loop for Asyncio Written in Rust
GITHUB.COM/RUSTEDBYTES • Shared by Yehor Smoliakov
S3 Commander: Python Based AWS S3 Browser
GITHUB.COM/ROMANZDK • Shared by Roman
Events
DjangoCon Europe 2026
April 15 to April 20, 2026
DJANGOCON.EU
Weekly Real Python Office Hours Q&A (Virtual)
April 15, 2026
REALPYTHON.COM
PyData Bristol Meetup
April 16, 2026
MEETUP.COM
PyLadies Dublin
April 16, 2026
PYLADIES.COM
PyTexas 2026
April 17 to April 20, 2026
PYTEXAS.ORG
Chattanooga Python User Group
April 17 to April 18, 2026
MEETUP.COM
PyCon Austria 2026
April 19 to April 21, 2026
PYCON.AT
Happy Pythoning!
This was PyCoder’s Weekly Issue #730.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Real Python
Vector Databases and Embeddings With ChromaDB
The era of large language models (LLMs) is here, bringing with it rapidly evolving libraries like ChromaDB that help augment LLM applications. You’ve most likely heard of chatbots like OpenAI’s ChatGPT, and perhaps you’ve even experienced their remarkable ability to reason about natural language processing (NLP) problems.
Modern LLMs, while imperfect, can accurately solve a wide range of problems and provide correct answers to many questions. However, due to the limits of their training and the number of text tokens they can process, LLMs aren’t a silver bullet for all tasks.
You wouldn’t expect an LLM to deliver relevant responses about topics that don’t appear in its training data. For example, if you asked ChatGPT to summarize information in confidential company documents, you’d be out of luck. You could show some of these documents to ChatGPT, but there’s a limit to how many documents you can upload before you exceed ChatGPT’s maximum token count. How would you select which documents to show ChatGPT?
To address these limitations and scale your LLM applications, a great option is to use a vector database like ChromaDB. A vector database allows you to store encoded unstructured objects, like text, as lists of numbers that can be compared to one another. For instance, you can find a collection of documents relevant to a question you’d like an LLM to answer.
In this video course, you’ll learn about:
- Representing unstructured objects with vectors
- Using word and text embeddings in Python
- Harnessing the power of vector databases
- Encoding and querying over documents with ChromaDB
- Providing context to LLMs like ChatGPT with ChromaDB
After watching, you’ll have the foundational knowledge to use ChromaDB in your NLP or LLM applications. Before watching, you should be comfortable with the basics of Python and high school math.
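To make “lists of numbers that can be compared to one another” concrete, here is a toy cosine-similarity sketch in pure Python. The three-dimensional vectors are made up for illustration; real embeddings typically have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|); 1.0 means "same direction".
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": similar documents point in similar directions.
doc_a = [0.9, 0.1, 0.0]
doc_b = [0.8, 0.2, 0.0]
doc_c = [0.0, 0.1, 0.9]
```

A vector database like ChromaDB does essentially this comparison at scale, returning the stored documents whose embeddings score highest against your query’s embedding.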
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Explore Your Dataset With pandas
In this quiz, you’ll test your understanding of Explore Your Dataset With pandas.
By working through this quiz, you’ll revisit pandas core data structures, reading CSV files, indexing and filtering data, grouping and aggregating results, understanding dtypes, and combining DataFrames.
This quiz helps you apply the core techniques from the course so you can turn a large dataset into clear, reproducible insights.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Altair: Declarative Charts With Python
In this quiz, you’ll test your understanding of Altair: Declarative Charts With Python.
By working through this quiz, you’ll revisit Altair’s core grammar of Data, Mark, and Encode, encoding channels and type shorthands, interactive selections with brushing and linked views, and common limitations to watch out for.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Vector Databases and Embeddings With ChromaDB
In this quiz, you’ll test your understanding of Embeddings and Vector Databases With ChromaDB.
By working through this quiz, you’ll revisit key concepts like vectors, cosine similarity, word and text embeddings, ChromaDB collections, metadata filtering, and retrieval-augmented generation (RAG).
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python Software Foundation
PyCon US 2026: Why we're asking you to think about your hotel reservation
The PyCon US 2026 team has already covered some of the fun, unexpected, and meaningful reasons you’ll want to stay in the PyCon US hotel block. The PSF wants to use our blog to give a different angle, to keep being transparent with you, and share a little bit of real talk on the economics of holding a conference in the US at this moment in time. The short version is, if you’re joining us in Long Beach, please book the official PyCon US hotels through your PyCon US 2026 dashboard, because bookings in our hotel block are critical to the economic viability of the event.
Context on hotel bookings & PyCon US
For many years, PyCon US has relied on hotel booking commissions to help pay for our conference space. This helps us keep the event tickets affordable and to continue offering Travel Grants to community members who might not otherwise be able to attend PyCon US. Once your event outgrows academic spaces, donated conference rooms, or theatre spaces, working with the hotels is the industry’s standard way to pay for a professional convention center space. You commit to a certain number of hotel nights blocked off at nearby hotels, based on your event’s numbers from previous years, and in return, you get a reduced rental charge at the convention center. If you sell enough rooms, you additionally earn a small percentage of the revenue from those rooms, i.e. a commission. If, on the other hand, you don’t sell enough rooms, you owe damages to the hotels, essentially paying the full rate for the rooms they reserved for your event but didn’t sell.
This system has worked well for the PSF and PyCon US until this year. At the height of the pre-pandemic years, we brought in over $200,000 in hotel commissions. Even last year in Pittsburgh, we fully sold out one hotel and our total commission in 2025 was a healthy $95,909. Unfortunately, this year our hotel bookings are far behind the level they need to avoid damages, let alone earn any commission. We attribute this largely to the sad but understandable decline in willingness of international attendees, as well as some vulnerable domestic attendees, to travel to PyCon US, given the current environment. The bottom line is, if PyCon US hotel booking trends continue at their current pace, the PSF is on track to owe over $200,000 in damages under our hotel contracts.
We are not alone in this. The travel industry has been talking about the slump in foreign visitors to the US for months. The decline in foreign tourism revenue is also making the hotels less interested in being generous with our rates, contracts, and deadlines, since most hotels have seen declines in their bookings all year, not just during our event. Everyone is feeling the squeeze.
Where we’re at now
PyCon US ticket sales are lagging only slightly. Local attendees buy their tickets later, which is something we anticipate. This year’s hotel bookings, however, are lagging far behind last year’s:
PyCon US Ticket sales as of April 10, 2025: 1,565
PyCon US Ticket sales as of April 12, 2026: 1,333
Hotel nights sold as of April 10th, 2025: 3,155
Hotel nights sold as of April 12th, 2026: 2,192
Hotel nights we need to sell by April 20th, 2026 to avoid damages: 3,338
Additional Hotel nights needed by April 20th, 2026 to avoid damages: 1,146
The PSF signed a contract for the Long Beach venue back in July of 2023. At that time, we couldn’t have foreseen the current situation, in which interest in coming to the US has sharply declined due to increased risk. In response, we have focused on attracting more domestic attendees, and that has been going pretty well, but it hasn’t made up for the macroeconomic and geopolitical impacts on our attendance.
How you can help
We’ll need as many of our attendees as possible to book the official conference hotel before the deadline: The first hotel block closes on April 20th, and the last block closes April 24th.
Booking the official conference hotel helps us keep PyCon US running and affordable and it’s also a lot of fun to stay where the action is. If you are planning to join us at PyCon US this year (and we hope you can because there are a lot of great things happening at the event this year!) then we hope you will consider booking an official conference hotel.
To book in our hotel block, first register for the conference, and then book your room directly from your attendee dashboard. If you need help or would like to reserve a group of rooms, please contact our housing partner Orchid: 1-877-505-0689 or help@orchid.events. Our hotels page has a full list of the four hotel options and their deadlines.
A final note
We want to thank you for your commitment to the community that makes PyCon US the special event it is. We hope to see you there to learn, collaborate, and share lots of fun moments.
For all those who can’t be at PyCon US this year for whatever reason: you will be sorely missed and we hope to see you at a future edition of the event!
Seth Michael Larson
Add Animal Crossing events to your digital calendar
Animal Forest (“Dōbutsu no Mori” or “どうぶつの森”) was released in Japan for the Nintendo 64 on April 14th, 2001: exactly 25 years ago today! To celebrate this beloved franchise I have created calendars for each of the “first-generation” Animal Crossing games that you can load into calendar apps like Google Calendar or Apple Calendars to see events from your town.
These calendars include holidays, special events, igloo and summer campers, and more. Additionally, I've created a tool which can generate importable calendars for the birthdays of villagers in your town using data from future titles and star signs from e-Reader cards.
Select which game, region, and language you are interested in and then scan the QR code or copy the URL and import the calendar manually into your calendar application. Note that calendars are only available for valid “game + region + language” combinations, such as “Animal Forest e+ + NTSC-J + Japanese”.
Continue reading on sethmlarson.dev ...
Thanks for keeping RSS alive! ♥
April 13, 2026
Python Software Foundation
Reflecting on Five Years as the PSF’s First CPython Developer in Residence
After nearly five wonderful years at the Python Software Foundation as the inaugural CPython Developer in Residence, it's time for me to move on. I feel honored and honestly so lucky to have had the opportunity to kick off the program that now includes several wonderful full-time engineers. I'm glad to see the program left in good hands. The vacancy created by my departure will be filled after PyCon US as the PSF is currently focused on delivering a strong event. I'm happy to share that Meta will continue to sponsor the CPython Developer in Residence role at least through mid-2027. The program is safe.
| Łukasz with PSF's Security Developer in Residence Seth Larson and PyPI Safety & Security Engineer Mike Fiedler at PyCon US 2025 |
As a member of the Python Steering Council during Łukasz’s tenure as Developer in Residence, I express my personal gratitude for his dedication to the CPython project and the larger Python community. I know I echo the sentiment of everyone who has served on the Council during his time as DiR. He has defined what it means to be a Developer in Residence - a position that is incredibly important to the smooth operation of the CPython project, in large and small ways, visible and hidden. Our bi-weekly meetings gave the Steering Council a detailed, unique, and invaluable contemporaneous perspective on what’s happening in CPython. Łukasz leaves big shoes to fill, and we wish him all the best in his next endeavor. It’s comforting to know that he will continue to be a Python leader and member of the core team.
-- Barry Warsaw, Python Steering Council member, 2026
In my time as a developer in residence, I personally touched some pretty amazing projects like the transition to GitHub issues from bugs.python.org, the replacement of the mostly manual CLA process with an automated system, the introduction of free threading to Python, and the replacement of the interactive shell in the interpreter. And between the thousands of pull requests I've reviewed or authored, and the many less glamorous tasks like content moderation and keeping the lights on when it comes to core workflow, I've interacted with some amazing individuals. Some of them are core developers now. I've witnessed the full-time paid developer in residence roster at the Python Software Foundation grow from one person to five.
As for me, ever since seeing it for the first time in 2013, I had dreamed about moving permanently to Vancouver BC. This dream is coming true soon. As part of that move, I'm joining Meta as a software engineer on the Python Language Foundation team. In any case, I'm not disappearing from the open-source Python community. I'll be seeing you online and maybe even in person at Python-related conferences.
Real Python
How to Add Features to a Python Project With Codex CLI
After reading this guide, you’ll be able to use Codex CLI to add features to a Python project directly from your terminal. Codex CLI is an AI-powered coding assistant that runs inside your terminal. It understands your project structure, reads your files, and proposes multi-file changes using natural language instructions.
Instead of copying code from a browser or relying on an IDE plugin, you’ll use Codex CLI to implement a real feature in a multi-file Python project directly from your terminal:
Example of Using Codex CLI to Implement a Project Feature
In the following steps, you’ll install and configure Codex CLI, use it to implement a deletion feature in a contact book app, and then refine that feature through iterative prompting.
Take the Quiz: Test your knowledge with our interactive “How to Add Features to a Python Project With Codex CLI” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Add Features to a Python Project With Codex CLI
Test your knowledge of Codex CLI, the AI-powered terminal tool for adding features to Python projects with natural language.
Prerequisites
To follow this guide, you should be familiar with the Python language. You’ll also need an OpenAI account with either a paid ChatGPT subscription or a valid API key, which you’ll connect to Codex CLI once you install it. Additionally, you’ll need to have Node.js installed, since Codex CLI is distributed as an npm package.
To make it easier for you to experiment with Codex CLI, download the RP Contacts project by clicking the link below:
Get Your Code: Click here to download the free source code for the RP Contacts sample project used in this tutorial.
The RP Contacts project is a text-based user interface (TUI) that allows you to manage contacts directly in the terminal through a Textual app. It’s an adapted version of the project from Real Python’s tutorial Build a Contact Book App With Python, Textual, and SQLite. It differs from the original in that it uses uv to manage the project, and the TUI buttons Delete and Clear All haven’t been implemented—that’s what you’ll use Codex CLI for.
Once you’ve downloaded the project, you want to check that you can run it. As mentioned, the project uses uv for dependency management—you can tell by the uv.lock file in the project root—so make sure you have uv installed. If you don’t have uv yet, follow the official installation instructions.
Once you have uv installed and you’re at the root directory of the project, you can run the project:
$ uv run rpcontacts
When you run the command rpcontacts through uv for the first time, uv will create a virtual environment, install the dependencies of your project, and start the RP Contacts TUI. If all goes well, you should see a TUI in your terminal with a couple of buttons and an empty contact listing:
Once the TUI opens, create some test contacts by using the Add button and filling in the form that pops up. After creating a couple of fake contacts, quit the application by pressing Q.
Finally, you’ll want to initialize a Git repository at the root of your project and commit all your files:
$ git init .
$ git add .
$ git commit -m "First commit."
Codex CLI will modify your code, and you can never tell in advance whether its changes will be good. Versioning your code makes it straightforward to roll back any changes made by LLMs if you don’t like them.
If you want to explore other AI-powered coding tools alongside Codex CLI, Real Python’s Python Coding With AI learning path brings together tutorials and video courses on AI-assisted coding, prompt engineering, and LLM development.
Step 1: Install and Configure Codex CLI
With all the accessory setup out of the way, it’s now time to install Codex CLI. For that, you’ll want to check the official OpenAI documentation to see the most up-to-date installation instructions. As of now, OpenAI recommends using npm:
Read the full article at https://realpython.com/codex-cli/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
PyCon
How to Build Your PyCon US 2026 Schedule
Six Pathways Through the Talks
Finding your way through three days of world-class Python content
PyCon US 2026 runs May 13–19 in Long Beach, California, and with more than 100 talks across five rooms over three days, the schedule can feel like a lot to navigate. The good news: whether you came to go deep on Python performance, level up your security knowledge, get practical Python insights for agentic AI, or finally understand what all the async fuss is about, there's a clear path through the content that's built for you. Register now to get in on the full experience.
We mapped six attendee pathways through the full talks schedule, each a curated sequence of sessions focused on a core Python topic, with a bonus tutorial to pair with each one. Think of them less as tracks and more as through-lines. Pick the one that matches where you are and what you want to walk away with and integrate into your work.
Python Performance: From Memory to Metal
If you want to understand why your Python is slow and what to actually do about it, this is your path. It runs across all three days and takes you from memory profiling fundamentals all the way to CPython internals with one of the core developers who is actually changing the way the runtime works.
Friday
Goutam Tiwari's I Accidentally Built a Monitoring System While Trying to Debug a Memory Leak: a grounded, story-driven entry point into how memory and profiling interact in real systems.
Wenxin Jiang and Jian Yin's Breaking the Speed Limit: Fast Statistical Models with Python 3.14, Numba, and JAX gives you hands-on acceleration tools you can take home and use immediately.
Thomas Wouters, a CPython core developer and Steering Council member, delivers Free-threaded Python: past, present and future, the definitive account of GIL removal from the people doing the work.
Matthew Johnson's Lock-Free Multi-Core Performance with Behavior-Oriented Concurrency
Bruce Eckel's Demystifying the GIL to close out Friday with a complete mental model of where Python concurrency has been and where it's going.
Saturday
Larry Hastings' Conquer multithreaded Python with Blanket, practical multithreading tooling that builds directly on Friday's foundation
Jukka Lehtosalo on Making Python Faster with Free Threading and Mypyc
Yineng Zhang's High-Performance LLM Inference in Pure Python with PyTorch Custom Ops, which applies everything you've learned to one of the most demanding production workloads in the industry right now.
Sunday
Mark Shannon's Memory management in CPython, fast or slow? is as close to the source as it gets: a look at the engine underneath all your performance gains, from a core contributor who has spent years making it faster.
Pair it with a tutorial: Start the week with Arthur Pastel and Adrien Cacciaguerra's Wednesday tutorial Python Performance Lab: Sharpening Your Instincts. It's a hands-on lab designed to build the kind of performance intuition that makes everything in this pathway land harder.
Debugging and Observability: Finding What's Wrong and Why
This pathway is for engineers who spend too much time in production fires and want better tools for preventing and diagnosing them. It moves from memory leak storytelling through the brand new profiling and debugging interfaces, landing in Python 3.14 and 3.15.
Friday
Goutam Tiwari's memory leak talk is your opening again; it's the most relatable entry point in the schedule for anyone who has ever stared at a climbing memory graph, wondering what went wrong.
Puneet Khushwani's Demystifying Python's Generational Garbage Collector gives you a clear foundation
Anshul Jannumahanti's Debugging Python in Production: Practical Techniques Beyond Print Statements gives you the toolkit to act on it.
Saturday
Pablo Galindo Salgado (keynote speaker and Steering Council member) and Laszlo Kiss Kollar present Tachyon: Python 3.15's sampling profiler is faster than your code — brand new profiling infrastructure in the language itself, from one of the people who built it.
Running concurrently, fellow Steering Council member Savannah Ostrowski's The art of live process manipulation with Python 3.14's zero-overhead debugging interface demonstrates how to inspect live Python processes with no performance penalty at all. These two together represent a genuine step change in what's possible for Python observability.
Pair it with a tutorial: Catherine Nelson and Robert Masson's Thursday tutorial Going from Notebooks to Production Code is a natural warm-up: it covers the gap between exploratory code and production systems, which is exactly where most debugging pain lives.
Concurrency and Async: Making Python Do More at Once
The concurrency story in Python is changing faster than it has in years. This pathway traces the thread from hardware-level parallelism through the GIL removal to practical async patterns for the systems people are actually building in 2026.
Friday
Benjamin Glick's GPU Communications for Python: hardware context before software patterns.
Thomas Wouters on Free-threaded Python gives you the foundational GIL story.
Aditya Mehra's Don't Block the Loop: Python Async Patterns for AI Agents provides a real-world application of event loop patterns in production systems.
Matthew Johnson's Lock-Free Multi-Core Performance
Bruce Eckel's Demystifying the GIL rounds out Friday
Saturday
Conquer multithreaded Python with Blanket by Larry Hastings brings it home with practical tooling for production multithreaded Python.
Pair it with a tutorial: Trey Hunner's Wednesday tutorial Lazy Looping in Practice: Building and Using Generators and Iterators is a perfect primer. Generators and iterators are the building blocks of Python's async model, and Hunner is one of the best teachers in the community at making these concepts click.
AI and Machine Learning: From Inference to Agents
The dedicated Future of AI with Python track runs all day Friday, May 15th, and it's one of the strongest single-day lineups in the schedule. This pathway threads the AI content across the full conference, from hardware fundamentals to production-grade inference.
Friday
Benjamin Glick's GPU Communications for Python sets the hardware context.
Aayush Kumar JVS's Running Large Language Models on Laptops: Practical Quantization Techniques in Python is one of the most immediately practical talks in the schedule. If you have ever wanted to run a model locally and not known where to start, this is your session.
Aditya Mehra's Don't Block the Loop covers the async foundations that make reliable agents possible.
Santosh Appachu Devanira Poovaiah's What Python Developers Need to Know About Hardware demystifies GPU memory and execution models in a way that's genuinely useful for anyone writing inference code, and
Camila Hinojosa Añez and Elizabeth Fuentes close out Friday's AI track with How to Build Your First Real-Time Voice Agent in Python.
Saturday
Yineng Zhang on High-Performance LLM Inference in Pure Python: production-grade optimization for teams shipping at scale.
Pair it with a tutorial: Two tutorials are worth your attention here. Pamela Fox's Wednesday Build Your First MCP Server in Python is the fastest way to understand how agentic systems actually work under the hood — MCP is quickly becoming the standard way to give AI agents access to tools and data. And Isabel Michel's Wednesday Implementing RAG in Python: Build a Retrieval-Augmented Generation System gives you the hands-on foundation underneath most modern LLM applications.
Security: A Full Day Worth Taking Seriously
Saturday, May 16th, is the first-ever dedicated security track at PyCon US, and if security is anywhere near your professional concerns, you should plan to spend most of Saturday in Room 103ABC. Eleven experts. One room. A full day.
Saturday
The day opens with Ian's FastAPI Security Patterns: OAuth 2.0, JWTs, and API Keys Done Right — the fundamentals every Python web developer should have down.
PSF’s own PyPI Safety & Security Engineer Mike Fiedler's Anatomy of a Phishing Campaign flips the lens and gives you the attacker's perspective before you go back to building defenses.
Tristan McKinnon's Zero Trust in 200ms covers modern identity architecture, and
Emma Smith's Rust for CPython looks at the language-level safety improvements coming to CPython itself.
Sanchit Sahay and Abhishek Reddypalle on SBOMs for Python Builds
Andrew Nesbitt on GitHub Actions Security,
Hala Ali and Andrew Case on Post-Incident Runtime SBOM Generation from Python Memory
Shelby Cunningham and Madison Ficorilli close the day with Breaking Bad (Packages): Why Traditional Vulnerability Tracking Fails Supply Chain Attacks. If you've been meaning to get serious about supply chain security, this is the day to do it.
Pair it with a tutorial: Paul Zuradzki's Wednesday tutorial, Practical Software Testing with Python is a strong complement: the discipline of writing tests and the discipline of writing secure code overlap more than most developers realize, and this tutorial gives you the testing foundation that makes security practices easier to implement and verify.
New to Python and Packaging: A First-Timer's Path Through the Conference
Not every pathway is about going deep. This one is for attendees who are newer to Python or who want to level up on tooling, packaging, and writing code that other people can actually use. It runs gently across all three days and ends with a satisfying arc.
Friday
Russell Keith-Magee's How to give your Python code to someone else, the distribution problem from first principles, from one of the most thoughtful voices in the Python community on the topic.
Zanie Blue's Peeking under the hood of uv run covers the modern tooling that's quickly becoming the standard
Trey Hunner's pathlib: why and how to use it is the kind of practical skills upgrade most developers underestimate until they've seen it.
Saturday
Justin Lee's Python for Humans — Designing Python Code Like a User Interface, which will change how you think about writing APIs and interfaces for other developers.
Mario Munoz's Create a Python Package: From Zero to Hero puts the whole packaging arc together in one session.
Sunday
Rafael Mendes de Jesus on From notebooks to scripts: turning one-off analysis into reusable Python code: the graduation moment from exploratory to production-ready.
Pair it with a tutorial: Mason Egger's Thursday tutorial, Writing Pythonic Code: Features That Make Python Powerful, is the ideal warm-up for this entire pathway. It covers the idioms and language features that separate code that works from code that feels like Python, which is exactly the mindset the rest of this track builds on. Or if you are just getting started with no experience at all, try Python for Absolute Beginners. If you've started and stopped learning to code before, or never got around to starting at all, sign up for this tutorial and start PyCon off on the right foot.
However you come to PyCon US 2026, there's a path through the schedule built for you. The full talks schedule is at us.pycon.org/2026/schedule/talks, the full tutorials schedule is at https://us.pycon.org/2026/schedule/tutorials/, and registration is open now.
We'll see you in Long Beach.
PyCon US 2026 takes place May 13–19 in Long Beach, California. Talks run Friday, May 15th, through Sunday, May 17th.
Real Python
Quiz: Gemini CLI vs Claude Code: Which to Choose for Python Tasks
In this quiz, you’ll test your understanding of Gemini CLI vs Claude Code: Which to Choose for Python Tasks.
By working through this quiz, you’ll revisit key differences between Gemini CLI and Claude Code, including installation requirements, model selection, performance benchmarks, and pricing models.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Python Continuous Integration and Deployment Using GitHub Actions
This quiz helps you review the key steps for setting up continuous integration and delivery using GitHub Actions. You’ll practice how to organize workflow files, choose common triggers, and use essential Git and YAML features.
Whether you’re just getting started or brushing up, these questions draw directly from Python Continuous Integration and Deployment Using GitHub Actions. Test your understanding before building your next workflow.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
April 12, 2026
Ned Batchelder
Linklint
I wrote a Sphinx extension to eliminate excessive links: linklint. It started as a linter to check and modify .rst files, but it grew into a Sphinx extension that works without changing the source files.
It all started with a topic in the discussion forums: Should not underline links, which argued that the underlining was distracting from the text. Of course we did not remove underlines; they are important for accessibility and for seeing that there are links at all.
But I agreed that there were places in the docs that had too many links. In particular, there are two kinds of link that are excessive:
- Links within a section to the same section. These arise naturally when describing a function (or class or module). Mentioning the function again in the description will link to the function. But we’re already reading about the function. The link is pointless and confusing.
- A second (or third, etc) instance of the same link in a single paragraph. The first mention of a referent should be linked, but subsequent ones don’t need to be.
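The second rule is easy to picture in isolation. A hypothetical sketch of the "first mention only" logic, not linklint’s actual code, which walks Sphinx doctrees rather than plain lists:

```python
def suppress_repeat_links(paragraph_links):
    """Given the link targets of a paragraph, in order, return one flag
    per link: True = keep the link, False = render as plain text."""
    seen = set()
    keep = []
    for target in paragraph_links:
        # Only the first occurrence of each target stays a link.
        keep.append(target not in seen)
        seen.add(target)
    return keep
```

In the real extension, the same decision is applied to docutils reference nodes during the build, so the .rst sources never change.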
Linklint is a Sphinx extension that suppresses these two kinds of links during the build process. It examines the doctree (the abstract syntax tree of the documentation) and finds and modifies references matching our criteria for excessiveness. It’s running now in the CPython documentation, where it suppressed 3612 links. Nice.
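To make the second rule concrete, here is a minimal, hypothetical sketch of the bookkeeping such a pass needs for repeat links within a paragraph. The real extension walks Sphinx’s doctree and rewrites reference nodes in place; the function name and the list-of-targets representation below are illustrative only:

```python
def suppress_repeat_links(paragraph_targets):
    """Given the ordered link targets in one paragraph, return a flag per
    link: True to keep it, False to demote it to plain text.
    Only the first mention of each target keeps its link."""
    seen = set()
    keep = []
    for target in paragraph_targets:
        keep.append(target not in seen)
        seen.add(target)
    return keep

flags = suppress_repeat_links(["str", "int", "str", "str", "list"])
# flags == [True, True, False, False, True]
```

The same seen-set idea generalizes to the first rule by keying on the enclosing section instead of the paragraph.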
I had another idea for a kind of link to suppress: “obvious” references. For example, I don’t think it’s useful to link every instance of “str” to the str() constructor. Is there anyone who needs that link because they don’t know what “str” means? And if they don’t know, is that the right place to take them?
There are three problems with that idea: first, not everyone agrees that “obvious” links should be suppressed at all. Second, even among those who do, people won’t agree on what is obvious. Sure, int and str. But what about list, dict, set? Third, there are some places where a link to str() needs to be kept, like “See str() for details.” Sphinx has a syntax for references to suppress the link, but there’s no syntax to force a link when linklint wants to suppress it.
So linklint doesn’t suppress obvious links. Maybe we can do it in the future once there’s been some more thought about it.
In the meantime, linklint is working to stop many excessive links. It was a small project that turned out much better than I expected when I started on it. A Sphinx extension is a really powerful way to adjust or enhance documentation without causing churn in the .rst source files. Sphinx itself can be complex and mysterious, but with a skilled code reading assistant, I was able to build this utility and improve the documentation.
April 11, 2026
Rodrigo Girão Serrão
Personal highlights of PyCon Lithuania 2026
In this article I share my personal highlights of PyCon Lithuania 2026.
Shout out to the organisers and volunteers
This was my second time at PyCon Lithuania and, for the second time in a row, I leave with the impression that everything was very well organised and smooth. Maybe the organisers and volunteers were stressed out all the time — organising a conference is never easy — but everything looked under control all the time and well thought-through.
Thank you for an amazing experience!
And by the way, congratulations on 15 years of PyCon Lithuania. To celebrate, they even served a gigantic cake during the first networking event. The cake was at least 80cm by 30cm:
The PyCon Lithuania cake.
I'll be honest with you: I didn't expect the cake to be good. The quality of food tends to degrade when it's cooked at a large scale... But even the taste was great, and the cake had three coloured layers in yellow, green, and red.
Social activities
The organisers prepared two networking events, a speakers' dinner, and three city tours (one per evening) for speakers. There was always something for you to do.
The city tour is a brilliant idea and I wonder why more conferences don't do it:
- Participants get to know a bit more of the city that's hosting the conference.
- Participants get the chance to talk to each other in a relaxed and informal environment.
- Hiring a tour guide is typically fairly cheap, especially when compared to organising a full-blown social event in a dedicated venue and with dedicated catering.
I had taken the city tour last time I had been at PyCon Lithuania and taking it again was not a mistake. Here's our group at the end of the tour, immediately before the speakers' dinner:
Some PyCon Lithuania speakers at the city tour.
The conference organisers even made sure that the city tour ended close to the location of the speakers' dinner, and that the tour ended at the same time as the dinner started. Another small detail that was carefully planned.
The atmosphere of the restaurant was very pleasant and the staff there was helpful and kind, so we had a wonderful night. At some point, at our table, we noticed that the folks at the other two tables were projecting something on a big screen. There was a large curtain that partially separated our table from the other two, so we took some time to realise that an impromptu Python quiz was about to take place.
I'm (way too) competitive and immediately got up to play. After six questions, which included learning about the existence of the web framework Falcon and correctly reordering the first four sentences of the Zen of Python, I was crowned the winner:
The final score for the quiz.
The top three players got a free spin on the PyCon Lithuania wheel of fortune.
Egg hunt and swag
On each day of the conference there was an egg hunt running...
Armin Ronacher
The Center Has a Bias
Whenever a new technology shows up, the conversation quickly splits into camps. There are the people who reject it outright, and there are the people who seem to adopt it with religious enthusiasm. For more than a year now, no topic has been more polarising than AI coding agents.
What I keep noticing is that a lot of the criticism directed at these tools is perfectly legitimate, but it often comes from people without a meaningful amount of direct experience with them. They are not necessarily wrong. In fact, many of them cite studies, polls and all kinds of sources that themselves spent time investigating and surveying. And quite legitimately they identified real issues: the output can be bad, the security implications are scary, the economics are strange and potentially unsustainable, there is an environmental impact, the social consequences are unclear, and the hype is exhausting.
But there is something important missing from that criticism when it comes from a position of non-use: it is too abstract.
There is a difference between saying “this looks flawed in principle” and saying “I used this enough to understand where it breaks, where it helps, and how it changes my work.” The second type of criticism is expensive. It costs time, frustration, and a genuine willingness to engage.
The enthusiast camp consists of true believers. These are the people who have adopted the technology despite its shortcomings, sometimes even because they enjoy wrestling with them. They have already decided that the tool is worth fitting into their lives, so they naturally end up forgiving a lot. They might not even recognize the flaws because for them the benefits or excitement have already won.
But what does the center look like? I consider myself part of the center: cautiously excited, but not without criticism. In my observation, though, that center is not neutral in the way people imagine it to be. Its bias is not towards endorsement so much as towards engagement, because the middle ground between rejecting a technology outright and embracing it fully is usually occupied by people willing to explore it seriously enough to judge it.
Bias on Both Sides
The composition of the groups in discussions about new technology is oddly shaped because one side has paid the cost of direct experience and the other has not, or not to the same degree. That alone creates an asymmetry.
Take coding agents as an example. If you do not use them, or at least not for productive work, you can still criticize them on many grounds. You can say they generate sloppy code, that they lower your skills, etc. But if you have not actually spent serious time with them, then your view of their practical reality is going to be inherited from somewhere else. You will know them through screenshots, anecdotes, the most annoying users on Twitter, conference talks, company slogans, and whatever filtered back from the people who did use them. That is not nothing, but it is not the same as contact.
The problem is not that such criticism is worthless. The problem is that people often mistake non-use for neutrality. It is not. A serious opinion on a new language, framework, device, or way of working usually has some minimum buy-in. You have to cross a threshold of use before your criticism becomes grounded in the thing itself rather than in its reputation.
That threshold is inconvenient. It asks you to spend time on something that may not pay off, and to risk finding yourself at least partially won over. It is a lot to ask of people. But because that threshold exists, the measured middle is rarely populated by people who are perfectly indifferent to change. It is populated by people who were willing to move toward it enough in order to evaluate it properly.
Simultaneously, it’s important to remember that usage does not automatically create wisdom. The enthusiastic adopter might have their own distortions. They may enjoy the novelty, feel a need to justify the time they invested, or overgeneralize from the niche where the technology works wonderfully. They may simply like progress and want to be associated with it.
This is particularly visible with AI. There are clearly people who have decided that the future is here, all objections are temporary, and every workflow must now be rebuilt around agents. What makes AI weirder is that such a massive shift in capabilities has triggered a tremendous injection of money, and a meaningful number of adopters have bet their futures on the technology.
So if one pole is uninformed abstraction and the other is overcommitted enthusiasm, then surely the center must sit right in the middle between them?
Engagement Is Not Endorsement
The center, I would argue, naturally needs to lean towards engagement. The reason is simple: a genuinely measured opinion on a new technology requires real engagement with it.
You do not get an informed view by trying something for 15 minutes, getting annoyed once, and returning to your previous tools. You also do not get it by admiring demos, listening to podcasts, or debating on social media. You have to use it enough to get past both the first disappointment and the honeymoon phase. With AI tools, true understanding seems to be a matter not of hours but of weeks of investment.
That means the people in the center are selected from a particular group: people who were willing to give the thing a fair chance without yet assuming it deserved a permanent place in their lives.
That willingness is already a bias towards curiosity and experimentation which makes the center look more like adopters in behavior, because exploration requires use, but it does not make the center identical to enthusiasts in judgment.
This matters because from the perspective of the outright rejecter, all of these people can look the same. If someone spent serious time with coding agents, found them useful in some areas, harmful in others, and came away with a nuanced view, they may still be thrown into the same bucket as the person who thinks agents can do no wrong.
But those are not the same position at all. It’s important to recognize that engagement with those tools does not automatically imply endorsement or at the very least not blanket endorsement.
The Center Looks Suspicious
This is why discussions about new technology, and AI in particular, feel so polarized. The actual center is hard to see because it does not appear visually centered. From the outside, serious exploration can look a lot like adoption.
If you map opinions onto a line, you might imagine the middle as the point equally distant from rejection and enthusiasm. But in practice that is not how it works. The middle is shifted toward the side of the people who have actually interacted with the technology enough to say something concrete about it. That does not mean the middle has accepted the adopter’s conclusion. It means the middle has adopted some of the adopter’s behavior, because investigation requires contact.
That creates a strange effect because the people with the most grounded criticism are often also adopters. I would argue some of the best criticism of coding agents right now comes from people who use them extensively. Take Mario: he created a coding agent, yet is also one of the most vocal voices of criticism in the space. These folks can tell you in detail how they fail and they can tell you where they waste time, where they regress code quality, where they need carefully designed tooling, where they only work well in some ecosystems, and where the whole thing falls apart.
But because those people kept using the tools long enough to learn those lessons, they can appear compromised to outsiders. And worse: if they continue to use them, contribute thoughts and criticism back, they are increasingly thrown in with the same people who are devoid of any criticism.
Failure Is Possible
This line of thinking could be seen as an inherent “pro-innovation bias.” That would be wrong, as plenty of technology deserves resistance. Many people are right to resist, and sometimes the people who never gave a technology a chance saw problems earlier than everyone else. Crypto is a good reminder: plenty of projects looked every bit as exciting as coding agents do now, and still collapsed when the economics no longer worked.
What matters here is a narrower point. The center is not biased towards novelty so much as towards contact with the thing that creates potential change. The middle ground is not between use and non-use, but between refusal and commitment and the people in the center will often look more like adopters than skeptics, not because they have already made up their minds, but because getting an informed view requires exploration.
If you want to criticize a new thing well, you first have to get close enough to dislike it for the right reasons. And for some technologies, you also have to hang around long enough to understand what, exactly, deserves criticism.
April 10, 2026
Talk Python to Me
#544: Wheel Next + Packaging PEPs
When you pip install a package with compiled code, the wheel you get is built for CPU features from 2009. Want newer optimizations like AVX2? Your installer has no way to ask for them. GPU support? You're on your own configuring special index URLs. The result is fat binaries, nearly gigabyte-sized wheels, and install pages that read like puzzle books. A coalition from NVIDIA, Astral, and Quansight has been working on Wheel Next: a set of PEPs that let packages declare what hardware they need and let installers like uv pick the right build automatically. Just uv pip install torch and it works. I sit down with Jonathan Dekhtiar from NVIDIA, Ralf Gommers from Quansight and the NumPy and SciPy teams, and Charlie Marsh, founder of Astral and creator of uv, to dig into all of it.
Episode sponsors:
- Sentry Error Monitoring, code talkpython26: https://talkpython.fm/sentry
- Temporal: https://talkpython.fm/temporal
- Talk Python Courses: https://talkpython.fm/training
Links from the show:
- Charlie Marsh: https://github.com/charliermarsh
- Ralf Gommers: https://github.com/rgommers
- Jonathan Dekhtiar: https://github.com/DEKHTIARJonathan
- CPU dispatcher: https://numpy.org/doc/stable/reference/simd/how-it-works.html
- Build options: https://numpy.org/doc/stable/reference/simd/build-options.html
- Red Hat RHEL: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
- Red Hat RHEL AI: https://www.redhat.com/en/products/ai
- Red Hat's presentation: https://wheelnext.dev/summits/2025_03/assets/WheelNext%20Community%20Summit%20-%2006%20-%20Red%20Hat.pdf
- CUDA Toolkit: https://developer.nvidia.com/cuda/toolkit
- GPU packaging PEP proposal: https://discuss.python.org/t/pep-proposal-platform-aware-gpu-packaging-and-installation-for-python/91910
- WheelNext: https://wheelnext.dev/
- WheelNext GitHub repo: https://github.com/wheelnext
- PEP 817: https://peps.python.org/pep-0817/
- PEP 825: https://discuss.python.org/t/pep-825-wheel-variants-package-format-split-from-pep-817/106196
- uv: https://docs.astral.sh/uv/
- A variant-enabled build of uv: https://astral.sh/blog/wheel-variants
- pyx: https://astral.sh/blog/introducing-pyx
- pypackaging-native: https://pypackaging-native.github.io
- PEP 784: https://peps.python.org/pep-0784/
- Watch this episode on YouTube: https://www.youtube.com/watch?v=761htncGZpU
- Episode #544 deep-dive: https://talkpython.fm/episodes/show/544/wheel-next-packaging-peps#takeaways-anchor
- Episode transcripts: https://talkpython.fm/episodes/transcript/544/wheel-next-packaging-peps
PyCharm
How (Not) to Learn Python
While listening to Mark Smith’s inspirational talk for Python Unplugged on PyTV about How to Learn Python, what caught my attention was that Mark suggested turning off some of PyCharm’s AI features to help you learn Python more effectively.
As a PyCharm user myself, I’ve found the AI-powered features beneficial in my day-to-day work; however, I never considered that I could turn certain features on or off to customize my experience. This can be done from the settings menu under Editor | General | Code Completion | Inline.
While we are at it, let’s have a look at these features and investigate in more detail why they are great for professional developers but may not be ideal for learners.
Local full line code completion suggestions
JetBrains AI credits are not consumed when you use local line completion. The completion prediction is performed using a built-in local deep learning model. To use this feature, make sure the box for Enable inline completion using language models is checked, and choose either Local or Cloud and local in the options. To show the complete results using the local model alone, we will look at the predictions when only Local is selected.
When it’s selected, you see that the only code completion available out of the box in PyCharm is for Python. To make suggestions available for CSS or HTML, you need to download additional models.
When you are writing code, you will see suggestions pop up in grey with a hint for you to use Tab to complete the line.
After completing that line, you can press Enter to go to the next one, where there may be a new suggestion that you can again use Tab to complete. As you see, this can be very convenient for developers in their daily coding, as it saves time that would otherwise be spent typing obvious lines of code that follow the flow naturally.
However, for beginners, mindlessly hitting Tab and letting the model complete lines may discourage them from learning how to use the functions correctly. An alternative is to use the hint provided by PyCharm to help you choose an appropriate method from the available list, determine which parameters are needed, check the documentation if necessary, and write the code yourself. Here is what the hint looks like when code completion is turned off:
Cloud-based completion suggestions
Let’s have a look at cloud-based completion in contrast to local completion. When using cloud-based completion, next-edit suggestions are also available (which we will look at in more detail in the next section).
Cloud-based completion comes with support for multiple languages by default, and you can switch it on or off for each language individually.
Cloud-based completion provides more functionality than local model completion, but you need a JetBrains AI subscription to use it.
You may also connect to a third-party AI provider for your cloud-based completion. Since this support is still in Beta in PyCharm 2026.1, it is highly recommended to keep your JetBrains AI subscription active as a backup to ensure all features are available.
After switching to cloud-based completion, one of the differences I noticed was that it is better at multiple-line completion, which can be more convenient. However, I have also encountered situations where the completion provided too much for me, and I had to jump in to make my own modifications after accepting the suggestions.
For learners of Python, again, you may want to disable this functionality or else audit every suggestion in detail yourself. In addition to the danger of relying too heavily on code completion, which removes opportunities to learn, cloud code completion poses another risk for learners. Because larger suggestions require active review from the developer, learners may not be equipped to fully audit the wholesale suggestions they are accepting. Disabling this feature for learners not only encourages learning, but it can also help prevent mistakes.
Next edit suggestions
In addition to cloud-based completion, JetBrains AI Pro, Ultimate, and Enterprise users are able to take advantage of next edit suggestions.
When they are enabled, every time you make changes to your code, for example, renaming a variable, you will be given suggestions about other places that need to be changed.
And when you press Tab, the changes will be made automatically. You can also customize this behavior so you can see previews of the changes and jump continuously to the next edit until no more are suggested.
This is, no doubt, a very handy feature. It can help you avoid some careless mistakes, like forgetting to refactor your code when you make changes. However, for learners, thinking about what needs to be done is a valuable thought exercise, and using this feature can deprive them of some good learning opportunities.
Conclusion
PyCharm offers a lot of useful features to smooth out your day-to-day development workflow. However, these features may be too powerful, and even too convenient, for those who have just started working with Python and need to learn by making mistakes. It is good to use AI features to improve our work, but we also need to double-check the results and make sure that we want what the AI suggests.
To learn more about how to level up your Python skills, I highly recommend watching Mark’s talk on PyTV and checking out all the AI features that JetBrains AI has to offer. I hope you will find the perfect way to integrate them into your work while remaining ready to turn them off when you plan to learn something new.
Ahmed Bouchefra
Build Your Own AI Meme Matcher: A Beginner's Guide to Computer Vision with Python
Have you ever wondered how Snapchat filters know exactly where your eyes and mouth are? Or how your phone can unlock just by looking at your face? The magic behind this is called Computer Vision, a field of Artificial Intelligence that allows computers to “see” and understand digital images.
Today, we are going to build something incredibly fun using Computer Vision: a Real-Time Meme Matcher.
Point your webcam at yourself, make a shocked face, and watch as the app instantly matches you with the “Overly Attached Girlfriend” meme. Smile and raise your hand, and Leonardo DiCaprio raises a glass right back at you.
But this isn’t just a fun project. We are going to build this using Object-Oriented Programming (OOP). OOP is a professional coding style that makes your code clean, organized, and easy to upgrade. By the end of this tutorial, you will have a working AI app and a solid understanding of how professional software is structured.
Let’s dive in!
Prerequisites
Before we start coding, make sure you have the following ready:
- Python 3.11 or newer installed on your computer.
- A working webcam.
- A folder named assets in your project directory containing a few popular meme images (like success_kid.jpg, disaster_girl.jpg, etc.).
You will also need to install a few Python libraries. Open your terminal or command prompt and run:
pip install mediapipe opencv-python numpy
The Theory: How Does It Work?
Before we look at the code, let’s understand the two main concepts powering our application: Computer Vision (Facial Landmarks) and Object-Oriented Programming.
1. Facial Landmarks (How the AI “Sees” You)
We are using a Google library called MediaPipe. When you feed an image to MediaPipe, it places a virtual “mesh” of 478 invisible dots (called landmarks) over your face.
To figure out your expression, we use simple math. For example, how do we know if your mouth is open in surprise?
We measure the vertical distance between the dot on your top lip and the dot on your bottom lip.
If the distance is large, your mouth is open! We do the same for your eyes and eyebrows to calculate “scores” for surprise, smiling, or concern.
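As a toy illustration of that measurement (the coordinates below are made up; MediaPipe returns landmark positions normalized to the 0..1 range, so distances are relative to the frame size):

```python
import math

# Hypothetical normalized (x, y) positions for the inner top and bottom lip.
top_lip = (0.50, 0.55)
bottom_lip = (0.50, 0.62)

# Vertical mouth opening, as the straight-line distance between the two dots.
mouth_height = math.dist(top_lip, bottom_lip)

# A simple threshold turns the distance into a yes/no "mouth open" signal.
mouth_open = mouth_height > 0.04
print(round(mouth_height, 2), mouth_open)  # 0.07 True
```

The eye and eyebrow scores in the tutorial are built from exactly this kind of distance, just with more landmark pairs.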
2. Object-Oriented Programming (OOP)
Instead of writing one massive, confusing block of code, OOP allows us to break our program into separate components called Classes.
Think of a Class as a blueprint.
For our Meme Matcher, we will create three distinct classes, each with a “Single Responsibility” (a golden rule of coding):
- ExpressionAnalyzer (The Brain): Handles the AI math and MediaPipe.
- MemeLibrary (The Database): Loads the images and compares the user’s face to the memes.
- MemeMatcherApp (The UI): Opens the webcam and draws the pictures on the screen.
Step 1: Building the Brain
Let’s start by creating the class that does all the heavy lifting. Create a file named meme_matcher.py and import the necessary tools. Then, we will define our first class.
import cv2
import numpy as np
import mediapipe as mp
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor
import pickle
import os
import subprocess
class ExpressionAnalyzer:
    """
    The ExpressionAnalyzer class acts as the 'Brain' of our project.
    It encapsulates (hides away) the complex MediaPipe machine learning logic.
    """

    # Class Variables: Landmark indices for eyes, eyebrows, and mouth
    LEFT_EYE_UPPER = [159, 145, 158]
    LEFT_EYE_LOWER = [23, 27, 133]
    RIGHT_EYE_UPPER = [386, 374, 385]
    RIGHT_EYE_LOWER = [253, 257, 362]
    LEFT_EYEBROW = [70, 63, 105, 66, 107]
    RIGHT_EYEBROW = [300, 293, 334, 296, 336]
    MOUTH_OUTER = [61, 291, 39, 181, 0, 17, 269, 405]
    MOUTH_INNER = [78, 308, 95, 88]
    NOSE_TIP = 4

    def __init__(self, frame_skip: int = 2):
        self.last_features = None
        self.frame_counter = 0
        self.frame_skip = frame_skip
        # Download the required AI models automatically
        self.face_model_path = self._download_model(
            "face_landmarker.task",
            "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task",
        )
        self.hand_model_path = self._download_model(
            "hand_landmarker.task",
            "https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task",
        )
        # Initialize MediaPipe objects for both video and images
        self.face_mesh_video = self._init_face_landmarker(video_mode=True)
        self.hand_detector_video = self._init_hand_landmarker(video_mode=True)
        self.face_mesh_image = self._init_face_landmarker(video_mode=False)
        self.hand_detector_image = self._init_hand_landmarker(video_mode=False)
Understanding the Brain
In the code above, we define lists of numbers like LEFT_EYE_UPPER. These are the exact dot numbers (out of the 478) that outline the eye.
The __init__ method is a special function called a constructor.
Whenever we create an ExpressionAnalyzer, this code runs automatically to set everything up.
It downloads the MediaPipe AI models from Google’s servers and loads them into memory so they are ready to process faces.
Next, we add the logic to extract features:
    # ... (Add this inside the ExpressionAnalyzer class) ...
    def extract_features(self, image: np.ndarray, is_static: bool = False) -> dict:
        """Analyzes an image and returns facial/hand features as a dictionary."""
        face_landmarker = self.face_mesh_image if is_static else self.face_mesh_video
        hand_landmarker = self.hand_detector_image if is_static else self.hand_detector_video
        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb)
        if is_static:
            face_res = face_landmarker.detect(mp_image)
            hand_res = hand_landmarker.detect(mp_image)
        else:
            self.frame_counter += 1
            if self.frame_counter % self.frame_skip != 0:
                return self.last_features
            face_res = face_landmarker.detect_for_video(mp_image, self.frame_counter)
            hand_res = hand_landmarker.detect_for_video(mp_image, self.frame_counter)
        if not face_res.face_landmarks:
            return None
        landmarks = face_res.face_landmarks[0]
        landmark_array = np.array([[l.x, l.y] for l in landmarks])
        # Calculate the mathematical features
        features = self._compute_features(landmark_array, hand_res)
        self.last_features = features
        return features
    def _compute_features(self, landmark_array: np.ndarray, hand_res) -> dict:
        """Turn raw landmark coordinates into a dictionary of expression scores."""

        def ear(upper, lower):
            # Eye Aspect Ratio: how open the eye is
            vert = np.linalg.norm(landmark_array[upper] - landmark_array[lower], axis=1).mean()
            horiz = np.linalg.norm(landmark_array[upper[0]] - landmark_array[upper[-1]])
            return vert / (horiz + 1e-6)

        left_ear = ear(self.LEFT_EYE_UPPER, self.LEFT_EYE_LOWER)
        right_ear = ear(self.RIGHT_EYE_UPPER, self.RIGHT_EYE_LOWER)
        avg_ear = (left_ear + right_ear) / 2.0
        # Mouth calculations
        mouth_top, mouth_bottom = landmark_array[13], landmark_array[14]
        mouth_height = np.linalg.norm(mouth_top - mouth_bottom)
        mouth_left, mouth_right = landmark_array[61], landmark_array[291]
        mouth_width = np.linalg.norm(mouth_left - mouth_right)
        mouth_ar = mouth_height / (mouth_width + 1e-6)
        # Eyebrow calculations
        left_brow_y = landmark_array[self.LEFT_EYEBROW][:, 1].mean()
        right_brow_y = landmark_array[self.RIGHT_EYEBROW][:, 1].mean()
        left_eye_center = landmark_array[self.LEFT_EYE_UPPER + self.LEFT_EYE_LOWER][:, 1].mean()
        right_eye_center = landmark_array[self.RIGHT_EYE_UPPER + self.RIGHT_EYE_LOWER][:, 1].mean()
        avg_brow_h = ((left_eye_center - left_brow_y) + (right_eye_center - right_brow_y)) / 2.0
        # Check for hands
        num_hands = len(hand_res.hand_landmarks) if hand_res.hand_landmarks else 0
        hand_raised = 1.0 if num_hands > 0 else 0.0
        return {
            'eye_openness': avg_ear,
            'mouth_openness': mouth_ar,
            'eyebrow_height': avg_brow_h,
            'hand_raised': hand_raised,
            'surprise_score': avg_ear * avg_brow_h * mouth_ar,
            'smile_score': (1.0 - mouth_ar),
        }
This section might look heavily mathematical, but it’s just measuring distances!
For instance, mouth_height calculates the distance from the top lip to the bottom lip.
We bundle all these measurements into a neat little package (a Python dictionary) and return it.
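As a sanity check, here is the Eye Aspect Ratio idea replayed on a handful of made-up coordinates. The points and index lists below are invented for illustration, not real MediaPipe output, but the math is the same: average vertical lid distance divided by horizontal eye width.

```python
import numpy as np

# Hypothetical eye landmarks as normalized (x, y) pairs
landmark_array = np.array([
    [0.30, 0.40],  # upper eyelid, outer corner
    [0.35, 0.39],  # upper eyelid, middle
    [0.40, 0.40],  # upper eyelid, inner corner
    [0.30, 0.46],  # lower eyelid, outer
    [0.35, 0.47],  # lower eyelid, middle
    [0.40, 0.46],  # lower eyelid, inner
])

UPPER = [0, 1, 2]  # made-up index lists
LOWER = [3, 4, 5]

def ear(upper, lower):
    # Mean vertical distance between matching upper/lower points...
    vert = np.linalg.norm(landmark_array[upper] - landmark_array[lower], axis=1).mean()
    # ...divided by the horizontal width (first to last upper point)
    horiz = np.linalg.norm(landmark_array[upper[0]] - landmark_array[upper[-1]])
    return vert / (horiz + 1e-6)

print(round(ear(UPPER, LOWER), 3))  # → 0.667
```

A wide-open eye pushes the vertical distances (and the ratio) up; a closed eye drives the ratio toward zero, which is exactly what makes it a useful "eye openness" feature.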
Step 2: Building the Database
Now that our brain can understand expressions, we need a library to hold our memes.
class MemeLibrary:
    """
    Acts as a database for our memes.
    It has a 'has-a' relationship with ExpressionAnalyzer (dependency injection).
    """
    CACHE_FILE = "meme_features_cache.pkl"

    def __init__(self, analyzer: ExpressionAnalyzer, assets_folder: str = "assets", meme_height: int = 480):
        self.analyzer = analyzer
        self.assets_folder = assets_folder
        self.meme_height = meme_height
        self.memes = []
        self.meme_features = []
        self.feature_keys = ['surprise_score', 'smile_score', 'hand_raised', 'eye_openness', 'mouth_openness', 'eyebrow_height']
        self.feature_weights = np.array([25, 20, 25, 20, 25, 20])
        self.feature_factors = np.array([10, 10, 15, 5, 5, 5])
        self.load_memes()
    def load_memes(self):
        """Loads memes from disk or a cache file to save time."""
        if os.path.exists(self.CACHE_FILE):
            with open(self.CACHE_FILE, "rb") as f:
                self.memes, self.meme_features = pickle.load(f)
            return

        assets_path = Path(self.assets_folder)
        image_files = list(assets_path.glob("*.jpg")) + list(assets_path.glob("*.png"))

        # Analyze multiple memes at the same time
        with ThreadPoolExecutor() as executor:
            results = list(executor.map(self._process_single_meme, image_files))

        for r in results:
            if r:
                meme, features = r
                self.memes.append(meme)
                self.meme_features.append(features)

        with open(self.CACHE_FILE, "wb") as f:
            pickle.dump((self.memes, self.meme_features), f)

    def _process_single_meme(self, img_file: Path) -> tuple:
        img = cv2.imread(str(img_file))
        if img is None:
            return None

        h, w = img.shape[:2]
        scale = self.meme_height / h
        img_resized = cv2.resize(img, (int(w * scale), self.meme_height))

        features = self.analyzer.extract_features(img_resized, is_static=True)
        if features is None:
            return None

        return {'image': img_resized, 'name': img_file.stem.replace('_', ' ').title(), 'path': str(img_file)}, features

    def compute_similarity(self, features1: dict, features2: dict) -> float:
        """Mathematical formula to compare two dictionaries of facial features."""
        if features1 is None or features2 is None:
            return 0.0

        vec1 = np.array([features1.get(k, 0) for k in self.feature_keys])
        vec2 = np.array([features2.get(k, 0) for k in self.feature_keys])
        diff = np.abs(vec1 - vec2)
        similarity = np.exp(-diff * self.feature_factors)
        return float(np.sum(self.feature_weights * similarity))

    def find_best_match(self, user_features: dict) -> tuple:
        if user_features is None or not self.memes:
            return None, 0.0

        scores = np.array([self.compute_similarity(user_features, mf) for mf in self.meme_features])
        if len(scores) == 0:
            return None, 0.0

        best_idx = int(np.argmax(scores))
        return self.memes[best_idx], scores[best_idx]
The Magic of Dependency Injection
Did you notice how the __init__ method takes analyzer: ExpressionAnalyzer as an argument?
This is a concept called Dependency Injection.
Instead of the Library trying to build its own AI model, we just hand it the Brain we already built. This keeps our code completely separate and organized!
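Stripped of all the AI details, the pattern looks like this minimal sketch. The Analyzer and Library below are stand-ins invented for illustration, not the real classes:

```python
class Analyzer:
    """Stand-in for the real ExpressionAnalyzer."""
    def extract_features(self, image):
        return {'smile_score': 0.9}  # pretend AI result

class Library:
    """Stand-in for MemeLibrary: it receives its dependency ready-made."""
    def __init__(self, analyzer):
        # The dependency is injected, not constructed in here.
        # The Library never needs to know how the Analyzer was built.
        self.analyzer = analyzer

# Build the dependency once, then hand it in
lib = Library(analyzer=Analyzer())
print(lib.analyzer.extract_features(None))  # → {'smile_score': 0.9}
```

Because the Library only calls methods on whatever object it was given, you could swap in a faster analyzer (or a fake one for testing) without changing a single line of the Library.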
The find_best_match function is where the matching happens.
It takes the dictionary of your face (how wide your eyes are, etc.) and compares it to the dictionaries of all the memes.
The meme with the closest numbers wins!
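Here is the scoring math from compute_similarity replayed on two invented feature dictionaries, trimmed to two features for readability. Every number below is made up for illustration:

```python
import numpy as np

# Toy feature dictionaries: the user and one meme
user = {'surprise_score': 0.5, 'smile_score': 0.8}
meme = {'surprise_score': 0.4, 'smile_score': 0.8}

keys = ['surprise_score', 'smile_score']
weights = np.array([25, 20])   # how much each feature matters
factors = np.array([10, 10])   # how quickly a mismatch is punished

vec1 = np.array([user.get(k, 0) for k in keys])
vec2 = np.array([meme.get(k, 0) for k in keys])

diff = np.abs(vec1 - vec2)            # [0.1, 0.0]
similarity = np.exp(-diff * factors)  # exp(-1) ≈ 0.368, exp(0) = 1.0
score = float(np.sum(weights * similarity))

print(round(score, 1))  # → 29.2
```

A perfect match on both features would score 25 + 20 = 45, so the exponential turns each per-feature gap into a smooth penalty rather than an all-or-nothing comparison.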
Step 3: Building the App Controller
With our AI brain and meme database built, it’s time to bring them to life! We need an application class to turn on your webcam, capture the video, and draw the results on your screen.
class MemeMatcherApp:
    """
    The main Application class.
    It initializes the other classes and contains the main while loop.
    """
    def __init__(self, assets_folder="assets"):
        self.analyzer = ExpressionAnalyzer()
        self.library = MemeLibrary(analyzer=self.analyzer, assets_folder=assets_folder)

    def run(self):
        cap = cv2.VideoCapture(0)
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
        print("\n🎥 Camera started! Press 'q' to quit\n")

        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                break
            frame = cv2.flip(frame, 1)  # Mirror effect

            # 1. Ask the Analyzer to look at the webcam frame
            user_features = self.analyzer.extract_features(frame)

            # 2. Ask the Library to find the best matching meme
            best_meme, score = self.library.find_best_match(user_features)

            # 3. Handle the User Interface (displaying the result)
            h, w = frame.shape[:2]
            if best_meme:
                meme_img = best_meme['image']
                meme_h, meme_w = meme_img.shape[:2]
                scale = h / meme_h
                new_w = int(meme_w * scale)
                meme_resized = cv2.resize(meme_img, (new_w, h))

                display = np.zeros((h, w + new_w, 3), dtype=np.uint8)
                display[:, :w] = frame
                display[:, w:w + new_w] = meme_resized

                # Draw UI text boxes
                cv2.rectangle(display, (5, 5), (200, 45), (0, 0, 0), -1)
                cv2.putText(display, "YOU", (10, 35), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
                cv2.rectangle(display, (w + 5, 5), (w + new_w - 5, 75), (0, 0, 0), -1)
                cv2.putText(display, best_meme['name'], (w + 10, 35), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2)
            else:
                display = frame
                cv2.putText(display, "No face detected!", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

            cv2.imshow("Meme Matcher - Press Q to quit", display)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break

        cap.release()
        cv2.destroyAllWindows()
The Infinite Loop
The core of any video application is a while loop.
The application reads one picture from your webcam, asks the ExpressionAnalyzer for the features, asks the MemeLibrary for a match, glues the webcam picture and the meme picture together side-by-side using NumPy, and displays it.
Then, it repeats this instantly for the next frame!
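The gluing step itself is just NumPy slicing. Here is the same trick on two tiny dummy "images" (solid-colored arrays invented for illustration, not real frames):

```python
import numpy as np

# Two dummy images: same height (4), widths 6 and 3, 3 color channels
frame = np.full((4, 6, 3), 10, dtype=np.uint8)   # stands in for the webcam frame
meme = np.full((4, 3, 3), 200, dtype=np.uint8)   # stands in for the meme

h, w = frame.shape[:2]
new_w = meme.shape[1]

# Same approach as the app: blank canvas wide enough for both,
# then paste each image into its own column range
display = np.zeros((h, w + new_w, 3), dtype=np.uint8)
display[:, :w] = frame
display[:, w:w + new_w] = meme

print(display.shape)  # → (4, 9, 3)
```

Because both images share the same height, every row of the canvas is filled with webcam pixels on the left and meme pixels on the right, which is exactly the side-by-side view the app shows.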
Step 4: Putting it All Together
Finally, we just need to start the application. At the very bottom of your file, add the entry point:
if __name__ == "__main__":
    print("Meme Matcher Starting...\n")
    # Create the application object and run it
    app = MemeMatcherApp(assets_folder="assets")
    app.run()
Conclusion
Congratulations! You have just built a complex Artificial Intelligence application using advanced Computer Vision techniques.
More importantly, you built it the right way. By structuring your code using Object-Oriented Programming, your project is scalable. Want to add a Graphical User Interface (GUI) with buttons later? You don’t have to touch the math inside the Brain or the Database; you only have to modify the App class.
To see the real magic, download a few distinct meme images, put them in an assets folder next to your script, and run it.
Try raising your eyebrows, opening your mouth wide, or throwing up a peace sign.
Happy coding!
Check out all our books that you can read for free from this page https://10xdev.blog/library
Real Python
The Real Python Podcast – Episode #290: Advice on Managing Projects & Making Python Classes Friendly
What goes into managing a major project? What techniques can you employ for a project that's in crisis? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Exploring Protocols in Python
In this quiz, you’ll test your understanding of Exploring Protocols in Python.
The questions review Python protocols, how they define required methods and attributes, and how static type checkers use them. You’ll also explore structural subtyping, generic protocols, and subprotocols.
This quiz helps you confirm the concepts covered in the course and shows you where to focus further study. If you want to review the material, the course covers these topics in depth at the link above.
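For a quick taste of the structural subtyping the quiz covers, here is a minimal sketch using typing.Protocol (the class names are invented for illustration). English never inherits from Greeter, yet a static type checker accepts it because its structure matches:

```python
from typing import Protocol

class Greeter(Protocol):
    # Any class with a matching greet() method satisfies this protocol
    def greet(self) -> str: ...

class English:
    # Note: no inheritance from Greeter; the structure alone is enough
    def greet(self) -> str:
        return "hello"

def welcome(g: Greeter) -> str:
    return g.greet().upper()

print(welcome(English()))  # → HELLO
```

This is the core idea behind protocols: type compatibility is decided by what an object can do, not by what it inherits from.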
