
Planet Python

Last update: April 16, 2026 04:45 PM UTC

April 16, 2026


Real Python

Quiz: Welcome to Real Python!

Get a tour of Real Python, find resources for your skill level, and learn how to use the community to study effectively.

April 16, 2026 12:00 PM UTC

Learning Path: Python Game Development

Build Python games from command-line projects to 2D graphical games with turtle, Tkinter, Pygame, and Arcade.

April 16, 2026 12:00 PM UTC


death and gravity

Learn Python object-oriented programming with Raymond Hettinger

Even if you haven't heard of Raymond Hettinger, you've definitely used his work: bangers such as sorted(), collections, and many others. His talks had a huge impact on my development as a software engineer, are some of the best I've heard, and are the reason *you should not be afraid of inheritance* anymore.

April 16, 2026 09:00 AM UTC

April 15, 2026


Django Weblog

Django Has Adopted Contributor Covenant 3

We’re excited to announce that Django has officially adopted Contributor Covenant 3 as our new Code of Conduct! This milestone represents the completion of a careful, community-driven process that began earlier this year.

What We’ve Accomplished

Back in February, we announced our plan to adopt Contributor Covenant 3 through a transparent, multi-step process. Today, we’re proud to share that we’ve completed all three steps:

Step 1 (Completed February 2026): Established a community-driven process for proposing and reviewing changes to our Code of Conduct.

Step 2 (Completed March 2026): Updated our Enforcement Manual, Reporting Guidelines, and FAQs to align with Contributor Covenant 3 and incorporate lessons learned from our working group’s experience.

Step 3 (Completed April 2026): Adopted the Contributor Covenant 3 with Django-specific enhancements.

Why Contributor Covenant 3?

Contributor Covenant 3 represents a significant evolution in community standards, incorporating years of experience from communities around the world. The new version:

By adopting this widely-used standard, Django joins a global community of projects committed to fostering welcoming, inclusive spaces for everyone.

What’s New in Django’s Code of Conduct

While we’ve adopted Contributor Covenant 3 as our foundation, we’ve also made Django-specific enhancements:

You can view the complete changelog of changes at our Code of Conduct repository.

Community-Driven Process

This adoption represents months of collaborative work. The Code of Conduct Working Group reviewed community feedback, consulted with the DSF Board, and incorporated insights from our enforcement experience. Each step was completed through pull requests that were open for community review and discussion.

We’re grateful to everyone who participated in this process—whether by opening issues, commenting on pull requests, joining forum discussions, or simply taking the time to review and understand the changes.

Where to Find Everything

All of our Code of Conduct documentation is available on both djangoproject.com and our GitHub repository:

How You Can Continue to Help

The Code of Conduct is a living document that will continue to evolve with our community’s needs:

Thank You

Creating a truly welcoming and inclusive community is ongoing work that requires participation from all of us. Thank you for being part of Django’s community and for your commitment to making it a safe, respectful space where everyone can contribute and thrive.

If you have questions about the new Code of Conduct or our processes, please don’t hesitate to reach out to the Code of Conduct Working Group at conduct@djangoproject.com.


Posted by Dan Ryan on behalf of the Django Code of Conduct Working Group

April 15, 2026 08:49 PM UTC


Real Python

Variables in Python: Usage and Best Practices

Explore Python variables from creation to best practices, covering naming conventions, dynamic typing, variable scope, and type hints with examples.

April 15, 2026 02:00 PM UTC


PyCon

Introducing the 7 Companies on Startup Row at PyCon US 2026

April 15, 2026 01:03 PM UTC


Real Python

Quiz: Design and Guidance: Object-Oriented Programming in Python

Test your understanding of the SOLID design principles for writing cleaner, more maintainable object-oriented Python code.

April 15, 2026 12:00 PM UTC

April 14, 2026


PyCoder’s Weekly

Issue #730: Typing Django, Dictionaries, pandas vs Polars, and More (April 14, 2026)

April 14, 2026 07:30 PM UTC


Real Python

Vector Databases and Embeddings With ChromaDB

Learn how to use ChromaDB, an open-source vector database, to store embeddings and give context to large language models in Python.

April 14, 2026 02:00 PM UTC

Quiz: Explore Your Dataset With pandas

Test your pandas fundamentals: core structures, indexing, filtering, grouping, dtypes, and combining DataFrames.

April 14, 2026 12:00 PM UTC

Quiz: Altair: Declarative Charts With Python

Test your knowledge of Altair, the declarative data visualization library for Python that turns DataFrames into interactive charts.

April 14, 2026 12:00 PM UTC

Quiz: Vector Databases and Embeddings With ChromaDB

Test your knowledge of vector databases and ChromaDB, from cosine similarity and embeddings to querying collections and RAG.

April 14, 2026 12:00 PM UTC


Python Software Foundation

PyCon US 2026: Why we're asking you to think about your hotel reservation

April 14, 2026 10:13 AM UTC


Seth Michael Larson

Add Animal Crossing events to your digital calendar

April 14, 2026 12:00 AM UTC

April 13, 2026


Python Software Foundation

Reflecting on Five Years as the PSF’s First CPython Developer in Residence

April 13, 2026 02:01 PM UTC


Real Python

How to Add Features to a Python Project With Codex CLI

Learn how to use Codex CLI to add features to Python projects via the terminal. Master AI-powered coding without needing a browser or IDE plugins.

April 13, 2026 02:00 PM UTC


PyCon

How to Build Your PyCon US 2026 Schedule

April 13, 2026 12:21 PM UTC


Real Python

Quiz: Gemini CLI vs Claude Code: Which to Choose for Python Tasks

Compare Gemini CLI and Claude Code across user experience, performance, code quality, and cost to find the right AI coding tool for you.

April 13, 2026 12:00 PM UTC

Quiz: Python Continuous Integration and Deployment Using GitHub Actions

Practice essential GitHub Actions concepts, from workflow file locations to triggers and common CI/CD tasks, with this hands-on quiz.

April 13, 2026 12:00 PM UTC

April 12, 2026


Ned Batchelder

Linklint

I wrote a Sphinx extension to eliminate excessive links: linklint. It started as a linter to check and modify .rst files, but it grew into a Sphinx extension that works without changing the source files.

It all started with a topic in the discussion forums: Should not underline links, which argued that the underlining was distracting from the text. Of course we did not remove underlines; they are important for accessibility and for seeing that there are links at all.

But I agreed that there were places in the docs that had too many links. In particular, there are two kinds of link that are excessive:

Linklint is a Sphinx extension that suppresses these two kinds of links during the build process. It examines the doctree (the abstract syntax tree of the documentation) and finds and modifies references matching our criteria for excessiveness. It’s running now in the CPython documentation, where it suppressed 3612 links. Nice.

I had another idea for a kind of link to suppress: “obvious” references. For example, I don’t think it’s useful to link every instance of “str” to the str() constructor. Is there anyone who needs that link because they don’t know what “str” means? And if they don’t know, is that the right place to take them?

There are three problems with that idea: first, not everyone agrees that “obvious” links should be suppressed at all. Second, even among those who do, people won’t agree on what is obvious. Sure, int and str. But what about list, dict, set? Third, there are some places where a link to str() needs to be kept, like “See str() for details.” Sphinx has a syntax for references to suppress the link, but there’s no syntax to force a link when linklint wants to suppress it.

So linklint doesn’t suppress obvious links. Maybe we can do it in the future once there’s been some more thought about it.

In the meantime, linklint is working to stop many excessive links. It was a small project that turned out much better than I expected when I started on it. A Sphinx extension is a really powerful way to adjust or enhance documentation without causing churn in the .rst source files. Sphinx itself can be complex and mysterious, but with a skilled code reading assistant, I was able to build this utility and improve the documentation.

April 12, 2026 06:22 PM UTC

April 11, 2026


Rodrigo Girão Serrão

Personal highlights of PyCon Lithuania 2026

In this article I share my personal highlights of PyCon Lithuania 2026.

Shout out to the organisers and volunteers

This was my second time at PyCon Lithuania and, for the second time in a row, I leave with the impression that everything was very well organised and smooth. Maybe the organisers and volunteers were stressed out all the time — organising a conference is never easy — but everything looked under control all the time and well thought-through.

Thank you for an amazing experience!

And by the way, congratulations on 15 years of PyCon Lithuania. To celebrate, they even served a gigantic cake during the first networking event. The cake was at least 80cm by 30cm:

A picture of a large rectangular cake with the PyCon Lithuania logo in the middle. The PyCon Lithuania cake.

I'll be honest with you: I didn't expect the cake to be good. The quality of food tends to degrade when it's cooked at a large scale... But even the taste was great and the cake had three coloured layers in yellow, green, and red.

Social activities

The organisers prepared two networking events, a speakers' dinner, and three city tours (one per evening) for speakers. There was always something for you to do.

The city tour is a brilliant idea and I wonder why more conferences don't do it:

I had taken the city tour last time I had been at PyCon Lithuania and taking it again was not a mistake. Here's our group at the end of the tour, immediately before the speakers' dinner:

Some PyCon Lithuania speakers smile at the camera in front of Gediminas's castle. Some PyCon Lithuania speakers at the city tour.

The conference organisers even made sure that the city tour ended close to the location of the speakers' dinner and that the tour ended at the same time as the dinner started. Another small detail that was carefully planned.

The atmosphere of the restaurant was very pleasant and the staff there was helpful and kind, so we had a wonderful night. At some point, at our table, we noticed that the folks at the other two tables were projecting something on a big screen. There was a large curtain that partially separated our table from the other two, so we took some time to realise that an impromptu Python quiz was about to take place.

I'm (way too) competitive and immediately got up to play. After six questions, which included learning about the existence of the web framework Falcon and correctly reordering the first four sentences of the Zen of Python, I was crowned the winner:

A slanted picture of a blue screen showing the player RGS at the top of the quiz podium. The final score for the quiz.

The top three players got a free spin on the PyCon Lithuania wheel of fortune.

Egg hunt and swag

On each day of the conference there was an egg hunt running...

April 11, 2026 12:23 PM UTC


Armin Ronacher

The Center Has a Bias

April 11, 2026 12:00 AM UTC

April 10, 2026


Talk Python to Me

#544: Wheel Next + Packaging PEPs

When you pip install a package with compiled code, the wheel you get is built for CPU features from 2009. Want newer optimizations like AVX2? Your installer has no way to ask for them. GPU support? You're on your own configuring special index URLs. The result is fat binaries, nearly gigabyte-sized wheels, and install pages that read like puzzle books. A coalition from NVIDIA, Astral, and Quansight has been working on Wheel Next: a set of PEPs that let packages declare what hardware they need and let installers like uv pick the right build automatically. Just uv pip install torch and it works. I sit down with Jonathan Dekhtiar from NVIDIA, Ralf Gommers from Quansight and the NumPy and SciPy teams, and Charlie Marsh, founder of Astral and creator of uv, to dig into all of it.

April 10, 2026 04:56 PM UTC


PyCharm

How (Not) to Learn Python

While listening to Mark Smith’s inspirational talk for Python Unplugged on PyTV about How to Learn Python, what caught my attention was that Mark suggested turning off some of PyCharm’s AI features to help you learn Python more effectively. As a PyCharm user myself, I’ve found the AI-powered features beneficial in my day-to-day work; however, […]

April 10, 2026 02:21 PM UTC


Ahmed Bouchefra

Build Your Own AI Meme Matcher: A Beginner's Guide to Computer Vision with Python

Have you ever wondered how Snapchat filters know exactly where your eyes and mouth are? Or how your phone can unlock just by looking at your face? The magic behind this is called Computer Vision, a field of Artificial Intelligence that allows computers to “see” and understand digital images.

Today, we are going to build something incredibly fun using Computer Vision: a Real-Time Meme Matcher.

Point your webcam at yourself, make a shocked face, and watch as the app instantly matches you with the “Overly Attached Girlfriend” meme. Smile and raise your hand, and Leonardo DiCaprio raises a glass right back at you.

But this isn’t just a fun project. We are going to build this using Object-Oriented Programming (OOP). OOP is a professional coding style that makes your code clean, organized, and easy to upgrade. By the end of this tutorial, you will have a working AI app and a solid understanding of how professional software is structured.

Let’s dive in!

Prerequisites

Before we start coding, make sure you have the following ready:

You will also need to install a few Python libraries. Open your terminal or command prompt and run:

pip install mediapipe opencv-python numpy

The Theory: How Does It Work?

Before we look at the code, let’s understand the two main concepts powering our application: Computer Vision (Facial Landmarks) and Object-Oriented Programming.

1. Facial Landmarks (How the AI “Sees” You)

We are using a Google library called MediaPipe. When you feed an image to MediaPipe, it places a virtual “mesh” of 478 invisible dots (called landmarks) over your face.

To figure out your expression, we use simple math. For example, how do we know if your mouth is open in surprise?

We measure the vertical distance between the dot on your top lip and the dot on your bottom lip.

If the distance is large, your mouth is open! We do the same for your eyes and eyebrows to calculate “scores” for surprise, smiling, or concern.
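To make the idea concrete, here is a toy sketch of that distance check, using made-up normalized coordinates rather than real MediaPipe output:

```python
import numpy as np

# Made-up normalized (x, y) positions for two of the 478 landmarks.
top_lip = np.array([0.50, 0.55])
bottom_lip = np.array([0.50, 0.62])

# The vertical gap between the dots is just a Euclidean distance.
mouth_gap = np.linalg.norm(top_lip - bottom_lip)

# A threshold turns the raw distance into "open" vs "closed".
OPEN_THRESHOLD = 0.05
mouth_is_open = bool(mouth_gap > OPEN_THRESHOLD)
```

The real code we build below does exactly this, just with more landmark pairs and a normalizing denominator.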

2. Object-Oriented Programming (OOP)

Instead of writing one massive, confusing block of code, OOP allows us to break our program into separate components called Classes.

Think of a Class as a blueprint.

For our Meme Matcher, we will create three distinct classes, each with a “Single Responsibility” (a golden rule of coding):

  1. ExpressionAnalyzer (The Brain): Handles the AI math and MediaPipe.
  2. MemeLibrary (The Database): Loads the images and compares the user’s face to the memes.
  3. MemeMatcherApp (The UI): Opens the webcam and draws the pictures on the screen.
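If classes are new to you, here is the blueprint idea in miniature (a throwaway example, unrelated to the app itself):

```python
class Greeter:
    """A blueprint: every Greeter built from it carries its own name."""

    def __init__(self, name):
        self.name = name  # runs automatically when a Greeter is created

    def greet(self):
        return f"Hello, {self.name}!"

# Two separate objects stamped from the same blueprint.
a = Greeter("Ada")
b = Greeter("Guido")
```

Each object gets its own data but shares the same behavior, which is exactly how our three classes below will work.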

Step 1: Building the Brain

Let’s start by creating the class that does all the heavy lifting. Create a file named meme_matcher.py and import the necessary tools. Then, we will define our first class.

import cv2
import numpy as np
import mediapipe as mp
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor
import pickle
import os
import subprocess

class ExpressionAnalyzer:
    """
    The ExpressionAnalyzer class acts as the 'Brain' of our project.
    It encapsulates (hides away) the complex MediaPipe machine learning logic.
    """
    
    # Class Variables: Landmark indices for eyes, eyebrows, and mouth
    LEFT_EYE_UPPER = [159, 145, 158]
    LEFT_EYE_LOWER = [23, 27, 133]
    RIGHT_EYE_UPPER = [386, 374, 385]
    RIGHT_EYE_LOWER = [253, 257, 362]
    LEFT_EYEBROW = [70, 63, 105, 66, 107]
    RIGHT_EYEBROW = [300, 293, 334, 296, 336]
    MOUTH_OUTER = [61, 291, 39, 181, 0, 17, 269, 405]
    MOUTH_INNER = [78, 308, 95, 88]
    NOSE_TIP = 4

    def __init__(self, frame_skip: int = 2):
        self.last_features = None  
        self.frame_counter = 0     
        self.frame_skip = frame_skip 

        # Download the required AI models automatically
        self.face_model_path = self._download_model(
            "face_landmarker.task",
            "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task"
        )
        self.hand_model_path = self._download_model(
            "hand_landmarker.task",
            "https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task"
        )

        # Initialize MediaPipe objects for both video and images
        
        self.face_mesh_video = self._init_face_landmarker(video_mode=True)
        self.hand_detector_video = self._init_hand_landmarker(video_mode=True)
        self.face_mesh_image = self._init_face_landmarker(video_mode=False)
        self.hand_detector_image = self._init_hand_landmarker(video_mode=False)

Understanding the Brain

In the code above, we define lists of numbers like LEFT_EYE_UPPER. These are the exact dot numbers (out of the 478) that outline the eye.

The __init__ method is a special function called a constructor. Whenever we create an ExpressionAnalyzer, this code runs automatically to set everything up. It downloads the MediaPipe AI models from Google’s servers and loads them into memory so they are ready to process faces.
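The tutorial doesn't show the body of _download_model itself, but a plausible standalone sketch (the names and behavior here are assumptions, not the original code) looks like this:

```python
import os
import urllib.request

def download_model(filename: str, url: str) -> str:
    """Hypothetical stand-in for the _download_model helper:
    fetch the model file once, then reuse it on later runs."""
    if not os.path.exists(filename):
        print(f"Downloading {filename}...")
        urllib.request.urlretrieve(url, filename)
    return filename
```

Inside the class it would be a method taking self, but the logic is the same: the existence check means the model files are only downloaded the first time you run the app.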

Next, we add the logic to extract features:

    # ... (Add this inside the ExpressionAnalyzer class) ...

    def extract_features(self, image: np.ndarray, is_static: bool = False) -> dict:
        """Analyzes an image and returns facial/hand features as a dictionary."""
        
        face_landmarker = self.face_mesh_image if is_static else self.face_mesh_video
        
        hand_landmarker = self.hand_detector_image if is_static else self.hand_detector_video

        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        
        mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb)

        if is_static:
            face_res = face_landmarker.detect(mp_image)
            hand_res = hand_landmarker.detect(mp_image)
        else:
            self.frame_counter += 1
            if self.frame_counter % self.frame_skip != 0:
                return self.last_features  # reuse the previous result on skipped frames
            
            face_res = face_landmarker.detect_for_video(mp_image, self.frame_counter)
            hand_res = hand_landmarker.detect_for_video(mp_image, self.frame_counter)

        if not face_res.face_landmarks:
            return None

        landmarks = face_res.face_landmarks[0]
        landmark_array = np.array([[l.x, l.y] for l in landmarks])
        
        # Calculate the mathematical features
        features = self._compute_features(landmark_array, hand_res)
        self.last_features = features
        return features

    def _compute_features(self, landmark_array: np.ndarray, hand_res) -> dict:
        """Helper function to calculate Eye Aspect Ratio (How open the eye is)"""
        
        def ear(upper, lower):
            vert = np.linalg.norm(landmark_array[upper] - landmark_array[lower], axis=1).mean()
            horiz = np.linalg.norm(landmark_array[upper[0]] - landmark_array[upper[-1]])
            return vert / (horiz + 1e-6) 

        left_ear = ear(self.LEFT_EYE_UPPER, self.LEFT_EYE_LOWER)
        right_ear = ear(self.RIGHT_EYE_UPPER, self.RIGHT_EYE_LOWER)
        avg_ear = (left_ear + right_ear) / 2.0

        # Mouth calculations
        
        mouth_top, mouth_bottom = landmark_array[13], landmark_array[14]
        mouth_height = np.linalg.norm(mouth_top - mouth_bottom)
        mouth_left, mouth_right = landmark_array[61], landmark_array[291]
        mouth_width = np.linalg.norm(mouth_left - mouth_right)
        mouth_ar = mouth_height / (mouth_width + 1e-6)

        # Eyebrow calculations
        
        left_brow_y = landmark_array[self.LEFT_EYEBROW][:, 1].mean()
        right_brow_y = landmark_array[self.RIGHT_EYEBROW][:, 1].mean()
        left_eye_center = landmark_array[self.LEFT_EYE_UPPER + self.LEFT_EYE_LOWER][:, 1].mean()
        right_eye_center = landmark_array[self.RIGHT_EYE_UPPER + self.RIGHT_EYE_LOWER][:, 1].mean()
        
        avg_brow_h = ((left_eye_center - left_brow_y) + (right_eye_center - right_brow_y)) / 2.0

        # Check for hands
        
        num_hands = len(hand_res.hand_landmarks) if hand_res.hand_landmarks else 0
        hand_raised = 1.0 if num_hands > 0 else 0.0

        return {
            'eye_openness': avg_ear,
            'mouth_openness': mouth_ar,
            'eyebrow_height': avg_brow_h,
            'hand_raised': hand_raised,
            'surprise_score': avg_ear * avg_brow_h * mouth_ar,
            'smile_score': (1.0 - mouth_ar),
        }

This section might look heavily mathematical, but it’s just measuring distances! For instance, mouth_height calculates the distance from the top lip to the bottom lip. We bundle all these measurements into a neat little package (a Python dictionary) and return it.
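One detail worth noticing: the ratios divide a vertical distance by a horizontal one so the score doesn't change when you move closer to the camera. A toy illustration with invented coordinates:

```python
import numpy as np

def mouth_aspect_ratio(top, bottom, left, right):
    """Height divided by width: the ratio stays the same
    no matter how close the face is to the camera."""
    height = np.linalg.norm(np.asarray(top) - np.asarray(bottom))
    width = np.linalg.norm(np.asarray(left) - np.asarray(right))
    return height / (width + 1e-6)

# The same open mouth seen close up...
near = mouth_aspect_ratio((0.5, 0.40), (0.5, 0.60), (0.3, 0.5), (0.7, 0.5))
# ...and from farther away (every distance half the size).
far = mouth_aspect_ratio((0.5, 0.45), (0.5, 0.55), (0.4, 0.5), (0.6, 0.5))
```

Both calls come out at roughly 0.5, which is why the features compare fairly across webcam frames and meme images of different sizes.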


Step 2: Building the Database

Now that our brain can understand expressions, we need a library to hold our memes.

class MemeLibrary:
    """
    Acts as a database for our memes. 
    It 'has-a' relationship with ExpressionAnalyzer (Dependency Injection).
    """
    
    CACHE_FILE = "meme_features_cache.pkl"

    def __init__(self, analyzer: ExpressionAnalyzer, assets_folder: str = "assets", meme_height: int = 480):
        self.analyzer = analyzer 
        self.assets_folder = assets_folder
        self.meme_height = meme_height

        self.memes = []
        self.meme_features = []

        self.feature_keys = ['surprise_score', 'smile_score', 'hand_raised', 'eye_openness', 'mouth_openness', 'eyebrow_height']
        self.feature_weights = np.array([25, 20, 25, 20, 25, 20])
        self.feature_factors = np.array([10, 10, 15, 5, 5, 5])

        self.load_memes()

    def load_memes(self):
        """Loads memes from disk or a cache file to save time."""
        if os.path.exists(self.CACHE_FILE):
            with open(self.CACHE_FILE, "rb") as f:
                self.memes, self.meme_features = pickle.load(f)
            return

        assets_path = Path(self.assets_folder)
        image_files = list(assets_path.glob("*.jpg")) + list(assets_path.glob("*.png"))

        # Analyze multiple memes at the same time
        with ThreadPoolExecutor() as executor:
            results = list(executor.map(self._process_single_meme, image_files))

        for r in results:
            if r:
                meme, features = r
                self.memes.append(meme)
                self.meme_features.append(features)

        with open(self.CACHE_FILE, "wb") as f:
            pickle.dump((self.memes, self.meme_features), f)

    def _process_single_meme(self, img_file: Path) -> tuple:
        img = cv2.imread(str(img_file))
        if img is None: return None
        
        h, w = img.shape[:2]
        scale = self.meme_height / h
        img_resized = cv2.resize(img, (int(w * scale), self.meme_height))
        
        features = self.analyzer.extract_features(img_resized, is_static=True)
        if features is None: return None
            
        return {'image': img_resized, 'name': img_file.stem.replace('_', ' ').title(), 'path': str(img_file)}, features

    def compute_similarity(self, features1: dict, features2: dict) -> float:
        """Mathematical formula to compare two dictionaries of facial features."""
        if features1 is None or features2 is None: return 0.0
        
        vec1 = np.array([features1.get(k, 0) for k in self.feature_keys])
        vec2 = np.array([features2.get(k, 0) for k in self.feature_keys])
        
        diff = np.abs(vec1 - vec2)
        similarity = np.exp(-diff * self.feature_factors)
        return float(np.sum(self.feature_weights * similarity))

    def find_best_match(self, user_features: dict) -> tuple:
        if user_features is None or not self.memes: return None, 0.0
            
        scores = np.array([self.compute_similarity(user_features, mf) for mf in self.meme_features])
        if len(scores) == 0: return None, 0.0
            
        best_idx = int(np.argmax(scores)) 
        return self.memes[best_idx], scores[best_idx]

The Magic of Dependency Injection

Did you notice how the __init__ method takes analyzer: ExpressionAnalyzer as an argument?

This is a concept called Dependency Injection.

Instead of the Library trying to build its own AI model, we just hand it the Brain we already built. This keeps our code completely separate and organized!
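Here is the same pattern in miniature, with hypothetical names:

```python
class Engine:
    def start(self):
        return "vroom"

class Car:
    def __init__(self, engine):
        # The Engine is handed in from outside (injected),
        # not constructed inside Car itself.
        self.engine = engine

car = Car(Engine())
```

Because Car never builds its own Engine, you can hand it a different one (say, a fake for testing) without touching Car's code, just as MemeLibrary could be given a different analyzer.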

The find_best_match function is where the matching happens. It takes the dictionary of your face (how wide your eyes are, etc.) and compares it to the dictionaries of all the memes. The meme with the closest numbers wins!
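To see how the scoring formula behaves, here is a rough illustration with toy numbers and three features instead of six:

```python
import numpy as np

weights = np.array([25, 20, 25])   # how much each feature matters
factors = np.array([10, 10, 15])   # how sharply a mismatch is punished

def score(user, meme):
    diff = np.abs(np.asarray(user, float) - np.asarray(meme, float))
    # exp(-diff * factor) is 1.0 for a perfect match and
    # falls off quickly as the features drift apart.
    return float(np.sum(weights * np.exp(-diff * factors)))

you = [0.30, 0.80, 1.0]
close_meme = [0.28, 0.75, 1.0]   # almost the same expression
far_meme = [0.05, 0.20, 0.0]     # very different expression
```

A perfect match scores the sum of the weights (70 here), and the closer meme always beats the distant one, which is all find_best_match needs to pick a winner.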


Step 3: Building the App Controller

With our AI brain and meme database built, it’s time to bring them to life! We need an application class to turn on your webcam, capture the video, and draw the results on your screen.

class MemeMatcherApp:
    """
    The main Application class. 
    It initializes the other classes and contains the main while loop.
    """
    
    def __init__(self, assets_folder="assets"):
        self.analyzer = ExpressionAnalyzer()
        self.library = MemeLibrary(analyzer=self.analyzer, assets_folder=assets_folder)

    def run(self):
        cap = cv2.VideoCapture(0)
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

        print("\n🎥 Camera started! Press 'q' to quit\n")

        while cap.isOpened():
            ret, frame = cap.read()
            if not ret: break
            frame = cv2.flip(frame, 1) # Mirror effect

            # 1. Ask the Analyzer to look at the webcam frame
            user_features = self.analyzer.extract_features(frame)
            
            # 2. Ask the Library to find the best matching meme
            best_meme, score = self.library.find_best_match(user_features)

            # 3. Handle the User Interface (Displaying the result)
            h, w = frame.shape[:2]
            
            if best_meme:
                meme_img = best_meme['image']
                meme_h, meme_w = meme_img.shape[:2]
                
                scale = h / meme_h
                new_w = int(meme_w * scale)
                meme_resized = cv2.resize(meme_img, (new_w, h))

                display = np.zeros((h, w + new_w, 3), dtype=np.uint8)
                display[:, :w] = frame               
                display[:, w:w + new_w] = meme_resized 

                # Draw UI Text boxes
                cv2.rectangle(display, (5, 5), (200, 45), (0, 0, 0), -1)
                cv2.putText(display, "YOU", (10, 35), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
                
                cv2.rectangle(display, (w + 5, 5), (w + new_w - 5, 75), (0, 0, 0), -1)
                cv2.putText(display, best_meme['name'], (w + 10, 35), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2)
            else:
                display = frame
                cv2.putText(display, "No face detected!", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

            cv2.imshow("Meme Matcher - Press Q to quit", display)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break

        cap.release()
        cv2.destroyAllWindows()

The Infinite Loop

The core of any video application is a while loop. The application reads one picture from your webcam, asks the ExpressionAnalyzer for the features, asks the MemeLibrary for a match, glues the webcam picture and the meme picture together side-by-side using NumPy, and displays it. Then, it repeats this instantly for the next frame!
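The side-by-side gluing step can be sketched with plain NumPy, using tiny made-up arrays in place of real images:

```python
import numpy as np

# Tiny stand-ins: a 4x3-pixel "webcam frame" and a 4x2-pixel "meme".
frame = np.zeros((4, 3, 3), dtype=np.uint8)      # all black
meme = np.full((4, 2, 3), 255, dtype=np.uint8)   # all white

# Allocate one canvas wide enough for both, then paste side by side.
h, w = frame.shape[:2]
canvas = np.zeros((h, w + meme.shape[1], 3), dtype=np.uint8)
canvas[:, :w] = frame
canvas[:, w:] = meme
```

np.hstack((frame, meme)) would do the same in one call; the app resizes the meme first so the heights match, just as this sketch assumes.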


Step 4: Putting it All Together

Finally, we just need to start the application. At the very bottom of your file, add the entry point:

if __name__ == "__main__":
    print("Meme Matcher Starting...\n")
    # Create the application object and run it
    app = MemeMatcherApp(assets_folder="assets")
    app.run()

Conclusion

Congratulations! You have just built a complex Artificial Intelligence application using advanced Computer Vision techniques.

More importantly, you built it the right way. By structuring your code using Object-Oriented Programming, your project is scalable. Want to add a Graphical User Interface (GUI) with buttons later? You don’t have to touch the math inside the Brain or the Database; you only have to modify the App class.

To see the real magic, download a few distinct meme images, put them in an assets folder next to your script, and run it. Try raising your eyebrows, opening your mouth wide, or throwing up a peace sign.

Happy coding!

Check out all our books, which you can read for free at https://10xdev.blog/library

April 10, 2026 12:00 PM UTC