Planet Python
Last update: February 16, 2026 07:45 PM UTC
February 16, 2026
PyBites
We’re launching 60 Rust Exercises Designed for Python Devs
“Rust is too hard.”
We hear it all the time from Python developers.
But after building 60 Rust exercises specifically designed for Pythonistas, we’ve come to a clear conclusion: Rust isn’t harder than Python per se, it’s just a different challenge.
And with the right bridges, you can learn it faster than you think.
Why We Built This
Most Rust learning resources start from zero. They assume you’ve never seen a programming language before, or they assume you’re coming from C++.
Neither fits the Python developer who already knows how to think in code but needs to learn Rust’s ownership model, type system, and borrow checker.
We took a different approach: you already know the pattern, here’s how Rust does it.
Every exercise starts with the Python concept you’re familiar with — list comprehensions, context managers, __str__, defaultdict — and shows you the Rust equivalent.
No starting from scratch. No wasted time on concepts you already understand.
What’s Inside
60 exercises across 10 tracks:
- Intro (15 exercises) — variables, types, control flow, enums, pattern matching
- Ownership (7) — move semantics, borrowing, the borrow checker
- Traits & Generics (8) — Debug, Display, generic functions and structs
- Iterators & Closures (8) — closures, iterator basics, map/filter, chaining
- Error Handling (4) — Result, Option, the ? operator
- Strings (5) — String vs &str, slicing, UTF-8
- Collections (5) — Vec, HashMap, the entry API
- Modules (4) — module system, visibility, re-exports
- Algorithms (4) — recursion, sorting, classic problems in Rust
Each exercise has a teaching description with Python comparisons, a starter template, and a full test suite that validates your solution.
The Python → Rust Map
Every exercise bridges a concept you already know:
| You know this in Python | You’ll learn this in Rust | Track |
|---|---|---|
| __str__ / __repr__ | Display / Debug traits | Traits & Generics |
| defaultdict, Counter | HashMap entry API | Collections |
| list comprehensions | .map().filter().collect() | Iterators & Closures |
| try / except | Result<T, E> + ? operator | Error Handling |
| with context managers | RAII + ownership | Ownership |
| lambda | closures (\|x\| x + 1) | Iterators & Closures |
| Optional / None checks | Option<T> + combinators | Error Handling |
| import / from x import y | mod / use | Modules |
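To make a couple of those rows concrete, here is the Python side of two bridges from the table, as a quick illustrative sketch (the names and data are ours, not taken from the exercises):

from collections import defaultdict

# defaultdict fills in missing keys on first access; the exercises map this
# pattern onto Rust's HashMap entry API.
word_counts = defaultdict(int)
for word in ["spam", "eggs", "spam"]:
    word_counts[word] += 1

# Optional values handled with explicit None checks; the exercises map this
# onto Option<T> and its combinators.
def find_user(user_id, users):
    return users.get(user_id)  # returns None when the key is missing

name = find_user(42, {1: "ada"}) or "anonymous"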
What the Bridges Look Like
Here’s a taste. When teaching functions, we start with what you already know:
def area(width: int, height: int) -> int:
    return width * height

Then have you convert it into Rust:

fn area(width: i32, height: i32) -> i32 {
    width * height
}

def becomes fn. Type hints become required. And the last expression — without a semicolon — is the return value. No return needed.
Add a semicolon by accident? The compiler catches it instantly. That’s your first lesson in how Rust turns runtime surprises into compile-time errors.
Or take branching. In Python, if is a statement — it does things. In Rust, if is an expression — it returns things:
Python:
if celsius >= 30:
    label = "Hot"
elif celsius >= 15:
    label = "Mild"
else:
    label = "Cold"

Rust:

let label = if celsius >= 30 {
    "Hot"
} else if celsius >= 15 {
    "Mild"
} else {
    "Cold"
};
Same logic, but now the result goes straight into label. No ternary operator needed — if itself returns a value.
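For comparison, the closest Python gets to an expression-style if is a chained conditional expression (our own illustration, not from the exercises):

label = "Hot" if celsius >= 30 else "Mild" if celsius >= 15 else "Cold"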
You’ll learn the Rust language bit by bit, and we hope that by making it more relatable to your Python knowledge, it will stick faster.
Write, Test, Learn — All in the Browser
No local Rust installation needed. Each exercise gives you a split-screen editor: the teaching description with Python comparisons on the left, a code editor with your starter template on the right (switched to dark mode):
Write your solution, hit Run Tests, and get instant feedback from the compiler and test suite:
Errors show you exactly what went wrong. Iterate until all tests pass — then check the solution to see if there is anything you can do in a different or more idiomatic way.
Mirroring our Python coding platform, code persists automatically, so you can pick up where you left off. And as you solve exercises, you earn points and progress through ninja belts. 
Why Learn Rust in 2026
Three reasons Python developers should care:
Career. Rust has been the most admired language for 8 years running in Stack Overflow surveys. AWS, Microsoft, Google, Discord, and Cloudflare are all investing heavily in Rust. The demand is real and growing.
Ecosystem. Python + Rust is becoming the standard stack for performance-critical Python. The tools you already use — pydantic, ruff, uv, cryptography — are Rust under the hood. Understanding Rust means understanding the layer beneath your Python.
Becoming a better developer. Learning Rust’s ownership model changes how you think about code. You start reasoning about data flow, memory, and error handling more carefully — and that makes your Python better too. It’s one of the best investments you can make in your craft.
Beyond Exercises: The Cohort
If you want to go deeper, our Rust Developer Cohort takes these concepts and applies them to a real project: building a JSON parser from scratch over 6 weeks. You’ll go from tokenizing strings to recursive descent parsing, with PyO3 integration to call your Rust parser from Python.
The exercises are the foundation. The cohort is where you learn app development end-to-end, building something real.
How Developers Experience The Platform
“Who said learning Rust is gonna be difficult? Had tons of fun learning Rust by going through the exercises!” — Aris N
“As someone who is primarily a self taught developer, I learned the importance of learning by doing by completing so many of the ‘Bites’ challenges on the PyBites platform. Now, as someone learning Rust, I’ve come across the Rust platform and have used the exercises in the same way. Some things I will know and be able to solve quickly, while others require me to research and learn more about the language. The new concepts solidify and build over time. They are a great way to be hands on and learn by doing.” — Jesse B
“The Rust Bites are a great way to start learning Rust hands-on. Whether you’re just starting with Rust or already have some experience, they help build real skills and challenge you to understand all the basic data types and design patterns of Rust. Things that are tough to understand, like pattern matching, result handling, and ownership, will feel more understandable and natural after going through these exercises, and they’ll help you be a better programmer in other languages too! Highly recommended!” — Dan D
Key Takeaways
- Rust isn’t harder than Python — it’s a different kind of challenge
- Python-to-Rust bridges make concepts click faster than learning from scratch
- 60 exercises across 10 tracks, from basics to Traits & Generics
- Every exercise starts with the Python pattern you already know
- Learning Rust makes you a better Python developer too
Where to Start
New to Rust? Start with the Intro track — first 10 exercises are free and cover the fundamentals: variables, types, control flow, enums, and pattern matching. They will get your feet wet.
Know the basics already? Jump straight to Ownership — that’s where Rust gets genuinely different from Python, and where the Python bridges help most. Once ownership clicks, the rest of Rust falls into place.
Want a challenge? The Iterators & Closures and Error Handling tracks are where Python developers tend to have the most “aha” moments. We’ll add more advanced concepts, like lifetimes, later.
Try It Yourself
Start with the exercises at Rust Platform — pick a track that matches where you are, and see how the Python bridges make Rust feel less foreign than you expected.
If you’re ready to commit to the full journey, check out the Rust Developer Cohort — our 6-week guided program where you build a real project from the ground up.
Rust isn’t the enemy. It’s your next superpower.
We’re not aware of any other platform that teaches Rust specifically through the lens of Python. If you’re a Python developer curious about Rust, this is built for you.
February 16, 2026 03:41 PM UTC
Real Python
TinyDB: A Lightweight JSON Database for Small Projects
TinyDB is a Python implementation of a NoSQL, document-oriented database. Unlike a traditional relational database, which stores records across multiple linked tables, a document-oriented database stores its information as separate documents in a key-value structure. The keys are similar to the field headings, or attributes, in a relational database table, while the values are similar to the table’s attribute values.
TinyDB uses the familiar Python dictionary for its document structure and stores its documents in a JSON file.
TinyDB is written in Python, making it easily extensible and customizable, with no external dependencies or server setup needed. Despite its small footprint, it still fully supports the familiar database CRUD features of creating, reading, updating, and deleting documents using an API that’s logical to use.
The table below will help you decide whether TinyDB is a good fit for your use case:
| Use Case | TinyDB | Possible Alternatives |
|---|---|---|
| Local, small dataset, single-process use (scripts, CLIs, prototypes) | ✅ | simpleJDB, Python’s json module, SQLite |
| Local use that requires SQL, constraints, joins, or stronger durability | — | SQLite, PostgreSQL |
| Multi-user, multi-process, distributed, or production-scale systems | — | PostgreSQL, MySQL, MongoDB |
Whether you’re looking to use a small NoSQL database in one of your projects or you’re just curious how a lightweight database like TinyDB works, this tutorial is for you. By the end, you’ll have a clear sense of when TinyDB shines, and when it’s better to reach for something else.
Get Your Code: Click here to download the free sample code you’ll use in this tutorial to explore TinyDB.
Take the Quiz: Test your knowledge with our interactive “TinyDB: A Lightweight JSON Database for Small Projects” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
TinyDB: A Lightweight JSON Database for Small Projects
If you're looking for a JSON document-oriented database that requires no configuration for your Python project, TinyDB could be what you need.
Get Ready to Explore TinyDB
TinyDB is a standalone library, meaning it doesn’t rely on any other libraries to work. You’ll need to install it, though.
You’ll also use the pprint module to format dictionary documents for easier reading, and Python’s csv module to work with CSV files. You don’t need to install either of these because they’re included in Python’s standard library.
So to follow along, you only need to install the TinyDB library in your environment. First, create and activate a virtual environment, then install the library using pip:
(venv) $ python -m pip install tinydb
Alternatively, you could set up a small pyproject.toml file and manage your dependencies using uv.
When you add documents to your database, you often do so manually by creating Python dictionaries. In this tutorial, you’ll do this, and also learn how to work with documents already stored in a JSON file. You’ll even learn how to add documents from data stored in a CSV file.
These files will be highlighted as needed and are available in this tutorial’s downloads. You might want to download them to your program folder before you start to keep them handy:
Get Your Code: Click here to download the free sample code you’ll use in this tutorial to explore TinyDB.
Regardless of the files you use or the documents you create manually, they all rely on the same world population data. Each document will contain up to six fields, which become the dictionary keys used when the associated values are added to your database:
| Field | Description |
|---|---|
| continent | The continent the country belongs to |
| location | Country |
| date | Date population count made |
| % of world | Percentage of the world’s population |
| population | Population |
| source | Source of population |
As mentioned earlier, the four primary database operations are Create, Read, Update, and Delete—collectively known as the CRUD operations. In the next section, you’ll learn how you can perform each of them.
To begin with, you’ll explore the C in CRUD. It’s time to get creative.
Create Your Database and Documents
The first thing you’ll do is create a new database and add some documents to it. To do this, you create a TinyDB() object that includes the name of a JSON file to store your data. Any documents you add to the database are then saved in that file.
Documents in TinyDB are stored in tables. Although it’s not necessary to create a table manually, doing so can help you organize your documents, especially when working with multiple tables.
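As a rough sketch of that pattern (the file and field names here are illustrative, not the tutorial's actual create_db.py script):

from tinydb import TinyDB, Query

db = TinyDB("world_population.json")  # documents are persisted to this JSON file
countries = db.table("countries")     # optional named table to group documents

countries.insert({"location": "Nepal", "continent": "Asia", "population": 30896590})

Country = Query()
print(countries.search(Country.continent == "Asia"))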
To start, you create a script named create_db.py that initializes your first database and adds documents in several different ways. The first part of your script looks like this:
Read the full article at https://realpython.com/tinydb-python/ »
February 16, 2026 02:00 PM UTC
Quiz: TinyDB: A Lightweight JSON Database for Small Projects
In this quiz, you’ll test your understanding of the TinyDB database library and what it has to offer, and you’ll revisit many of the concepts from the TinyDB: A Lightweight JSON Database for Small Projects tutorial.
Remember that the official documentation is also a great reference.
February 16, 2026 12:00 PM UTC
Tryton News
End of Windows 32-bit Builds
The MSYS2 project has discontinued building cx-Freeze for the mingw32 platform. We depend on these packages to build our Windows client, and we currently do not have the resources to maintain the required packages for Windows 32-bit ourselves.
As a result, we will no longer publish Windows 32-bit builds for new releases of the supported series.
1 post - 1 participant
February 16, 2026 07:00 AM UTC
Anarcat
Kernel-only network configuration on Linux
What if I told you there is a way to configure the network on any Linux server that:
- works across all distributions
- doesn't require any software installed apart from the kernel and a boot loader (no systemd-networkd, ifupdown, NetworkManager, nothing)
- is backwards compatible all the way back to Linux 2.0, in 1996
It has literally 8 different caveats on top of that, but is still totally worth your time.
Known options in Debian
People following Debian development might have noticed there are now four ways of configuring the network on a Debian system. At least that is what the Debian wiki claims, namely:
- ifupdown (/etc/network/interfaces): traditional static configuration system, mostly for workstations and servers, that has been there forever in Debian (since at least 2000), documented in the Debian wiki
- NetworkManager: self-proclaimed "standard Linux network configuration", mostly used on desktops but technically supports servers as well, see the Debian wiki page (introduced in 2004)
- systemd-networkd: used more for servers, see Debian reference Doc Chapter 5 (introduced some time around Debian 8 "jessie", in 2015)
- Netplan: latest entry (2018), YAML-based configuration abstraction layer on top of the above two, see also Debian reference Doc Chapter 5 and the Debian wiki
At this point, I feel ifupdown is on its way out, possibly replaced
by systemd-networkd. NetworkManager already manages most desktop
configurations.
A "new" network configuration system
The method is this:
ip= on the Linux kernel command line: for servers with a single IPv4 or IPv6 address, no software required other than the kernel and a boot loader (since 2002 or older)
So by "new" I mean "new to me". This option is really old. The nfsroot.txt file where it is documented predates the git import of the Linux kernel: it's part of the 2005 git import of 2.6.12-rc2. That's already 20+ years old. The oldest trace I found is in this 2002 commit, which imports the whole file at once, but the option might go back as far as 1996-1997, if the copyright on the file is correct and the option was present back then.
What are you doing.
The trick is to add an ip= parameter to the kernel's
command-line. The syntax, as mentioned above, is in nfsroot.txt
and looks like this:
ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>
Most settings are pretty self-explanatory, if you ignore the useless ones:
- <client-ip>: IP address of the server
- <gw-ip>: address of the gateway
- <netmask>: netmask, in quad notation
- <device>: interface name, if multiple available
- <autoconf>: how to configure the interface, namely:
  - off or none: no autoconfiguration (static)
  - on or any: use any protocol (default)
  - dhcp: essentially like on for all intents and purposes
- <dns0-ip>, <dns1-ip>: IP addresses of primary and secondary name servers, exported to /proc/net/pnp, which can be symlinked to /etc/resolv.conf
We're ignoring the options:
- <server-ip>: IP address of the NFS server, exported to /proc/net/pnp
- <hostname>: name of the client, typically sent over the DHCP requests, which may lead to a DNS record being created in some networks
- <ntp0-ip>: exported to /proc/net/ipconfig/ntp_servers, unused by the kernel
Note that the Red Hat manual has a different opinion:
ip=[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:interface:[dhcp|dhcp6|auto6|on|any|none|off]
It's essentially the same (although server-id is weird), and the
autoconf variable has other settings, so that's a bit odd.
Examples
For example, this command-line setting:
ip=192.0.2.42::192.0.2.1:255.255.255.0:::off
... will set the IP address to 192.0.2.42/24 and the gateway to 192.0.2.1. This will properly guess the network interface if there's a single one.
A DHCP only configuration will look like this:
ip=::::::dhcp
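Since it is easy to miscount the colons, here is a small illustrative Python helper (not from the article) that assembles an ip= string in the field order documented in nfsroot.txt:

def build_ip_param(client_ip="", server_ip="", gw_ip="", netmask="", hostname="",
                   device="", autoconf="off", dns0="", dns1="", ntp0=""):
    """Join the fields in nfsroot.txt order, dropping empty trailing fields."""
    fields = [client_ip, server_ip, gw_ip, netmask, hostname,
              device, autoconf, dns0, dns1, ntp0]
    while fields and fields[-1] == "":
        fields.pop()
    return "ip=" + ":".join(fields)

print(build_ip_param(client_ip="192.0.2.42", gw_ip="192.0.2.1",
                     netmask="255.255.255.0", autoconf="off"))
# ip=192.0.2.42::192.0.2.1:255.255.255.0:::off
print(build_ip_param(autoconf="dhcp"))
# ip=::::::dhcp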
Of course, you don't want to type this by hand every time you boot the machine. That wouldn't work. You need to configure the kernel commandline, and that depends on your boot loader.
GRUB
With GRUB, you need to edit (on Debian) the file /etc/default/grub
(ugh) and find a line like:
GRUB_CMDLINE_LINUX=
and change it to:
GRUB_CMDLINE_LINUX=ip=::::::dhcp
systemd-boot and UKI setups
For systemd-boot UKI setups, it's simpler: just add the setting to
the /etc/kernel/cmdline file. Don't forget to include anything
that's non-default from /proc/cmdline.
This assumes the Cmdline=@ setting in /etc/kernel/uki.conf points at that file. See 2025-08-20-luks-ukify-conversion for my minimal documentation on this.
Other systems
This is perhaps where this is much less portable than it might first look, because of course each distribution has its own way of configuring those options. Here are some that I know of:
- Arch (11 options, mostly /etc/default/grub, /boot/loader/entries/arch.conf for systemd-boot, or /etc/kernel/cmdline for UKI)
- Fedora (mostly /etc/default/grub, may be more; RHEL mentions grubby, possibly some systemd-boot things here as well)
- Gentoo (5 options, mostly /etc/default/grub, /efi/loader/entries/gentoo-sources-kernel.conf for systemd-boot, or /etc/kernel/install.d/95-uki-with-custom-opts.install)
It's interesting that /etc/default/grub is consistent across all distributions above, while the systemd-boot setups are all over the place (except for the UKI case); I would have expected those to be more standard than GRUB.
dropbear-initramfs
If dropbear-initramfs is set up, it already requires you to have such a configuration, and it might not work out of the box.
This is because, by default, it disables the interfaces configured in the kernel after completing its tasks (typically unlocking the encrypted disks).
To fix this, you need to disable that "feature":
IFDOWN="none"
This will keep dropbear-initramfs from disabling the configured
interface.
Why?
Traditionally, I've always set up my servers with ifupdown and my laptops with NetworkManager, because that's essentially the default. But on some machines, I've started using systemd-networkd because ifupdown has ... issues, particularly with reloading network configurations. ifupdown is an old hack, feels like legacy, and is Debian-specific.
Not excited about configuring another service, I figured I would try something else: just configure the network at boot, through the kernel command-line.
I was already doing such configurations for dropbear-initramfs (see this documentation), which requires the network to be up for unlocking the full-disk encryption keys.
So in a sense, this is a "Don't Repeat Yourself" solution.
Caveats
Also known as: "wait, that works?" Yes, it does! That said...
This is useful for servers where the network configuration will not change after boot. Of course, this won't work on laptops or any mobile device.
This only works for configuring a single, simple, interface. You can't configure multiple interfaces, WiFi, bridges, VLAN, bonding, etc.
It does support IPv6 and feels like the best way to configure IPv6 hosts: true zero configuration.
It likely does not work with a dual-stack IPv4/IPv6 static configuration. It might work with a dynamic dual stack configuration, but I doubt it.
I don't know what happens when a DHCP lease expires. No daemon seems to be running so I assume leases are not renewed, so this is more useful for static configurations, which includes server-side reserved fixed IP addresses. (A non-renewed lease risks getting reallocated to another machine, which would cause an addressing conflict.)
It will not automatically reconfigure the interface on link changes, but ifupdown does not either.
It will not write /etc/resolv.conf for you, but the dns0-ip and dns1-ip do end up in /proc/net/pnp, which has a compatible syntax, so a common configuration is: ln -s /proc/net/pnp /etc/resolv.conf
I have not really tested this at scale: only a single test server at home.
Yes, that's a lot of caveats, but it happens to cover a lot of machines for me, and it works surprisingly well. My main doubts are about long-term DHCP behaviour, but I don't see why that would be a problem with a statically defined lease.
Cleanup
Once you have this configuration, you don't need any "user" level network system, so you can get rid of everything:
apt purge systemd-networkd ifupdown network-manager netplan.io
Note that ifupdown (and probably others) leave stray files in (e.g.) /etc/network which you might want to clean up, or keep in case all this fails and I have put you in utter misery. Configuration files for other packages might also be left behind; I haven't tested this, no warranty.
Credits
This whole idea came from the A/I folks (not to be confused with AI) who have been doing this forever, thanks!
February 16, 2026 04:18 AM UTC
PyBites
How to Automate Python Performance Benchmarking in Your CI/CD Pipeline
The issue with traditional performance tracking is that it is often an afterthought. We treat performance as a debugging task (something we do after users complain) rather than a quality gate.
Worse, when we try to automate it, we run into the “Noisy Neighbour” problem. If you run a benchmark in a GitHub Action, and the container next to you is mining Bitcoin, your metrics will be rubbish.
To become a Senior Engineer, you need to start treating performance exactly like you treat test coverage.
The Solution: Continuous Performance Guardrails
If you want to stop shipping slow code, you need to shift your mindset on Python Performance Benchmarking in three specific ways:
- Eliminate the Variance (The “Noise” Problem): Standard benchmarking measures “wall clock” time. In a cloud CI environment, this is useless. Cloud providers over-provision hardware, meaning your test runner shares L3 caches with other users. To get a reliable signal, you need deterministic benchmarking. Instead of measuring time, you should measure instruction counts and simulated memory access. By simulating the CPU architecture (L1, L2, and L3 caches), you can reduce variance to less than 1%, making your benchmarks reproducible regardless of what the server “neighbours” are doing.
- Treat Performance Like Code Coverage: We all know the drill… if a PR drops code coverage below 90%, the build fails. Why don’t we do this for latency? You need to integrate benchmarking into your PR workflow. If a developer introduces a change that makes a core endpoint 10% slower, the CI should flag it immediately before it merges. This allows you to catch silent killers, like accidental N+1 queries or inefficient loops, while the code is still fresh in your mind.
- The AI Code Guardrail: We are writing code faster than ever thanks to AI agents. But AI agents prioritise generation speed and syntax correctness, not runtime efficiency. An AI might solve a problem by generating a massive regex or a brute-force loop because it “looks” correct. As we lean more on AI coding assistants, automated performance guardrails become the only line of defence against a slowly degrading codebase.
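To make the “coverage gate for latency” idea from the list above concrete, here is a minimal sketch of a benchmark test, assuming pytest-benchmark’s benchmark fixture (the slugify function and the expected slug are our own stand-ins, not from the episode). A CI job can then compare the recorded numbers against the main branch and fail the PR on a regression:

import re

def slugify(title: str) -> str:
    # Stand-in for any hot code path you want to guard against regressions.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_benchmark(benchmark):
    # pytest-benchmark's `benchmark` fixture calls slugify repeatedly and
    # records timing statistics that CI can diff against a stored baseline.
    result = benchmark(slugify, "Hello, World! Python Performance 101")
    assert result == "hello-world-python-performance-101"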
We dug deep into this topic with Arthur Pastel, the creator of CodSpeed.
Arthur built a tool that solved this exact variance problem because he was tired of his robotics pipelines breaking due to silent performance regressions. He explained how Pydantic uses these exact techniques to keep their library lightning-fast for the rest of us.
Listen to the Episode
If you want to understand how to set up a deterministic benchmarking pipeline and stop performance regressions from reaching production, listen to the full breakdown using the links below, or the player at the top of the page.
February 16, 2026 12:42 AM UTC
February 13, 2026
Python Morsels
Setting default dictionary values in Python
There are many ways to set default values for dictionary key lookups in Python. Which way you should use will depend on your use case.
The get method: lookups with a default
The get method is the classic way to look up a value for a dictionary key without raising an exception for missing keys.
>>> quantities = {"pink": 3, "green": 4}
Instead of this:
try:
    count = quantities[color]
except KeyError:
    count = 0
Or this:
if color in quantities:
    count = quantities[color]
else:
    count = 0
We can do this:
count = quantities.get(color, 0)
Here's what this would do for a key that's in the dictionary and one that isn't:
>>> quantities.get("pink", 0)
3
>>> quantities.get("blue", 0)
0
The get method accepts two arguments: the key to look up and the default value to use if that key isn't in the dictionary.
The second argument defaults to None:
>>> quantities.get("pink")
3
>>> print(quantities.get("blue"))
None
The setdefault method: setting a default
The get method doesn't modify …
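For reference, setdefault both looks up a key and inserts a default when the key is missing (standard dict behavior, shown here with the same quantities example):

>>> quantities = {"pink": 3, "green": 4}
>>> quantities.setdefault("blue", 0)
0
>>> quantities.setdefault("pink", 0)
3
>>> quantities
{'pink': 3, 'green': 4, 'blue': 0}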
Read the full article: https://www.pythonmorsels.com/default-dictionary-values/
February 13, 2026 04:45 PM UTC
Real Python
The Real Python Podcast – Episode #284: Running Local LLMs With Ollama and Connecting With Python
Would you like to learn how to work with LLMs locally on your own computer? How do you integrate your Python projects with a local model? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.
February 13, 2026 12:00 PM UTC
Peter Hoffmann
Garmin Inreach Mini 2 Leaflet checkin map
We will be trekking the eastern part of the Great Himalaya Trail in Nepal in March/April. Details on the route and our plans can be found at https://greathimalayatrail.de. Our intent is to keep friends and family updated on our progress. Given that we'll be hiking in quite remote areas, a satellite phone/pager will be our sole means of communication.

After the Garmin inReach Mini 3 was released recently, the inReach Mini 2 was on heavy sale. The inReach Mini 2 has all the features I need: satellite messaging, check-ins, offline mode with navigation, and track recording.
Plans
I'm on the Garmin Essential plan for 18 euros per month. It includes 50 free text messages or weather requests each month, plus unlimited check-in messages. The smaller Enabled plan (10 euros) is missing the unlimited check-ins, while the Standard plan (34 euros) gives you 150 free messages and unlimited live tracking. More details are on the Garmin page.
Messaging
There are three different types of messages that you can send:
Check-In Messages: There are three preset messages. You can configure the recipients at explore.garmin.com. Depending on your Garmin subscription, sending check-in messages is free of charge. In the configuration section, you can enable the option to include your latitude/longitude and a link to the Garmin map in each SMS message. This information is always included for email recipients.
Quick Messages: You can create up to 20 predefined messages so you don’t have to type them while you’re on the trail. The number of free messages you get depends on your Garmin subscription; any additional messages are billed per use. You can create or edit these messages at explore.garmin.com.
Normal Messages: In the Garmin Messenger iPhone app, you can type any custom message and send it to both SMS and email recipients. These messages are billed the same way as quick messages.
You can configure the system to send all messages to any email/sms recipients. The great thing is that the unlimited check-in messages also include latitude/longitude information. Here is a sample message.
Arrived at Camp
View the location or send a reply to Peter Hoffmann:
https://inreachlink.com/<unique_code>
Peter Hoffmann sent this message from: Lat 48.996386 Lon 8.468849
Do not reply directly to this message.
This message was sent to you using the inReach two-way satellite communicator with GPS. To learn more, visit http://explore.garmin.com/inreach.
As we do not want to spam all our friends with daily check-ins, I have built a little Leaflet check-in plugin and an IMAP scraper to pull and visualize the check-ins/messages.
Build your own Tracking with Check-In Messages
For battery life reasons, we are not interested in real-time live tracking.
Instead, I’ve created a small script that checks a dedicated IMAP email account
for check-in messages and publishes them to a server, which then displays the
location of our most recent check-in. Sending a check-in once a day, or during
each break when we are in more remote areas, should give our friends enough
information in case any problems arise.

A straightforward Python script connects to my IMAP server, retrieves all emails from
the Garmin InReach service, parses the message, timestamp, and latitude/longitude, and
then updates a positions.json file on my webserver.
Then a simple static HTML file with a Leaflet map pulls
the positions.json file and displays the messages/check-ins on the map.
A demo of the map is available at:
https://hoffmann.github.io/garmin-inreach-checkin-map/html/map.html
and you can check out the code at
https://github.com/hoffmann/garmin-inreach-checkin-map
#!/usr/bin/env python3
"""Poll IMAP inbox for Garmin inReach emails and extract positions into positions.json."""
import email
import email.utils
import imaplib
import json
import os
import re
import sys
from datetime import datetime, timezone

BOILERPLATE_PREFIXES = (
    "View the location",
    "Do not reply",
    "This message was sent",
)

POSITIONS_FILE = os.path.join(
    os.path.dirname(os.path.abspath(__file__)), "positions.json"
)

def connect(host, user, password):
    imap = imaplib.IMAP4_SSL(host)
    imap.login(user, password)
    return imap

def search_inreach_emails(imap):
    imap.select("INBOX")
    status, data = imap.search(
        None, '(OR FROM "no.reply.inreach@garmin.com" SUBJECT "inReach message")'
    )
    if status != "OK":
        return []
    msg_ids = data[0].split()
    return msg_ids

def get_text_body(msg):
    if msg.is_multipart():
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                charset = part.get_content_charset() or "utf-8"
                return part.get_payload(decode=True).decode(charset)
    else:
        charset = msg.get_content_charset() or "utf-8"
        return msg.get_payload(decode=True).decode(charset)
    return ""

def parse_timestamp(msg):
    date_str = msg.get("Date")
    if not date_str:
        return None
    dt = email.utils.parsedate_to_datetime(date_str)
    dt_utc = dt.astimezone(timezone.utc)
    return dt_utc.strftime("%Y-%m-%dT%H:%M:%SZ")

def parse_body(body):
    lines = body.strip().splitlines()
    # Extract message: first non-empty line
    message = ""
    for line in lines:
        stripped = line.strip()
        if stripped:
            message = stripped
            break
    # Check if the message is boilerplate
    if any(message.startswith(prefix) for prefix in BOILERPLATE_PREFIXES):
        message = ""
    # Extract lat/lon
    lat, lon = None, None
    m = re.search(r"Lat\s+([-\d.]+)\s+Lon\s+([-\d.]+)", body)
    if m:
        lat = float(m.group(1))
        lon = float(m.group(2))
    return message, lat, lon

def parse_email(msg_data):
    msg = email.message_from_bytes(msg_data)
    timestamp = parse_timestamp(msg)
    if not timestamp:
        return None
    body = get_text_body(msg)
    if not body:
        return None
    message, lat, lon = parse_body(body)
    if lat is None or lon is None:
        return None
    entry = {
        "timestamp": timestamp,
        "lat": lat,
        "lon": lon,
    }
    if message:
        entry["msg"] = message
    return entry

def load_positions():
    if os.path.exists(POSITIONS_FILE):
        with open(POSITIONS_FILE) as f:
            return json.load(f)
    return []

def save_positions(positions):
    with open(POSITIONS_FILE, "w") as f:
        json.dump(positions, f, indent=2)
        f.write("\n")

def main():
    host = os.environ.get("IMAP_HOST")
    user = os.environ.get("IMAP_USER")
    password = os.environ.get("IMAP_PASSWORD")
    if not all([host, user, password]):
        print("Error: Set IMAP_HOST, IMAP_USER, and IMAP_PASSWORD environment variables.")
        sys.exit(1)
    imap = connect(host, user, password)
    try:
        msg_ids = search_inreach_emails(imap)
        print(f"Found {len(msg_ids)} inReach email(s)")
        new_entries = []
        for msg_id in msg_ids:
            status, data = imap.fetch(msg_id, "(RFC822)")
            if status != "OK":
                continue
            entry = parse_email(data[0][1])
            if entry:
                new_entries.append(entry)
    finally:
        imap.logout()
    existing = load_positions()
    existing_timestamps = {p["timestamp"] for p in existing}
    added = 0
    for entry in new_entries:
        if entry["timestamp"] not in existing_timestamps:
            existing.append(entry)
            existing_timestamps.add(entry["timestamp"])
            added += 1
    existing.sort(key=lambda p: p["timestamp"])
    save_positions(existing)
    print(f"Added {added} new position(s) ({len(existing)} total)")

if __name__ == "__main__":
    main()
February 13, 2026 12:00 AM UTC
Armin Ronacher
The Final Bottleneck
Historically, writing code was slower than reviewing code.
It might not have felt that way, because code reviews sat in queues until someone got around to picking them up. But if you compare the actual acts themselves, creation was usually the more expensive part. In teams where people both wrote and reviewed code, it never felt like “we should probably program slower.”
So when more and more people tell me they no longer know what code is in their own codebase, I feel like something is very wrong here and it’s time to reflect.
You Are Here
Software engineers often believe that if we make the bathtub bigger, overflow disappears. It doesn’t. OpenClaw right now has north of 2,500 pull requests open. That’s a big bathtub.
Anyone who has worked with queues knows this: if input grows faster than throughput, you have an accumulating failure. At that point, backpressure and load shedding are the only things that keep the system operating at all.
If you have ever been in a Starbucks overwhelmed by mobile orders, you know the feeling. The in-store experience breaks down. You no longer know how many orders are ahead of you. There is no clear line, no reliable wait estimate, and often no real cancellation path unless you escalate and make noise.
That is what many AI-adjacent open source projects feel like right now. And increasingly, that is what a lot of internal company projects feel like in “AI-first” engineering teams, and that’s not sustainable. You can’t triage, you can’t review, and many of the PRs cannot be merged after a certain point because they are too far out of date. And the creator might have lost the motivation to actually get it merged.
There is huge excitement about newfound delivery speed, but in private conversations, I keep hearing the same second sentence: people are also confused about how to keep up with the pace they themselves created.
We Have Been Here Before
Humanity has been here before. Many times over. We already talk about the Luddites a lot in the context of AI, but it’s interesting to see what led up to it. Mark Cartwright wrote a great article about the textile industry in Britain during the industrial revolution. At its core was a simple idea: whenever a bottleneck was removed, innovation happened downstream from that. Weaving sped up? Yarn became the constraint. Faster spinning? Fibre needed to be improved to support the new speeds until finally the demand for cotton went up and that had to be automated too. We saw the same thing in shipping that led to modern automated ports and containerization.
As software engineers we have been here too. Assembly did not scale to larger engineering teams, and we had to invent higher level languages. A lot of what programming languages and software development frameworks did was allow us to write code faster and to scale to larger code bases. What it did not do up to this point was take away the core skill of engineering.
While it’s definitely easier to write C than assembly, many of the core problems are the same. Memory latency still matters, physics are still our ultimate bottleneck, algorithmic complexity still makes or breaks software at scale.
Giving Up?
When one part of the pipeline becomes dramatically faster, you need to throttle input. Pi is a great example of this. PRs are auto closed unless people are trusted. It takes OSS vacations. That’s one option: you just throttle the inflow. You push against your newfound powers until you can handle them.
Or Giving In
But what if the speed continues to increase? What downstream of writing code do we have to speed up? Sure, the pull request review clearly turns into the bottleneck. But it cannot really be automated. If the machine writes the code, the machine better review the code at the same time. So what ultimately comes up for human review would already have passed the most critical possible review of the most capable machine. What else is in the way? If we continue with the fundamental belief that machines cannot be accountable, then humans need to be able to understand the output of the machine. And the machine will ship relentlessly. Support tickets of customers will go straight to machines to implement improvements and fixes, for other machines to review, for humans to rubber stamp in the morning.
A lot of this sounds both unappealing and reminiscent of the textile industry. The individual weaver no longer carried responsibility for a bad piece of cloth. If it was bad, it became the responsibility of the factory as a whole and it was just replaced outright. As we’re entering the phase of single-use plastic software, we might be moving the whole layer of responsibility elsewhere.
I Am The Bottleneck
But to me it still feels different. Maybe that’s because my lowly brain can’t comprehend the change we are going through, and future generations will just laugh about our challenges. It feels different to me, because what I see taking place in some Open Source projects, in some companies and teams feels deeply wrong and unsustainable. Even Steve Yegge himself now casts doubts about the sustainability of the ever-increasing pace of code creation.
So what if we need to give in? What if we need to pave the way for this new type of engineering to become the standard? What affordances will we have to create to make it work? I for one do not know. I’m looking at this with fascination and bewilderment and trying to make sense of it.
Because it is not the final bottleneck. We will find ways to take responsibility for what we ship, because society will demand it. Non-sentient machines will never be able to carry responsibility, and it looks like we will need to deal with this problem before machines achieve this status. Regardless of how bizarre they appear to act already.
I too am the bottleneck now. But you know what? Two years ago, I too was the bottleneck. I was the bottleneck all along. The machine did not really change that. And for as long as I carry responsibilities and am accountable, this will remain true. If we manage to push accountability upwards, it might change, but so far, how that would happen is not clear.
February 13, 2026 12:00 AM UTC
February 12, 2026
Real Python
Quiz: Python's list Data Type: A Deep Dive With Examples
Get hands-on with Python lists in this quick quiz. You’ll revisit indexing and slicing, update items in place, and compare list methods.
Along the way, you’ll look at reversing elements, using the list() constructor and the len() function, and distinguishing between shallow and deep copies. For a refresher, see the Real Python guide to Python lists.
February 12, 2026 12:00 PM UTC
Python Software Foundation
Python is for Everyone: Inside the PSF's D&I Work Group
Why This Matters
You might be asking yourself: Why invest so much energy in diversity and inclusion work, especially now when it’s being questioned and de-prioritized?
But we all know the truth: barriers exist everywhere. A meetup announcement only in English. Documentation that assumes reliable internet. Examples that reference things unfamiliar to most of the world. Code of conduct violations without clear guidance for organizers. Communities wanting to start but not knowing where to begin.
Because the Python community is global, and it should feel that way. When someone discovers Python in Nigeria, Brazil, India, or anywhere else in the world, they should see a community that welcomes them. They should find resources in their language, examples that reflect their context, and people who understand their challenges.
Diversity isn’t just about representation. It’s about making Python better. More approachable. More accessible. Different perspectives lead to better solutions, more creative problem-solving, and software that works for more people. When we only hear from one type of voice, we miss opportunities to improve.
Right now, when diversity and inclusion efforts are being rolled back in many places, it’s tempting to stay quiet. But that’s exactly why we need to speak up about the work we’re doing. The Python Software Foundation made a commitment: to support a diverse and international community of Python programmers. The D&I Work Group exists to make that commitment real, tangible, and actionable.
How The Diversity and Inclusion Workgroup Started
The PSF Board created the Diversity & Inclusion Work Group in 2020 with a clear purpose: to amplify the Python Software Foundation’s mission of supporting a diverse and international community. It was a good idea. People wanted to join.
Members came from different regions around the world, excited to be part of the group and looking forward to creating an impact because all of us, in one way or another, felt something was missing: the need to amplify and embrace diversity through more inclusion.
Most discussions related to diversity and how we could spread awareness. The chats on our Slack channel were active with people sharing different opinions and resources.
PyConUS D&I Panel Discussions
We held interesting annual D&I panels where we discussed important topics which are often set aside. In 2022 and 2023 at PyCon US, we spoke about the lack of representation on the board, why the board lacked global representation, the lack of representation from core developers in other parts of the world apart from the US and Europe despite the huge representation of Pythonistas around the world, and how people could contribute to changing that representation.
PyConUS 2022 D&I Panel Discussion
Participating D&I Workgroup members: Georgi Ker, Reuven Lerner, Anthony Shaw, Lorena Mesa
PyConUS 2023 D&I Panel Discussion
Participating D&I Workgroup members: Marlene Mhangami, Débora Azevedo, Iqbal Abdullah, Georgi Ker
PyConUS 2024 D&I Panel Discussion
In 2024, we invited different Python community leaders: Abigail Mesrenyame Dogbe, Dima Dinama, Jules Juliano Barros Lima, Jessica Greene, and Mason Egger, who shared about their work, their involvement, and their challenges as community leaders.
Participating D&I Workgroup members: Débora Azevedo, Georgi Ker
PyConUS 2025 D&I Panel Discussion
In 2025, due to political changes happening around the world, we invited Cristián Maureira-Fredes, Jay Miller, and Naomi Ceder to the D&I Workgroup panel to talk about “The Work Still Matters: Inclusion, Access, and Community in 2025.”
Participating D&I Workgroup members: Alla Barbalat, Keanya Phelps
The panels were great. The discussions in our workgroup were great. But something was still not going right.
Building a Global Work Group
In 2024, when I took on the role of chair, the D&I Work Group was at a crossroads. The PSF Board had created it to amplify the Foundation’s mission, and there was genuine interest from the community, but without a clear direction or structure, momentum had faded. People wanted to join, but they didn’t know what the group would actually do.
I knew we needed two things: a clear purpose and genuine diversity in our membership. Not just diversity as an abstract goal, but real representation from the regions where Python communities were thriving.
I started by doing research that I could share with the rest of the workgroup members. I went through the Python.org calendar, cataloging events and projects happening around the world. What I found was that Python communities were active everywhere (as expected), but they weren’t really represented in our Work Group’s leadership. I identified regional gaps and proposed a structure that would ensure fair representation: North America, South America, Africa, Asia, Oceania, the Middle East, and Europe.
The current representation as of October 2024 across regions is as follows:
- North America: 3
- South America: 3
- Asia: 3
- Europe: 3
- Africa: 3
- Oceania: 1
- Middle East: 2
It is important to note that each member has the freedom to choose which region they represent. As a D&I Workgroup, we do not dictate regional representation. This decision is entirely up to the individual, ensuring that members represent the region where they feel most connected or comfortable. We also shared which countries would be represented in which region to be explicit for interested parties.
We launched a public outreach campaign to the community. People applied, and the group voted to bring in new members. For the first time, we had a WorkGroup that truly reflected the global Python community.
But diverse perspectives meant many different ideas. In two workshop sessions, we listed every initiative people wanted to pursue, grouped them by theme, discussed priorities, and filtered down to three focused initiatives we could realistically accomplish with volunteer time and resources.
These three initiatives are:
- Concentrate on Outreach to Communities - Creating resources and templates to help communities improve their D&I efforts
- How to Setup a Local Python Community - A comprehensive guide for organizers starting new user groups
- Continue Collecting Survey Feedback from the Python Community - Gathering data to understand where we need to focus
The three initiatives we’re working on aren’t abstract goals. They’re about giving people the tools and support they need to build inclusive communities where they are. And of course, there are many other things we would like to work on. But filtering down to what we can concentrate on right now will give us better results, and we will continue to move on and work on the others as we progress.
We meet twice monthly across different time zones. We noticed that monthly meetings aren’t frequent enough, coordination is challenging, and volunteer time is limited. But we’re learning and adapting.
This wasn’t just about having good ideas. It was about creating a sustainable framework where a volunteer group could actually make progress.
Meet the Members of the Workgroup
The heart of the D&I Work Group is the people who show up, month after month, to do this work. They come from different regions, different backgrounds, and different parts of the Python ecosystem. We have 19 active members representing all regions and a PSF staff member included.
Welcoming New Members
We’re excited to welcome our five new members: Kalyan Prasad, representing Asia, Julio Batista Silva representing Europe, Abhijeet Mote representing North America, Theresa Seyram Agbenyegah and Emmanuel Ugwu representing Africa. They will bring fresh perspectives and energy to our work.
Thanking our Former Members
We also want to acknowledge and thank our former members who have contributed to the D&I Work Group: Miguel Johnson, Marlene Mhangami, Tereza Iofciu, Iqbal Abdullah, Cynthia Xin, Mariam Haji, and Boluwaji Akinlade. Their dedication helped shape what this group has become, and we’re grateful for everything they contributed.
Our current members:
South America (3 members)
North America (4 members)
Asia (3 members)
Europe (3 members)
Middle East (2 members)
Africa (3 members)
Oceania (1 member)
PSF Staff Member
We also have Marie Nordin from the PSF staff as a voting member of the workgroup. Marie provides crucial support and coordination, helping bridge our initiatives with the broader PSF mission and ensuring our work has the resources and visibility it needs to succeed. Her dedicated support and active participation have been instrumental in helping us move from discussion to action.
Looking Forward
The D&I Work Group can’t do this work alone. Real change happens when every Python developer, every community organizer, every person writing documentation or teaching a workshop thinks about inclusion in their own context.
You don’t need to join a work group to make a difference. You can:
- In your local community: Start a Python meetup in your area. Make it beginner-friendly. Announce it in multiple languages if your region is multilingual. Choose accessible venues.
- In your workplace: Mentor someone from a different background. Share knowledge with junior developers. Advocate for diverse hiring and inclusive team practices.
- In your open source projects: Write clear documentation. Add examples that reflect different use cases. Make your contribution guidelines welcoming to newcomers. Consider what barriers might prevent someone from contributing.
- In your daily work: Question assumptions. When you write code examples, ask: “Would this make sense to someone who doesn’t share my context?” When you organize an event, ask: “Who might feel excluded, and how can I change that?”
We all know that Python’s success isn’t just about the language. It’s about the community. And that’s the hard truth. The more diverse that community is, the more use cases we discover, the more creative solutions we find, the more people benefit from what we build together.
Diversity and inclusion work isn’t a side project or a “nice-to-have”. It’s how we ensure Python remains a language for everyone, everywhere. It’s how we make sure the next generation of developers (wherever they are, whatever their background) sees Python as a community they can be part of.
The work is hard. The progress is slow, and it’s often invisible. But it matters. Every small action compounds. Every person who chooses to be intentional about inclusion makes it easier for the next person.
That’s what keeps us going in the workgroup. That’s why we show up every month. If you want to learn more about the D&I Work Group, get involved, or share your own experiences with building inclusive communities, you can write to us at diversity-inclusion-wg@python.org.
We’re always learning, and we’d love to hear from you.
February 12, 2026 07:53 AM UTC
PyBites
The Vibe Coding trap
One of my readers replied to an email I sent a couple of weeks ago and we got into a brief discussion on what I’ll call Skills Erosion.
They brought up the point that by leaning too heavily on AI to generate code, people were losing their edge.
It’s a good point that’s top of mind for many devs. I’m guessing you’ve thought about it too. After all, if AI writes all of our code, how are we actually learning anything?
The exchange made me go down a rabbit hole and I found the data quite interesting.
We all know what Vibe Coding is, so I’ll save the explanation.
One of the biggest issues with vibe coding is that it creates the Illusion of Competence.
I mean, you feel 5x more productive with AI on your side, right? (I know I do!)
But reports from Veracode (2026) show that 45% of AI-generated code contains security flaws.
The companies that rely on AI to vibe code their products are shipping code that introduces security incidents just waiting to make the news. This is what happens when we trust the machine more than our own earned expertise.
It’s no surprise then to hear that some teams and companies are starting to apply the brakes and slow down their AI adoption.
But this is the catch. Not all companies are slowing things down. Some are ramping it up. (Look at Amazon’s announcement last Thursday – US$200bn investment in AI infrastructure in 2026).
Where does this leave us as devs?
I believe a balance needs to be found. And I said as much to our reader.
You can’t just be a hold-out.
At the end of the day, so many of the people holding the keys to our pay cheques expect us to use AI. Hiring Managers, CTOs, CEOs, Shareholders, Investors – you name it.
If you refuse, you look obsolete.
The solution? Be the architect and auditor, not the operator.
The developers who will come out on top are the ones who:
- Spot the hallucination: know why that SQL query is inefficient.
- Question AI recommendations: don’t treat the generated code as gospel. Question design decisions (or a lack thereof).
- Refactor the mess: turn spaghetti code into clean architecture.
- Secure the build: know where the vulnerabilities hide.
- Sharpen the saw: keep your skills sharp outside of AI usage. Keep learning, keep growing.
I firmly believe that the AI hype will plateau. We’re already starting to see the cracks.
The real questions to ask yourself: where will you be when things start shifting back in our favour? Will you be the senior dev ready to jump in and save the day?
Don’t let your skills erode: use the tools but master the craft.
What do you think?
Join me in the community for a chat on the topic. You can also check out a post I created for people to share their thoughts on AI + LLMs + coding.
Julian
February 12, 2026 12:15 AM UTC
Seth Michael Larson
Automated public shaming of open source maintainers
This is a follow-up to “New era of slop security reports for open source”.
Matplotlib, the unfortunate target of this new type of harassment, publishes a clear generative AI use policy. That boundary was not respected by generative AI users and a pull request was opened by an OpenClaw agent.
If the website the agent's GitHub comment links to is any indication, within 4 days of deployment this agent generated a “take-down blog post” intended to publicly shame an open source maintainer (who has published their own thoughts on the incident) for closing a GitHub pull request per the project's own policy on generative AI use. In this particular case, the issue was a “Good First Issue”, which are intentionally left unimplemented by maintainers as a potential on-ramp for new contributors to the project.
It should go without saying that this behavior is unacceptable, and that the deployment of generative AI agents in this way is deeply irresponsible and has real negative consequences for volunteers contributing to critical software projects. This type of abuse is preventable; generative AI platforms need to implement better safeguards against it.
Thanks for keeping RSS alive! ♥
February 12, 2026 12:00 AM UTC
February 11, 2026
PyCharm
Python Unplugged on PyTV – A Free Online Python Conference for Everyone
The PyCharm team loves being part of the global Python community. From PyCon US to EuroPython to every PyCon in between, we enjoy the atmosphere at conferences, as well as meeting people who are as passionate about Python as we are. This includes everyone: professional Python developers, data scientists, Python hobbyists and students.
However, we know that attending a Python conference in person isn't something everyone can do, either because they don't have a local conference or because they can't travel to one. So within the PyCharm team, we started thinking: what if we could bring the five-star experience of Python conferences to everyone? What if everyone could have the experience of learning from professional speakers, accessing great networking opportunities, hearing from voices from across the community, and – most importantly – having fun, no matter where they are in the world?
Python is for Everyone – Announcing Python Unplugged on PyTV!
After almost a year of planning, we’re proud to announce we’ll be hosting the first ever PyTV – a free online conference for everyone!
Join us on March 4th 2026, for an unforgettable, non-stop event, streamed from our studio in Amsterdam. We’ll be joined live by 15 well-known and beloved speakers from Python communities around the globe, including Carol Willing, Deb Nicholson, Sheena O’Connell, Paul Everitt, Marlene Mhangami, and Carlton Gibson. They’ll be speaking about topics such as core Python, AI, community, web development and data science.
You can get involved in the fun as well! Throughout the livestream, you can join our chat on Discord, where you can interact with other participants and our speakers. We’ve also prepared games and quizzes, with fabulous prizes up for grabs! You might even be able to get your hands on some of the super cool conference swag that we designed specifically for this event.
What are you waiting for? Sign up here.
If you are local to Amsterdam, you can also sign up for the PyLadies Amsterdam meetup. It will be held on the same day as the conference, and will give you a chance to meet some of the PyTV speakers in person.
February 11, 2026 04:37 PM UTC
Django Weblog
Django Steering Council 2025 Year in Review
The members of the Steering Council wanted to provide you all with a quick TL;DR of our work in 2025.
First off, we were elected at the end of 2024 and got started in earnest in early 2025 with the mission to revive and dramatically increase the role of the Steering Council.
We're meeting by video conference at least monthly; you can deep dive into the meeting notes to see what we've been up to. We've also set up Slack channels that we use to communicate between meetings and keep action items moving along.
One of the first things we did was temporarily suspend much of the process around DEP 10. Its heart is in the right place, but it's just too complex and cumbersome day-to-day with a primarily volunteer organization. We're slowly making progress on a revamped and simplified process that addresses our concerns. It is our goal to finish this before our terms expire.
New Features Process
We've moved the process for proposing new features out of the Django Forum and mailing lists to a new-features GitHub repository.
We made this change for a variety of reasons, the largest being to reduce the workload on the Django Fellows in shepherding the process and answering related questions.
Community Ecosystem Page
One of our main goals is to increase the visibility of the amazing Django third-party package ecosystem. Long time Django users know which packages to use, which you can trust, and which ones may be perfect for certain use cases. However, MANY newer or more casual Django users are often unaware of these great tools and not sure where to even begin.
As a first step, we've added the Community Ecosystem page which highlights several amazing resources to keep in touch with what is going on with Django, how to find recommended packages, and a sample list of those packages the Steering Council itself recommends and uses frequently.
Administrative bits
There has been work on better formalizing and documenting our processes and building documentation to make it much easier for the next Steering Council members.
There has also been a fair bit of work helping to organize Google Summer of Code participants, to ensure the projects undertaken are ones that will ultimately be accepted smoothly into Django.
Another area we have focused on is a simplified DEP process. We're still formalizing this, but the idea is to have the Steering Council do the majority of the heavy lifting on writing these, in a format that is shorter and simpler, to reduce the friction of creating larger, more complicated DEPs.
We have also been in discussions with various third parties about acquiring funding for some of the new features and updates on the horizon.
It's been a productive year and we're aiming to have 2026 be as productive if not more so. We're still setting all of our 2026 goals and will report on those soon.
Please reach out to the Steering Council directly if you have any questions or concerns.
February 11, 2026 02:44 PM UTC
Real Python
What Exactly Is the Zen of Python?
The Zen of Python is a collection of 19 aphorisms that capture the guiding principles behind Python’s design. You can display them anytime by running import this in a Python REPL. Tim Peters wrote them in 1999 as a joke, but they became an iconic part of Python culture that was even formalized as PEP 20.
By the end of this tutorial, you’ll understand:
- The Zen of Python is a humorous poem of 19 aphorisms describing Python’s design philosophy
- Running import this in a Python interpreter displays the complete text of the Zen of Python
- Tim Peters wrote the Zen of Python in 1999 as a tongue-in-cheek comment on a mailing list
- The aphorisms are guidelines, not strict rules, and some intentionally contradict each other
- The principles promote readability, simplicity, and explicitness while acknowledging that practicality matters
Experienced Pythonistas often refer to the Zen of Python as a source of wisdom and guidance, especially when they want to settle an argument about certain design decisions in a piece of code. In this tutorial, you’ll explore the origins of the Zen of Python, learn how to interpret its mysterious aphorisms, and discover the Easter eggs hidden within it.
You don’t need to be a Python master to understand the Zen of Python! But you do need to answer an important question: What exactly is the Zen of Python?
Free Bonus: Click here to download your Easter egg hunt to discover what’s hidden inside Python!
Take the Quiz: Test your knowledge with our interactive “What Exactly Is the Zen of Python?” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
What Exactly Is the Zen of Python? Learn and test the Zen of Python, its guiding aphorisms, and tips for writing clearer, more readable, and maintainable code.
In Short: It’s a Humorous Poem Listing Python Philosophies
According to the Python glossary, which contains definitions of popular terms related to this programming language, the Zen of Python is a:
Listing of Python design principles and philosophies that are helpful in understanding and using the language. The listing can be found by typing "import this" at the interactive prompt. (Source)
Indeed, when you type the indicated import statement into an interactive Python REPL, then you’ll be presented with the nineteen aphorisms that make up the Zen of Python:
>>> import this
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
The byline reveals the poem’s author, Tim Peters, who’s a renowned software engineer and a long-standing CPython core developer best known for inventing the Timsort sorting algorithm. He also authored the doctest and timeit modules in the Python standard library, along with making many other contributions.
Take your time to read through the Zen of Python and contemplate its wisdom. But don’t take the aphorisms literally, as they’re more of a guiding set of principles rather than strict instructions. You’ll learn about their humorous origins in the next section.
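If you'd rather have the poem as a string than as printed output, the this module itself exposes it, ROT13-encoded, through a couple of attributes. These are CPython implementation details rather than a documented API, so treat the following as a curiosity-level sketch:

import codecs
import this  # printing the poem is a side effect of the first import

# this.s holds the ROT13-encoded poem; codecs can decode it back to plain text
zen_text = codecs.decode(this.s, "rot13")
aphorisms = [line for line in zen_text.splitlines()[2:] if line]
print(len(aphorisms))  # 19

Because modules are cached, a second import this won't print anything, which is exactly why grabbing the text from this.s can come in handy.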
How Did the Zen of Python Originate?
The idea of formulating a single document that would encapsulate Python’s fundamental philosophies emerged among the core developers in June 1999. As more and more people began coming to Python from other programming languages, they’d often bring their preconceived notions of software design that weren’t necessarily Pythonic. To help them follow the spirit of the language, a set of recommendations for writing idiomatic Python was needed.
The initial discussion about creating such a document took place on the Python mailing list under the subject The Python Way. Today, you can find this conversation in the official Python-list archive. If you look closely at the first message from Tim Peters in that thread, then you’ll notice that he clearly outlined the Zen of Python as a joke. That original form has stuck around until this day:
Clearly a job for Guido alone – although I doubt it’s one he’ll take on (fwiw, I wish he would too!). Here’s the outline he would start from, though <wink>:
Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren’t special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one– and preferably only one –obvious way to do it. Although that way may not be obvious at first unless you’re Dutch. Now is better than never. Although never is often better than right now. If the implementation is hard to explain, it’s a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea – let’s do more of those!
There you go: 20 Pythonic Fec^H^H^HTheses on the nose, counting the one I’m leaving for Guido to fill in. If the answer to any Python design issue isn’t obvious after reading those – well, I just give up <wink>. (Source)
The wink and the playful way of self-censoring some toilet humor are clear giveaways that Tim Peters didn’t want anyone to take his comment too seriously.
Note: In case you didn’t get the joke, he started to write something like Feces but then used ^H—which represents a Backspace in older text editors like Vim—to delete the last three letters and make the word Theses. Therefore, the intended phrase is 20 Pythonic Theses.
Eventually, these nearly twenty theses got a proper name and were formally codified in a Python Enhancement Proposal document. Each PEP document receives a number. For example, you might have stumbled on PEP 8, which is the style guide for writing readable Python code. Perhaps as an inside joke, the Zen of Python received the number PEP 20 to signify the incomplete number of aphorisms in it.
To win your next argument about what makes good Python code, you can back up your claims with the Zen of Python. If you’d like to refer to a specific aphorism instead of the entire poem, then consider visiting pep20.org, which provides convenient clickable links to each principle.
And, in case you want to learn the poem by heart while having some fun, you can now listen to a song with the Zen of Python as its lyrics. Barry Warsaw, another core developer involved with Python since its early days, composed and performed this musical rendition. The song became the closing track on a special vinyl record entitled The Zen Side of the Moon, which was auctioned at PyCon US 2023.
Okay. Now that you have a rough idea of what the Zen of Python is and how it came about, you might be asking yourself whether you should really follow it.
Should You Obey the Zen of Python?
Read the full article at https://realpython.com/zen-of-python/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 11, 2026 02:00 PM UTC
Quiz: What Exactly Is the Zen of Python?
In this quiz, you’ll test your understanding of The Zen of Python.
By working through this quiz, you’ll revisit core aphorisms and learn how they guide readable, maintainable, and Pythonic code.
The questions explore practical tradeoffs like breaking dense expressions into smaller parts, favoring clarity over cleverness, and making code behavior explicit.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 11, 2026 12:00 PM UTC
Nicola Iarocci
Eve 2.2.5
Eve v2.2.5 was just released on PyPI. It brings the pagination fix discussed in a previous post. Many thanks to Calvin Smith for contributing to the project.
February 11, 2026 09:44 AM UTC
Python Morsels
Need switch-case in Python? It's not match-case!
Python's match-case is not a switch-case statement. If you need switch-case, you can often use a dictionary instead.
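As a minimal sketch of that dictionary approach (my own illustration, not code from the article), dispatching on a key looks like this:

def start():
    return "starting"

def stop():
    return "stopping"

# Map each command name to the function that handles it
handlers = {"start": start, "stop": stop}

command = "start"
handler = handlers.get(command, lambda: "unknown command")
print(handler())  # starting

Lookup is a single dictionary access, and adding a new case is just adding a new key, with no extra branch needed.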
The power of match-case
Python's match statement is for structural pattern-matching, which sounds complicated because it is a bit complicated.
The match statement has a different way of parsing expressions within its case statements that's kind of an extension of the way that Python parses code in general.
And again, that sounds complex because it is.
I'll cover the full power of match-case another time, but let's quickly look at a few examples that demonstrate the power of match-case.
Matching iterables
Python's match statement can be …
Read the full article: https://www.pythonmorsels.com/switch-case-in-python/
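The excerpt cuts off there, but to give a flavour of what matching iterables means, here's a small sketch of my own (not taken from the article) using sequence patterns:

def describe(values):
    match values:
        case []:
            return "empty"
        case [only]:
            return f"one item: {only}"
        case [first, *rest]:
            return f"starts with {first}, then {len(rest)} more"

print(describe([]))         # empty
print(describe([42]))       # one item: 42
print(describe([1, 2, 3]))  # starts with 1, then 2 more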
February 11, 2026 12:00 AM UTC
Seth Michael Larson
Cooler Analytics
You don't need analytics on your blog, but maybe you need analytics for your cooler?
The last place you’d expect to find analytics.
Last Sunday was the Super Bowl in the USA, where former Vikings quarterback Sam Darnold and the Seahawks trounced the Patriots 29–13. We were also reminded who the top players are in the US economy. Surprise: it's still generative AI, cryptocurrencies, sports betting, and surveillance.
Anyway, Trina and I hosted a Super Bowl watch party, and I take pride in stocking the coolers. I usually do some combination of vibes-based "what was popular last time" and introducing a new wild-card item to see if something sticks. I am a big believer in human-based curation, and this is that, just at a much smaller scale. Just for fun, I calculated the actual "analytics" of the coolers from this party:
| Beverage | Alc? | floz / Unit | Delta (Units) | # Before | # After |
|---|---|---|---|---|---|
| Diet Dr. Pepper | No | 12 | -4 | 12 | 8 |
| Diet Coke | No | 12 | -3 | 12 | 9 |
| Coke Zero Mini (1) | No | 7.5 | -3 | 10 | 7 |
| Chi Forest | No | 11.16 | -12 | 24 | 12 |
| Pineapple Juice | No | 8 | -3 | 24 | 21 |
| Vita Coconut Water (2) | No | 11.1 | -10 | 18 | 8 |
| Stilly Seltzers | Yes | 12 | -2 | 8 | 6 |
| Truly Seltzers (1) | Yes | 12 | -2 | 12 | 10 |
| Soju | Yes | 12.7 | -4.5 | 8 | ~3.5 |
| Castle Danger Cream Ale | Yes | 12 | -6 | 8 | 2 |
- (1) Brought by friends, thank you!
- (2) Usually Kirkland Signature is great; in this case, skip the generic and buy the name brand.
This time Pineapple Juice was the wild-card, and unfortunately it didn't pan out! At a previous party we hosted, a friend brought a few, and I loved the idea of having cans of juice as a "flat" option that is sweet and non-alcoholic. Soda, coconut water, and Chi Forest dominated the non-alcoholic category.
Chi Forest comes in 4 flavors, and there is a significant difference in how popular each flavor was. Unfortunately, you can't buy individual flavors of Chi Forest at Costco, only a 24-unit variety pack. Personally, my favorite flavor is Pomelo, so I'm not complaining about the leftovers.
| Beverage | # Before | # After |
|---|---|---|
| Chi Forest (Lychee) | 6 | 1 |
| Chi Forest (Peach) | 6 | 2 |
| Chi Forest (Pomelo) | 6 | 4 |
| Chi Forest (Strawberry) | 6 | 5 |
Here are the overall stats by category. I can use the number of attendees (~20) to roughly forecast how much to stock in a different year (see the quick sketch after the table).
| Category | Delta (floz) | Before (floz) | % |
|---|---|---|---|
| Soda, Juice | 264.42 | 480.0 | 55% |
| Coconut Water | 111.00 | 199.8 | 56% |
| Flavored Spirits | 105.15 | 341.6 | 30% |
| Beer | 72.00 | 96.0 | 75% |
| All Alcoholic | 177.15 | 437.6 | 40% |
| All Non-Alcoholic | 375.42 | 679.8 | 55% |
| All Beverages | 441.57 | 1117.4 | 39% |
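As a back-of-the-envelope sketch of that forecast (my own arithmetic, using the totals from the table above and the ~20 attendees mentioned above):

attendees = 20

# Total fluid ounces consumed, taken from the category table above
consumed = {"non_alcoholic": 375.42, "alcoholic": 177.15}

per_person = {name: round(floz / attendees, 1) for name, floz in consumed.items()}
print(per_person)  # {'non_alcoholic': 18.8, 'alcoholic': 8.9}

Roughly 19 floz of non-alcoholic and 9 floz of alcoholic drinks per guest, which is the kind of number you can multiply by next year's headcount.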
Send me your favorite hosting tip or unique ways that you curate for others. I hope this little post inspired you to “juice” your everyday human curation with simple analytics in the future. 🍻 Cheers!
Thanks for keeping RSS alive! ♥
February 11, 2026 12:00 AM UTC
February 10, 2026
Talk Python to Me
#536: Fly inside FastAPI Cloud
You've built your FastAPI app, it's running great locally, and now you want to share it with the world. But then reality hits: containers, load balancers, HTTPS certificates, cloud consoles with 200 options. What if deploying was just one command? That's exactly what Sebastian Ramirez and the FastAPI Cloud team are building. On this episode, I sit down with Sebastian, Patrick Arminio, Savannah Ostrowski, and Jonathan Ehwald to go inside FastAPI Cloud, explore what it means to build a "Pythonic" cloud, and dig into how this commercial venture is actually making FastAPI the open-source project stronger than ever.
Episode sponsors: Command Book (https://talkpython.fm/commandbookapp), Python in Production (https://talkpython.fm/devopsbook), Talk Python Courses (https://talkpython.fm/training)
Links from the show:
- Guests: Sebastián Ramírez (https://github.com/tiangolo), Savannah Ostrowski (https://github.com/savannahostrowski), Patrick Arminio (https://github.com/patrick91), Jonathan Ehwald (https://github.com/DoctorJohn)
- FastAPI Labs: https://fastapilabs.com
- FastAPI Cloud quickstart: https://fastapicloud.com/docs/getting-started/
- An episode on diskcache: https://talkpython.fm/episodes/show/534/diskcache-your-secret-python-perf-weapon
- Fastar: https://github.com/DoctorJohn/fastar
- FastAPI: The Documentary: https://www.youtube.com/watch?v=mpR8ngthqiE
- Tailwind CSS situation: https://adams-morning-walk.transistor.fm/episodes/we-had-six-months-left
- FastAPI job meme: https://fastapi.meme
- Migrate an existing project: https://fastapicloud.com/docs/getting-started/existing-project/
- Join the waitlist: https://fastapicloud.com
- Talk Python CLI announcement: https://talkpython.fm/blog/posts/talk-python-now-has-a-cli/ and repository: https://github.com/talkpython/talk-python-cli
- Command Book download: https://commandbookapp.com and announcement post: https://mkennedy.codes/posts/your-terminal-tabs-are-fragile-i-built-something-better/
- Watch this episode on YouTube: https://www.youtube.com/watch?v=d0LpovstIHo
- Episode #536 deep-dive: https://talkpython.fm/episodes/show/536/fly-inside-fastapi-cloud#takeaways-anchor
- Episode transcripts: https://talkpython.fm/episodes/transcript/536/fly-inside-fastapi-cloud
February 10, 2026 11:17 PM UTC
PyCoder’s Weekly
Issue #721: Classification With zstd, Callables, Gemini, and More (Feb. 10, 2026)
#721 – FEBRUARY 10, 2026
View in Browser »
Text Classification With Python 3.14’s zstd Module
There is commonality between text classification and compression, and there are algorithms out there that do one with the other, but they require an incremental compressor. Python 3.14 added a zstd module that supports this, allowing Max to take a stab at doing ML with a compressor.
MAX HALFORD
Is It a Class or a Function?
If a callable feels like a function, we often call it a function… even when it's not! The Python standard library is filled with things that we think of as functions but are really classes or other callables.
TREY HUNNER
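A quick way to see the distinction for yourself (a sketch of my own, not from the article): several builtins that read like functions are actually classes, yet all of them are callable:

import inspect

print(inspect.isfunction(enumerate))  # False -- enumerate is not a function
print(isinstance(enumerate, type))    # True  -- it's a class
print(isinstance(range, type))        # True  -- so is range
print(callable(enumerate))            # True  -- but it's callable either way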
A Conference for Developers Building Reliable AI
Replay is a practical conference for developers building real systems. Join our Python AI & versioning workshop covering durable AI agents, safe workflow evolution, and production-ready deployment techniques →
TEMPORAL sponsor
Getting Started With Google Gemini CLI
Learn how to use Gemini CLI to bring Google’s AI-powered coding assistance into your terminal for faster code analysis, debugging, and fixes.
REAL PYTHON course
Articles & Tutorials
Improving Your GitHub Developer Experience
What are ways to improve how you’re using GitHub? How can you collaborate more effectively and improve your technical writing? This week on the show, Adam Johnson is back to talk about his new book, “Boost Your GitHub DX: Tame the Octocat and Elevate Your Productivity”.
REAL PYTHON podcast
What Arguments Was Python Called With?
In one of David’s libraries, he needed to detect whether Python got called with the -m argument. Now that Python 3.9 is EOL, he’s able to remove a giant hack and replace it with a single line of code.
DAVID LORD
Speeding Up NumPy With Parallelism
Parallelism can speed up your NumPy code and can still benefit from other optimizations. This article covers everything from single threaded parallelism to Numba and more.
ITAMAR TURNER-TRAURING
The Terminal: First Steps & Useful Commands for Python Devs
Learn your way around the Python terminal. You’ll practice basic commands, activate virtual environments, install packages with pip, and keep track of your code using Git.
REAL PYTHON
Anatomy of a Python Function
You call Python functions all the time, but do you know what all the parts are called? Some terminology is consistent in the community and some is not.
ERIC MATTHES
Sorting Strategies for Optional Fields in Django
How to control NULL value placement when sorting Django QuerySets using F() expressions.
BLOG.MAKSUDUL.BD • Shared by Maksudul Haque
Natural Language Web Scraping With ScrapeGraph
Web scraping without selector maintenance. ScrapeGraphAI uses LLMs to extract data from any site using plain English prompts and Pydantic schemas.
CODECUT.AI • Shared by Khuyen Tran
Alternative Constructors Using Class Methods
You can have more than one way of creating objects. This post shows an application that implements alternative constructors using class methods.
STEPHEN GRUPPETTA
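As a generic sketch of the pattern (not the example from the post), a class method can act as a second way to build the same object:

from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

    @classmethod
    def from_tuple(cls, pair):
        # Alternative constructor: build a Point from an (x, y) pair
        return cls(*pair)

print(Point(1.0, 2.0))               # Point(x=1.0, y=2.0)
print(Point.from_tuple((3.0, 4.0)))  # Point(x=3.0, y=4.0)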
MicroPythonOS Graphical Operating System
MicroPythonOS is a lightweight OS for microcontrollers that targets applications with graphical user interfaces and a look similar to Android/iOS.
JEAN-LUC AUFRANC
Django (Anti)patterns ‹ Django Antipatterns
A set of Django (anti)patterns: patterns and things to avoid when building a web application with Django.
DJANGO-ANTIPATTERNS.COM
Dispatch From the Inaugural PyPI Support Specialist
Maria Ashna (Thespi-Brain on GitHub) is the inaugural PyPI Support Specialist and she’s written up how the first year went.
PYPI.ORG
Events
Weekly Real Python Office Hours Q&A (Virtual)
February 11, 2026
REALPYTHON.COM
Python Atlanta
February 13, 2026
MEETUP.COM
DFW Pythoneers 2nd Saturday Teaching Meeting
February 14, 2026
MEETUP.COM
DjangoCologne
February 17, 2026
MEETUP.COM
Inland Empire Python Users Group Monthly Meeting
February 18, 2026
MEETUP.COM
PyCon Namibia 2026
February 20 to February 27, 2026
PYCON.ORG
PyCon Mini Shizuoka 2026
February 21 to February 22, 2026
PYCON.JP
Happy Pythoning!
This was PyCoder’s Weekly Issue #721.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
February 10, 2026 07:30 PM UTC
Real Python
Improving Your Tests With the Python Mock Object Library
When you’re writing robust code, tests are essential for verifying that your application logic is correct, reliable, and efficient. However, the value of your tests depends on how well they demonstrate these qualities. Obstacles such as complex logic and unpredictable dependencies can make writing valuable tests challenging. The Python mock object library, unittest.mock, can help you overcome these obstacles.
By the end of this course, you’ll be able to:
- Create Python mock objects using Mock
- Assert that you're using objects as you intended
- Inspect usage data stored on your Python mocks
- Configure certain aspects of your Python mock objects
- Substitute your mocks for real objects using patch()
- Avoid common problems inherent in Python mocking
You’ll begin by learning what mocking is and how it will improve your tests!
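To give a taste before the course proper, here's a minimal sketch of Mock and patch() (the objects being mocked are made up for illustration):

import json
from unittest.mock import Mock, patch

# A Mock records how it was called so you can assert on it afterwards
get_user = Mock(return_value={"id": 1, "name": "Ada"})
user = get_user(1)
get_user.assert_called_once_with(1)
print(user["name"])  # Ada

# patch() temporarily swaps a real object for a mock within a limited scope
with patch("json.dumps", return_value="{}"):
    print(json.dumps({"a": 1}))  # {} -- the mock's return value, not real JSON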
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 10, 2026 02:00 PM UTC
Quiz: Python's pathlib Module: Taming the File System
In this quiz, you’ll revisit how to tame the file system with Python’s pathlib module.
You’ll reinforce core pathlib concepts, including checking whether a path points to a file and instantiating Path objects. You’ll revisit joining paths with the / operator and .joinpath(), iterating over directory contents with .iterdir(), and renaming files on disk with .replace().
You’ll also check your knowledge of common file operations such as creating empty files with .touch(), writing text with .write_text(), and extracting filename components using .stem and .suffix.
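As a quick refresher before the quiz, here's a small sketch exercising those operations (the file and directory names are made up for illustration):

from pathlib import Path

notes = Path("project") / "docs" / "notes.txt"        # join with the / operator
same = Path("project").joinpath("docs", "notes.txt")  # or with .joinpath()

notes.parent.mkdir(parents=True, exist_ok=True)
notes.touch()                                    # create an empty file
notes.write_text("remember the milk\n")          # write text to it

print(notes.stem, notes.suffix)                  # notes .txt
print([p.name for p in notes.parent.iterdir()])  # ['notes.txt']

notes.replace(notes.with_name("todo.txt"))       # rename the file on disk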
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]