Planet Python
Last update: February 13, 2026 07:44 PM UTC
February 13, 2026
Python Morsels
Setting default dictionary values in Python
There are many ways to set default values for dictionary key lookups in Python. Which one you should use depends on your use case.
The get method: lookups with a default
The get method is the classic way to look up a value for a dictionary key without raising an exception for missing keys.
>>> quantities = {"pink": 3, "green": 4}
Instead of this:
try:
count = quantities[color]
except KeyError:
count = 0
Or this:
if color in quantities:
count = quantities[color]
else:
count = 0
We can do this:
count = quantities.get(color, 0)
Here's what this would do for a key that's in the dictionary and one that isn't:
>>> quantities.get("pink", 0)
3
>>> quantities.get("blue", 0)
0
The get method accepts two arguments: the key to look up and the default value to use if that key isn't in the dictionary.
The second argument defaults to None:
>>> quantities.get("pink")
3
>>> print(quantities.get("blue"))
None
The setdefault method: setting a default
The get method doesn't modify …
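To illustrate the contrast the article is drawing (this is standard dictionary behavior, not an example taken from the article itself): unlike get, setdefault inserts the default into the dictionary when the key is missing.

```python
quantities = {"pink": 3, "green": 4}

# get returns a default but never changes the dictionary
print(quantities.get("blue", 0))   # 0
print("blue" in quantities)        # False

# setdefault also returns the value, but it stores the default on a miss
print(quantities.setdefault("blue", 0))  # 0
print("blue" in quantities)              # True
print(quantities)                        # {'pink': 3, 'green': 4, 'blue': 0}
```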
Read the full article: https://www.pythonmorsels.com/default-dictionary-values/
February 13, 2026 04:45 PM UTC
Real Python
The Real Python Podcast – Episode #284: Running Local LLMs With Ollama and Connecting With Python
Would you like to learn how to work with LLMs locally on your own computer? How do you integrate your Python projects with a local model? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 13, 2026 12:00 PM UTC
Peter Hoffmann
Garmin inReach Mini 2 Leaflet check-in map
We will be trekking the eastern part of the Great Himalaya Trail in Nepal in March/April. Details on the route and our plans can be found at https://greathimalayatrail.de. Our intent is to keep friends and family updated on our progress. Given that we'll be hiking in quite remote areas, a satellite phone/pager will be our sole means of communication.

After the Garmin inReach Mini 3 was released recently, the inReach Mini 2 was heavily discounted. The inReach Mini 2 has all the features I need: satellite messaging, check-ins, offline mode with navigation, and track recording.
Plans
I'm on the Garmin Essential plan for 18 euros per month. It includes 50 free text messages or weather requests each month, plus unlimited check-in messages. The smaller Enabled plan (10 euros) lacks the unlimited check-ins, while the Standard plan (34 euros) gives you 150 free messages and unlimited live tracking. More details are on the Garmin page.
Messaging
There are three different types of messages that you can send:
Check-In Messages: There are three preset messages. You can configure the recipients at explore.garmin.com. Depending on your Garmin subscription, sending check-in messages is free of charge. In the configuration section, you can enable the option to include your latitude/longitude and a link to the Garmin map in each SMS message. This information is always included for email recipients.
Quick Messages: You can create up to 20 predefined messages so you don’t have to type them while you’re on the trail. The number of free messages you get depends on your Garmin subscription; any additional messages are billed per use. You can create or edit these messages at explore.garmin.com.
Normal Messages: In the Garmin Messenger iPhone app, you can type any custom message and send it to both SMS and email recipients. These messages are billed the same way as quick messages.
You can configure the system to send all messages to any email/SMS recipients. The great thing is that the unlimited check-in messages also include latitude/longitude information. Here is a sample message:
Arrived at Camp
View the location or send a reply to Peter Hoffmann:
https://inreachlink.com/<unique_code>
Peter Hoffmann sent this message from: Lat 48.996386 Lon 8.468849
Do not reply directly to this message.
This message was sent to you using the inReach two-way satellite communicator with GPS. To learn more, visit http://explore.garmin.com/inreach.
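The coordinate line in these notification emails is what makes automated tracking possible. As a small self-contained illustration (the full script later in this post uses the same regular expression), the latitude and longitude can be pulled out of the message body like this:

```python
import re

# Sample inReach email body, as shown above
body = """Arrived at Camp
View the location or send a reply to Peter Hoffmann:
https://inreachlink.com/<unique_code>
Peter Hoffmann sent this message from: Lat 48.996386 Lon 8.468849
Do not reply directly to this message."""

# Match the "Lat <number> Lon <number>" line and convert to floats
m = re.search(r"Lat\s+([-\d.]+)\s+Lon\s+([-\d.]+)", body)
if m:
    lat, lon = float(m.group(1)), float(m.group(2))
    print(lat, lon)  # 48.996386 8.468849
```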
As we do not want to spam all our friends with daily check-ins, I have built a small Leaflet check-in plugin and an IMAP scraper to pull and visualize the check-ins/messages.
Build your own Tracking with Check-In Messages
For battery life reasons, we are not interested in real-time live tracking.
Instead, I've created a small script that checks a dedicated IMAP email account
for check-in messages and publishes them to a server, which then displays the
location of our most recent check-in. Sending a check-in once a day, or during
each break when we are in more remote areas, should give our friends enough
information in case any problems arise.

A straightforward Python script connects to my IMAP server, retrieves all emails from
the Garmin InReach service, parses the message, timestamp, and latitude/longitude, and
then updates a positions.json file on my webserver.
Then a simple static HTML file with a Leaflet map pulls
the positions.json file and displays the messages/check-ins on the map.
A demo of the map is available at:
https://hoffmann.github.io/garmin-inreach-checkin-map/html/map.html
and you can check out the code at
https://github.com/hoffmann/garmin-inreach-checkin-map
#!/usr/bin/env python3
"""Poll IMAP inbox for Garmin inReach emails and extract positions into positions.json."""
import email
import email.utils
import imaplib
import json
import os
import re
import sys
from datetime import datetime, timezone

BOILERPLATE_PREFIXES = (
    "View the location",
    "Do not reply",
    "This message was sent",
)

POSITIONS_FILE = os.path.join(
    os.path.dirname(os.path.abspath(__file__)), "positions.json"
)


def connect(host, user, password):
    imap = imaplib.IMAP4_SSL(host)
    imap.login(user, password)
    return imap


def search_inreach_emails(imap):
    imap.select("INBOX")
    status, data = imap.search(
        None, '(OR FROM "no.reply.inreach@garmin.com" SUBJECT "inReach message")'
    )
    if status != "OK":
        return []
    msg_ids = data[0].split()
    return msg_ids


def get_text_body(msg):
    if msg.is_multipart():
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                charset = part.get_content_charset() or "utf-8"
                return part.get_payload(decode=True).decode(charset)
    else:
        charset = msg.get_content_charset() or "utf-8"
        return msg.get_payload(decode=True).decode(charset)
    return ""


def parse_timestamp(msg):
    date_str = msg.get("Date")
    if not date_str:
        return None
    dt = email.utils.parsedate_to_datetime(date_str)
    dt_utc = dt.astimezone(timezone.utc)
    return dt_utc.strftime("%Y-%m-%dT%H:%M:%SZ")


def parse_body(body):
    lines = body.strip().splitlines()
    # Extract message: first non-empty line
    message = ""
    for line in lines:
        stripped = line.strip()
        if stripped:
            message = stripped
            break
    # Check if the message is boilerplate
    if any(message.startswith(prefix) for prefix in BOILERPLATE_PREFIXES):
        message = ""
    # Extract lat/lon
    lat, lon = None, None
    m = re.search(r"Lat\s+([-\d.]+)\s+Lon\s+([-\d.]+)", body)
    if m:
        lat = float(m.group(1))
        lon = float(m.group(2))
    return message, lat, lon


def parse_email(msg_data):
    msg = email.message_from_bytes(msg_data)
    timestamp = parse_timestamp(msg)
    if not timestamp:
        return None
    body = get_text_body(msg)
    if not body:
        return None
    message, lat, lon = parse_body(body)
    if lat is None or lon is None:
        return None
    entry = {
        "timestamp": timestamp,
        "lat": lat,
        "lon": lon,
    }
    if message:
        entry["msg"] = message
    return entry


def load_positions():
    if os.path.exists(POSITIONS_FILE):
        with open(POSITIONS_FILE) as f:
            return json.load(f)
    return []


def save_positions(positions):
    with open(POSITIONS_FILE, "w") as f:
        json.dump(positions, f, indent=2)
        f.write("\n")


def main():
    host = os.environ.get("IMAP_HOST")
    user = os.environ.get("IMAP_USER")
    password = os.environ.get("IMAP_PASSWORD")
    if not all([host, user, password]):
        print("Error: Set IMAP_HOST, IMAP_USER, and IMAP_PASSWORD environment variables.")
        sys.exit(1)

    imap = connect(host, user, password)
    try:
        msg_ids = search_inreach_emails(imap)
        print(f"Found {len(msg_ids)} inReach email(s)")
        new_entries = []
        for msg_id in msg_ids:
            status, data = imap.fetch(msg_id, "(RFC822)")
            if status != "OK":
                continue
            entry = parse_email(data[0][1])
            if entry:
                new_entries.append(entry)
    finally:
        imap.logout()

    existing = load_positions()
    existing_timestamps = {p["timestamp"] for p in existing}
    added = 0
    for entry in new_entries:
        if entry["timestamp"] not in existing_timestamps:
            existing.append(entry)
            existing_timestamps.add(entry["timestamp"])
            added += 1
    existing.sort(key=lambda p: p["timestamp"])
    save_positions(existing)
    print(f"Added {added} new position(s) ({len(existing)} total)")


if __name__ == "__main__":
    main()
February 13, 2026 12:00 AM UTC
Armin Ronacher
The Final Bottleneck
Historically, writing code was slower than reviewing code.
It might not have felt that way, because code reviews sat in queues until someone got around to picking them up. But if you compare the actual acts themselves, creation was usually the more expensive part. In teams where people both wrote and reviewed code, it never felt like “we should probably program slower.”
So when more and more people tell me they no longer know what code is in their own codebase, I feel like something is very wrong here and it’s time to reflect.
You Are Here
Software engineers often believe that if we make the bathtub bigger, overflow disappears. It doesn’t. OpenClaw right now has north of 2,500 pull requests open. That’s a big bathtub.
Anyone who has worked with queues knows this: if input grows faster than throughput, you have an accumulating failure. At that point, backpressure and load shedding are the only things that keep the system operable.
If you have ever been in a Starbucks overwhelmed by mobile orders, you know the feeling. The in-store experience breaks down. You no longer know how many orders are ahead of you. There is no clear line, no reliable wait estimate, and often no real cancellation path unless you escalate and make noise.
That is what many AI-adjacent open source projects feel like right now. And increasingly, that is what a lot of internal company projects feel like in “AI-first” engineering teams, and that’s not sustainable. You can’t triage, you can’t review, and many of the PRs cannot be merged after a certain point because they are too far out of date. And the creator might have lost the motivation to actually get it merged.
There is huge excitement about newfound delivery speed, but in private conversations, I keep hearing the same second sentence: people are also confused about how to keep up with the pace they themselves created.
We Have Been Here Before
Humanity has been here before. Many times over. We already talk about the Luddites a lot in the context of AI, but it’s interesting to see what led up to it. Mark Cartwright wrote a great article about the textile industry in Britain during the industrial revolution. At its core was a simple idea: whenever a bottleneck was removed, innovation happened downstream from that. Weaving sped up? Yarn became the constraint. Faster spinning? Fibre needed to be improved to support the new speeds until finally the demand for cotton went up and that had to be automated too. We saw the same thing in shipping that led to modern automated ports and containerization.
As software engineers we have been here too. Assembly did not scale to larger engineering teams, and we had to invent higher level languages. A lot of what programming languages and software development frameworks did was allow us to write code faster and to scale to larger code bases. What it did not do up to this point was take away the core skill of engineering.
While it’s definitely easier to write C than assembly, many of the core problems are the same. Memory latency still matters, physics are still our ultimate bottleneck, algorithmic complexity still makes or breaks software at scale.
Giving Up?
When one part of the pipeline becomes dramatically faster, you need to throttle input. Pi is a great example of this. PRs are auto closed unless people are trusted. It takes OSS vacations. That’s one option: you just throttle the inflow. You push against your newfound powers until you can handle them.
Or Giving In
But what if the speed continues to increase? What downstream of writing code do we have to speed up? Sure, the pull request review clearly turns into the bottleneck. But it cannot really be automated. If the machine writes the code, the machine better review the code at the same time. So what ultimately comes up for human review would already have passed the most critical possible review of the most capable machine. What else is in the way? If we continue with the fundamental belief that machines cannot be accountable, then humans need to be able to understand the output of the machine. And the machine will ship relentlessly. Support tickets of customers will go straight to machines to implement improvements and fixes, for other machines to review, for humans to rubber stamp in the morning.
A lot of this sounds both unappealing and reminiscent of the textile industry. The individual weaver no longer carried responsibility for a bad piece of cloth. If it was bad, it became the responsibility of the factory as a whole and it was just replaced outright. As we’re entering the phase of single-use plastic software, we might be moving the whole layer of responsibility elsewhere.
I Am The Bottleneck
But to me it still feels different. Maybe that’s because my lowly brain can’t comprehend the change we are going through, and future generations will just laugh about our challenges. It feels different to me, because what I see taking place in some Open Source projects, in some companies and teams feels deeply wrong and unsustainable. Even Steve Yegge himself now casts doubts about the sustainability of the ever-increasing pace of code creation.
So what if we need to give in? What if we need to pave the way for this new type of engineering to become the standard? What affordances will we have to create to make it work? I for one do not know. I’m looking at this with fascination and bewilderment and trying to make sense of it.
Because it is not the final bottleneck. We will find ways to take responsibility for what we ship, because society will demand it. Non-sentient machines will never be able to carry responsibility, and it looks like we will need to deal with this problem before machines achieve this status. Regardless of how bizarre they appear to act already.
I too am the bottleneck now. But you know what? Two years ago, I too was the bottleneck. I was the bottleneck all along. The machine did not really change that. And for as long as I carry responsibilities and am accountable, this will remain true. If we manage to push accountability upwards, it might change, but so far, how that would happen is not clear.
February 13, 2026 12:00 AM UTC
February 12, 2026
Real Python
Quiz: Python's list Data Type: A Deep Dive With Examples
Get hands-on with Python lists in this quick quiz. You’ll revisit indexing and slicing, update items in place, and compare list methods.
Along the way, you’ll look at reversing elements, using the list() constructor and the len() function, and distinguishing between shallow and deep copies. For a refresher, see the Real Python guide to Python lists.
February 12, 2026 12:00 PM UTC
Python Software Foundation
Python is for Everyone: Inside the PSF's D&I Work Group
Why This Matters
You might be asking yourself: Why invest so much energy in diversity and inclusion work, especially now when it’s being questioned and de-prioritized?
But we all know the truth: barriers exist everywhere. A meetup announcement only in English. Documentation that assumes reliable internet. Examples that reference things unfamiliar to most of the world. Code of conduct violations without clear guidance for organizers. Communities wanting to start but not knowing where to begin.
Because the Python community is global, and it should feel that way. When someone discovers Python in Nigeria, Brazil, India, or anywhere else in the world, they should see a community that welcomes them. They should find resources in their language, examples that reflect their context, and people who understand their challenges.
Diversity isn’t just about representation. It’s about making Python better. More approachable. More accessible. Different perspectives lead to better solutions, more creative problem-solving, and software that works for more people. When we only hear from one type of voice, we miss opportunities to improve.
Right now, when diversity and inclusion efforts are being rolled back in many places, it’s tempting to stay quiet. But that’s exactly why we need to speak up about the work we’re doing. The Python Software Foundation made a commitment: to support a diverse and international community of Python programmers. The D&I Work Group exists to make that commitment real, tangible, and actionable.
How The Diversity and Inclusion Workgroup Started
The PSF Board created the Diversity & Inclusion Work Group in 2020 with a clear purpose: to amplify the Python Software Foundation’s mission of supporting a diverse and international community. It was a good idea. People wanted to join.
Members came from different regions around the world, excited to be part of the group and looking forward to creating an impact because all of us, in one way or another, felt something was missing: the need to amplify and embrace diversity through more inclusion.
Most discussions related to diversity and how we could spread awareness. The chats on our Slack channel were active with people sharing different opinions and resources.
PyConUS D&I Panel Discussions
We held annual D&I panels on important topics that are often set aside. In 2022 and 2023 at PyCon US, we discussed the board's lack of global representation, the scarcity of core developers from outside the US and Europe despite the huge number of Pythonistas around the world, and how people could contribute to changing that.
PyConUS 2022 D&I Panel Discussion
Participating D&I Workgroup members: Georgi Ker, Reuven Lerner, Anthony Shaw, Lorena Mesa
PyConUS 2023 D&I Panel Discussion
Participating D&I Workgroup members: Marlene Mhangami, Débora Azevedo, Iqbal Abdullah, Georgi Ker
PyConUS 2024 D&I Panel Discussion
In 2024, we invited different Python community leaders: Abigail Mesrenyame Dogbe, Dima Dinama, Jules Juliano Barros Lima, Jessica Greene, and Mason Egger, who shared their work, their involvement, and their challenges as community leaders.
Participating D&I Workgroup members: Débora Azevedo, Georgi Ker
PyConUS 2025 D&I Panel Discussion
In 2025, due to political changes happening around the world, we invited Cristián Maureira-Fredes, Jay Miller, and Naomi Ceder to the D&I Workgroup panel to talk about “The Work Still Matters: Inclusion, Access, and Community in 2025.”
Participating D&I Workgroup members: Alla Barbalat, Keanya Phelps
The panels were great. The discussions in our workgroup were great. But something was still not going right.
Building a Global Work Group
In 2024, when I took on the role of chair, the D&I Work Group was at a crossroads. The PSF Board had created it to amplify the Foundation’s mission, and there was genuine interest from the community, but without a clear direction or structure, momentum had faded. People wanted to join, but they didn’t know what the group would actually do.
I knew we needed two things: a clear purpose and genuine diversity in our membership. Not just diversity as an abstract goal, but real representation from the regions where Python communities were thriving.
I started by doing research that I could share with the rest of the workgroup members. I went through the Python.org calendar, cataloging events and projects happening around the world. What I found was that Python communities were active everywhere (as expected), but they weren’t really represented in our Work Group’s leadership. I identified regional gaps and proposed a structure that would ensure fair representation: North America, South America, Africa, Asia, Oceania, the Middle East, and Europe.
The current representation as of October 2024 across regions is as follows:
- North America: 3
- South America: 3
- Asia: 3
- Europe: 3
- Africa: 3
- Oceania: 1
- Middle East: 2
It is important to note that each member has the freedom to choose which region they represent. As a D&I Workgroup, we do not dictate regional representation. This decision is entirely up to the individual, ensuring that members represent the region where they feel most connected or comfortable. We also shared which countries would be represented in which region to be explicit for interested parties.
We launched a public outreach campaign to the community. People applied, and the group voted to bring in new members. For the first time, we had a WorkGroup that truly reflected the global Python community.
But diverse perspectives meant many different ideas. In two workshop sessions, we listed every initiative people wanted to pursue, grouped them by theme, discussed priorities, and filtered down to three focused initiatives we could realistically accomplish with volunteer time and resources.
These three initiatives are:
- Concentrate on Outreach to Communities - Creating resources and templates to help communities improve their D&I efforts
- How to Setup a Local Python Community - A comprehensive guide for organizers starting new user groups
- Continue Collecting Survey Feedback from the Python Community - Gathering data to understand where we need to focus
The three initiatives we’re working on aren’t abstract goals. They’re about giving people the tools and support they need to build inclusive communities where they are. And of course, there are many other things we would like to work on. But filtering down to what we can concentrate on right now will give us better results, and we will continue to move on and work on the others as we progress.
We meet twice monthly across different time zones. We noticed that monthly meetings aren’t frequent enough, coordination is challenging, and volunteer time is limited. But we’re learning and adapting.
This wasn’t just about having good ideas. It was about creating a sustainable framework where a volunteer group could actually make progress.
Meet the Members of the Workgroup
The heart of the D&I Work Group is the people who show up, month after month, to do this work. They come from different regions, different backgrounds, and different parts of the Python ecosystem. We have 19 active members representing all regions, plus a PSF staff member.
Welcoming New Members
We’re excited to welcome our five new members: Kalyan Prasad, representing Asia; Julio Batista Silva, representing Europe; Abhijeet Mote, representing North America; and Theresa Seyram Agbenyegah and Emmanuel Ugwu, representing Africa. They will bring fresh perspectives and energy to our work.
Thanking our Former Members
We also want to acknowledge and thank our former members who have contributed to the D&I Work Group: Miguel Johnson, Marlene Mhangami, Tereza Iofciu, Iqbal Abdullah, Cynthia Xin, Mariam Haji, and Boluwaji Akinlade. Their dedication helped shape what this group has become, and we’re grateful for everything they contributed.
Our current members:
South America (3 members)
North America (4 members)
Asia (3 members)
Europe (3 members)
Middle East (2 members)
Africa (3 members)
Oceania (1 member)
PSF Staff Member
We also have Marie Nordin from the PSF staff as a voting member of the work group. Marie provides crucial support and coordination, helping bridge our initiatives with the broader PSF mission and ensuring our work has the resources and visibility it needs to succeed. Her dedicated support and active participation have been instrumental in helping us move from discussion to action.
Looking Forward
The D&I Work Group can’t do this work alone. Real change happens when every Python developer, every community organizer, every person writing documentation or teaching a workshop thinks about inclusion in their own context.
You don’t need to join a work group to make a difference. You can:
- In your local community: Start a Python meetup in your area. Make it beginner-friendly. Announce it in multiple languages if your region is multilingual. Choose accessible venues.
- In your workplace: Mentor someone from a different background. Share knowledge with junior developers. Advocate for diverse hiring and inclusive team practices.
- In your open source projects: Write clear documentation. Add examples that reflect different use cases. Make your contribution guidelines welcoming to newcomers. Consider what barriers might prevent someone from contributing.
- In your daily work: Question assumptions. When you write code examples, ask: “Would this make sense to someone who doesn’t share my context?” When you organize an event, ask: “Who might feel excluded, and how can I change that?”
We all know that Python’s success isn’t just about the language. It’s about the community. And that’s the hard truth. The more diverse that community is, the more use cases we discover, the more creative solutions we find, the more people benefit from what we build together.
Diversity and inclusion work isn’t a side project or a “nice-to-have”. It’s how we ensure Python remains a language for everyone, everywhere. It’s how we make sure the next generation of developers (wherever they are, whatever their background) sees Python as a community they can be part of.
The work is hard. The progress is slow, and it’s often invisible. But it matters. Every small action compounds. Every person who chooses to be intentional about inclusion makes it easier for the next person.
That’s what keeps us going in the workgroup. That’s why we show up every month. If you want to learn more about the D&I Work Group, get involved, or share your own experiences with building inclusive communities, you can write to us at diversity-inclusion-wg@python.org.
We’re always learning, and we’d love to hear from you.
February 12, 2026 07:53 AM UTC
PyBites
The Vibe Coding trap
One of my readers replied to an email I sent a couple of weeks ago, and we got into a brief discussion on what I’ll call Skills Erosion.
They brought up the point that by leaning too heavily on AI to generate code, people were losing their edge.
It’s a good point that’s top of mind for many devs. I’m guessing you’ve thought about it too. After all, if AI writes all of our code, how are we actually learning anything?
The exchange made me go down a rabbit hole and I found the data quite interesting.
We all know what Vibe Coding is, so I’ll save the explanation.
One of the biggest issues with vibe coding is that it creates the Illusion of Competence.
I mean, you feel 5x more productive with AI on your side, right? (I know I do!)
But reports from Veracode (2026) show that 45% of AI-generated code contains security flaws.
The companies that rely on AI to vibe code their products are shipping code that introduces security events just waiting to make the news. This is what happens when we trust the machine more than our own earned expertise.
It’s no surprise then to hear that some teams and companies are starting to apply the brakes and slow down their AI adoption.
But this is the catch. Not all companies are slowing things down. Some are ramping it up. (Look at Amazon’s announcement last Thursday – US$200bn investment in AI infrastructure in 2026).
Where does this leave us as devs?
I believe a balance needs to be found. And I said as much to our reader.
You can’t just be a hold-out.
At the end of the day, so many of the people holding the keys to our pay cheques expect us to use AI. Hiring Managers, CTOs, CEOs, Shareholders, Investors – you name it.
If you refuse, you look obsolete.
The solution? Be the architect and auditor, not the operator.
The developers who will come out on top are the ones who:
- Spot the hallucination: know why that SQL query is inefficient.
- Question AI recommendations: don’t treat the generated code as gospel. Question design decisions (or a lack thereof).
- Refactor the mess: turn spaghetti code into clean architecture.
- Secure the build: know where the vulnerabilities hide.
- Sharpen the saw: keep your skills sharp outside of AI usage. Keep learning, keep growing.
I firmly believe that the AI hype will plateau. We’re already starting to see the cracks.
The real questions to ask yourself: where will you be when things start shifting back in our favour? Will you be the senior dev ready to jump in and save the day?
Don’t let your skills erode: use the tools but master the craft.
What do you think?
Join me in the community for a chat on the topic. You can also check out a post I created for people to share their thoughts on AI + LLMs + coding.
Julian
February 12, 2026 12:15 AM UTC
Seth Michael Larson
Automated public shaming of open source maintainers
This is a follow-up to “New era of slop security reports for open source”.
Matplotlib, the unfortunate target of this new type of harassment, publishes a clear generative AI use policy. That boundary was not respected by generative AI users and a pull request was opened by an OpenClaw agent.
If the website the agent's GitHub comment links to is any indication, within 4 days of deployment this agent generated a “take-down blog post” intended to publicly shame an open source maintainer (who has published their own thoughts on the incident) for closing a GitHub pull request per the project's own policy on generative AI use. In this particular case, the issue was a “Good First Issue”, which are intentionally left unimplemented by maintainers as a potential on-ramp for new contributors to the project.
It should go without saying that this behavior is unacceptable, and that deploying generative AI agents in this way is deeply irresponsible and has real negative consequences for volunteers contributing to critical software projects. This type of abuse is preventable: generative AI platforms need to implement better safeguards.
Thanks for keeping RSS alive! ♥
February 12, 2026 12:00 AM UTC
February 11, 2026
PyCharm
Python Unplugged on PyTV – A Free Online Python Conference for Everyone
The PyCharm team loves being part of the global Python community. From PyCon US to EuroPython to every PyCon in between, we enjoy the atmosphere at conferences, as well as meeting people who are as passionate about Python as we are. This includes everyone: professional Python developers, data scientists, Python hobbyists and students.
However, we know that being able to attend a Python conference in person is not something that everyone can do, either because they don’t have a local conference, or cannot travel to one. So within the PyCharm team we started thinking: what if we could bring the five-star experience of Python conferences to everyone? What if everyone could have the experience of learning from professional speakers, accessing great networking opportunities, hearing from various voices from across the community, and – most importantly – having fun, no matter where they are in the world?
Python is for Everyone – Announcing Python Unplugged on PyTV!
After almost a year of planning, we’re proud to announce we’ll be hosting the first ever PyTV – a free online conference for everyone!
Join us on March 4th 2026, for an unforgettable, non-stop event, streamed from our studio in Amsterdam. We’ll be joined live by 15 well-known and beloved speakers from Python communities around the globe, including Carol Willing, Deb Nicholson, Sheena O’Connell, Paul Everitt, Marlene Mhangami, and Carlton Gibson. They’ll be speaking about topics such as core Python, AI, community, web development and data science.
You can get involved in the fun as well! Throughout the livestream, you can join our chat on Discord, where you can interact with other participants and our speakers. We’ve also prepared games and quizzes, with fabulous prizes up for grabs! You might even be able to get your hands on some of the super cool conference swag that we designed specifically for this event.
What are you waiting for? Sign up here.
If you are local to Amsterdam, you can also sign up for the PyLadies Amsterdam meetup. It will be held on the same day as the conference, and will give you a chance to meet some of the PyTV speakers in person.
February 11, 2026 04:37 PM UTC
Django Weblog
Django Steering Council 2025 Year in Review
The members of the Steering Council wanted to provide you all with a quick TL;DR of our work in 2025.
First off, we were elected at the end of 2024 and got started in earnest in early 2025 with the mission to revive and dramatically increase the role of the Steering Council.
We're meeting for a video conference at least monthly; you can deep-dive into the meeting notes to see what we've been up to. We have also set up Slack channels that we use to communicate between meetings and keep action items moving along.
One of the first things we did was temporarily suspend much of the process around DEP 10. Its heart is in the right place, but it's just too complex and cumbersome day-to-day with a primarily volunteer organization. We're slowly making progress on a revamped and simplified process that addresses our concerns. It is our goal to finish this before our terms expire.
New Features Process
We've moved the process for proposing new features out of the Django Forum and mailing lists and into the new-features GitHub repository.
We made this change for a variety of reasons, but the largest being to reduce the workload for the Django Fellows in shepherding the process and answering related questions.
Community Ecosystem Page
One of our main goals is to increase the visibility of the amazing Django third-party package ecosystem. Long time Django users know which packages to use, which you can trust, and which ones may be perfect for certain use cases. However, MANY newer or more casual Django users are often unaware of these great tools and not sure where to even begin.
As a first step, we've added the Community Ecosystem page which highlights several amazing resources to keep in touch with what is going on with Django, how to find recommended packages, and a sample list of those packages the Steering Council itself recommends and uses frequently.
Administrative bits
There has been work on better formalizing and documenting our processes and building documentation to make it much easier for the next Steering Council members.
There has also been a fair bit of work around organizing Google Summer of Code participants, to help ensure the projects undertaken are ones that will ultimately be accepted smoothly into Django.
Another area we have focused on is a simplified DEP process. We're still formalizing this, but the idea is to have the Steering Council do the majority of the heavy lifting on writing these and in a format that is shorter/simpler to reduce the friction of creating larger more complicated DEPs.
We have also been in discussions with various third parties about acquiring funding for some of the new features and updates on the horizon.
It's been a productive year and we're aiming to have 2026 be as productive if not more so. We're still setting all of our 2026 goals and will report on those soon.
Please reach out to the Steering Council directly if you have any questions or concerns.
February 11, 2026 02:44 PM UTC
Real Python
What Exactly Is the Zen of Python?
The Zen of Python is a collection of 19 aphorisms that capture the guiding principles behind Python’s design. You can display them anytime by running import this in a Python REPL. Tim Peters wrote them in 1999 as a joke, but they became an iconic part of Python culture that was even formalized as PEP 20.
By the end of this tutorial, you’ll understand:
- The Zen of Python is a humorous poem of 19 aphorisms describing Python’s design philosophy
- Running import this in a Python interpreter displays the complete text of the Zen of Python
- Tim Peters wrote the Zen of Python in 1999 as a tongue-in-cheek comment on a mailing list
- The aphorisms are guidelines, not strict rules, and some intentionally contradict each other
- The principles promote readability, simplicity, and explicitness while acknowledging that practicality matters
Experienced Pythonistas often refer to the Zen of Python as a source of wisdom and guidance, especially when they want to settle an argument about certain design decisions in a piece of code. In this tutorial, you’ll explore the origins of the Zen of Python, learn how to interpret its mysterious aphorisms, and discover the Easter eggs hidden within it.
You don’t need to be a Python master to understand the Zen of Python! But you do need to answer an important question: What exactly is the Zen of Python?
Free Bonus: Click here to download your Easter egg hunt to discover what’s hidden inside Python!
Take the Quiz: Test your knowledge with our interactive “What Exactly Is the Zen of Python?” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
What Exactly Is the Zen of Python?Learn and test the Zen of Python, its guiding aphorisms, and tips for writing clearer, more readable, and maintainable code.
In Short: It’s a Humorous Poem Listing Python Philosophies
According to the Python glossary, which contains definitions of popular terms related to this programming language, the Zen of Python is a:
Listing of Python design principles and philosophies that are helpful in understanding and using the language. The listing can be found by typing “import this” at the interactive prompt. (Source)
Indeed, when you type the indicated import statement into an interactive Python REPL, then you’ll be presented with the nineteen aphorisms that make up the Zen of Python:
>>> import this
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
The byline reveals the poem’s author, Tim Peters, who’s a renowned software engineer and a long-standing CPython core developer best known for inventing the Timsort sorting algorithm. He also authored the doctest and timeit modules in the Python standard library, along with making many other contributions.
Take your time to read through the Zen of Python and contemplate its wisdom. But don’t take the aphorisms literally, as they’re more of a guiding set of principles rather than strict instructions. You’ll learn about their humorous origins in the next section.
How Did the Zen of Python Originate?
The idea of formulating a single document that would encapsulate Python’s fundamental philosophies emerged among the core developers in June 1999. As more and more people began coming to Python from other programming languages, they’d often bring their preconceived notions of software design that weren’t necessarily Pythonic. To help them follow the spirit of the language, a set of recommendations for writing idiomatic Python was needed.
The initial discussion about creating such a document took place on the Python mailing list under the subject The Python Way. Today, you can find this conversation in the official Python-list archive. If you look closely at the first message from Tim Peters in that thread, then you’ll notice that he clearly outlined the Zen of Python as a joke. That original form has stuck around until this day:
Clearly a job for Guido alone – although I doubt it’s one he’ll take on (fwiw, I wish he would too!). Here’s the outline he would start from, though <wink>:
Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren’t special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one– and preferably only one –obvious way to do it. Although that way may not be obvious at first unless you’re Dutch. Now is better than never. Although never is often better than right now. If the implementation is hard to explain, it’s a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea – let’s do more of those!
There you go: 20 Pythonic Fec^H^H^HTheses on the nose, counting the one I’m leaving for Guido to fill in. If the answer to any Python design issue isn’t obvious after reading those – well, I just give up <wink>. (Source)
The wink and the playful way of self-censoring some toilet humor are clear giveaways that Tim Peters didn’t want anyone to take his comment too seriously.
Note: In case you didn’t get the joke, he started to write something like Feces but then used ^H—which represents a Backspace in older text editors like Vim—to delete the last three letters and make the word Theses. Therefore, the intended phrase is 20 Pythonic Theses.
Eventually, these nearly twenty theses got a proper name and were formally codified in a Python Enhancement Proposal document. Each PEP document receives a number. For example, you might have stumbled on PEP 8, which is the style guide for writing readable Python code. Perhaps as an inside joke, the Zen of Python received the number PEP 20 to signify the incomplete number of aphorisms in it.
To win your next argument about what makes good Python code, you can back up your claims with the Zen of Python. If you’d like to refer to a specific aphorism instead of the entire poem, then consider visiting pep20.org, which provides convenient clickable links to each principle.
And, in case you want to learn the poem by heart while having some fun, you can now listen to a song with the Zen of Python as its lyrics. Barry Warsaw, another core developer involved with Python since its early days, composed and performed this musical rendition. The song became the closing track on a special vinyl record entitled The Zen Side of the Moon, which was auctioned at PyCon US 2023.
Okay. Now that you have a rough idea of what the Zen of Python is and how it came about, you might be asking yourself whether you should really follow it.
Should You Obey the Zen of Python?
Read the full article at https://realpython.com/zen-of-python/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 11, 2026 02:00 PM UTC
Quiz: What Exactly Is the Zen of Python?
In this quiz, you’ll test your understanding of The Zen of Python.
By working through this quiz, you’ll revisit core aphorisms and learn how they guide readable, maintainable, and Pythonic code.
The questions explore practical tradeoffs like breaking dense expressions into smaller parts, favoring clarity over cleverness, and making code behavior explicit.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 11, 2026 12:00 PM UTC
Nicola Iarocci
Eve 2.2.5
Eve v2.2.5 was just released on PyPI. It brings the pagination fix discussed in a previous post. Many thanks to Calvin Smith for contributing to the project.
February 11, 2026 09:44 AM UTC
Python Morsels
Need switch-case in Python? It's not match-case!
Python's match-case is not a switch-case statement. If you need switch-case, you can often use a dictionary instead.
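When the branches just map a value to a result, a plain dictionary lookup with a default often does the job. A minimal sketch (the status codes are only illustrative):

```python
# A dictionary lookup with a default often replaces a switch-case.
def status_text(code):
    return {
        200: "OK",
        404: "Not Found",
        500: "Internal Server Error",
    }.get(code, "Unknown")

print(status_text(404))  # Not Found
print(status_text(418))  # Unknown
```

Unlike a chain of if/elif branches, the mapping is data, so it can be built once, shared, or extended at runtime.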
The power of match-case
Python's match statement is for structural pattern-matching, which sounds complicated because it is a bit complicated.
The match statement has a different way of parsing expressions within its case statements that's kind of an extension of the way that Python parses code in general.
And again, that sounds complex because it is.
I'll cover the full power of match-case another time, but let's quickly look at a few examples that demonstrate the power of match-case.
Matching iterables
Python's match statement can be …
Read the full article: https://www.pythonmorsels.com/switch-case-in-python/
February 11, 2026 12:00 AM UTC
Seth Michael Larson
Cooler Analytics
You don't need analytics on your blog, but maybe you need analytics for your cooler?
The last place you’d expect to find analytics.
Last Sunday was the Super Bowl in the USA, where former Vikings quarterback Sam Darnold and the Seahawks trounced the Patriots 29–13. We were also reminded who the top players are in the USA economy. Surprise, it's still generative AI, cryptocurrencies, sports betting, and surveillance.
Anyway, Trina and I hosted a Super Bowl watch party and I take pride in stocking the coolers. I usually do some combination of vibes-based “what was popular last time” and introducing a new wild-card item to see if something sticks. I am a big believer in human-based curation, and this is that at a much smaller scale. Just for fun I calculated the actual “analytics” of the coolers from this party:
| Beverage | Alc? | floz / Unit | Delta (Units) | # Before | # After |
|---|---|---|---|---|---|
| Diet Dr. Pepper | No | 12 | -4 | 12 | 8 |
| Diet Coke | No | 12 | -3 | 12 | 9 |
| Coke Zero Mini (1) | No | 7.5 | -3 | 10 | 7 |
| Chi Forest | No | 11.16 | -12 | 24 | 12 |
| Pineapple Juice | No | 8 | -3 | 24 | 21 |
| Vita Coconut Water (2) | No | 11.1 | -10 | 18 | 8 |
| Stilly Seltzers | Yes | 12 | -2 | 8 | 6 |
| Truly Seltzers (1) | Yes | 12 | -2 | 12 | 10 |
| Soju | Yes | 12.7 | -4.5 | 8 | ~3.5 |
| Castle Danger Cream Ale | Yes | 12 | -6 | 8 | 2 |
- (1) Brought by friends, thank you!
- (2) Usually Kirkland Signature is great, in this case skip the generic and buy the name brand.
This time Pineapple Juice was the wild-card, and unfortunately it didn't pan out! At a previous party we hosted a friend brought a few and I loved the idea of having cans of juice for a “flat” option that is sweet and non-alcoholic. Soda, coconut water and Chi Forest dominated the non-alcoholic category.
Chi Forest comes in 4 flavors, and there is a significant difference between which flavors were popular. Unfortunately you can't buy individual flavors of Chi Forest at Costco, only a 24-unit variety pack. Personally my favorite flavor is Pomelo, so I'm not complaining about leftovers.
| Beverage | # Before | # After |
|---|---|---|
| Chi Forest (Lychee) | 6 | 1 |
| Chi Forest (Peach) | 6 | 2 |
| Chi Forest (Pomelo) | 6 | 4 |
| Chi Forest (Strawberry) | 6 | 5 |
Here's the overall stats by category. I can use the number of attendees (~20) to approximately forecast how much I should stock in a different year.
| Category | Delta (floz) | Before (floz) | % |
|---|---|---|---|
| Soda, Juice | 264.42 | 480.0 | 55% |
| Coconut Water | 111.00 | 199.8 | 56% |
| Flavored Spirits | 105.15 | 341.6 | 30% |
| Beer | 72.00 | 96.0 | 75% |
| All Alcoholic | 177.15 | 437.6 | 40% |
| All Non-Alcoholic | 375.42 | 679.8 | 55% |
| All Beverages | 441.57 | 1117.4 | 39% |
Send me your favorite hosting tip or unique ways that you curate for others. I hope this little post inspired you to “juice” your everyday human curation with simple analytics in the future. 🍻 Cheers!
Thanks for keeping RSS alive! ♥
February 11, 2026 12:00 AM UTC
February 10, 2026
Talk Python to Me
#536: Fly inside FastAPI Cloud
You've built your FastAPI app, it's running great locally, and now you want to share it with the world. But then reality hits -- containers, load balancers, HTTPS certificates, cloud consoles with 200 options. What if deploying was just one command? That's exactly what Sebastian Ramirez and the FastAPI Cloud team are building. On this episode, I sit down with Sebastian, Patrick Arminio, Savannah Ostrowski, and Jonathan Ehwald to go inside FastAPI Cloud, explore what it means to build a "Pythonic" cloud, and dig into how this commercial venture is actually making FastAPI the open-source project stronger than ever.
Episode sponsors: Command Book (talkpython.fm/commandbookapp), Python in Production (talkpython.fm/devopsbook), Talk Python Courses (talkpython.fm/training)
Links from the show:
- Guests: Sebastián Ramírez (github.com/tiangolo), Savannah Ostrowski (github.com/savannahostrowski), Patrick Arminio (github.com/patrick91), Jonathan Ehwald (github.com/DoctorJohn)
- FastAPI Labs: fastapilabs.com
- Quickstart: fastapicloud.com/docs/getting-started/
- An episode on diskcache: talkpython.fm/episodes/show/534/diskcache-your-secret-python-perf-weapon
- Fastar: github.com/DoctorJohn/fastar
- FastAPI: The Documentary: www.youtube.com/watch?v=mpR8ngthqiE
- Tailwind CSS Situation: adams-morning-walk.transistor.fm/episodes/we-had-six-months-left
- FastAPI Job Meme: fastapi.meme
- Migrate an Existing Project: fastapicloud.com/docs/getting-started/existing-project/
- Join the waitlist: fastapicloud.com
- Talk Python CLI announcement: talkpython.fm/blog/posts/talk-python-now-has-a-cli/
- Talk Python CLI on GitHub: github.com/talkpython/talk-python-cli
- Download Command Book: commandbookapp.com
- Command Book announcement post: mkennedy.codes/posts/your-terminal-tabs-are-fragile-i-built-something-better/
- Watch this episode on YouTube: www.youtube.com/watch?v=d0LpovstIHo
- Episode #536 deep-dive: talkpython.fm/536
- Episode transcripts: talkpython.fm/episodes/transcript/536/fly-inside-fastapi-cloud
- Theme song: 🥁 Served in a Flask 🎸: talkpython.fm/flasksong
- Don't be a stranger: youtube.com/@talkpython, @talkpython.fm (Bluesky), @talkpython@fosstodon.org (Mastodon), @talkpython (X), @mkennedy.codes (Bluesky), @mkennedy@fosstodon.org (Mastodon), @mkennedy (X)
February 10, 2026 11:17 PM UTC
PyCoder’s Weekly
Issue #721: Classification With zstd, Callables, Gemini, and More (Feb. 10, 2026)
#721 – FEBRUARY 10, 2026
View in Browser »
Text Classification With Python 3.14’s zstd Module
There is commonality between text classifiers and compression and there are algorithms out there to do one with the other, but it requires an incremental compressor. Python 3.14 added zstd which supports this feature, allowing Max to take a stab at doing ML with a compressor.
MAX HALFORD
Is It a Class or a Function?
If a callable feels like a function, we often call it a function… even when it’s not! The Python standard library is filled with things that we think are functions but really are callables.
TREY HUNNER
A Conference for Developers Building Reliable AI
Replay is a practical conference for developers building real systems. Join our Python AI & versioning workshop covering durable AI agents, safe workflow evolution, and production-ready deployment techniques →
TEMPORAL sponsor
Getting Started With Google Gemini CLI
Learn how to use Gemini CLI to bring Google’s AI-powered coding assistance into your terminal for faster code analysis, debugging, and fixes.
REAL PYTHON course
Articles & Tutorials
Improving Your GitHub Developer Experience
What are ways to improve how you’re using GitHub? How can you collaborate more effectively and improve your technical writing? This week on the show, Adam Johnson is back to talk about his new book, “Boost Your GitHub DX: Tame the Octocat and Elevate Your Productivity”.
REAL PYTHON podcast
What Arguments Was Python Called With?
In one of David’s libraries, he needed to detect whether Python got called with the -m argument. Now that Python 3.9 is EOL, he’s able to remove a giant hack and replace it with a single line of code.
DAVID LORD
Speeding Up NumPy With Parallelism
Parallelism can speed up your NumPy code and can still benefit from other optimizations. This article covers everything from single threaded parallelism to Numba and more.
ITAMAR TURNER-TRAURING
The Terminal: First Steps & Useful Commands for Python Devs
Learn your way around the Python terminal. You’ll practice basic commands, activate virtual environments, install packages with pip, and keep track of your code using Git.
REAL PYTHON
Anatomy of a Python Function
You call Python functions all the time, but do you know what all the parts are called? Some terminology is consistent in the community and some is not.
ERIC MATTHES
Sorting Strategies for Optional Fields in Django
How to control NULL value placement when sorting Django QuerySets using F() expressions.
BLOG.MAKSUDUL.BD • Shared by Maksudul Haque
Natural Language Web Scraping With ScrapeGraph
Web scraping without selector maintenance. ScrapeGraphAI uses LLMs to extract data from any site using plain English prompts and Pydantic schemas.
CODECUT.AI • Shared by Khuyen Tran
Alternative Constructors Using Class Methods
You can have more than one way of creating objects, this post shows an application that uses alternative constructors using class methods.
STEPHEN GRUPPETTA
MicroPythonOS Graphical Operating System
MicroPythonOS is a lightweight OS for microcontrollers that targets applications with graphical user interfaces, with a look similar to Android/iOS.
JEAN-LUC AUFRANC
Django (Anti)patterns
A set of Django (anti)patterns: patterns and things to avoid when building a web application with Django.
DJANGO-ANTIPATTERNS.COM
Dispatch From the Inaugural PyPI Support Specialist
Maria Ashna (Thespi-Brain on GitHub) is the inaugural PyPI Support Specialist and she’s written up how the first year went.
PYPI.ORG
Events
Weekly Real Python Office Hours Q&A (Virtual)
February 11, 2026
REALPYTHON.COM
Python Atlanta
February 13, 2026
MEETUP.COM
DFW Pythoneers 2nd Saturday Teaching Meeting
February 14, 2026
MEETUP.COM
DjangoCologne
February 17, 2026
MEETUP.COM
Inland Empire Python Users Group Monthly Meeting
February 18, 2026
MEETUP.COM
PyCon Namibia 2026
February 20 to February 27, 2026
PYCON.ORG
PyCon Mini Shizuoka 2026
February 21 to February 22, 2026
PYCON.JP
Happy Pythoning!
This was PyCoder’s Weekly Issue #721.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
February 10, 2026 07:30 PM UTC
Real Python
Improving Your Tests With the Python Mock Object Library
When you’re writing robust code, tests are essential for verifying that your application logic is correct, reliable, and efficient. However, the value of your tests depends on how well they demonstrate these qualities. Obstacles such as complex logic and unpredictable dependencies can make writing valuable tests challenging. The Python mock object library, unittest.mock, can help you overcome these obstacles.
By the end of this course, you’ll be able to:
- Create Python mock objects using Mock
- Assert that you're using objects as you intended
- Inspect usage data stored on your Python mocks
- Configure certain aspects of your Python mock objects
- Substitute your mocks for real objects using patch()
- Avoid common problems inherent in Python mocking
You’ll begin by learning what mocking is and how it will improve your tests!
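To preview what's ahead, here's a minimal sketch of creating, configuring, and asserting on a Mock (the fake_json name is made up for illustration):

```python
from unittest.mock import Mock

# Create a mock and configure what one of its methods returns.
fake_json = Mock()
fake_json.loads.return_value = {"key": "value"}

# Call it like the real thing; the configured value comes back.
result = fake_json.loads('{"key": "value"}')
print(result)  # {'key': 'value'}

# Inspect usage data: assert the method was called exactly once,
# with exactly these arguments.
fake_json.loads.assert_called_once_with('{"key": "value"}')
```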
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 10, 2026 02:00 PM UTC
Quiz: Python's pathlib Module: Taming the File System
In this quiz, you’ll revisit how to tame the file system with Python’s pathlib module.
You’ll reinforce core pathlib concepts, including checking whether a path points to a file and instantiating Path objects. You’ll revisit joining paths with the / operator and .joinpath(), iterating over directory contents with .iterdir(), and renaming files on disk with .replace().
You’ll also check your knowledge of common file operations such as creating empty files with .touch(), writing text with .write_text(), and extracting filename components using .stem and .suffix.
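As a quick refresher before the quiz, here's a self-contained sketch of several of those operations (the file names are made up):

```python
import tempfile
from pathlib import Path

# Work inside a fresh temporary directory so the example is self-contained.
base = Path(tempfile.mkdtemp())

# Join paths with the / operator, then create the file with .touch().
notes = base / "docs" / "notes.txt"
notes.parent.mkdir(parents=True)
notes.touch()

# Write text, then pull apart the filename.
notes.write_text("hello")
print(notes.stem)    # notes
print(notes.suffix)  # .txt

# Iterate over the directory's contents.
print([p.name for p in notes.parent.iterdir()])  # ['notes.txt']
```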
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 10, 2026 12:00 PM UTC
death and gravity
DynamoDB crash course: part 2 – data model
This is part two of a series covering core DynamoDB concepts, from philosophy all the way to single-table design. The goal is to get you to understand idiomatic usage and trade-offs in under an hour.
Today, we're looking at the DynamoDB data model – what the main abstractions are, what you can do with them, and how they scale.
(While the AWS documentation is mostly comprehensive, it's also all over the place, including some other places that aren't the documentation at all, like the AWS blog. This series brings the important stuff in one place, so you can get a mental model of how it all ties together without having to read the entire documentation twice).
Core components #
According to the documentation, the core components of DynamoDB are tables, items, and attributes. This is accurate in the sense of what you can act on through the API, but can be deceptively simple, and leaves out two other equally important aspects: what you can do with it (the logical model) and how it scales (the physical model).
Let's put it all together, starting from the top.
[Diagram: attributes make up items; items sharing a partition key form a collection (a B-tree ordered by sort key); collections are packed into partitions; partitions make up the table (a hash table keyed by partition key). Collections and partitions are logical/physical constructs only and are not visible in the API.]
API model: tables, items, attributes #
As far as the API is concerned, "a table is a collection of items, and each item is a collection of attributes".1
An item is uniquely identified by two attributes, the partition key and the sort key,2 which together compose its primary key.3 A group of items with the same partition key value is called an item collection,4 but this is more of a logical grouping, and does not exist as a distinct entity in the API.
An attribute is a named data element, with its value either a scalar (number, string, binary, boolean, null), a set of scalars, or a document (a list or map of possibly nested attributes, similar to JSON).
There are no limits on table size or number of items, nor on those of an item collection. Items do have a size limit of 400 KB / item, which indirectly limits attribute size.
As we've seen in the previous article, the core DynamoDB data operations are:
- PutItem, GetItem, UpdateItem, DeleteItem
- Query: items with the same partition key, sorted by sort key, and optionally narrowed down to a specific range of sort keys
- Scan: all the items in the table, possibly in parallel
Besides whole items, the API allows getting and updating specific attributes, as well as filtering query and scan results by expressions using them.
Logical model: hash table of B-trees #
The operations above may seem arbitrarily restrictive – for example, why can't I query items by sort key alone? It might make more sense to think about it like this:
Conceptually, a DynamoDB table is a hash table of B-trees, with partition keys being hash table keys, and sort keys being B-tree keys (making item collections B-trees). The hash table allows efficient find collection by partition key operations; within each collection, the B-tree keeps the items sorted, and allows efficient find item by sort key and find items by sort key range operations.
As a consequence, any access not based on partition and sort key is expensive, since instead of taking advantage of the underlying data structure, you have to go through all the items in the table to find anything (aka a full table scan), and at the scales you'd use DynamoDB at, this can mean billions of items.5
Example
(from here) Take a Music table where items correspond to songs, with Artist as partition key and Song as sort key:
# table Music (partition key: Artist, sort key: Song)
1000mods: !btree
Claws: { Album: Vultures }
Vidage: { Year: 2011 }
Kyuss: !btree
Space Cadet: { }
You can efficiently:
- query songs by artist (sorted by song title)
- get the song by artist and song title
...and that's it, anything else requires a full table scan.
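To make the mental model concrete, here is a toy version of this structure in Python, with a dict of sorted lists standing in for the hash table of B-trees (illustrative only, not the DynamoDB API):

```python
from bisect import bisect_left, bisect_right

class ToyTable:
    """A dict of sorted lists: partition key -> [(sort key, attributes), ...]."""

    def __init__(self):
        self._collections = {}

    def put_item(self, pk, sk, attrs):
        coll = self._collections.setdefault(pk, [])
        keys = [k for k, _ in coll]
        i = bisect_left(keys, sk)
        if i < len(coll) and coll[i][0] == sk:
            coll[i] = (sk, attrs)  # primary key is unique: overwrite
        else:
            coll.insert(i, (sk, attrs))  # keep the collection sorted

    def get_item(self, pk, sk):
        # efficient: hash lookup, then a search within one collection
        for k, attrs in self._collections.get(pk, []):
            if k == sk:
                return attrs
        return None

    def query(self, pk, sk_range=None):
        # all items with this partition key, sorted by sort key,
        # optionally narrowed to an inclusive sort key range
        coll = self._collections.get(pk, [])
        if sk_range is None:
            return list(coll)
        keys = [k for k, _ in coll]
        lo, hi = sk_range
        return coll[bisect_left(keys, lo):bisect_right(keys, hi)]

music = ToyTable()
music.put_item("1000mods", "Claws", {"Album": "Vultures"})
music.put_item("1000mods", "Vidage", {"Year": 2011})
music.put_item("Kyuss", "Space Cadet", {})
```

Any access pattern not expressible as these two lookups degenerates to walking every collection, which is exactly the full table scan described above.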
To a first approximation, this is also a decent model of how DynamoDB scales – you could imagine that each collection has its own dedicated computer, which in theory would account for the unlimited number of collections.
Physical model: partitions #
Of course, there are not infinitely many computers, and that would be wildly inefficient anyway. Instead, collections are packed together into a smaller number of partitions, each a few gigabytes in size. To figure out which partition an item should go on, DynamoDB hashes its partition key (also called a hash key, for obvious reasons).
This is similar to hash table buckets,6 except there's one more level of indirection – instead of mapping to a single number, each partition maps to a range of numbers, which allows splitting a partition into two new ones by splitting its range. Furthermore, an item collection can be split on multiple partitions too, by using the sort key.7
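The range-splitting idea can be sketched in a few lines of Python; the hash space size and hash function here are made up, since the real ones are internal to DynamoDB:

```python
import hashlib

HASH_SPACE = 2 ** 32  # made-up size for the sketch

def key_hash(partition_key):
    # any stable hash works for illustration
    digest = hashlib.md5(partition_key.encode()).digest()
    return int.from_bytes(digest[:4], "big")

# each partition owns a contiguous range [start, end) of the hash space
partitions = [(0, HASH_SPACE)]

def find_partition(partition_key):
    h = key_hash(partition_key)
    for i, (start, end) in enumerate(partitions):
        if start <= h < end:
            return i
    raise AssertionError("the ranges must cover the whole hash space")

def split_partition(i):
    # splitting a partition = splitting its range in two
    start, end = partitions[i]
    mid = (start + end) // 2
    partitions[i:i + 1] = [(start, mid), (mid, end)]

split_partition(0)  # one partition becomes two, each owning half the range
```

Note that a key always routes to the same partition before and within its range, which is what lets DynamoDB split partitions without rehashing every item.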
And that is how the scaling magic happens:
- When you increase provisioned capacity, partitions are split as needed.
- If a partition or collection becomes too big, it gets split.
- If the throughput to a partition or collection is high enough for long enough, it also gets split,8 possibly with a bias towards keys with higher utilization; this is a feature of adaptive capacity.
Partition management is handled entirely by DynamoDB and is transparent to the user, but it doesn't happen instantly – it takes several minutes to allocate new partitions and shuffle things around.
Since partitions are backed by real computers, they do have a throughput limit.
See also
- Partitions and data distribution
- Burst and adaptive capacity # Isolate frequently accessed items
- (blog) How Amazon DynamoDB adaptive capacity accommodates uneven data access patterns (2018)
- (blog) How partitions, hot keys, and split for heat impact performance (2023)
- (unofficial) Everything you need to know about DynamoDB Partitions
Limits #
Part of DynamoDB's appeal is that it scales "infinitely" for specific dimensions: there are no limits on table size or number of items. However, there are some hard, non-adjustable limits you will have to take into account when designing your application.
See also
- Cheat sheet # Service quota basics
- Quotas
- Constraints
- (unofficial) The Three DynamoDB Limits You Need to Know
Partition throughput #
The most important limit is that on partition throughput (aka capacity) – how much data DynamoDB can read from or write to a partition in a given amount of time:9
- 1 MB/s for writes
- 24 MB/s for reads, eventually consistent
- 12 MB/s for reads, strongly consistent
Throughput measures whole items DynamoDB has to access, not the data that goes through the API. While you can touch single attributes and filter query results, the consumed capacity is always that of the whole items DynamoDB had to read or write.10
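The rounding described in footnote 9 means item size matters more than raw byte counts suggest. A back-of-the-envelope sketch, using the documented 1 KB write and 4 KB read units and the two-for-one discount for eventually consistent reads:

```python
import math

KB = 1024

def write_units(item_size):
    # writes are rounded up to 1 KB units (WCUs)
    return math.ceil(item_size / KB)

def read_units(item_size, consistent=True):
    # reads are rounded up to 4 KB units (RCUs)...
    rcus = math.ceil(item_size / (4 * KB))
    # ...and an eventually consistent read costs half as much
    return rcus if consistent else rcus / 2

# a 5 KB item costs 5 units to write but only 2 to read strongly
# consistently (1 eventually consistently), so the same bytes are
# several times cheaper to read than to write
```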
Once you reach the limit, the operation is throttled, and you can try again later, ideally with exponential backoff (the AWS SDK usually takes care of this for you).
The best way to avoid throttling is to distribute the load uniformly across partitions by using a high-cardinality partition key.11 Uneven key distribution can create hot partitions that suffer from persistent throttling.
Nowadays, this is less of a problem. For long-term imbalances, partition splitting should rebalance things over time; you "might" even end up with a single popular item per partition. For short-term ones like traffic spikes, burst and adaptive capacity will, on a best effort basis, "borrow" capacity above the table limit and between partitions. However, AWS is very non-committal about their behavior, and there's nothing you can do besides increasing traffic gradually, so good partition key design remains key.
Of note, while the throughput is fixed, the other dimensions are not; this means that you have a trade-off between how often you access items, the number of items, and item size; for example, you can split items into smaller ones based on how attributes are accessed, aka vertical partitioning.
See also
- Read and write operations (capacity unit consumption)
- Partition key design and Distributing workloads
- Sort key design
- (blog) Choosing the Right DynamoDB Partition Key
Item size #
Second, the maximum item size is 400 KB, which ought to be enough for anybody. You can work around this limit either by splitting items into parts, or by putting the data somewhere else entirely, like S3, and keeping only a reference in DynamoDB.
See also
Page size #
Finally, the maximum response size for query and scan operations is 1 MB (a page). You can continue from the end of the previous page by passing the LastEvaluatedKey response element to subsequent calls, which is essentially keyset pagination.
One consequence of this is that, throughput limit aside, there's an implicit limit on how fast you can query the items in a collection, since the calls are sequential.12
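The pagination loop looks roughly like this; fetch_page below is a stand-in for a real Query call, and the names are illustrative rather than boto3's:

```python
PAGE_SIZE = 3
ITEMS = [f"song-{i:02d}" for i in range(8)]  # pretend: one collection, sorted

def fetch_page(exclusive_start_key=None):
    # stand-in for one Query call: returns a page of results plus the
    # key to resume from (None when there is nothing left)
    start = 0 if exclusive_start_key is None else ITEMS.index(exclusive_start_key) + 1
    page = ITEMS[start:start + PAGE_SIZE]
    last_evaluated_key = page[-1] if start + PAGE_SIZE < len(ITEMS) else None
    return page, last_evaluated_key

def query_all():
    results, last_key = [], None
    while True:
        page, last_key = fetch_page(last_key)
        results.extend(page)
        if last_key is None:
            break  # no LastEvaluatedKey means this was the final page
    return results
```

Each iteration can only start once the previous page's resume key is known, which is the sequential bottleneck mentioned above.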
See also
Indexes #
As discussed in the logical model, access not based on primary key is very inefficient.
Secondary indexes allow queries and scans that use alternative primary keys, ones composed of different attributes than that of the base table. Unlike tables, index sort keys do not have to be unique for a given partition key. An item that is missing one of the index primary key attributes will not appear in the index.
Changes to the table are automatically propagated to any secondary indexes. Aside from the index and table primary key attributes, an index can include copies of other attributes (aka attribute projection), which allows the index to answer queries alone, without extra reads to the base table.13
See also
Global secondary indexes #
A global secondary index allows using different partition and sort key attributes.
Conceptually, a global secondary index is just a table: it has its own separate capacity, no limits on size or number of items, and the same partition throughput limits apply.
Despite GSIs being updated asynchronously, an index without enough capacity to process the updates will cause write throttling. To retrieve attributes not in the index, you have to get them yourself from the table (batch operations can speed this up).
Example
(from here) Continuing with the music example, a GSI with Genre and Album as partition and sort keys would allow you to also efficiently:
- query songs by genre (and, with additional processing, albums by genre)
- query songs by genre and album (but, since two albums can have the same genre and title, you might want to group by artist in application code)
# table Music (partition key: Artist, sort key: Song)
Kyuss: !btree
Demon Cleaner: { Album: Welcome To Sky Valley, Genre: Rock }
Space Cadet: { Album: Welcome To Sky Valley, Genre: Rock }
1000mods: !btree
Claws: { Genre: Rock } # has no Album!
Vidage: { Album: Super Van Vacation, Genre: Rock }
Solar Fields: !btree
Air Song: { Album: Leaving Home, Genre: Electronic }
# GSI Genres (partition key: Genre, sort key: Album)
Rock: !btree
Super Van Vacation: { Artist: 1000mods, Song: Vidage }
Welcome To Sky Valley: { Artist: Kyuss, Song: Space Cadet }
Welcome To Sky Valley: { Artist: Kyuss, Song: Demon Cleaner }
Electronic: !btree
Leaving Home: { Artist: Solar Fields, Song: Air Song }
See also
Local secondary indexes #
A local secondary index allows using a different sort key attribute.
LSI data is stored together with partition data (the index is local to the partition), so besides the table B-tree, each collection has one B-tree per LSI.
This allows strongly consistent reads and fetching non-projected attributes, but also limits collection size to 10 GB and collection throughput to the partition limit, since it prevents further partition splitting (as each sort key would split the items in a different way).
Example
An LSI with Year as sort key would allow you to also efficiently:
- query songs by artist, in chronological order
- query songs by artist and year
# table Music (partition key: Artist, sort key: Song)
1000mods: !btree
Claws: { Year: 2014 }
Road To Burn: { Year: 2011 }
Vidage: { Year: 2011 }
Solar Fields: !btree
Sombrero: { Year: 2011 }
# table Music (partition key: Artist, LSI sort key: Year)
1000mods: !btree
2011: { Song: Vidage }
2011: { Song: Road To Burn }
2014: { Song: Claws }
Solar Fields: !btree
2011: { Song: Sombrero }
See also
Features #
Let's look at some of the things DynamoDB can do besides CRUD operations.
Eventual consistency #
So, remember I said partitions are backed by real computers? I didn't say how many.
To allow the high-availability magic to happen, a partition is backed by three nodes in separate data centers:14 a leader that handles writes and two asynchronous replicas.
This explains why there are two kinds of reads:
- strongly consistent reads go to the leader, so you always get the latest data
- eventually consistent reads go to any node, so you may get slightly older data, but if you repeat the read later, you will eventually get the latest data; because they use all the available nodes, they are more efficient, and thus cheaper
Note that strongly consistent reads do not replace synchronization primitives like conditional writes and transactions, but they can be useful to lower the rate at which these operations fail for highly-contended items.
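A toy model of one partition's replication makes the two kinds of reads tangible; this is a sketch of the semantics, not of DynamoDB internals:

```python
import random

class Partition:
    """One partition: a leader plus two asynchronous replicas."""

    def __init__(self):
        self.leader = {}
        self.replicas = [{}, {}]

    def write(self, key, value):
        self.leader[key] = value  # replicas catch up later, asynchronously

    def replicate(self):
        for replica in self.replicas:
            replica.update(self.leader)

    def read(self, key, consistent=False):
        if consistent:
            return self.leader.get(key)  # always the latest data
        node = random.choice([self.leader] + self.replicas)
        return node.get(key)  # any node: possibly stale

p = Partition()
p.write("color", "blue")
# a strongly consistent read sees the write immediately; an eventually
# consistent one may return None until replication catches up
p.replicate()
```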
See also
Conditional writes #
Write operations can specify a condition expression that must be true for the write to happen (e.g. an attribute has a specific value); if the expression is false, the write fails. Condition expressions can refer only to the item being modified.
Conditional writes are critical for data consistency and avoiding concurrency bugs, since they are the only way to run logic server-side while the item is being modified. You can use conditional writes to build higher-level abstractions like optimistic locking, distributed locks, and atomic counters.15
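As a sketch of why the atomic check-and-write matters, here is optimistic locking against a plain dict standing in for a table. This is not the boto3 API; in real DynamoDB a condition expression would do the version check server-side:

```python
class ConditionFailed(Exception):
    pass

def conditional_put(table, key, attrs, expected_version):
    # the version check and the write happen as one step
    current = table.get(key, {"version": 0})
    if current["version"] != expected_version:
        raise ConditionFailed("item was modified by someone else")
    table[key] = {**attrs, "version": expected_version + 1}

table = {"counter": {"value": 0, "version": 0}}

# two writers read the same item...
seen_by_a = dict(table["counter"])
seen_by_b = dict(table["counter"])

# ...writer A wins, and writer B's stale write is rejected instead of
# silently clobbering A's update
conditional_put(table, "counter", {"value": seen_by_a["value"] + 1},
                seen_by_a["version"])
try:
    conditional_put(table, "counter", {"value": seen_by_b["value"] + 1},
                    seen_by_b["version"])
except ConditionFailed:
    pass  # the usual reaction: re-read the item and retry
```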
See also
- Working with items # Conditional writes
- Condition and filter expressions
- Conditional expressions examples
- (unofficial) Understanding DynamoDB Condition Expressions
Transactions #
Transactions allow performing multiple writes as a single atomic operation, isolated from other operations; if two operations attempt to change an item at the same time, one of them fails. Transactions can target up to 100 distinct items in one or more tables in the same region, and consume twice as much capacity.
You can use transactions with condition expressions – if a condition fails for one item, none of the items are modified; you can also check an item without modifying it. As with single-item writes, an expression can refer only to an individual item (you can't have a condition about another item in the transaction).
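The all-or-nothing semantics can be sketched like this (toy code, not the TransactWriteItems API):

```python
class TransactionCanceled(Exception):
    pass

def transact_write(table, operations):
    # operations: (key, new_value, condition) triples, where condition is
    # a predicate on the current value, or None for an unconditional write
    for key, _, condition in operations:
        if condition is not None and not condition(table.get(key)):
            raise TransactionCanceled(f"condition failed for {key!r}")
    # only once every condition has passed do any of the writes happen
    for key, new_value, _ in operations:
        table[key] = new_value

table = {"a": 1, "b": 2}
try:
    transact_write(table, [
        ("a", 10, None),
        ("b", 20, lambda v: v == 999),  # this condition fails...
    ])
except TransactionCanceled:
    pass
# ...so neither item was modified
```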
See also
Batch operations #
Batch operations allow you to put/delete up to 25 items or read up to 100 items in a single request, up to 16 MB in total, more efficiently than using single-item operations. Batch writes don't support updates or condition expressions.
The operations in a batch are independent from one another – some writes may fail, or only some of the read items may be returned (e.g. if throughput limits are reached).
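Client code typically chunks its work to fit under the 25-item write limit; a minimal helper:

```python
MAX_BATCH_WRITE = 25  # per-request item limit for batch writes

def chunked(items, size=MAX_BATCH_WRITE):
    # yield successive batches of at most `size` items
    for i in range(0, len(items), size):
        yield items[i:i + size]

# 60 puts become three requests of 25, 25, and 10 items; since batch
# operations are independent, the failed items from each response would
# be collected and retried separately
batches = list(chunked(list(range(60))))
```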
See also
Streams #
Streams allow you to capture changes to the items in a table in near-real time. There are two flavors of streams, DynamoDB Streams and Kinesis Data Streams, each with different features and integrations.
Notable applications of streams are Lambda triggers (similar to the ones in relational databases, except they run after the change), replication to places like S3 or Redshift via Firehose, and automatic archival.
See also
- Core components # DynamoDB Streams
- Working with streams
- (unofficial) What you should know about DynamoDB Streams
Anyway, that's it for now.
In the next article, we'll have a closer look at core DynamoDB design patterns, including the fundamental single-table design.
Learned something new today? Share it with others, it really helps!
Want to know when new articles come out? Subscribe here to get new stuff straight to your inbox!
Here, "collection" just means a "group of things". [return]
Ignore the names for now, it'll make sense in a bit. [return]
It is also possible to have a table with only a partition key and no sort key, but you can think of that like a degenerate case where the sort key is a constant value, and thus each partition key can have only one item. [return]
Yes, an item collection is different from a "collection of items". Don't look at me, I didn't pick the names. ಠ_ಠ [return]
Indexes, which we'll discuss later, offer an escape hatch to this. [return]
A quick rant on naming. With a hash table, you would say "hash table key", or maybe even "item key"; you would not say "bucket key", since that's a low level detail, and also a bucket can have multiple keys. You know, like in DynamoDB.
THEN WHY IS IT CALLED A PARTITION KEY [return]
You have to admit that this is a great explanation. Surely you'd find it in the docs, and not buried in a random blog post published four years after the feature was announced (the announcement itself being just a blog post), a post which explains more than the official documentation does to this day, seven years later! [return]
Assuming the table has enough configured throughput. [return]
Converted to normal people units for your convenience. DynamoDB uses its own capacity units as a convoluted way of saying that for accounting purposes, item size is rounded up to 4 KB (1 RCU) for reads and 1 KB (1 WCU) for writes. This is presumably because the size of a capacity unit can increase over time. [return]
Yes, this includes just counting them. [return]
If there is no good natural partition key, you can make one by sharding a low-cardinality attribute, which we'll cover in the next article. [return]
You could make it twice as fast by querying from both ends, but probably no faster, since for binary search you'd need to jump in the middle of two sort keys. Unsurprisingly, we'll look at a potential solution in the next article. [return]
This is the same as a covering index in relational databases. [return]
Or better said, availability zones. [return]
Although you can also use update expressions for atomic counters. [return]
February 10, 2026 10:35 AM UTC
Python Software Foundation
Introducing the PSF Community Partner Program
The Python Software Foundation (PSF) is excited to announce the introduction of the PSF Community Partner Program. This new program is designed as an “in-kind” way for us to support Python events and initiatives with non-financial assistance through the use of the PSF logo and name, as well as promotional support via sharing qualified posts on PSF official social media accounts. The PSF looks forward to supporting Python community events and initiatives through this new program!
The introduction of the PSF Community Partner Program grew out of our desire to find alternative ways to support the community during the pause of our Grants Program (read more about the resulting process below). Even so, we intend to continue offering this in-kind support program after the Grants Program reopens. Our big picture hope is that, over the long term, some community events and initiatives will continue to partner with the PSF while being financially dependent on sponsors and individual donors alone.
The PSF is also working on the future of our Grants Program, including when and how we can reopen it in a way that ensures the program’s long-term sustainability while balancing the needs of the Python community. In light of the truly staggering outpouring of support from our community during the 2025 year-end fundraiser, we are now in a stronger position to reopen the Grants Program and are eager to give back in a thoughtful and sustainable way. More updates to come!
As with the rollout of any new program, we anticipate small adjustments will need to be made for processes to flow smoothly and to ensure the program serves the Python community well. The PSF welcomes your comments, feedback, and suggestions regarding the new Community Partner Program on the corresponding Discuss thread. We also invite you to join our upcoming PSF Board or Grants Program Office Hour sessions to talk with the PSF Board and Staff synchronously. If you wish to send your feedback privately, please email grants@python.org.
How the program will work
The PSF Board delegated authority to the Grants Work Group (GWG) to review, approve, and deny applications for the Community Partner Program.
Similar to the PSF Grants Program, the PSF must ensure that applicants meet certain criteria before being approved as a Community Partner. To qualify, an event or initiative must:
- Demonstrate a positive impact on the Python community
- Be Python-specific or primarily Python-related
- Have an established web presence, such as a dedicated website, Meetup page, or Luma page
- Have an enforceable Code of Conduct with clear reporting mechanisms in place
- Acknowledge and agree to the defined bounds of the Community Partner title as outlined in the application form
The PSF Community Partner application process begins with a one-page form designed to collect the information needed for review by the GWG. The form gathers:
- Basic applicant details
- Information about the event or initiative
- Required acknowledgements related to trademark usage and an enforceable Code of Conduct
- A couple questions to better understand the event or initiative, support evaluation, or help the PSF gather relevant metrics
Applicants are asked to submit their application at least six weeks before their event or initiative, with first-time applicants encouraged to apply eight weeks in advance. Applications may be submitted up to six months ahead of time, allowing the PSF to plan and provide timely promotional support. Once submitted, applications undergo an initial pre-review by PSF staff, who may follow up with clarifying questions as needed. The application will then be reviewed by the GWG, with consultation from the PSF Board in some cases and additional follow-up questions when necessary.
Decisions will be communicated via the email address provided in the application. Accepted Community Partners will receive guidance on PSF logo usage, social media re-sharing, and an invitation to provide an optional report.
How the program took shape
Upon the pause of the PSF Grants Program, the PSF Board and Staff set out to understand how we could continue to support Python events and initiatives for the duration of the program's pause. We dedicated Board and Grants Office Hour sessions, gathered input on a Discuss thread, tracked our social media replies to the pause announcement, and talked with community members one-on-one to get a picture of the various needs of our community. From there, PSF Staff compiled the feedback, identified the common threads, and wove them together into action.
One of the most common themes uncovered is that while the financial assistance offered by our grants is incredibly valuable, the use of the PSF name that comes with grants also provides a strong signal of community trust–an official “stamp of approval”. This stamp of approval empowers Python events and initiatives to approach potential sponsors and is useful as a point of leverage and proof of trustworthiness to convince sponsors to sign on.
The next most common theme was that Python events and initiatives would greatly benefit from promotional support. This is a common benefit of “in-kind” partnerships and was a natural addition to the new PSF Community Partner Program. It’s also a bit of a tricky line for the PSF to navigate–as a 501(c)(3) non-profit based in the USA, we cannot raise funds for other organizations. That means we are implementing guidelines for what the PSF can and cannot promote to remain compliant with the requirements of the US federal tax code.
After identifying both of these recurring themes, PSF Staff put together a program proposal with input from the GWG and PSF Board. The process from there included review periods for the PSF Board, Staff, and GWG, integrating feedback, two votes from the PSF Board, and PSF Staff work on setting up processes and documentation.
About the Python Software Foundation
The Python Software Foundation is a US non-profit whose mission is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers. The PSF supports the Python community using corporate sponsorships, grants, and donations. Are you interested in sponsoring or donating to the PSF so we can continue supporting Python and its community? Check out our sponsorship program, donate directly, or contact our team at sponsors@python.org!
February 10, 2026 08:42 AM UTC
February 09, 2026
Ari Lamstein
New GeoPandas Tutorial Published on RealPython
I just published a new tutorial on RealPython: GeoPandas Basics: Maps, Projections and Spatial Joins.
If you’re interested in using Python to make maps or analyze spatial data, then I recommend checking out the tutorial. It walks you through common geospatial tasks using GeoPandas, one of the most widely used geospatial libraries in Python.
Map Projections: A Side-by-Side Comparison
My favorite part of the tutorial is walking readers through map projections.
You start with a world map that uses longitude and latitude. This map stretches areas near the poles, which makes Antarctica and Greenland look much larger than they are. Then you switch to the Mollweide Equal-Area projection, which preserves relative area and gives a more accurate sense of landmass size.
The final image shows both maps side by side:
The contrast is striking: the left map distorts area, while the right map preserves it. It’s a simple, visual way to understand why projections matter.
Interactive Maps and Spatial Joins
The tutorial also walks you through reading geographic data, creating interactive maps and doing a spatial join.
In one example, you load the boundaries of New York City boroughs. You’re given the coordinates of the Empire State Building and do a spatial join to see which borough it’s in. It’s a straightforward demonstration of how spatial joins work and why they’re useful.
Reflections on Technical Writing
Over the years I’ve written a lot of technical content, but this was my first time doing it as a paid engagement. I really enjoyed the collaboration and the chance to learn more about a library that I had previously only used in passing.
If your team is looking for someone to create tutorials, workshops, or educational content around Python or R, feel free to reach out. This is work I love doing.
February 09, 2026 07:53 PM UTC
Real Python
pandas 3.0 Lands Breaking Changes and Other Python News for February 2026
Last month brought exciting performance news for Python! The Python 3.15 alphas showed JIT compiler speed gains of up to 7–8% on some platforms, while pandas released version 3.0, delivering the most significant performance improvements in years. The Python Software Foundation also received considerable investments in Python security and launched the 2026 Python Developers Survey, and PyTorch 2.10 deprecated TorchScript.
Time to dive into the biggest Python news from the past month!
Join Now: Click here to join the Real Python Newsletter and you’ll never miss another Python tutorial, course, or news update.
Python Releases and PEP Highlights
Last month brought two Python 3.15 alpha releases in quick succession, with notable JIT compiler improvements showing promising performance gains. Several PEPs also emerged, including one proposing a cleaner way to write multiline strings.
Python 3.15.0 Alpha 4 and 5: Two Releases in Two Days
January saw an unusual situation in Python’s release history: Python 3.15.0a4 arrived on January 13, but it was accidentally compiled against outdated source code from December 2025. The release team quickly followed up with 3.15.0a5 on January 14 to correct the issue.
Both releases continue the work on Python 3.15’s headline features:
- UTF-8 as the default text encoding for files that don’t specify an encoding, via PEP 686
- A new statistical sampling profiler that’s high-frequency and low-overhead, via PEP 799
- The PyBytesWriter C API for creating bytes objects more efficiently, via PEP 782
- Enhanced error messages with improved clarity and usefulness
The most exciting news for performance enthusiasts is the continued progress on Python’s experimental JIT compiler. Alpha 5 reports a 4–5% performance improvement on x86-64 Linux and a 7–8% speedup on AArch64 macOS compared to the standard interpreter.
Note: These are preview releases and are not recommended for production. The beta phase begins May 5, 2026, with the next alpha, 3.15.0a6, scheduled for February 10, 2026.
If you maintain packages, now is a good time to start running tests against the alphas in a separate environment so you can catch regressions early.
PEP 822 Drafted: Dedented Multiline Strings (d-strings)
A new PEP emerged in January that could make writing multiline strings clearer. PEP 822, authored by Inada Naoki, proposes adding dedented multiline strings (d-strings) to Python.
If you’ve ever written a multiline string inside an indented function or class, you’ve likely run into the awkward choice between breaking your code’s visual structure or using textwrap.dedent() to clean up the extra whitespace. PEP 822 offers a cleaner solution with a new d prefix:
def get_help_message():
    # Current approach
    return textwrap.dedent("""\
        Usage: app [options]
        Options:
          -h  Show this help message
          -v  Enable verbose mode
    """)

    # Proposed d-string approach
    return d"""
        Usage: app [options]
        Options:
          -h  Show this help message
          -v  Enable verbose mode
    """
The d prefix tells Python to automatically strip the common leading whitespace from each line, using the indentation of the closing quotes as a reference. This differs slightly from textwrap.dedent(), which uses the least-indented line to determine how much to strip.
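The textwrap.dedent() behavior mentioned above is easy to check in a REPL; the least-indented non-blank line sets the margin that gets stripped:

```python
import textwrap

text = """\
    first
      second
"""

# the common leading whitespace (set by the least-indented line) is
# removed; deeper indentation is kept relative to it
dedented = textwrap.dedent(text)
```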
The proposal targets Python 3.15 and is currently in draft status. If you work with templates, SQL queries, or any code that embeds multiline text, this feature could simplify your workflow. You can follow the discussion on the Python Discourse and provide feedback while the PEP is still being refined.
PSF News: Investments, Fellows, and Survey
The Python Software Foundation (PSF) had a busy month with a major security investment announcement, new Fellows recognition, and the launch of the annual developers survey.
Anthropic Invests in Python Security
Read the full article at https://realpython.com/python-news-february-2026/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 09, 2026 02:00 PM UTC
PyBites
How Dependency Injection makes your FastAPI Code Better Testable
Most Python web frameworks make you choose between testability and convenience. You either have clean code with complex test setup, or you use global state and hope your tests don’t interfere with each other.
FastAPI’s Depends() solves this elegantly.
Here is an example of how you would use it in your API:
What’s happening here:
Type-safe – Your editor knows repo is a SnippetRepository. Full autocomplete, type checking works.
Automatic cleanup – The context manager ensures the database session closes, even if an exception occurs.
No global state – Every request gets its own session. No risk of one request interfering with another.
Testable – Here’s the magic:
You override the dependency with an in-memory implementation. Your test doesn’t hit the database. It doesn’t need Docker. It doesn’t even need fixtures to clean up test data. And it runs in milliseconds. 
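The override mechanism can be sketched without the framework. This toy resolver plays the role of FastAPI's dependency system and its app.dependency_overrides dict; the names here are made up for illustration:

```python
def get_repo():
    # the "real" provider: would open a database session here
    raise RuntimeError("no database available in tests")

overrides = {}  # dependency -> replacement provider, like dependency_overrides

def resolve(dependency):
    # use the override if a test registered one, else the real provider
    return overrides.get(dependency, dependency)()

def list_snippets():
    repo = resolve(get_repo)  # the endpoint only declares what it needs
    return repo.all()

class InMemoryRepo:
    def __init__(self, snippets):
        self._snippets = snippets

    def all(self):
        return self._snippets

# in a test: swap the real dependency for an in-memory double
overrides[get_repo] = lambda: InMemoryRepo(["print('hi')"])
```

The endpoint never knows which implementation it got, which is exactly what keeps the test fast and the production code clean.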
This is how professional FastAPI applications are structured:
→ Dependencies are functions that provide resources
→ FastAPI calls them automatically for each request
→ Tests override dependencies with test doubles
→ Your endpoint code stays clean – it just declares what it needs
I see too many FastAPI tutorials that skip dependency injection or treat it as “advanced.” However, this is foundational knowledge. If you’re not leveraging Depends(), you’re making testing harder than it needs to be.
I think it’s a nice, practical application of clean architecture in the context of a beloved modern framework. This practice will serve you well in any app you design afterwards.
Thanks for reading. Let me know in our community how you have used dependency injection in FastAPI or beyond. I am also curious to hear where it didn’t work so well for you, and what you then did to work around it.
And check out our PDC Snipster program, where dependency injection is a first-class citizen. We build the repository pattern first, then wire it into FastAPI using Depends().
The Pybites Developer Cohort provided a motivating, structured environment that helped me bring a full Python project to life. Over six weeks, I shipped Snipster, a CLI + API app for managing code snippets, while sharpening my skills in FastAPI, SQLModel, testing, deployment, and architecture.
— Ben G, Data Engineer
February 09, 2026 12:57 PM UTC
Ned Batchelder
EdText
I have a new small project: edtext provides text selection and manipulation functions inspired by the classic ed text editor.
I’ve long used cog to build documentation and HTML presentations. Cog interpolates text from elsewhere, like source code or execution output. Often I don’t want the full source file or all of the lines of output. I want to be able to choose the lines, and sometimes I need to tweak the lines with a regex to get the results I want.
Long ago I wrote my own ad-hoc function to include a file and over the years it had grown “organically”, to use a positive word. It had become baroque and confusing. Worse, it still didn’t do all the things I needed.
The old function has 16 arguments (!), nine of which are for selecting the lines of text:
start=None,
end=None,
start_has=None,
end_has=None,
start_from=None,
end_at=None,
start_nth=1,
end_nth=1,
line_count=None,
Recently I started a new presentation, and when I couldn’t express what I needed with these nine arguments, I thought of a better way: the ed text editor has concise mechanisms for addressing lines of text. Ed addressing evolved into vim and sed, and probably other things too, so it might already be familiar to you.
I wrote edtext to replace my ad-hoc function that I was copying from project to project. Edtext lets me select subsets of lines using ed/sed/vim address ranges. Now if I have a source file like this with section-marking comments:
import pytest
# section1
def six_divided(x):
return 6 / x
# Check the happy paths
@pytest.mark.parametrize(
"x, expected",
[ (4, 1.5), (3, 2.0), (2, 3.0), ]
)
def test_six_divided(x, expected):
assert six_divided(x) == expected
# end
# section2
# etc....
then with an include_file helper that reads the file and gives me an EdText object, I can select just section1 with:
include_file("test_six_divided.py")["/# section1/+;/# end/-"]
EdText allows slicing with a string containing an ed address range. Ed addresses often (but don’t always) use regexes, and they have a similar powerful compact feeling. “/# section1/” finds the next line containing that string, and the “+” suffix adds one, so our range starts with the line after the section1 comment. The semicolon means to look for the end line starting from the start line, then we find “# end”, and the “-” suffix means subtract one. So our range ends with the line before the “# end” comment, giving us:
def six_divided(x):
return 6 / x
# Check the happy paths
@pytest.mark.parametrize(
"x, expected",
[ (4, 1.5), (3, 2.0), (2, 3.0), ]
)
def test_six_divided(x, expected):
assert six_divided(x) == expected
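The addressing scheme is compact enough to sketch in a few lines of plain Python. This toy resolver handles just the pattern-plus-offset form used above, and is not how edtext is actually implemented:

```python
import re

def find(lines, pattern, start=0):
    # "/pattern/": index of the next line matching the regex
    for i in range(start, len(lines)):
        if re.search(pattern, lines[i]):
            return i
    raise ValueError(f"no match for {pattern!r}")

def select(lines, start_pat, start_off, end_pat, end_off):
    lo = find(lines, start_pat) + start_off        # "+" / "-" offsets
    hi = find(lines, end_pat, start=lo) + end_off  # ";" searches from lo
    return lines[lo:hi + 1]

source = [
    "import pytest",
    "# section1",
    "def six_divided(x):",
    "    return 6 / x",
    "# end",
]

# the equivalent of the address range "/# section1/+;/# end/-"
body = select(source, r"# section1", +1, r"# end", -1)
```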
Most of ed addressing is implemented, and there's a sub() method to make regex replacements on selected lines. I can run pytest, put the output into an EdText object, then use:
pytest_edtext["1", "/collected/,$-"].sub("g/====", r"0.0\ds", "0.01s")
This slice uses two address ranges. The first selects just the first line, the pytest command itself. The second range gets the lines from "collected" to the second-to-last line. Slicing gives me a new EdText object, then I use .sub() to tweak the output: on any line containing "====", change the total time to "0.01s" so that slight variations in the duration of the test run don't cause needless changes in the output.
It was very satisfying to write edtext: it’s small in scope, but useful. It has a full test suite. It might even be done!