Planet Python
Last update: May 15, 2026 04:44 PM UTC
May 15, 2026
Real Python
The Real Python Podcast – Episode #295: Agentic Architecture: Why Files Aren't Always Enough
What are the limitations of using a file-based agent workflow? Why do massive context windows tend to collapse? This week on the show, Mikiko Bazeley from MongoDB joins us to discuss agentic architecture and context engineering.
Quiz: Python's Array: Working With Numeric Data Efficiently
In this quiz, you’ll test your understanding of Python’s Array: Working With Numeric Data Efficiently.
By working through this quiz, you’ll revisit the differences between Python’s array module and the built-in list, the meaning of type codes, how to create and manipulate arrays as mutable sequences, and the performance trade-offs of using a low-level numeric container.
EuroPython
May Newsletter: Sessions, Speakers, Sprints
Hi all Pythonistas! 👋
Hope you’ve been enjoying these last few weeks, and hopefully planning your trip to Kraków in July! With two months left before the conference, the EuroPython organising team has been firing on all cylinders to create a conference to remember. Here’s the latest from us:
📋 Session and Speaker Lists Are Available
Our Programme Team is busy preparing a detailed schedule for you. We plan to release it in the upcoming days, but in the meantime we’ve got the list of sessions and speakers for you to check out. It’s going to be an exciting conference!
Lists of sessions and speakers are available at https://ep2026.europython.eu/
👉 All conference sessions: https://ep2026.europython.eu/sessions/
👉 Speakers and tutorial leads: https://ep2026.europython.eu/speakers/
🗻 Language & Rust Summits
Summits are an opportunity for project contributors to come together during EuroPython. These are invite-only events with limited capacity at the venue, so registration is required.
🐍 Language Summit
The Python Language Summit is an event for the developers of Python implementations (CPython, PyPy, MicroPython, GraalPython, IronPython, and so on) to share information, discuss our shared problems, and — hopefully — solve them.
These issues might be related to the language itself, the standard library, the development process, the status of Python 3.15 (and plans for 3.16), the documentation, packaging, the website, and so forth. The Summit focuses on discussions and consensus-seeking, more than merely on presentations.
👉 Register for the Language Summit: https://ep2026.europython.eu/language-summit/
⚙️ Rust Summit
This full-day summit is dedicated to exploring the intersection of Rust and the Python ecosystem. Attendees can expect an intensive schedule focused specifically on integrating Rust into Python projects and the development of high-performance Python tools (e.g., using technologies like PyO3, Maturin, or writing performant native extensions).
This summit is designed for developers who already possess some practical experience in these topics and are looking to deepen their expertise, share lessons learned, and contribute to the community's collective knowledge.
👉 Register for the Rust Summit: https://ep2026.europython.eu/session/rust-summit-at-europython
🗣️ Keynote Speakers
We are excited to announce a new keynote:
Leah Wasser will deliver a keynote at EuroPython 2026.
Leah Wasser is the Executive Director and founder of pyOpenSci, a community of 400+ researchers, engineers, and maintainers working to make developing and maintaining research software more accessible, sustainable, and human. She organizes the Maintainers Summit at PyCon US and believes the communities behind research software matter as much as the code itself.
Leah has built nationally recognized programs at the National Ecological Observatory Network (NEON) and the University of Colorado Boulder. Leah holds a PhD in ecology and is an active open source maintainer.
✋ Upcoming Call for Volunteers
We're opening our Call for Volunteers next week! Want to be part of the team and help make EuroPython 2026 awesome? Keep an eye on the website; the signup form drops in just a few days. We'll be reviewing applications on a rolling basis, so don't wait – apply as soon as it goes live! Whether you're a first-timer or a returning volunteer, we'd love to have you.
In my opinion, volunteering enriches the enjoyment of the whole event even further. There are many different roles to suit different personalities and abilities — one of them could suit you very well. Also, volunteering is about the team; you will not be left alone in any case.
Jake Balas, Onsite Volunteers Team Lead at EuroPython 2025 and this year’s Operations Team Lead
💙 Read our full interview with Jake https://blog.europython.eu/humans-of-ep-jake/
💰 Sponsorship: Diamond, Platinum, Silver Available
If you're passionate about supporting EuroPython and helping make this conference accessible to a diverse, global Python community, consider becoming a sponsor or asking your employer to join us in this effort.
By sponsoring EuroPython, you're not just backing an event – you're gaining highly targeted visibility that will present your company or personal brand to one of the largest and most diverse Python communities in the world! Here's what one of our sponsors said about their experience at EuroPython 2025:
The Apify team shares their experience sponsoring EuroPython 2025
We still have some Diamond, Platinum, and Silver slots available. Along with our main packages, there are optional add-ons and extras to craft your brand messaging in exactly the way that you need.
👉 More information at: https://ep2026.europython.eu/sponsorship/sponsor/
👉 Contact us at sponsoring@europython.eu
🚧 Speaker Orientation
Anyone interested in receiving speaker training from our experienced mentors is invited to an online workshop on 3 June 2026 at 18:00 CEST. We've designed the session for people of all experience levels, from first-time speakers to seasoned presenters, and we still have spots for you.
👉 Register now to confirm your place: https://forms.gle/uZKwuAiBkUSmx7gn7
🤝 Community Partners
🇪🇸PyConES
Barcelona is calling, Pythonistas! PyConES 2026 has extended its CFP. New deadline: 17 May, 23:59 CEST. If you're still thinking about submitting a talk, workshop, or idea to the community that will meet up in that gorgeous city, these are the last days to do so.
👉 Submit the proposal for PyConES 2026 https://pretalx.com/pycones-2026/cfp
🦬PyStok
PyStok #82 meetup lands on 20 May, 18:00 at Zmiana Klimatu in Białystok, Poland, and free registration is officially live. Grab your spot at https://pystok.org/najblizsze-wydarzenie to dive deep into RAG/LLM Wiki and the PLLuM (Polish Large Language Model) project. Between the "speed dating" networking, JetBrains giveaways and the legendary "Podlaskie afterparty", it’s the perfect spot to soak up those unique North-East Polish vibes and talk Python and AI with the local crowd.
📣 Community Outreach
🏖️PyCon US
Several members of the EuroPython Society have traveled across the ocean to join the biggest gathering of Pythonistas, which this year takes place in Long Beach, California. If you’re there this weekend, make sure to look up the EuroPython booth and say “hi” to the team!
🎁 Sponsor Spotlight
We'd like to thank Manychat for sponsoring EuroPython.
Manychat builds AI-powered chat automation for 1M+ creators and brands at real production scale.
View job openings at Manychat
👋 Stay Connected
Follow us on social media and subscribe to our newsletter for all the updates:
👉 Sign up for the newsletter: https://blog.europython.eu/portal/signup
- LinkedIn: https://www.linkedin.com/company/europython/
- X/Twitter: https://x.com/europython
- Mastodon: https://fosstodon.org/@europython
- Bluesky: https://bsky.app/profile/europython.eu
- Instagram: https://www.instagram.com/europython/
- YouTube: https://www.youtube.com/@EuroPythonConference
We’ll be announcing more keynotes in the upcoming days, and the detailed schedule will be available soon, so you can plan your conference experience. Just eight weeks are left before we all meet in the City of Castles and Dragons. See you there! 🐍❤️
Cheers,
The EuroPython Team
The official blog of everything & anything EuroPython! EuroPython 2026 13-19 July, Kraków
Bob Belderbos
10 Days of Rust for Python Developers: A Recap
I ran a 10-day beginner Rust challenge on LinkedIn, one small exercise per day, written for Python developers. The goal was to give developers enough exposure to decide whether Rust deserves a real push.
This post is the recap. Each day pulls one snippet from the solutions branch and one usable idea.
Day 1: last expression is the return value
fn greet() -> String {
"Hello, Rustacean!".to_string()
}
fn double_counter() -> i32 {
let mut counter = 1;
for _ in 0..5 {
counter *= 2;
}
counter
}
Two ideas land on day one. Drop the semicolon and the last expression is the return value, no return needed. And variables are immutable by default; you opt into mutation with mut. This gets you in the mindset to ask yourself which data is allowed to change.
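Both snippets combined into one runnable sketch; the main function and its checks are mine, not part of the challenge starter code:

```rust
fn greet() -> String {
    "Hello, Rustacean!".to_string()
}

fn double_counter() -> i32 {
    let mut counter = 1; // `mut` opts into mutation explicitly
    for _ in 0..5 {
        counter *= 2;
    }
    counter // no semicolon: this expression is the return value
}

fn main() {
    assert_eq!(greet(), "Hello, Rustacean!");
    assert_eq!(double_counter(), 32); // 1 doubled five times
}
```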
Day 2: format! is f-strings, with a Debug placeholder
fn describe_types() -> String {
let int = 42;
let float = 3.14;
let flag = true;
let letter = 'Z';
let pair = (7, "Rust");
format!(
"int: {}, float: {}, bool: {}, char: {}, tuple: {:?}",
int, float, flag, letter, pair
)
}
Static types with inference. {} for Display, {:?} for Debug, which you need for tuples and other compound types.
But wait, this is more like Python's str.format() than f-strings, right? It turns out, you can get to f-string levels in Rust as well:
- format!(
- "int: {}, float: {}, bool: {}, char: {}, tuple: {:?}",
- int, float, flag, letter, pair
- )
+ format!("int: {int}, float: {float}, bool: {flag}, char: {letter}, tuple: {pair:?}")
I just committed this change. Note that the pair tuple still requires the :? specifier inside the braces to use the Debug trait. Cleaner, and this inline syntax has been stable since Rust 1.58.
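Here is the committed inline-capture version as a self-contained, runnable sketch; the check in main is mine:

```rust
fn describe_types() -> String {
    let int = 42;
    let float = 3.14;
    let flag = true;
    let letter = 'Z';
    let pair = (7, "Rust");
    // Inline captured identifiers (stable since Rust 1.58); `{pair:?}`
    // keeps the Debug specifier inside the braces.
    format!("int: {int}, float: {float}, bool: {flag}, char: {letter}, tuple: {pair:?}")
}

fn main() {
    assert_eq!(
        describe_types(),
        "int: 42, float: 3.14, bool: true, char: Z, tuple: (7, \"Rust\")"
    );
}
```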
Day 3: if and match are expressions
fn grade_message(score: i32) -> String {
let grade = if score >= 90 {
"Excellent"
} else if score >= 75 {
"Good"
} else if score >= 50 {
"Pass"
} else {
"Fail"
};
match grade.chars().next() {
Some('E') => "Top".to_string(),
Some('G') => "Decent".to_string(),
Some('P') => "Basic".to_string(),
_ => "None".to_string(),
}
}
Branches return values, so you can bind the result. And match is exhaustive. Remove an enum variant and every match in the codebase tells you where to fix it. One of those compiler strictness things I've come to appreciate about Rust.
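To make the exhaustiveness point concrete, here is a minimal sketch with a hypothetical Grade enum (not from the challenge code): add or remove a variant and this match stops compiling until it is updated.

```rust
enum Grade {
    Excellent,
    Good,
    Pass,
    Fail,
}

// `match` must cover every variant; add a `Grade` variant and this
// function fails to compile until you handle it here too.
fn label(grade: Grade) -> &'static str {
    match grade {
        Grade::Excellent => "Top",
        Grade::Good => "Decent",
        Grade::Pass => "Basic",
        Grade::Fail => "None",
    }
}

fn main() {
    assert_eq!(label(Grade::Excellent), "Top");
    assert_eq!(label(Grade::Good), "Decent");
    assert_eq!(label(Grade::Pass), "Basic");
    assert_eq!(label(Grade::Fail), "None");
}
```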
Day 4: &str vs String is a mental shift
fn first_word(s: &str) -> &str {
s.split_whitespace().next().unwrap_or("")
}
fn shout(s: &str) -> String {
format!("{}!", s.to_uppercase())
}
&str is a borrowed view into existing data; String is owned and heap-allocated. The function signature tells you which one you're getting. Rust makes this explicit. Annoying at first, but it lets you manage memory deliberately and avoid unnecessary allocations.
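A small usage sketch of the two functions above (the main function and sample sentence are mine): borrowing leaves the original String intact, while shout returns a fresh owned one.

```rust
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn shout(s: &str) -> String {
    format!("{}!", s.to_uppercase())
}

fn main() {
    let sentence = String::from("hello brave world");
    // `&sentence` lends a view; `first_word` allocates nothing new.
    let word: &str = first_word(&sentence);
    assert_eq!(word, "hello");
    // `shout` builds and returns a fresh owned String.
    let owned: String = shout(word);
    assert_eq!(owned, "HELLO!");
    // `sentence` is still usable: we only borrowed it above.
    assert_eq!(sentence.len(), 17);
}
```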
Day 5: struct + impl, not classes
struct Temperature {
celsius: f64,
}
impl Temperature {
fn new(celsius: f64) -> Self {
Self { celsius }
}
fn to_fahrenheit(&self) -> f64 {
self.celsius * 9.0 / 5.0 + 32.0
}
fn is_fever(&self) -> bool {
self.celsius >= 38.0
}
}
Python classes bundle data and behavior. Rust splits them: struct for state, impl for methods. &self is read-only; mutation needs &mut self, visible right in the signature. No inheritance, just composition and traits. How much OOP do you really need? More on this in what Rust structs taught me about state ownership.
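A hypothetical extension of this pattern (warm_by is my addition, not from the challenge) showing that mutation is visible both in the method signature and in the caller's binding:

```rust
struct Temperature {
    celsius: f64,
}

impl Temperature {
    fn new(celsius: f64) -> Self {
        Self { celsius }
    }

    // Read-only access: `&self`.
    fn to_fahrenheit(&self) -> f64 {
        self.celsius * 9.0 / 5.0 + 32.0
    }

    // Mutation is spelled out in the signature: `&mut self`.
    fn warm_by(&mut self, delta: f64) {
        self.celsius += delta;
    }
}

fn main() {
    let mut t = Temperature::new(37.0); // `mut` needed to call `warm_by`
    assert!((t.to_fahrenheit() - 98.6).abs() < 1e-9);
    t.warm_by(1.5);
    assert_eq!(t.celsius, 38.5);
}
```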
Day 6: Option<T> instead of None
#[derive(Debug, PartialEq)]
enum Direction {
North,
South,
East,
West,
}
fn parse_direction(c: char) -> Option<Direction> {
match c {
'N' => Some(Direction::North),
'S' => Some(Direction::South),
'E' => Some(Direction::East),
'W' => Some(Direction::West),
_ => None,
}
}
There is no null in Rust. Option<T> is either Some(value) or None, and the compiler forces you to handle both. This rules out the AttributeError: 'NoneType' object has no attribute 'x' runtime crashes we all know from Python. Custom enums make invalid states unrepresentable.
A challenge taker hit this exact wall on Day 4 and asked:
Just finished day 4 and Rust's generic/option type really gives me a headache, probably because I don't fully know the syntax yet.
&str.split_whitespace() returns an iterator. If this were Go (or Python), I'd just iterate it as usual and return the value. But iter.next() returns Option<T>. The compiler just tells me to add .expect("REASON") and everything works. I feel lost on this. Can you help me understand?
My reply:
The Option is there to protect you. Something is either Some or None, and the compiler forces you to handle it. .expect() works fine, until the first_word("") test runs: empty string, no words, None, panic. More idiomatic: unwrap_or, match, or if let, e.g. s.split_whitespace().next().unwrap_or(""). What feels cumbersome at first is exactly what you'll thank Rust for later: it forces you to handle it. Strict as hell, usually for a good reason :)
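The idioms from that reply in one runnable sketch; describe_first is a hypothetical helper of mine illustrating if let:

```rust
fn first_word(s: &str) -> &str {
    // `unwrap_or` supplies a fallback instead of panicking on None.
    s.split_whitespace().next().unwrap_or("")
}

fn describe_first(s: &str) -> String {
    // `if let` handles just the Some case, with an explicit else.
    if let Some(word) = s.split_whitespace().next() {
        format!("starts with '{word}'")
    } else {
        "empty input".to_string()
    }
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
    assert_eq!(first_word(""), ""); // no panic on the empty case
    assert_eq!(describe_first("rust rocks"), "starts with 'rust'");
    assert_eq!(describe_first(""), "empty input");
}
```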
Day 7: iterators feel Pythonic, with ? for early exit
fn score_summary(scores: &[i32]) -> Option<(i32, i32, f64)> {
let min = *scores.iter().min()?;
let max = *scores.iter().max()?;
let sum: i32 = scores.iter().sum();
let avg = sum as f64 / scores.len() as f64;
Some((min, max, avg))
}
Vec<T> is list[T]. .iter().min() returns Option<&i32> because the slice might be empty, and the ? operator acts as a declarative early return, short-circuiting to None if a value is missing. Iterator chains are zero-cost abstractions: they are lazy by default and compile into optimized machine code equivalent to manual loops.
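The same function as a self-contained unit with quick checks (the main function is mine); the empty slice shows ? short-circuiting to None:

```rust
fn score_summary(scores: &[i32]) -> Option<(i32, i32, f64)> {
    let min = *scores.iter().min()?; // `?` returns None early if empty
    let max = *scores.iter().max()?;
    let sum: i32 = scores.iter().sum();
    let avg = sum as f64 / scores.len() as f64;
    Some((min, max, avg))
}

fn main() {
    assert_eq!(score_summary(&[70, 90, 80]), Some((70, 90, 80.0)));
    assert_eq!(score_summary(&[]), None); // empty slice short-circuits
}
```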
Day 8: errors are values, ? is a one-character try/except
fn parse_score(s: &str) -> Result<u32, String> {
let n: u32 = s
.trim()
.parse()
.map_err(|_| format!("'{}' is not a valid number", s.trim()))?;
if n > 100 {
return Err(format!("{} is out of range (0-100)", n));
}
Ok(n)
}
Result<T, E> puts failure in the function signature, and as we've seen, ? propagates the error without try/except boilerplate. The compiler stops callers from ignoring it; the mindset shift I keep coming back to.
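A usage sketch for parse_score (the checks are mine): the caller gets a Result and has to acknowledge the error case before touching the value.

```rust
fn parse_score(s: &str) -> Result<u32, String> {
    let n: u32 = s
        .trim()
        .parse()
        .map_err(|_| format!("'{}' is not a valid number", s.trim()))?;
    if n > 100 {
        return Err(format!("{} is out of range (0-100)", n));
    }
    Ok(n)
}

fn main() {
    // Callers cannot silently ignore the failure path.
    assert_eq!(parse_score(" 88 "), Ok(88));
    assert!(parse_score("142").is_err()); // out of range
    assert!(parse_score("abc").is_err()); // not a number
}
```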
Related: Rust made me a better Python developer and the Rust compiler as an AI agent guardrail.
Day 9: closures and iterator chains
fn top_scorers(records: &[&str], threshold: u32) -> Vec<String> {
let mut pairs: Vec<(String, u32)> = records
.iter()
.filter_map(|record| {
let (name, score_str) = record.split_once(':')?;
let score: u32 = score_str.trim().parse().ok()?;
(score >= threshold).then(|| (name.trim().to_string(), score))
})
.collect();
pairs.sort_by_key(|&(_, score)| std::cmp::Reverse(score));
pairs
.into_iter()
.map(|(name, score)| format!("{} ({})", name, score))
.collect()
}
OK, here it got more complicated, but the pattern is common: iterator chains with filter_map and closures.
filter_map keeps Some and drops None, so ? inside the closure quietly skips malformed records. The closure here captures threshold from the enclosing scope, the same way Python closures (and lambdas) pick up surrounding variables. If you like the functional programming side of Python, Rust's iterators and closures will feel familiar, but with the added safety of the type system.
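The same chain as a runnable unit with a quick check (the sample records are mine); note the malformed record being skipped rather than crashing:

```rust
fn top_scorers(records: &[&str], threshold: u32) -> Vec<String> {
    let mut pairs: Vec<(String, u32)> = records
        .iter()
        .filter_map(|record| {
            let (name, score_str) = record.split_once(':')?;
            let score: u32 = score_str.trim().parse().ok()?;
            // `threshold` is captured from the enclosing scope.
            (score >= threshold).then(|| (name.trim().to_string(), score))
        })
        .collect();
    pairs.sort_by_key(|&(_, score)| std::cmp::Reverse(score));
    pairs
        .into_iter()
        .map(|(name, score)| format!("{} ({})", name, score))
        .collect()
}

fn main() {
    let records = ["ana: 91", "bob: 72", "oops", "cara: 88"];
    // "oops" has no ':' and is quietly dropped; "bob" is under threshold.
    assert_eq!(top_scorers(&records, 80), vec!["ana (91)", "cara (88)"]);
}
```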
Day 10: HashMap::entry().or_insert(0) is Rust's Counter
fn word_count(text: &str) -> HashMap<String, usize> {
let mut map: HashMap<String, usize> = HashMap::new();
for word in text.split_whitespace() {
let clean: String = word
.chars()
.filter(|c| c.is_alphabetic())
.flat_map(|c| c.to_lowercase())
.collect();
if !clean.is_empty() {
*map.entry(clean).or_insert(0) += 1;
}
}
map
}
fn summarize(text: &str, n: usize) -> Vec<String> {
let counts = word_count(text);
top_n(&counts, n)
.into_iter()
.map(|(word, count)| format!("{word} ({count})"))
.collect()
}
Honestly, you cannot beat the conciseness of collections.Counter in Python, but *map.entry(clean).or_insert(0) += 1 is the closest Rust equivalent. The entry API is a powerful pattern for counting and grouping. The snippet above has a lot to unpack thanks to Rust's iterators and closures, the same way list and generator comprehensions do in Python.
Full file (with use std::collections::HashMap; and a top_n helper) on the solutions branch.
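For convenience, here is word_count as a self-contained, runnable sketch; the sample text and checks are mine:

```rust
use std::collections::HashMap;

fn word_count(text: &str) -> HashMap<String, usize> {
    let mut map: HashMap<String, usize> = HashMap::new();
    for word in text.split_whitespace() {
        let clean: String = word
            .chars()
            .filter(|c| c.is_alphabetic())
            .flat_map(|c| c.to_lowercase())
            .collect();
        if !clean.is_empty() {
            // `entry` returns a handle to the slot, vacant or occupied;
            // `or_insert(0)` fills it on first sight, then we bump it.
            *map.entry(clean).or_insert(0) += 1;
        }
    }
    map
}

fn main() {
    let counts = word_count("Go, go GO! stop");
    assert_eq!(counts["go"], 3); // punctuation and case normalized away
    assert_eq!(counts["stop"], 1);
}
```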
Idiomatic refactors from the submissions
I got some code shared in DMs, which gave me a chance to walk through a few more idiomatic patterns.
1. match on Result becomes ?
// Submitted
for record in records {
match parse_score(record) {
Ok(score) => scores.push(score),
Err(err) => return Err(err),
}
}
// Idiomatic
for record in records {
scores.push(parse_score(record)?);
}
The ? operator is exactly that match. One character does the same work.
2. .unwrap() chains become ? inside filter_map
// Submitted
.filter(|r| r.split_once(':').unwrap().1.parse::<u32>().unwrap() >= threshold)
// Idiomatic and safer
.filter_map(|r| {
let (name, score_str) = r.split_once(':')?;
let score: u32 = score_str.trim().parse().ok()?;
(score >= threshold).then(|| (name.trim().to_string(), score))
})
.unwrap() panics on None or Err. Inside a filter_map, ? quietly skips the bad record instead of crashing the program.
3. One .collect(), not three
// Submitted
records
.iter()
.filter(|r| r.contains(':'))
.collect::<Vec<_>>()
.iter()
.filter(|r| ...)
.collect::<Vec<_>>()
.iter()
.map(|r| ...)
.collect::<Vec<_>>()
// Idiomatic
records
.iter()
.filter_map(|r| ... )
.map(|r| ... )
.collect::<Vec<_>>()
Each .collect() allocates a new Vec. Iterators are lazy on purpose: keep chaining and collect once at the end.
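A minimal runnable sketch of the single-collect style (the sample records and the names helper are hypothetical): the chain stays lazy until the final collect, which does the only allocation.

```rust
// Extract the trimmed names from well-formed "name: score" records,
// skipping anything without a ':', in a single lazy chain.
fn names(records: &[&str]) -> Vec<String> {
    records
        .iter()
        .filter_map(|r| r.split_once(':')) // drops malformed records
        .map(|(name, _)| name.trim().to_string())
        .collect() // the only allocation in the chain
}

fn main() {
    let records = ["ana: 91", "bad-record", "bob: 72", "cara: 88"];
    assert_eq!(names(&records), vec!["ana", "bob", "cara"]);
}
```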
4. One split_once call, destructure once
// Submitted
let name = record.split_once(':').unwrap().0.to_string();
let score = record.split_once(':').unwrap().1.parse::<u32>().unwrap();
// Idiomatic
let (name, score_str) = record.split_once(':')?;
let score: u32 = score_str.trim().parse().ok()?;
split_once returns an Option<(&str, &str)>. Destructure it once, name both halves, and move on. The repeated call hides intent and adds two unwraps that can panic.
5. sort_by_key + Reverse instead of .sort() + .reverse()
This was a refactoring I did myself after writing the initial version. Diff:
- pairs.sort_by(|a, b| b.1.cmp(&a.1));
+ // `sort_by_key` + `Reverse` reads cleaner than a manual `b.cmp(&a)` comparator.
+ pairs.sort_by_key(|&(_, score)| std::cmp::Reverse(score));
Here I switched from sort_by with a custom comparator to sort_by_key with Reverse; this feels similar to Python's sorted(..., key=..., reverse=True).
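The same refactor as a runnable sketch; the sample data and the rank helper are mine:

```rust
use std::cmp::Reverse;

// Sort (name, score) pairs by score, highest first; reads like Python's
// sorted(pairs, key=lambda p: p[1], reverse=True).
fn rank(mut pairs: Vec<(String, u32)>) -> Vec<(String, u32)> {
    pairs.sort_by_key(|&(_, score)| Reverse(score));
    pairs
}

fn main() {
    let pairs = vec![
        ("bob".to_string(), 72),
        ("ana".to_string(), 91),
        ("cara".to_string(), 88),
    ];
    let ranked = rank(pairs);
    assert_eq!(ranked[0].0, "ana");  // highest score first
    assert_eq!(ranked[2].0, "bob");  // lowest score last
}
```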
Let the linter teach you
Most of the patterns above get flagged by cargo clippy -- -D warnings.
Run it once and it will point you toward the idiomatic version, with a link to the rule.
Pair it with cargo fmt as you'd do in Python with ruff (and Prek).
What compounded
Ten days is not enough to write production Rust. It is enough though to feel a new discipline take hold.
Four shifts stuck for me when I started learning Rust:
- Immutable by default. You stop reaching for mut and start asking what actually needs to change and why.
- Errors as values. The function signature is more explicit about what can fail.
- Exhaustive match. Refactoring with more confidence, because there are no unhandled cases.
- Borrowed vs owned data. You think about who can mutate data, and whether you need to own it or just borrow a view.
I hope your takeaway isn't the syntax, but the new mental models Rust gives you.
This only touched the surface of Rust though. There is so much more to learn: lifetimes, traits, generics, async, error handling, and more. I'll cover those in future challenges.
Now that you have enough to be dangerous, what do you want to build with Rust? Reach out to me on LinkedIn or send me an email.
Keep reading
- Learning Rust Made Me a Better Python Developer
- What Building a JSON Tokenizer Taught Me About Rust
- A Race Condition Rust Wouldn't Have Let Me Write
The 10-day repo is here, with starter code and a solutions branch including commented walkthroughs.
If you want to go deeper, the next round will be a small project we build together over a couple of weeks. That is where it stops being exercises and starts being software. Details at scriptertorust.com.
What surprised you most? The comments are how I shape what comes next.
May 14, 2026
Kay Hayen
Nuitka Release 4.1
This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, “download now”.
This release adds many new features and corrections with a focus on async code compatibility, missing generics features, and Python 3.14 compatibility and Python compilation scalability yet again.
Bug Fixes
Python 3.14: Fix, decorators were breaking when disabling deferred annotations. (Fixed in 4.0.1 already.)
Fix, nested loops could have wrong traces leading to mis-optimization. (Fixed in 4.0.1 already.)
Plugins: Fix, run-time check of package configuration was incorrect. (Fixed in 4.0.1 already.)
Compatibility: Fix, __builtins__ lacked necessary compatibility in compiled functions. (Fixed in 4.0.1 already.)
Distutils: Fix, incorrect UTF-8 decoding was used for TOML input file parsing. (Fixed in 4.0.1 already.)
Fix, multiple hard value assignments could cause compile time crashes. (Fixed in 4.0.1 already.)
Fix, string concatenation was not properly annotating exception exits. (Fixed in 4.0.2 already.)
Windows: Fix, --verbose-output and --show-modules-output did not work with forward slashes. (Fixed in 4.0.2 already.)
Python 3.14: Fix, there were various compatibility issues including dictionary watchers and inline values. (Fixed in 4.0.2 already.)
Python 3.14: Fix, stack pointer initialization to localsplus was incorrect to avoid garbage collection issues. (Fixed in 4.0.2 already.)
Python 3.12+: Fix, generic type variable scoping in classes was incorrect. (Fixed in 4.0.2 already.)
Python 3.12+: Fix, there were various issues with function generics. (Fixed in 4.0.2 already.)
Python 3.8+: Fix, names in named expressions were not mangled. (Fixed in 4.0.2 already.)
Plugins: Fix, module checksums were not robust against quoting style of module-name entry in YAML configurations. (Fixed in 4.0.2 already.)
Plugins: Fix, doing imports in queried expressions caused corruption. (Fixed in 4.0.2 already.)
UI: Fix, support for uv_build in the --project option was broken. (Fixed in 4.0.2 already.)
Compatibility: Fix, names assigned in assignment expressions were not mangled. (Fixed in 4.0.2 already.)
Python 3.12+: Fix, there were still various issues with function generics. (Fixed in 4.0.3 already.)
Clang: Fix, debug mode was disabled for clang generally, but only ClangCL and macOS Clang didn’t want it. (Fixed in 4.0.3 already.)
Zig: Fix, --windows-console-mode=attach|disable was not working when using Zig. (Fixed in 4.0.3 already.)
macOS: Fix, yet another form of self dependencies needed to have support added. (Fixed in 4.0.3 already.)
Python 3.12+: Fix, generic types in classes had bugs with multiple type variables. (Fixed in 4.0.3 already.)
Scons: Fix, repeated builds were not producing binary identical results. (Fixed in 4.0.3 already.)
Scons: Fix, compiling with newer Python versions did not fall back to Zig when the developer prompt MSVC was unusable, and error reporting could crash. (Fixed in 4.0.4 already.)
Zig: Fix, the workaround for Windows console mode attach or disable was incorrectly applied on non-Windows platforms. (Fixed in 4.0.4 already.)
Standalone: Fix, linking with Python Build Standalone failed because libHacl_Hash_SHA2 was not filtered out unconditionally. (Fixed in 4.0.4 already.)
Python 3.6+: Fix, exceptions like CancelledError thrown into an async generator awaiting an inner awaitable could be swallowed, causing crashes. (Fixed in 4.0.4 already.)
Fix, not all ordered set modules accepted generators for update. (Fixed in 4.0.5 already.)
Plugins: Disabled warning about rebuilding the pytokens extension module. (Fixed in 4.0.5 already.)
Standalone: Filtered libHacl_Hash_SHA2 from link libs unconditionally. (Fixed in 4.0.5 already.)
Debugging: Disabled unusable unicode consistency checks for Python versions 3.4 to 3.6. (Fixed in 4.0.5 already.)
Python 3.12+: Avoided cloning call nodes on class level, which caused issues with generic functions in combination with decorators. (Added in 4.0.5 already.)
Python 3.12+: Added support for generic type variables in async def functions. (Added in 4.0.5 already.)
UI: Fix, flushing outputs for prompts was not working in all cases when progress bars were enabled. (Fixed in 4.0.6 already.)
UI: Fix, unused variable warnings were missing at C compile time when using zig as a C compiler. (Fixed in 4.0.6 already.)
Scons: Fix, forced stdout and stderr paths as a feature was broken. (Fixed in 4.0.6 already.)
Fix, replacing a branch did not accurately track shared active variables causing optimization crashes. (Fixed in 4.0.7 already.)
macOS: Fix, failed to remove extended attributes because files need to be made writable first. (Fixed in 4.0.7 already.)
Fix, dict pop and setdefault rewrites using := lacked exception-exit annotations for un-hashable keys. (Fixed in 4.0.8 already.)
Python 3.13: Fix, the __parameters__ attribute of generic classes was not working. (Fixed in 4.0.8 already.)
Python 3.11+: Fix, starred arguments were not working as type variables. (Fixed in 4.0.8 already.)
Python 2: Fix, FileNotFoundError compatibility fallback handling was not working properly. (Fixed in 4.0.8 already.)
Compatibility: Fix, loop ownership check in value traces was missing, causing issues with nested loops.
Windows: Improved --windows-console-mode=attach to properly handle console handles, enabling cases like os.system to work nicely.
Python 2: Fix, there was a compatibility issue where providing default values to the mkdtemp function was failing.
Windows: Fix, there were spurious issues with C23 embedding in 32-bit MinGW64, resolved by switching to coff_obj resource mode for it as well.
Plugins: Fix, the post-import-code execution could fail because the triggering sub-package was not yet available in sys.modules.
UI: Fix, listing package DLLs with --list-package-dlls was broken due to recent plugin lifecycle changes.
UI: Fix, --list-package-exe was not working properly on non-Windows platforms, failing to detect executable files correctly.
UI: Handled paths starting with {PROGRAM_DIR} the same as a relative path when parsing the --onefile-tempdir-spec option.
Plugins: Followed multiprocessing forkserver changes for newer Python versions.
Python 3.12+: Fix, generic class type parameters handling was incorrect.
Python 3.12: Fix, deferred evaluation of type aliases was failing.
Python 3.12+: Aligned sum built-in float summation with CPython's compensated sum for better accuracy.
Python 3.10+: Fix, uncompiled coroutine throw() return handling was incorrect; completed coroutine results are now restored via StopIteration.value rather than exposed as ordinary return values to the outer await chain.
Python 3.13+: Fix, uncompiled coroutine cancel()/await suspension handling was incorrect, improved to ensure integration compatibility.
macOS: Made finding create-dmg more robust by also checking the Homebrew path for Intel and from PATH properly.
Compatibility: Fix, class frames were not exposing frame locals.
UI: Detected static-libpython problems, which affected some forms of Anaconda.
Distutils: Rejected --project mixed with --main arguments as it is not useful.
macOS: Fix, zig from PATH or from ziglang was not being used.
Distutils: Fix, the wrong module-root config value was being checked for the uv build backend.
macOS: Fix, was attempting to change removed (rejected) DLLs, which of course failed and errored out.
Python 3.14: Fix, tuple reuse was not fully compatible, potentially causing crashes due to outdated hash caches.
Fix, fake modules were still being attempted to be located when imported by other code, which could conflict with existing modules.
Python 3.5+: Fix, failed to send uncompiled coroutines the sent-in value in yield from.
Fix, older gcc compilers lacking newer intrinsic methods had compilation issues that needed to be addressed.
Standalone: Fix, multiphase extension modules with post-load code were not working properly.
Fix, avoid using the non-inline copy of pkg_resources with the inline copy of Jinja2. These could mismatch and cause errors.
Fix, loops could make releasing of previous values very unclear, causing optimization errors.
Fix, incbin resource mode was not working with the old gcc C++ fallback.
Python 3.4 to 3.6: Fix, bytecode demotion was not working properly for these versions; bytecode-only files were not working either.
Plugins: Added a check for the broken patchelf versions 0.10 and 0.11 to prevent breaking Qt plugins.
Android: Allowed patchelf version 0.18 on Android.
Windows: Fix, the header path for self uninstalled Python was not detected correctly.
Release: Fix, inclusion of the pkg_resources inline copy for Python 2 in source distributions was missing.
UI: Detected the OBS versions of SUSE Linux better.
SUSE: Allowed using patchelf 0.18.0 there too.
Python 3.11: Fix, package and module dicts were not aligned closely enough to avoid a CPython bug.
Fix, unbound compiled methods could crash when called without an object passed.
Standalone: Fix, multiphase extension modules with post-load code. (Fixed in 4.0.8 already.)
Onefile: Fix, while waiting for the child process, it may already have terminated.
macOS: Removed existing absolute rpaths for Homebrew and MacPorts.
Python 3.14: Avoided warning in CPython headers.
Python 3.14: Followed allocator changes more closely.
Compatibility: Avoided using pkg_resources for Jinja2 template location for loading.
No-GIL: Applied some bug fixes to get basic things to work.
Package Support
Standalone: Add support for newer paddle version. (Added in 4.0.1 already.)
Standalone: Add workaround for refcount checks of pandas. (Fixed in 4.0.1 already.)
Standalone: Add support for newer h5py version. (Added in 4.0.2 already.)
Standalone: Add support for newer scipy package. (Added in 4.0.2 already.)
Plugins: Revert accidental os.getenv over os.environ.get changes in anti-bloat configurations that stopped them from working. Affected packages are networkx, persistent, and tensorflow. (Fixed in 4.0.5 already.)
Standalone: Added missing DLLs for openvino. (Added in 4.0.7 already.)
Enhanced the package configuration YAML schema by adding the relative_to parameter for from_filenames DLL specification, avoiding error-prone purely relative paths.
Standalone: Fix, flet_desktop app assets were missing, now preserving the packaged runtime and sidecar DLLs.
Standalone: Added support for the tyro package.
Standalone: Added data files for the perfetto package.
Standalone: Added support for anyio process forking.
Standalone: Added support for the plotly.graph package.
Anaconda: Fix, dependencies for the numpy conda package on Windows were incorrect.
Plugins: Enhanced the auto-icon hack in PySide6 to use compatible class names.
Standalone: Fix, Qt libraries were duplicated with PySide6 WebEngine framework support on macOS.
Plugins: Fix, automatic detection of mypyc runtime dependencies was including all top-level modules of the containing package by accident. (Fixed in 4.0.5 already.)
Anaconda: Fix, delvewheel plugin was not working with Python 3.8+. This enhances compatibility with installed PyPI packages that use it for their DLLs. (Fixed in 4.0.6 already.)
Plugins: Fix, our protection workaround could confuse methods used with PySide6.
New Features
UI: Added the --recommended-python-version option to display recommended Python versions for supported, working, or commercial usage.
UI: Add message to inform users about Nuitka[onefile] if compression is not installed. (Added in 4.0.1 already.)
UI: Add support for uv_build in the --project option. (Added in 4.0.1 already.)
Onefile: Allow extra includes as well. (Added in 4.0.2 already.)
UI: Add nuitka-project-set feature to define project variables, checking for collisions with reserved runtime variables. (Added in 4.0.2 already.)
Scons: Added new option to select --reproducible builds or not. (Added in 4.0.6 already.)
Python 3.10+: Added support for importlib.metadata.package_distributions(). (Added in 4.0.8 already.)
Plugins: Added support for the multiprocessing forkserver context. (Added in 4.0.8 already; for 4.1, support for Python 3.6 and earlier as well as 3.14 was added too.)
Reports: Added structured resource usage (rusage) performance information to compilation reports.
Reports: Included individual module-level C compiler caching (ccache/clcache) statistics in compilation reports.
Added support for detecting and correctly resolving the Python prefix for the PyEnv on Homebrew Python flavor.
macOS: Added support for rusage information for Scons.
UI: Added the __compiled__.extension_filename attribute to give the real filename of the containing extension module.
Windows: Added support for --clang for ARM. (Added in 4.0.8 already.)
Windows: Added support for resource names as not just integers, important when we copy them from template files.
MacPorts: Added basic support for this Python flavor. More work will be needed to get it to work fully though.
Optimization
- Avoid including `importlib._bootstrap` and `importlib._bootstrap_external`. (Added in 4.0.1 already.)
- Linux: Cached the `syscall` used for time keeping during compilation to avoid loading `libc` for each trace. (Added in 4.0.8 already.)
- UI: Output a warning for modules that remain unfinished after the third optimization pass.
- Added an extra micro pass trigger when new variables are introduced or variable usage changes severely, ensuring optimizations are fully propagated and avoiding unnecessary extra full passes.
- Provided scripts to compile Python statically with PGO tailored for Nuitka on Linux, Windows, and macOS.
- Added support for running the Data Composer tool from a compiled Nuitka binary without spawning an uncompiled Python process.
- Enhanced the usage of `vectorcall` for `PyCFunction` objects by directly checking for its presence instead of relying purely on flags, allowing more frequent use of this faster execution path.
- Cached frequently used declarations for top-level variables to speed up C code generation.
- Sped up trace collection merging by avoiding unnecessary set creation and using a set instead of a list for escaped traces.
- Optimized plugin hook execution by tracking overloaded methods and added an option to show plugin usage statistics.
- Improved performance of module location by avoiding unnecessary module name reconstruction and redundant filesystem checks for pre-loaded packages.
- Improved the caching of distribution name lookups to effectively avoid repeated IO operations across all package types.
- Plugins: Cached callback plugin dispatch for `onFunctionBodyParsing` and `onClassBodyParsing` to skip argument computation when no plugin overrides them.
- Python 3.13: Handled sub-packages of `pathlib` as hard modules.
- Handled hard attributes through merge traces as well.
- Made constant blobs more compact by avoiding repeated identifiers and unnecessary fields.
- Enhanced Python compilation scripts further. (Fixed in 4.0.8 already.)
- Recognized late incomplete variables better. (Fixed in 4.0.8 already.)
- Made constant blobs more compact. (Fixed in 4.0.8 already.)
- Optimized calls with only constant keywords and variable posargs too.
Anti-Bloat
- Fix, memory bloat occurred when C compiling `sqlalchemy`. (Fixed in 4.0.2 already.)
- Avoid using `pydoc` in `PySimpleGUI`. (Added in 4.0.2 already.)
- Avoided using `doctest` from `zodbpickle`. (Added in 4.0.5 already.)
- Avoided inclusion of `cython` when using `pyav`. (Added in 4.0.7 already.)
- Avoided including `typing_extensions` when using `numpy`. (Added in 4.0.7 already.)
Organizational
- UI: Relocated the warning about the available source code of extension modules to be evaluated at a more appropriate time.
- Debian: Remove recommendation for the `libfuse2` package as it is no longer useful.
- Debian: Used `platformdirs` instead of `appdirs`.
- Debugging: Removed the Python 3.11+ restriction for `clang-format` as it is available everywhere, even Python 2.7, and we still want nicely formatted code when we read things. (Added in 4.0.6 already.)
- Removed the no-longer-useful inline copy of `wax_off`. We have our own stubs generator project.
- Release: Added missing package to the CI container for building Nuitka Debian packages.
- Developer: Updated AI instructions for creating Minimal Reproducible Examples (MRE) to skip unneeded C compilation.
- Debugging: Added an internal function for checking if a string is a valid Python identifier.
- AI: Added a task in Visual Studio Code to export the currently selected Python interpreter path to a file, making it available as "python" and "pip" matching the selected interpreter. This makes it easier to use a specific version with no instructions needed.
- AI: Updated the rules to instruct AI to only generate useful comments that add context not present in the code.
- Containers: Added template rendering support for Jinja2 (`.j2`) container files in our internal Podman tools.
- Projects: Clarified the current status and rationale of Python 2.6 support in the developer manual.
- Debugging: Added experimental flag `--experimental=ignore-extra-micro-pass` to allow ignoring extra micro pass detection.
- Visual Code: Added integration scripts for `bash` and `zsh` autocompletion of Nuitka CLI options. These are now also integrated into Visual Studio Code terminal profiles and the Debian package.
- RPM: Included the Python compile script for Linux.
- RPM: Removed the requirement for `distutils` in the spec.
Tests
- Install only necessary build tools for test cases.
- Avoided spurious failures in reference counting tests due to Python internal caching differences. (Fixed in 4.0.3 already.)
- Fix, the parsing of the compilation report for reflected tests was incorrect.
- Python 3.14: Ignored a syntax error message change.
- Python 3.14: Added test execution support options to the main test runner to use this version as well.
- Fix, the runner binary path was mishandled for the third pass of reflected compilations.
- Removed the usage of obsolete plugins in reflected compilation tests.
- Debugging: Prevented boolean testing of `namedtuples` to avoid unexpected bugs.
- Added the `Test` suffix to syntax test files and disabled "python" mode and spell checking for them to resolve issues reported in IDEs.
- Fix, newline handling in diff outputs from the output comparison tool was incorrect.
- Covered `post-import-code` functionality with a new subpackage test case.
- Prevented the program test suite from running an unnecessary variant to save execution time.
- macOS: Ignored differences from GUI framework error traces in headless runs in output comparisons.
- The reflected test for Nuitka, where it compiles itself and compares its operation, has been restored to a functional state.
- Used the new method to clear internal caches, if available, for reference counts.
- Disabled running the nested loops test with Python 2.6.
- Containers: Detected Python 2-defaulting containers in Podman tooling.
Cleanups
- UI: Fix, there was a double space in the Windows Runtime DLLs inclusion message. (Fixed in 4.0.1 already.)
- Onefile: Separated files and defines for extra includes for onefile boot and Python build.
- Scons: Provided nicer errors when "unset" variables are used, so we can report them properly.
- Refactored the process execution results to correctly use our `namedtuples` variant, which makes it easier to understand what code does with the results.
- Quality: Enabled automatic conversion of em-dashes and en-dashes in code comments in the autoformat tool. AI won't stop producing them, they can cause a `SyntaxError` on older Python versions, and unnecessary UTF-8 use is unwelcome anyway.
- Ensured that cloned outline nodes are assigned their correct names immediately upon creation, which avoids inconsistencies during their creation.
- Quality: Updated to the latest versions of `black` and adopted a faster `isort` execution by caching results.
- Quality: Modified the PyLint wrapper to exit gracefully instead of raising an error when no matching files require checking.
- Quality: Avoided checking YAML package configuration files twice, since autoformat already handles them.
- Quality: Ensured that YAML package configuration checks output the original filename instead of the temporary one when a failure occurs.
- Quality: Prevented pushing of tags from triggering git pre-push quality checks.
- Quality: Silenced the output of `optipng` and `jpegoptim` during image optimization auto-formatting.
- Visual Code: Added the generated Python alias path file to the ignore list.
- Quality: Enabled auto-formatting for the Nuitka devcontainer configuration file.
- Watch: Avoided absolute paths in compilation to make reports more comparable across machines.
- Quality: Changed `mdformat` checks to run only once and silently.
- Scons: Disabled format security errors in debug mode and moved Python-related warning disables into common build setup code.
- Quality: Updated to the latest `deepdiff` version.
- Scons: Avoided MSVC telemetry since it can produce outputs that break CI.
- Debugging: Enhanced the non-deployment handler for importing excluded modules.
- Split import module finding functionality into more pieces for enhanced readability.
- Debugging: Added more assertions for constants loading and checking.
- macOS: Dropped the `universal` target arch.
- Debugging: Added more traces for deep hash verification.
Summary
This release builds on the scalability improvements established in 4.0, with enhanced Python 3.14 support, expanded package compatibility, and significant optimization work.
The `--project` option seems usable now.

Python 3.14 support remains experimental; it only barely made the cut and will probably get there in hotfixes. Some of the corrections came in so late before the release that it just was not possible to feel good about declaring it fully supported yet.
Real Python
Quiz: Cursor vs Windsurf: Which AI Code Editor Is Best for Python?
In this quiz, you’ll test your understanding of Cursor vs Windsurf: Which AI Code Editor Is Best for Python?
By working through these questions, you’ll revisit how the two editors differ across code completion, agentic multi-file editing, and debugging.
You’ll also reconnect with the audit points worth applying whenever an AI agent writes Python on your behalf.
Quiz: Python Metaclasses
In this quiz, you’ll test your understanding of Python Metaclasses.
Metaclasses sit behind every class you write in Python, and they’re one of the language’s deeper object-oriented concepts. By working through this quiz, you’ll revisit how classes are themselves objects, how type creates them, and how a custom metaclass lets you customize class creation.
You’ll also reflect on when a custom metaclass is actually the right tool and when a simpler technique does the job better.
Python Engineering at Microsoft
PyCon US 2026
Come See Us at PyCon US 2026!
Microsoft and GitHub will be at PyCon US 2026, May 14–17 in Long Beach, CA. Stop by our booth, say hello, and tell us about your experience with our tools and services. We’d love to meet you.
Don’t miss the Meta booth on Saturday at 1 p.m., where we’ll be showing off the integration of Pylance with Meta’s new Pyrefly type checker. The integration is currently in early preview in our Insiders build, and we can’t wait to bring it to all our users later this year.
Hands-on Labs at the Booth
Drop in for 10-minute interactive labs covering:
- GitHub Copilot
- Azure DocumentDB
- Microsoft Foundry
- Microsoft Agent Framework
- Azure PostgreSQL
- Azure AI Search
Talks and Sessions
| Date & Time | Room | Session | Speaker |
|---|---|---|---|
| Wed, May 13 · 9:00 a.m.–12:30 p.m. | 101A | Build your first MCP server in Python | Pamela Fox |
| Wed, May 13 · 1:30 p.m.–2:30 p.m. | 201B | Dungeons and Databases: Build NPC agents to work with data in DocumentDB and Postgres (Microsoft Sponsor session) | Marko Hotti, Patty Chow |
| Thu, May 14 · 2:40 p.m.–3:05 p.m. | 104C | Education Summit: Big Lessons from Small Models, Teaching Python AI with SLMs | Gwyneth Peña-Siguenza |
| Thu, May 14 · 3:40 p.m.–4:05 p.m. | 104C | Education Summit: Your Slides, But Faster, Building an AI-powered presentation workflow | Pamela Fox |
| Fri, May 15 · 3:30 p.m.–4:00 p.m. | 104C | PyCharlas: Cómo pasé de perdida a enseñar Python + IA a miles, en un año | Gwyneth Peña-Siguenza |
| Sat, May 16 · 2:30 p.m.–3:45 p.m. | 201A | Maintainer Summit Tools Track: Dev Containers | Sarah Kaiser |
| Sun, May 17 · 1:00 p.m.–1:30 p.m. | Grand Ballroom A | A bridge over (not) troubled waters: Collecting marine data from your couch | Sarah Kaiser |
Can’t wait to see you there!
The post PyCon US 2026 appeared first on Microsoft for Python Developers Blog.
Bob Belderbos
Learn agentic AI in Python with 10 small exercises
Most "build an AI agent" tutorials hand you a framework and skip the part where you actually understand what it's doing under the hood. When the abstraction breaks, you can't debug it because you never built the layer underneath. Juanjo and I think that gap is worth closing.
Yesterday we shipped 10 small browser-based exercises that walk through that layer one pattern at a time (more on how we run them in the browser with Pyodide here).
This article is the conceptual journey behind them: how you get from "I can call Claude" to a complete agent loop with a testable architecture and a human-in-the-loop workflow. Each stage builds on the previous one.
Stage 1: make a model reply (exercise 1)
Every agent app starts with the same 3-line skeleton. Build a client, call messages.create, read content[0].text. The shape doesn't change much. Only what wraps around it does.
```python
import anthropic

client = anthropic.Anthropic()
msg = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=256,
    messages=[{"role": "user", "content": "Say hi"}],
)
print(msg.content[0].text)
```
Why content[0].text and not .text? Because content is a list of blocks (text, tool_use, and others). That list is how tool use plugs in later without breaking the response shape. Get this mental model before anything else.
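To make the block model concrete, here is a stand-in for that response shape (plain namespaces for illustration, not the SDK's real classes) showing how text and tool_use blocks coexist in one content list:

```python
from types import SimpleNamespace

# Hypothetical stand-ins that mimic the SDK's block shape, for illustration only.
content = [
    SimpleNamespace(type="text", text="Hi there!"),
    SimpleNamespace(type="tool_use", name="get_weather", input={"city": "Kraków"}),
]

text_parts = [b.text for b in content if b.type == "text"]
tool_calls = [b.name for b in content if b.type == "tool_use"]
print(text_parts, tool_calls)
```

Code that filters on `block.type` keeps working unchanged when tool use arrives later; code hardwired to `.text` on the whole response would not.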
Stage 2: make the reply machine-readable (exercises 2, 3)
Raw LLM strings are unreliable. The fix is two paired habits: a specific system prompt that locks the output shape, and a Pydantic model that validates it on the way back in.
```python
from pydantic import BaseModel

class ExpenseResult(BaseModel):
    category: str
    confidence: float

result = ExpenseResult.model_validate_json(msg.content[0].text)
```
Treat the system prompt like an API contract. Say "JSON only", show the literal shape, forbid improvisation ("no punctuation, no explanation, nothing else"). The phrase "nothing else" is doing real work; without it, models love to append a friendly sentence that breaks your parser.
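The same contract can be enforced defensively on the way back in. A minimal stdlib-only sketch with a hypothetical parse_reply helper (the exercises themselves use Pydantic for this):

```python
import json

SYSTEM_PROMPT = (
    "Reply with JSON only, exactly this shape: "
    '{"category": "<string>", "confidence": <number between 0 and 1>}. '
    "No punctuation, no explanation, nothing else."
)

def parse_reply(raw: str) -> dict:
    """Fail loudly the moment the reply is not the pure JSON the prompt demanded."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model broke the contract: {raw!r}") from exc
    if not isinstance(data, dict) or not {"category", "confidence"} <= data.keys():
        raise ValueError(f"missing keys in: {data!r}")
    return data

print(parse_reply('{"category": "travel", "confidence": 0.93}'))
```

Failing loudly beats silently accepting a reply with a friendly sentence glued on: the error surfaces at the parse boundary, not three layers later.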
Stage 3: make it remember (exercise 4)
LLMs don't remember anything. They have no state, no memory, no context beyond the current call. The "conversation" is a fiction we create by sending the whole message history every time.
To get a continuous conversation, you keep the list of {"role": ..., "content": ...} dicts and send the whole thing every turn. Append the user message before the call, the assistant reply after. Roles must alternate.
```python
history = []

def ask(user_msg):
    history.append({"role": "user", "content": user_msg})
    reply = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=512,
        messages=history,
    ).content[0].text
    history.append({"role": "assistant", "content": reply})
    return reply
```
State lives in your code, not the model. That single realization clears up most of the confusion students have about context windows and "memory."
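It also demystifies context limits: the history list simply grows until it no longer fits, and trimming it is your job. A sketch of one possible strategy, dropping old turns in user/assistant pairs so roles keep alternating (the function name and policy are illustrative, not from the exercises):

```python
def trim_history(history, max_turns=20):
    """Drop the oldest messages, in user/assistant pairs, so the
    remaining roles still alternate starting with a user message."""
    excess = len(history) - max_turns
    if excess <= 0:
        return history
    excess += excess % 2  # round up to a whole pair, never split one
    return history[excess:]

history = [
    {"role": "user", "content": "a"},
    {"role": "assistant", "content": "b"},
] * 15  # 30 messages
print(len(trim_history(history, max_turns=20)))  # keeps the most recent 20
```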
Stage 4: give the model hands (exercise 5)
Tool use turns a chatbot into something that can act. The loop is dumber than people think:
```python
while True:
    response = client.messages.create(..., tools=TOOLS, messages=messages)
    if response.stop_reason == "end_turn":
        return response.content[0].text
    # else: run the tool the model asked for, append the result, loop again
```
Two gotchas: append the full response.content as the assistant turn (it contains the tool_use blocks the model needs to see), and tool results come back wrapped in a user message, not assistant.
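Those two shapes are easier to remember as literal data. A sketch of what gets appended after the model requests a single tool call, following the Messages API block layout (the ID toolu_01, the tool name, and the weather string are all made up):

```python
messages = []

# 1) The assistant turn: append the FULL content list, tool_use block included.
messages.append({
    "role": "assistant",
    "content": [
        {"type": "text", "text": "Let me look that up."},
        {"type": "tool_use", "id": "toolu_01", "name": "get_weather",
         "input": {"city": "Kraków"}},
    ],
})

# 2) The tool result comes back wrapped in a USER turn, not an assistant one.
messages.append({
    "role": "user",
    "content": [
        {"type": "tool_result", "tool_use_id": "toolu_01",
         "content": "12°C, light rain"},
    ],
})
```

The `tool_use_id` is what lets the model match a result back to the call it made, so it must be copied from the `tool_use` block verbatim.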
Stage 5: make it swappable and testable (exercises 6, 7, 8)
By exercise 6 the chatbot works, but it's also often a highly coupled mess importing external dependencies like anthropic and sqlite3 into the business logic. Time for three common patterns, applied to LLM apps:
- A `Protocol` for the LLM provider, so tests can pass a `MockProvider` with a `.calls` list instead of an API key.
- A Repository pattern for the persistence layer, so an in-memory dict satisfies the same interface as a database backend.
- A service layer that accepts both via `__init__` and orchestrates: call provider, parse, save, return.
That's the four-layer agent architecture, built piece by piece instead of dumped on you all at once.
Stage 6: keep a human in the loop (exercise 9)
When the model returns a confidence score, use it. Above the threshold: auto-accept. Below: show the suggestion and let the user confirm or override.
```python
def process(result, threshold=0.8):
    if result.confidence >= threshold:
        return result.category
    answer = input(f"Accept '{result.category}'? (Enter to confirm): ").strip()
    return answer or result.category
```
Make the accept path the cheapest action (empty input or y). Users pay the manual handling cost only when overriding. This is what separates a trusted assistant from one that quietly mislabels things, and it's the gap between "AI demo" and production-ready workflow.
Stage 7: generalize the loop (exercise 10)
The agent is exercise 5 with one change: replace the hardcoded function call with a TOOL_FUNCTIONS[name] lookup.
```python
TOOL_FUNCTIONS = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

# inside the loop:
content = str(TOOL_FUNCTIONS[block.name](**block.input))
```
Now adding a tool is one schema entry plus one dict entry. Swap add/multiply for search_web, query_db, send_email and the loop is identical. Look at agent frameworks under the hood (LangChain, OpenAI Assistants) and you'll see this same pattern.
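You can even watch the whole loop run without an API key by scripting a fake model; everything below is a hypothetical stand-in for the real client objects:

```python
from types import SimpleNamespace as NS

TOOL_FUNCTIONS = {"add": lambda a, b: a + b}

# A scripted "model": first it asks for a tool, then it ends the turn.
script = [
    NS(stop_reason="tool_use",
       content=[NS(type="tool_use", id="t1", name="add", input={"a": 2, "b": 3})]),
    NS(stop_reason="end_turn", content=[NS(type="text", text="The sum is 5.")]),
]

def run_agent(responses):
    for response in responses:  # stands in for `while True` plus API calls
        if response.stop_reason == "end_turn":
            return response.content[0].text
        for block in response.content:
            if block.type == "tool_use":
                # In the real loop this result would be appended back
                # as a tool_result message before the next call.
                result = str(TOOL_FUNCTIONS[block.name](**block.input))
                print(f"tool {block.name} returned {result}")
    return None

print(run_agent(script))
```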
What the journey teaches
Frameworks make sense once you can write the layer underneath. Skip that, and you are stuck the first time the abstraction leaks. After coaching many developers through this, the dividing line is clear: have they ever written the loop themselves?
The 10 exercises are deliberately small. The arc matters more than any single one. Once you've done them, "agentic AI" stops being "magic" and starts being a loop, schema, and some patterns you might already know.
Try them out:
- In the browser: pythonagenticai.com/exercises. No install, no API key, no dependencies. Loads fast.
- Locally: clone the repo and work through them in your IDE.
Keep reading
- How an AI expense agent is actually structured
- What production AI agents actually require
- Build the data layer before you touch the LLM
- A book I was recommended and am working through: Build Your Own Coding Agent: The Zero-Magic Guide to AI Agents in Pure Python
May 13, 2026
"Michiel's Blog"
httpx2!
It’s six weeks after we forked httpx and named our package httpxyz. Yesterday, the Pydantic people started their own fork, httpx2.
TL;DR: while we think httpxyz was definitely needed, we welcome httpx2 and think it should be the ‘blessed’ fork.

About httpx2
Our fork
We did a bunch of work on httpx, merging old open pull requests, forking httpcore, and making serious improvements fixing performance and other issues.
The Pydantic fork
Straight after we made our fork, I contacted Kludex, who is among other things maintainer of Starlette, about our fork. He said that he had also been thinking about doing a fork, but that he might prefer to do one himself, and also that he thought that ours could not get popular because it’s on Codeberg instead of on GitHub.
I’m not really sure about that last one. While it’s true that there are still no big examples of popular Python packages on Codeberg, more and more projects are currently moving there. Also, even though we are on Codeberg, we were still gaining ‘stars’ every single day, and if the Pydantic team had backed our fork, with their power we definitely could have made it a success. The majority of users don’t care at what forge the code is hosted; they install from PyPI, via pip or uv. Where the code is hosted is not really a factor in the popularity.
The way forward
The reason I started httpxyz was because of the impasse httpx was in, and that I felt something had to be done. It’s not that I wanted to be the maintainer of an HTTP library per se ;-)
So now that Pydantic, with their skillful team and their powerful ecosystem of packages, is creating their own fork, there is no point really in trying to compete with them. We’ll keep httpxyz up; but we will support httpx2 and will urge anyone who is trying to switch away from httpx to consider httpx2.
The current situation
As it stands, httpx2 is lacking the performance improvements we added to httpxyz. But it will not be long before they add those, too.
Also they already made some smart decisions I had been unsure about:
- they are switching from certifi to truststore
- they are switching to compression.zstd on Python 3.14+, enabling zstd compression by default
- they merged httpcore and vendored it in their repository
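The zstd decision can be version-gated, since the compression.zstd module only exists in the standard library from Python 3.14 onward. A sketch of how a library might detect it:

```python
import sys

if sys.version_info >= (3, 14):
    from compression import zstd  # stdlib Zstandard support, new in 3.14
    HAVE_ZSTD = True
else:
    HAVE_ZSTD = False  # older interpreters: fall back to gzip/deflate only

print("zstd available:", HAVE_ZSTD)
```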
I have great trust in their stewardship of the module. We don’t need ‘competing’ forks; we’ll fully support httpx2 and will encourage the community to do the same!
Thanks, and have fun!
Python Software Foundation
PSF Welcomes Hudson River Trading (HRT) as a Visionary Sponsor
[May 13, 2026] – The Python Software Foundation (PSF) is excited to announce that Hudson River Trading (HRT), a global leader in quantitative trading, has made a commitment to support Python and the PSF as a Visionary Sponsor.
HRT’s "Visionary" sponsorship—our highest tier—will help to support the foundation’s core work of advancing and protecting the Python programming language and supporting a diverse and international community of Python programmers. HRT is the first quantitative trading firm to become a PSF Visionary Sponsor, alongside companies including NVIDIA, Google, Fastly, Bloomberg, Meta, and Anthropic. Contributions at this level directly fund the critical work that keeps Python thriving, including:
- CPython Development: Ensuring the core language remains fast, stable, and modern.
- PyPI Infrastructure: Maintaining the Python Package Index, which serves billions of downloads to developers worldwide.
- Community Programs: Supporting Python workshops, events, and user groups globally, as well as hosting PyCon US each year.
- Security Initiatives: Hardening the ecosystem against supply chain vulnerabilities.
A Shared Commitment to Python
Hudson River Trading is no stranger to the power of Python. As a leading multi-asset class quantitative trading firm, HRT relies on Python for research, data analysis, and engineering workflows. With this donation, HRT is giving back to the tools that empower their engineers and helping to ensure that Python remains flexible, effective, and welcoming in the ways that have made it one of the most popular programming languages in the world. Read more about Open Source at HRT on this page.
“Python is a cornerstone of HRT’s research and trading infrastructure. Our engineers use Python extensively to build cutting-edge tooling that enhances our developer workflows, and we believe strongly in contributing to the open source software that makes our work possible. We are proud to support the PSF as a Visionary Sponsor helping to safeguard Python as a robust, accessible, and community-driven language for years to come.” – Prashant Lal, Partner at Hudson River Trading
“Part of HRT's edge is our engineering, and one of our core values is 'Make It Better'. Our support of the Python Software Foundation – alongside our contributions to many other open source projects – reflects our desire to remain active, collaborative participants in the OSS engineering community over the long term, for the benefit of all.” – Hashem, Lead Software Engineer at Hudson River Trading
“At HRT, we’ve always believed that the best way to advance Python is by working hand-in-hand with the community. Our internal work on lazy imports gave us deep expertise in the problem space, and we channeled that experience directly into open collaboration by contributing to the development of PEP 810. We pride ourselves on being exemplary participants in both the trading markets and the open source community, and our sponsorship of the Python Software Foundation reflects that genuine spirit of collaboration.” – Pablo Galindo Salgado, Lead Software Engineer at Hudson River Trading
As part of its ongoing participation in the Python ecosystem, HRT will be open sourcing some of its own projects and announcing additional OSS contributions later this year. To learn more about HRT’s open engineering, research, and data science roles, visit https://www.hudsonrivertrading.com/careers/.
The PSF is grateful for Hudson River Trading’s support, alongside that of each of our Visionary Sponsors, and we hope you will join us in thanking them for their commitment to the PSF and the Python community!
About Hudson River Trading (HRT)
Hudson River Trading (HRT) is a leading quantitative trading firm at the forefront of technical innovation in global financial markets. Every day, we bring together the world’s sharpest minds to collaboratively solve challenging problems and build technology that will drive the future of trading. Leveraging one of the world’s most sophisticated computing environments for research and development, we trade across asset classes and time horizons on more than 200 markets worldwide. We are a leading voice advocating for fair and transparent markets everywhere and dedicated to creating a better trading landscape for all. For more information, visit www.hudsonrivertrading.com.
About the Python Software Foundation (PSF)
The Python Software Foundation is a US non-profit whose mission is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers. The PSF supports the Python community using corporate sponsorships, grants, and donations. Are you interested in sponsoring or donating to the PSF so we can continue supporting Python and its community? Check out our sponsorship program, donate directly, or contact our team at sponsors@python.org!
Real Python
How to Use OpenCode for AI-Assisted Python Coding
OpenCode is an open-source AI coding agent that runs in your terminal and lets you analyze and refactor a Python project through conversational commands. In this guide, you’ll install it on your system, set it up with a free Google Gemini API key, and learn the basics of how to use it in your daily programming work.
Here’s what OpenCode’s main interface looks like:
OpenCode's Initial Screen
OpenCode works as a conversational assistant you explicitly direct. Ask it to analyze functions, refactor code, or explain issues. Press Enter to send your query, and you’ll get a response with full awareness of your project context. It supports more than seventy-five AI providers, including Anthropic, OpenAI, and Google Gemini.
If you’re a Python developer who prefers working in the terminal, OpenCode offers deliberate, context-aware assistance and a customizable AGENTS.md configuration file.
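For illustration, an AGENTS.md for a small project might look something like this; the contents are entirely hypothetical and should be adapted to your own conventions:

```markdown
# AGENTS.md

## Project conventions
- Python 3.11+, format with `black`, lint with `ruff`.
- Tests live in `tests/`; run them with `pytest` before proposing changes.

## Boundaries
- Never modify files under `migrations/`.
- Ask before adding new third-party dependencies.
```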
Take the Quiz: Test your knowledge with our interactive “How to Use OpenCode for AI-Assisted Python Coding” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Use OpenCode for AI-Assisted Python Coding
Quiz yourself on OpenCode: install it, connect an AI provider, and use it to analyze and refactor Python from your terminal.
Prerequisites
Before you start working with OpenCode, you’ll need to fulfill the following prerequisites regarding your current system and working environment:
- Python 3.11 or higher for the sample project
- A modern terminal emulator
You also need an AI provider account. In this guide, you’ll use Google AI Studio to get a free Gemini API key. The free Gemini tier lets you follow along without any additional costs. However, you can also use Anthropic, OpenAI, or GitHub Copilot if you already have subscriptions to those services.
This guide uses a sample project consisting of a dice-rolling script. You’ll find the full source code in a collapsible block at the start of Step 2. The download below includes the starting script and the final refactored version so you can compare your work when you’re done:
Get Your Code: Click here to download the free sample code you’ll use to learn about AI-assisted Python coding with OpenCode.
You’ll also need some background knowledge of Python programming and basic experience with your operating system’s terminal or command line.
Step 1: Install and Set Up OpenCode
It’s time to install OpenCode and get it talking to a model. You’ll install the tool on your system, authenticate with Gemini using a free API key, configure a default model, and verify that OpenCode responds correctly to your Python questions before you start coding with it.
Install and Launch OpenCode
The quickest way to install OpenCode is to use the official installation script, which you can do with the following command:
$ curl -fsSL https://opencode.ai/install | bash
This script detects your platform, downloads the appropriate binary, installs the tool, and adds it to your PATH.
If you prefer a package manager, you can also install OpenCode with Homebrew on macOS or Linux:
$ brew install anomalyco/tap/opencode
Note that the Homebrew team maintains the official formula and updates it less frequently than the installation script above.
Alternatively, you can install it as a Node.js package using npm if you already have this tool on your system:
$ npm install -g opencode-ai
If you’re on Windows, the best experience comes from using WSL (Windows Subsystem for Linux). Set up WSL first by following Microsoft’s WSL installation guide, then open a WSL terminal and run the curl command above. For optimal performance, you should store your project within the WSL filesystem rather than on a Windows drive.
Read the full article at https://realpython.com/opencode-guide/ »
PyCharm
Support for uv, Poetry, and Hatch Workspaces (Beta)
Workspaces are increasingly the go-to choice for companies and open-source teams aiming to manage shared code, enforce consistency, and simplify dependency management across multiple services. Working within massive codebases often means juggling many interdependent Python projects simultaneously.
To streamline this experience, PyCharm 2026.1.1 introduced built-in support for uv workspaces, as well as those managed by Poetry and Hatch. This new functionality – currently in Beta – allows the IDE to automatically manage dependencies and environments across your entire workspace.
Intelligent workspace detection
When you open a workspace, PyCharm can now derive its entire structure and all its dependencies directly from your pyproject.toml files. This allows the IDE to understand relationships between projects deeply, significantly reducing the amount of configuration you have to do manually.
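For reference, a uv workspace is declared in the root pyproject.toml, roughly like this (the project name and member paths are illustrative):

```toml
[project]
name = "monorepo-root"
version = "0.1.0"

[tool.uv.workspace]
members = ["packages/*"]
exclude = ["packages/legacy"]
```

This is the file PyCharm reads to derive the workspace structure, so keeping it accurate is what makes the automatic detection work.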
Because this is a fundamental change to how PyCharm handles your workspace, we’ve implemented it as an opt-in feature. Here is what you need to know about the transition:
- Opt-in dialog: When you open a project, PyCharm may suggest enabling automatic detection for uv workspaces and Poetry/Hatch setups.
- Manual configuration: You can toggle workspace detection in Settings | Project Structure.
- Configuration note: If you previously manually edited settings in .idea files, those settings may be reset when you agree to the new model.
Managing workspaces and their projects
PyCharm now provides an integrated experience that handles the complexities of multi-package setups in uv workspaces automatically. When you open a uv workspace, the IDE identifies the individual projects and their interdependencies, ensuring the project structure is ready for you to work with.
Visualizing workspace dependencies
Once the workspace is loaded, you can verify how your projects relate to one another. PyCharm presents these dependencies in Settings | Project Dependencies.
These relationships are derived directly from your configuration and are shown as read-only in the UI. To make changes to the dependency graph, you can edit the pyproject.toml file manually – PyCharm will then update its internal model.
Automatic environment configuration
PyCharm prioritizes a zero-config approach to your Python SDK. When you open a .py or pyproject.toml file within a project, the IDE performs an immediate check.
If a compatible environment already exists on your system, PyCharm automatically configures it as the SDK for that project. If no environment is detected, a file-level notification will appear suggesting that you create a new uv environment and install the necessary dependencies for that project.
Maintaining environment consistency
Beyond the initial setup, PyCharm continuously monitors the health of your environment to ensure it stays in sync with your defined requirements.
If a dependency is not defined in your pyproject.toml file but is imported in your code, PyCharm will trigger a warning with a Sync project quick-fix to resolve these discrepancies.
Import management
PyCharm also assists when you are actively writing code by identifying gaps in your project configuration.
If you import a package that isn’t present in the environment and is not yet listed in the project’s pyproject.toml, the IDE will detect the omission. A quick-fix will suggest adding the package to the environment and updating the corresponding .toml file simultaneously.
Transparency via the Python Process Output tool window
While PyCharm automates the backend execution of commands, such as uv sync --all-packages, it remains fully transparent.
You can track all executed commands and their live output in the Python Process Output tool window. If synchronization fails for an environment, you can analyze the specific error logs to quickly identify the root cause.
Poetry and Hatch workspaces
The logic for Poetry and Hatch workspaces follows this exact same workflow. PyCharm detects projects via their pyproject.toml files and manages the environments with the same automated precision.
The only minor difference is in tool selection – the suggested environment tool is determined by what you have specified in your pyproject.toml. If no tool is specified, PyCharm will prioritize uv (if installed) or a standard virtual environment to get you up and running quickly.
Looking ahead
This Beta version of the functionality is just the beginning of our focus on supporting complex workspace structures. We are already working on expanding the UI to allow creating new projects, linking dependencies, and activating the terminal for specific projects.
As we refine these features, your feedback is our best guide – please share your thoughts or report any issues on our YouTrack issue tracker.
Python GUIs
How to Add Custom Widgets to Qt Designer — Use widget promotion to integrate your own Python widgets into Qt Designer layouts
Can I use custom widgets in Qt Designer?
When you're building Python GUI applications with PyQt6 and Qt Designer, you'll reach a point where the built-in widgets aren't enough. Maybe you've created a custom plotting widget or a specialized input control in Python, and you want to place it into your Qt Designer layouts alongside all the standard widgets.
The good news is that Qt Designer supports exactly this through a feature called widget promotion. In this tutorial, you'll learn how to take any custom Python widget and integrate it into your Qt Designer .ui files, so you can position and size it visually just like any built-in widget.
The bad news is that since Qt Designer is a C++ application, it can't run your Python code. That means you won't see your custom widget rendered in the Designer preview. Instead, you'll see a placeholder (the base widget type you promoted from). Once you load the .ui file in your running Python application, your custom widget appears in all its glory.
With that caveat aside, let's look at how we can use custom widgets in Qt Designer.
What is Widget Promotion?
Widget promotion is Qt Designer's way of letting you swap a standard widget for a custom one. You start by placing a regular widget on your form, a plain QWidget for example, and then tell Qt Designer: "When this UI is actually used, replace this placeholder with my custom widget class instead."
Behind the scenes, this adds some extra information to the .ui file. When you load that file in Python using uic.loadUi() or compile it with pyuic6, the loader knows to import your custom class and use it in place of the base widget.
Creating a Custom Widget
Before we get into Qt Designer, let's create a simple custom widget in Python. We'll make a basic colored widget that draws a gradient background—something you'd never get from a standard widget.
Create a new file called custom_widgets.py:
from PyQt6.QtWidgets import QWidget
from PyQt6.QtGui import QPainter, QLinearGradient, QColor
from PyQt6.QtCore import Qt
class GradientWidget(QWidget):
    """A custom widget that displays a gradient background."""

    def __init__(self, parent=None):
        super().__init__(parent)

    def paintEvent(self, event):
        painter = QPainter(self)
        gradient = QLinearGradient(0, 0, self.width(), self.height())
        gradient.setColorAt(0.0, QColor("#2c3e50"))
        gradient.setColorAt(1.0, QColor("#3498db"))
        painter.fillRect(self.rect(), gradient)
        painter.end()
This widget overrides paintEvent to draw a diagonal gradient from dark blue to lighter blue. It's a straightforward example, but the same promotion process works for any custom widget—complex plotting canvases, custom controls, or anything else you build by subclassing a Qt widget.
Setting Up Your Project Structure
For widget promotion to work, the Python file containing your custom widget needs to be importable when your application runs. The simplest way to achieve this is to keep everything in the same directory:
my_project/
├── custom_widgets.py   # Your custom widget classes
├── mainwindow.ui       # Your Qt Designer file
└── main.py             # Your application entry point
The file name and class name matter here—you'll need to tell Qt Designer both of these during the promotion step.
Promoting a Widget in Qt Designer
Now we can open Qt Designer and set up the promotion.
Place a base widget on your form
Open Qt Designer and create a new Main Window (or open your existing .ui file). From the widget box on the left, drag a plain Widget (QWidget) onto your form. Position and resize it however you like—this is where your custom widget will appear when the application runs.
You can use any base widget class as your starting point. If your custom widget subclasses QPushButton, promote a QPushButton. If it subclasses QLabel, promote a QLabel. For our GradientWidget, which subclasses QWidget, a plain QWidget is the right choice.
Open the Promote Widgets dialog
Right-click on the widget you just placed. In the context menu, select Promote to.... This opens the Promoted Widgets dialog.

Fill in the promotion details
In the dialog, you'll see fields for three pieces of information:
- Base class name — This should already be filled in with the type of widget you right-clicked on (e.g., QWidget). Leave this as is.
- Promoted class name — Enter the name of your custom Python class. For our example, type GradientWidget.
- Header file — This is where Qt Designer's C++ heritage shows through. In C++, this would be a header file path. For Python, you enter the module import path for your widget, without the .py extension. Since our class lives in custom_widgets.py, type custom_widgets.

Leave the Global include checkbox unchecked.
Add and promote
Click Add to add your class to the list of known promoted widgets. Then, with your class selected in the list, click Promote. The dialog closes, and you'll notice the widget's class name in the Object Inspector (top-right panel) now shows GradientWidget instead of QWidget.
That's it for the Designer side. Save your .ui file.
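If you're curious, you can open the saved .ui file in a text editor and see what the promotion step recorded. It shows up as a customwidget entry roughly like the sketch below — the exact formatting may vary between Designer versions:

```xml
<customwidgets>
 <customwidget>
  <class>GradientWidget</class>
  <extends>QWidget</extends>
  <header>custom_widgets</header>
 </customwidget>
</customwidgets>
```

The header element is where the Python module path you typed ends up; the UI loader uses it to build the import.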
Promoting additional widgets
Once you've added a promoted class through this dialog, it becomes available for reuse. The next time you want to promote a widget to GradientWidget, just right-click the widget and you'll see it listed directly in the Promote to submenu—no need to open the full dialog again.
Loading the UI in Python
Now let's write the Python code to load the .ui file and see our custom widget in action. Create main.py:
import sys
from PyQt6.QtWidgets import QApplication, QMainWindow
from PyQt6 import uic
class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        uic.loadUi("mainwindow.ui", self)


app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
When you run this, uic.loadUi() reads the .ui file and sees that one of the widgets has been promoted to GradientWidget from the custom_widgets module. It automatically does the equivalent of:
from custom_widgets import GradientWidget
...and creates an instance of GradientWidget wherever you placed that promoted widget in your layout. Instead of a blank QWidget, you'll see your gradient background.
Using Compiled UI Files
If you prefer to compile your .ui files to Python using pyuic6 rather than loading them at runtime, promotion works the same way. Run:
pyuic6 mainwindow.ui -o ui_mainwindow.py
If you open the generated ui_mainwindow.py, you'll find an import line near the bottom:
from custom_widgets import GradientWidget
The compiled code creates your GradientWidget instance in the right place automatically. You can then use the generated file in your application:
import sys
from PyQt6.QtWidgets import QApplication, QMainWindow
from ui_mainwindow import Ui_MainWindow
class MainWindow(QMainWindow, Ui_MainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)


app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
Both approaches—runtime loading and compiled files—handle promoted widgets in the same way.
A More Practical Example: Embedding PyQtGraph
One of the most common reasons to promote widgets is to embed third-party plotting libraries like PyQtGraph into your Designer layouts. PyQtGraph's PlotWidget is a subclass of QGraphicsView, so you'd promote a QGraphicsView in Designer.
Here's how you'd fill in the promotion dialog for PyQtGraph:
- Base class name: QGraphicsView
- Promoted class name: PlotWidget
- Header file: pyqtgraph
That's all it takes. When your application runs, the placeholder QGraphicsView becomes a fully functional PlotWidget that you can plot data on.
import sys
from PyQt6.QtWidgets import QApplication, QMainWindow
from PyQt6 import uic
class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        uic.loadUi("mainwindow.ui", self)

        # self.graphWidget is the promoted PlotWidget
        # (use the objectName you set in Designer)
        self.graphWidget.plot([1, 2, 3, 4, 5], [10, 20, 15, 30, 25])


app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
Promoting Widgets from Submodules
If your custom widget lives in a submodule or package, you can use dotted import paths in the Header file field. For example, if your project structure looks like this:
my_project/
├── widgets/
│   ├── __init__.py
│   └── gradient.py   # contains GradientWidget
├── mainwindow.ui
└── main.py
You would enter widgets.gradient as the header file in the promotion dialog. The loader will then do:
from widgets.gradient import GradientWidget
This keeps things organized as your project grows.
Troubleshooting Common Issues
"No module named 'custom_widgets'" — This means Python can't find the file containing your custom widget class. Make sure the module file is in the same directory as your script (or somewhere on your Python path), and that the name in the promotion dialog matches the file name exactly (without .py).
The widget appears blank or as a plain QWidget — Double-check that the promoted class name matches your Python class name exactly, including capitalization. GradientWidget and gradientwidget are different classes as far as Python is concerned.
The widget doesn't resize properly — Make sure you've added the promoted widget to a layout in Qt Designer. Widgets outside of layouts won't resize with the window, regardless of whether they're promoted or not.
Changes to your custom widget don't appear in Designer — Remember, Qt Designer can't render Python widgets. You'll always see the base widget type in the Designer preview. Run your application to see your custom widget.
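For the first issue above, if your entry point can't live next to the widget module, one workaround is to put the module's directory on sys.path before loading the .ui file. A minimal sketch — the "widgets" directory name here is a hypothetical example, not something the promotion mechanism requires:

```python
import sys
from pathlib import Path

# Hypothetical layout: the promoted widget module lives in a "widgets"
# directory rather than next to the entry point.
widgets_dir = Path.cwd() / "widgets"

# Put it first on the import path so uic.loadUi() / the compiled UI
# module can resolve the promoted class.
sys.path.insert(0, str(widgets_dir))

print(sys.path[0])
```

Do this before the call to uic.loadUi() or before importing the compiled UI module, since that's when the import actually happens.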
Summary
Widget promotion is a straightforward way to bridge the gap between Qt Designer's visual layout tools and your custom Python widgets. The process is always the same:
- Place a base widget of the appropriate type in Qt Designer.
- Right-click and promote it, specifying your custom class name and module path.
- Save the .ui file and load it in your Python application.
Your custom widget won't be visible in the Designer preview—that's expected. But when your application runs, the promoted widget is swapped in seamlessly, giving you the best of both worlds: visual layout design with the full power of custom Python widgets.
For an in-depth guide to building Python GUIs with PyQt6 see my book, Create GUI Applications with Python & Qt6.
Bob Belderbos
Coding exercises that run in the browser with Pyodide
I've built coding-exercise platforms before (Python, Rust). AWS API Gateway + Lambda, Docker, etc. It works great, but that's a lot of infrastructure to teach someone a four-line function.
For our new Agentic AI cohort I wanted a free warm-up: ten short Python exercises that introduce the AI vendor SDK patterns (in this case Anthropic). The hard constraint was that visitors should be able to click "Run" without signing up, without bringing an API key, and without complex third party infrastructure. As this site is built on Cloudflare Pages, that meant an in-browser Python runtime. Enter Pyodide ...
Unlike toy Python interpreters, Pyodide runs real CPython compiled to WebAssembly (listen to my interview with Elmer Bulthuis on why Wasm is cool), which enables broad compatibility with the Python ecosystem, including native extension packages.
Getting it working was easy with some Claude Code prototyping; the interesting part was the last 20%. Here are some of the challenges I faced and how I worked around them.
Mocked tests + a stubbed SDK
Every exercise has a solution.py and a test_exercise.py. The tests look like this:
from unittest.mock import MagicMock, patch
from solution import get_completion
def test_returns_text():
    mock_client = MagicMock()
    mock_client.messages.create.return_value.content = [MagicMock(text="Hello, Pythonista!")]

    with patch("solution.anthropic.Anthropic", return_value=mock_client):
        assert get_completion("Say hello") == "Hello, Pythonista!"
patch("solution.anthropic.Anthropic") replaces the class with a mock for the duration of the with block. The original Anthropic class is never instantiated. Which means the only thing the real SDK contributes is the name anthropic.Anthropic existing somewhere on the Python path.
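As a self-contained illustration of that mechanism — the class and function below are local stand-ins, not the real SDK:

```python
from unittest.mock import MagicMock, patch

# Stand-in for the real SDK class: raising in __init__ proves the original
# is never instantiated while the patch is active.
class Anthropic:
    def __init__(self, *args, **kwargs):
        raise RuntimeError("real SDK must not run under test")

def get_completion(prompt: str) -> str:
    client = Anthropic()
    response = client.messages.create(
        model="some-model", messages=[{"role": "user", "content": prompt}]
    )
    return response.content[0].text

mock_client = MagicMock()
mock_client.messages.create.return_value.content = [MagicMock(text="Hello, Pythonista!")]

# patch() swaps the name by its dotted path; here the class lives in this module.
with patch(f"{__name__}.Anthropic", return_value=mock_client):
    result = get_completion("Say hello")

print(result)  # Hello, Pythonista!
```

The RuntimeError never fires: inside the with block, the name Anthropic resolves to the mock, so all that's needed from the "SDK" is that the name exists.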
So I don't install it. I write a tiny stub package straight to Pyodide's in-browser filesystem:
const ANTHROPIC_INIT = `
class Anthropic:
    def __init__(self, *args, **kwargs):
        pass
`;

const ANTHROPIC_TYPES = `
class TextBlock: ...
class MessageParam: ...
class ToolParam: ...
class ToolUseBlock: ...
`;
await pyodide.loadPackage(["pytest", "pydantic"]);
pyodide.FS.mkdirTree("/home/pyodide/anthropic");
pyodide.FS.writeFile("/home/pyodide/anthropic/__init__.py", ANTHROPIC_INIT);
pyodide.FS.writeFile("/home/pyodide/anthropic/types.py", ANTHROPIC_TYPES);
It's a package, not a single file, because some exercises also do from anthropic.types import TextBlock, which I needed to satisfy ty's type checks. Both modules exist only so the imports resolve. The bodies never execute under test, thanks to the mocking.
# Inside Pyodide, before running pytest:
sys.path.insert(0, "/home/pyodide")
# `import anthropic` finds the stub. `patch` replaces it. Tests run.
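The same stub-plus-patch combination works in plain CPython too. A minimal sketch outside the browser — the stub contents mirror the snippets above, but the temp-dir setup is purely illustrative:

```python
import sys
import tempfile
from pathlib import Path
from unittest.mock import MagicMock, patch

# Write the two-file stub package to a temp dir, mirroring the FS.writeFile calls.
stub_root = Path(tempfile.mkdtemp())
pkg = stub_root / "anthropic"
pkg.mkdir()
(pkg / "__init__.py").write_text(
    "class Anthropic:\n"
    "    def __init__(self, *args, **kwargs):\n"
    "        pass\n"
)
(pkg / "types.py").write_text("class TextBlock: ...\n")

# Put the stub first on the path, exactly like sys.path.insert(0, ...) in Pyodide.
sys.path.insert(0, str(stub_root))

import anthropic                       # resolves to the stub
from anthropic.types import TextBlock  # the import some exercises need

mock_client = MagicMock()
mock_client.messages.create.return_value.content = [MagicMock(text="hi")]

# patch() swaps the stub class; its body never runs under test.
with patch("anthropic.Anthropic", return_value=mock_client):
    client = anthropic.Anthropic()
    text = client.messages.create().content[0].text

print(text)  # hi
```

The stub only has to make the imports resolve; the mock does everything else.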
That one decision cuts ~3 seconds and several megabytes off the boot. The real anthropic package pulls in pydantic-core, httpx, httpcore, anyio, sniffio, idna, distro, certifi, typing-extensions. Every byte irrelevant to learning the pattern, because the test never lets the SDK run anyway.
If you've read build the data layer before you touch the LLM, this is the same strategy: cut the AI piece down to its smallest shape so the rest of the engineering is more flexible.
Lazy-loading the runtime
Pyodide is 5MB+ over the network. I don't want this to load on the homepage, not even on the exercise index page. Even on an exercise page, visitors might skim and leave. So the pyodide.js script tag isn't in the HTML. The page ships a ~250-line runner.js and that script injects Pyodide on demand:
// Module-level constants, defined once at the top of runner.js:
const PYODIDE_VERSION = "0.27.7";
const PYODIDE_URL = `https://cdn.jsdelivr.net/pyodide/v${PYODIDE_VERSION}/full/`;
const PYODIDE_JS_SRI = "sha384-90so5tCKvl0xs9agU29IMKlAVzhfzFX7QO//YxQkRhJG58bBZrFN+2ZTRB026X5X";
async function ensurePyodide() {
  if (pyodide) return pyodide;
  if (bootPromise) return bootPromise;

  bootPromise = (async () => {
    if (typeof loadPyodide !== "function") {
      await new Promise((resolve, reject) => {
        const s = document.createElement("script");
        s.src = PYODIDE_URL + "pyodide.js";
        s.integrity = PYODIDE_JS_SRI;
        s.crossOrigin = "anonymous";
        s.onload = resolve;
        s.onerror = () => reject(new Error("Failed to load pyodide.js"));
        document.head.appendChild(s);
      });
    }

    pyodide = await loadPyodide({ indexURL: PYODIDE_URL });
    await pyodide.loadPackage(["pytest", "pydantic"]);
    // write the anthropic stub here…
    return pyodide;
  })();

  return bootPromise;
}
Two triggers prewarm the runtime before the user clicks Run:
cm.on("focus", prewarm);
runBtn.addEventListener("mouseenter", prewarm, { once: true });
The moment they tab into the editor or hover the button, the 3-second cold start starts ticking. By the time they're done typing, the runtime is usually ready. The cached bootPromise deduplicates: focus and hover both await the same in-flight promise, never two parallel boots.
Tracking progress without a backend
No users, no database, no sessions, but I still want:
- ✓ badges on completed exercises in the list view
- A progress bar across all ten
- Draft code that survives a tab close
- A next step that only appears once all ten are green
One localStorage key holds the whole state:
const STORAGE_KEY = "pyai_progress_v1";
// { "first-api-call": { passed: true, code: "...", lastRun: 1736... } }
Three operations carry the state: saveCode(slug, code) runs on every CodeMirror change, markPassed(slug) runs when pytest returns 0, and get(slug) reads on page load to restore drafts and badges.
In a similar vein, the Solution tab stays locked until the tests pass. The point of an exercise is the struggle, not the answer.
Once markPassed(slug) writes to localStorage, it also fires a pyai:passed event, and a separate tabs.js listener flips the solution from <div data-solution-locked> to <div data-solution-revealed> and lazy-fetches solution.py for a side-by-side compare. No reload. One key, three consumers (runner, list page, solution tab).
And the key is versioned: pyai_progress_v1. The day I want to change the shape, I can bump it to _v2 and old state cleanly stops loading. No migration code, no schema check.
The list page reads the same store on render and walks the DOM:
document.querySelectorAll(".exercises-list-item").forEach((item) => {
  const slug = item.dataset.exerciseSlug;
  const { passed } = window.PyAIProgress.get(slug);
  if (passed) item.classList.add("is-passed");
});
When passedCount() >= total, a hidden next-step block flips visible. That's the whole mechanism: ten green checks reveal one element, all computed in the browser from that one localStorage key.
All static, all local
The whole thing is a static site. Cloudflare serves the HTML, JS, and the synced exercise files. The browser does the rest. Zero extra cost. It scales for free because the load is on the client, not a server.
For development, uv runs the end-to-end check with a single command:
uv run scripts/e2e_test.py
It walks every exercise in headless Chromium, pastes the reference solution, clicks Run, asserts the test suite passes. Ten exercises in ~22 seconds. Anytime the upstream content changes I know in under half a minute whether all ten warm-ups still pass end-to-end. I will save the details of this Playwright end-to-end testing for another article.
Starter code
The site this runs on is standalone so I put together a single-file Pyodide starter gist of a mini coding platform experience: code in the browser, click "Run tests", pytest runs against your code, all in the browser. Lazy boot and the Solution/Tests tabs are wired up. The SDK stub and localStorage progress I left out for simplicity, but the core Pyodide integration is there. You can download and build on it if you want to try your hand at a browser-based Python coding experience.
Try it out
Back to the 10 exercises, you can try them out here. They cover the basics that show up in the typical production Agentic AI app: a first API call, structured outputs with Pydantic, system prompts, multi-turn state, tool use, then the architectural patterns (Protocol, Repository, Service layer, HITL, the agent loop).
Keep reading
- How an AI expense agent is actually structured
- Build the data layer before you touch the LLM
- Modern Python tooling: uv, ruff, ty
One bigger lesson I'm taking away from this: every time I've built a thing server-side over the years, I was usually paying a complexity tax for flexibility I didn't need. Sometimes the right architecture is to push the work to the client, especially where modern browsers and Wasm can handle this performantly and securely.
May 12, 2026
PyCoder’s Weekly
Issue #734: Dunder-Gets, Django Tasks in Prod, Codex CLI, and More (2026-05-12)
#734 – MAY 12, 2026
View in Browser »
Do You Get It Now?
Learn about Python’s .__getitem__(), .__getattr__(), .__getattribute__(), and .__get__(): how they’re different and where to use them.
STEPHEN GRUPPETTA
Using Django Tasks in Production
Django added a generic API for dealing with concurrent tasks in version 6. This post talks about how it has been used in production.
TIM SCHILLING
Use Codex CLI to Enhance Your Python Projects
Learn how to use Codex CLI to add features to Python projects directly from your terminal, without needing a browser or IDE plugins.
REAL PYTHON course
Depot CI: Built for the Agent era
Depot CI: A new CI engine. Fast by design. Your GitHub Actions workflows, running on a fundamentally faster engine — instant job startup, parallel steps, full debuggability, per-second billing. One command to migrate →
DEPOT sponsor
Articles & Tutorials
Handling Schema Issues in Polars
You’ve got this great data pipeline going until one day it stops working. A schema error caused by an upstream column has stopped you in your tracks. This post talks about four different causes of schema errors and what to do about them.
THIJS NIEUWDORP
Textual: An Intro to DOM Queries (Part II)
Textual is a TUI framework library for building terminal applications. It uses a DOM to represent the widgets in the application, and that DOM is queryable. This is part 2 in a series on how to find things in your Textual DOM.
MIKE DRISCOLL
Everything You Always Wanted to Know About PyCon Sprints!
PyCon US includes coding sprints to work on CPython itself, or projects in the ecosystem like Django, Flask, and BeeWare. This post tells you all about sprints and how you can join in on the fun.
DEB NICHOLSON
Why TUIs Are Back
Terminal User Interfaces are seeing a resurgence in the tools space. This opinion piece briefly talks about the history of interfaces and why we are where we are now.
ALCIDES FONSECA
Parallel Python at Anyscale With Ray
Talk Python interviews Richard Liaw and Edward Oakes. They talk about Ray, an open source Python framework and distributed execution engine for AI workloads.
TALK PYTHON podcast
Python 3.14.5 Release Candidate
Normally nobody fusses over a release candidate of a point release, but 3.14.5 includes a major change: the rollback of the incremental garbage collector.
HUGO VAN KEMENADE
Wagtail 7.4: Custom Page Explorer, Preview Checks & More
Between autosave improvements, new ways to sort your pages, and a content checker upgrade, you’ll have a lot of reasons to move to Wagtail 7.4
MEAGEN VOSS
The Simplest MCP Example Possible in Python
This guide introduces you to connecting your code to a local LLM model. It covers Ollama and FastMCP and what you can do with these tools.
AL SWEIGART
ChatterBot: Build a Chatbot With Python
Build a Python chatbot with the ChatterBot library. Clean real conversation data, train on custom datasets, and add local AI with Ollama.
REAL PYTHON
Hardening Firefox With Claude Mythos Preview
New details about what Mozilla found and how agentic harnesses helped them reproduce real bugs and dismiss false positives.
MOZILLA
Projects & Code
Pymetrica: A Codebase Analysis Tool
GITHUB.COM/JUANJFARINA • Shared by Juan José Farina
secure: HTTP Security Headers for FastAPI, Flask, Django
GITHUB.COM/TYPEERROR • Shared by Caleb Kinney
Kirokyu: Modular Task Management System
GITHUB.COM/AMRYOUNIS • Shared by Amr Younis
Events
PyCon US 2026
May 13 to May 20, 2026
PYCON.ORG
Python Atlanta
May 14 to May 15, 2026
MEETUP.COM
Chattanooga Python User Group
May 15 to May 16, 2026
MEETUP.COM
PyDelhi User Group Meetup
May 16, 2026
MEETUP.COM
PyData London
June 5 to June 7, 2026
PYDATA.ORG • Shared by Tomara Youngblood
Happy Pythoning!
This was PyCoder’s Weekly Issue #734.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Marcos Dione
Monitoring Apache with SQL and Grafana
Ever since my last job I have been wanting to build this. I think it's not the first time I've done it, but for one reason or another, this time I did it (again?) in only two evenings.
In that job we had an internet facing API with Apache as the router in front of several services. All
our metrics and even our billing was based on the Apache logs. We had a system that ingested the logs
into a PostgreSQL database, and we tried to create Grafana panels and alerts based on that info. At
the same time, I wanted to reproduce awstats in Grafana, and found it was
almost impossible.
Another problem is that the usual tools for this, Loki or Prometheus, have big problems handling data
that is this arbitrary (think of the referer or user_agent columns) or whose value space is too big
(client is an IPv4 address, with about four billion possible values). They effectively suffer from what
they call a "cardinality bomb": since they build one time series database (TSDB) per combination of
fields (which they call "labels"), storage use is big and aggregation operations across TSDBs are expensive.
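To get a feel for the numbers, here's a back-of-the-envelope sketch; the distinct-value counts are made up for illustration, and the point is that cardinalities multiply (one series per unique label combination):

```python
# Hypothetical distinct-value counts for three Apache log fields
statuses = 10          # distinct HTTP status codes seen
urls = 5_000           # distinct URL paths
user_agents = 20_000   # distinct user agent strings

# One time series per (status, url, user_agent) combination
series = statuses * urls * user_agents
print(f"{series:,}")  # 1,000,000,000 potential time series
```

With a raw client IP as a label, the worst case multiplies by another four billion — which is exactly why these tools warn against high-cardinality labels.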
Last night I sat down to reimplement the ingestion side. Instead of PostgreSQL I used SQLite, mostly
because almost all of my services (with low traffic and mostly only me as a user) already use it. To be fair,
and one really can't expect anything else, the script is quite straightforward. It uses regexps to
parse the logs, which for the moment is good enough. I'm "releasing" it as is, because I'm tired, but you'll
find some surprises around parsing the request line (see request_re and its handling); some janky ways
to convert from str to int or datetime; and an iteration trick to use dataclasses as execute()
argument. I omitted some comments and all the testing:
#! /usr/bin/env python3 from dataclasses import dataclass from datetime import datetime, timedelta, timezone, tzinfo import pathlib import re import sqlite3 import sys # I miss dinant # no 0-255 range check since this is written by apache # if the number is not in that range, we have bigger problems octet_re = r'\d{1,3}' ip_re = r'\.'.join([ octet_re ] * 4) word_re = r'[^ ]+' identd_user_re = word_re # it can be '-' user_id_re = word_re # it can be '-' month_names = [ 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec' ] day_re = r'\d{1,2}' month_re = f"(?:{'|'.join(month_names)})" year_re = r'\d{4}' date_re = f"({day_re})/({month_re})/({year_re})" time_re = r'(\d{2}):(\d{2}):(\d{2})' utc_offset_re = r'(?:\+|\-)\d{4}' # no capture # fscking double escaping :( date_time_re = f"\\[{date_re}:{time_re} ({utc_offset_re})\\]" method_re = word_re url_re = word_re # technically not a word, but word_re is too generic proto_re = r'HTTP' # who are we kidding version_re = r'\d\.\d' # who are we kidding proto_and_version_re = f"({proto_re})/({version_re})" # idiot skrip kidz send no method or proto/version! # and re is silly? 
# enough to produce empty matches for the ()s here
# oh, but re.compile().match().groups() returns things like
# (None, None, None, None, '', '\x16\x03\x02\x01o\x01', '', '')
# so we gained nothing
request_re = f'"(?:({method_re}) ({url_re}) {proto_and_version_re}|()({url_re})()())"'

number_re = r'\d+'
http_status_re = number_re
bytes_rx_re = number_re
bytes_tx_re = number_re
ttfb_re = f"(?:{number_re}|-)"
response_time_re = number_re

double_quoted_text_re = r'"([^"]+)"'
referer_re = double_quoted_text_re
user_agent_re = double_quoted_text_re

log_line_re = re.compile(f"^({ip_re}) ({identd_user_re}) ({user_id_re}) {date_time_re} {request_re} ({http_status_re}) ({bytes_rx_re}) ({bytes_tx_re}) ({ttfb_re}) ({response_time_re}) {referer_re} {user_agent_re}$")


@dataclass
class LogRecord:
    client_ip: str          # 0
    indent_user: str
    user_id: str
    date_time: datetime
    method: str
    url: str                # 5
    protocol: str
    protocol_version: str   # could be float, but we don't really care; besides, x.y.z?
    status: int
    bytes_rx: int
    bytes_tx: int           # 10
    ttfb: int               # maybe -!
    response_time: int
    referer: str
    user_agent: str

    @classmethod
    def from_log_line(cls, line):
        match = log_line_re.match(line)
        if match is None:
            raise ValueError(f"Malformed line: {line.strip()}")

        data = list(match.groups())
        new_data = []
        group_index = 0

        for field_index, (name, field) in enumerate(cls.__dataclass_fields__.items()):
            if field.type == datetime:
                # [11/May/2026:20:15:28 +0200]
                # convert month str to number
                data[group_index+1] = month_names.index(data[group_index+1]) + 1
                # convert to ints
                data[group_index:group_index+6] = [ int(x) for x in data[group_index:group_index+6] ]
                new_data.append( datetime(data[group_index+2], data[group_index+1], data[group_index],
                                          data[group_index+3], data[group_index+4], data[group_index+5],
                                          0, utc_offset2tzinfo(data[group_index+6])) )
                group_index += 7
                continue

            # handle ttfb as -
            if field_index == 11 and data[group_index] == '-':
                # data[group_index] = data[group_index+1]
                data[group_index] = 0  # treat a missing TTFB as zero so int() below doesn't choke

            if data[group_index:group_index+4] == [ None, None, None, None ]:
                if group_index in (10, 14):
                    # handle (None, None, None, None, '', '\x16\x03\x02\x01o\x01', '', '')
                    # handle ('GET', '/', 'HTTP', '1.0', None, None, None, None)
                    # no need to add anything, it's handled by the fallback
                    # but we still need to skip this cruft
                    group_index += 4
                else:
                    raise ValueError(f"Got confused: {(field_index, field.type, group_index, data[group_index], new_data)}")

            # convert ints
            if field.type == int:
                data[group_index] = int(data[group_index])

            # fallback
            new_data.append(data[group_index])
            group_index += 1

        return cls(*new_data)

    # implement the iterator protocol so we can mostly be passed as argument to execute()
    def __iter__(self):
        for value in self.__dict__.values():
            # the whole protocol could be replaced with .__dataclass_fields__.values() :shrug:
            # but this way I can do further conversions
            if type(value) == datetime:
                value = int(value.timestamp())
            yield value


def utc_offset2tzinfo(offset: str) -> tzinfo:
    # +0200
    hours = int(offset[:3])                # +02
    minutes = int(offset[0] + offset[3:])  # 00; carry the sign over to the minutes too
    return timezone(timedelta(hours=hours, minutes=minutes), offset)


def connect():
    # if we test after sqlite3.connect(), the file is already created
    create = not pathlib.Path('./apache_logs.db').exists()
    conn = sqlite3.connect('./apache_logs.db')

    if create:
        conn.cursor().execute('''
            CREATE TABLE "logs" (
                "client" TEXT,
                "indent_user" TEXT,
                "user_id" TEXT,
                "timestamp" INTEGER,
                "method" TEXT,
                "url" TEXT,
                "protocol" TEXT,
                "protocol_version" TEXT,
                "status" INTEGER,
                "bytes_rx" INTEGER,
                "bytes_tx" INTEGER,
                "ttfb" INTEGER,
                "response_time" INTEGER,
                "referer" TEXT,
                "user_agent" TEXT
            );''')

    return conn


def main():
    conn = connect()
    cursor = conn.cursor()

    for line in sys.stdin:
        try:
            log_record = LogRecord.from_log_line(line)
        except ValueError as e:
            print(e.args[0])
            continue

        cursor.execute('''INSERT INTO logs VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''',
                       tuple(log_record))

    conn.commit()


if __name__ == '__main__':
    main()
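As a quick standalone sanity check of the offset helper: for negative offsets the sign has to apply to the minutes as well, or -0530 silently comes out as -4:30. A self-contained sketch of that corrected helper:

```python
from datetime import timedelta, timezone, tzinfo

def utc_offset2tzinfo(offset: str) -> tzinfo:
    # '+0200' -> UTC+2; the leading sign is applied to the minutes too
    hours = int(offset[:3])                # '+02'
    minutes = int(offset[0] + offset[3:])  # '+00'
    return timezone(timedelta(hours=hours, minutes=minutes), offset)

print(utc_offset2tzinfo('+0200').utcoffset(None) == timedelta(hours=2))                  # True
print(utc_offset2tzinfo('-0530').utcoffset(None) == timedelta(hours=-5, minutes=-30))    # True
```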
One of the things I didn't do was to further play with the URLs. One could make a list of different apps based on whether there is any routing to different services, like in the cases of my previous job and my own server; or even different subdivisions of a single app, like for NextCloud:
ocs/v2.php/apps/serverinfo
remote.php/dav/files/USER
remote.php/dav/calendars/USER/CALENDAR/
etc. I haven't really thought about it; it could be implemented either as more columns or extra tables.
Today I managed to finish the rest.
The next step is to install this so it runs constantly, with the output of tail --follow=name --retry
piped into its stdin¹. Left as an exercise for the reader; use SystemD units :)
Next is installing Grafana's plugin to read SQLite and declare the new Grafana datasource.
The hard part was to query the data in a way that was useful for Grafana. I managed to get a query like:
-- round to the minute
SELECT timestamp/60*60 AS time, status, COUNT(status) AS "count"
FROM logs
-- $__from and $__to are defined by Grafana based on the dashboard's time range
WHERE timestamp >= $__from / 1000 AND timestamp < $__to / 1000
GROUP BY timestamp/60, status
to get the count of different status codes per minute². But this returns a table that looks like:
time       | status | count
1778533620 | 200    | 30
1778533620 | 207    | 3
1778533620 | 403    | 1
while Grafana is expecting one line per sample (but remember we're aggregating data) and one column per data series:
time       | 200 | 207 | 403
1778533620 | 30  | 3   | 1
I read how to pivot this in SQL, but it mostly works only if you know the different values for the status column beforehand. This might be feasible with HTTP status codes (I count 63 standard ones, including the joke 418 I'm a teapot), but that would be impossible for the referer or user_agent columns.
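For contrast, pivoting in application code is easy precisely because the column values can be discovered on the fly. A minimal Python sketch, with hypothetical rows standing in for the query output:

```python
from collections import defaultdict

# (time, status, count) rows, as the GROUP BY query returns them
rows = [
    (1778533620, 200, 30),
    (1778533620, 207, 3),
    (1778533620, 403, 1),
]

# pivot: one dict per timestamp, one key per status value actually seen
pivoted = defaultdict(dict)
for ts, status, count in rows:
    pivoted[ts][status] = count

print(dict(pivoted))  # {1778533620: {200: 30, 207: 3, 403: 1}}
```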
Thanks to iRobbery (#postgresql@libera.chat) I found out about Grafana's Partition by values data transformation. Applying it to the column that defines the time series (status, etc.), it gives us exactly what we want!

And one can even include a plain table with all the logs to inspect when one finds weird spikes or values. I built queries that were almost impossible before, like transferred bytes per URL, methods per client, and more! One missing piece, if possible, would be to implement histograms, like last time we looked into this.
¹ One could cite the UNIX philosophy, but seriously, who wants to reimplement all the corner cases of that tail invocation? See for instance the 113 bugs found in the coreutils Rust reimplementation. ↩

² One could use a dashboard variable to control this arbitrarily. One could get granularity per second! ↩
Real Python
Building Type-Safe LLM Agents With Pydantic AI
Pydantic AI is a Python framework for building LLM agents that return validated, structured outputs using Pydantic models. Instead of parsing raw strings from LLMs, you get type-safe objects with automatic validation.
If you’ve used FastAPI or Pydantic before, then you’ll recognize the familiar pattern of defining schemas with type hints and letting the framework handle the type validation for you.
By the end of this video course, you’ll understand that:
- Pydantic AI uses BaseModel classes to define structured outputs that guarantee type safety and automatic validation.
- The @agent.tool decorator registers Python functions that LLMs can invoke based on user queries and docstrings.
- Dependency injection with deps_type provides type-safe runtime context like database connections without using global state.
- Validation retries automatically rerun queries when the LLM returns invalid data, which increases reliability but also API costs.
- Google Gemini, OpenAI, and Anthropic models support structured outputs best, while other providers have varying capabilities.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Django Weblog
2026 Django Developers Survey
The Django Software Foundation is once again partnering with JetBrains to run the 2026 Django Developers Survey 🌈
It’s an important metric of Django usage and is immensely helpful to guide future technical and community decisions.
After the survey closes, we will publish the aggregated results. JetBrains will also randomly select 10 winners (from those who complete the survey in full with meaningful answers) who will each receive a $100 Amazon voucher or the equivalent in local currency.
How you can help
Once you’ve done the survey, take a moment to re-share on socials and with your communities. The more diverse the answers, the better the results for all of us.
Please use the following links:
- Bluesky: https://surveys.jetbrains.com/s3/bs-django-developers-survey-2026
- Django Forum: https://surveys.jetbrains.com/s3/df-django-developers-survey-2026
- LinkedIn: https://surveys.jetbrains.com/s3/li-django-developers-survey-2026
- Mastodon: https://surveys.jetbrains.com/s3/md-django-developers-survey-2026
- Reddit: https://surveys.jetbrains.com/s3/r-django-developers-survey-2026
- X / Twitter: https://surveys.jetbrains.com/s3/x-django-developers-survey-2026
Real Python
Quiz: Building Type-Safe LLM Agents With Pydantic AI
In this quiz, you’ll test your understanding of Building Type-Safe LLM Agents With Pydantic AI.
By working through this quiz, you’ll revisit how Pydantic AI returns structured outputs from LLMs, how validation retries improve reliability, how tools and function calling work, how dependency injection flows through RunContext, and what trade-offs to expect when running agents in production.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: The LEGB Rule & Understanding Python Scope
In this quiz, you’ll test your understanding of The LEGB Rule & Understanding Python Scope.
By working through this quiz, you’ll revisit how Python resolves names using the LEGB rule, what the local, enclosing, global, and built-in scopes look like in practice, and how the global and nonlocal statements let you reach across scope boundaries.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python Software Foundation
Announcing PSF Community Service Award Recipients!
The PSF Community Service Awards (CSAs) are a formal way for the PSF Board of Directors to offer recognition of work which, in its opinion, significantly improves the Foundation's fulfillment of its mission to build a vibrant, welcoming, global Python community. These awards shine a light on the incredible people who are the heart and soul of our community: those whose dedication, creativity, and generosity help the PSF fulfill its mission. The PSF CSAs celebrate individuals who have been truly invaluable, inspiring others through their example and demonstrating that service to the Python community leads to recognition and reward. If you know of someone in the Python community deserving of a PSF CSA award, please submit them to the PSF Board via psf@python.org at any time. You can read more about PSF CSAs on our website.
The PSF Board is excited to announce 5 new CSAs, awarded to Inessa Pawson, Kafui Alordo, Kalyan Prasad, Maria Jose Molina Contreras, and Paul Everitt, for their contributions to the Python community. Read more about their work and impact below.
Inessa Pawson
Inessa Pawson has been a tireless and dedicated contributor to the Python ecosystem for over eight years. She has led the PyCon US Maintainers Summit since 2020, not only shaping the event but actively opening doors for others to participate–onboarding new contributors and supporting attendees with characteristic warmth and care.
Beyond PyCon US, Inessa has spearheaded the Maintainers and Community Track, the mentorship program, and the Teen Track at the SciPy Conference, and co-founded the Contributor Experience project, reflecting her deep commitment to making the Python community more inclusive and accessible. She brings that same dedication to her roles on the NumPy Steering Committee, the scikit-learn survey team, and the SPEC (Scientific Python Ecosystem Coordination) Steering Committee. As a leader on the pyOpenSci Advisory Council, Inessa has been instrumental in advancing the organization's mission to support open and reproducible science.
Kafui Alordo
Kafui Alordo has spent years building and nurturing the Python community in Ho, in the Volta Region of Ghana. What began for Kafui as volunteer coaching at the first Django Girls Ho workshop grew into co-organizing the second and third editions, and eventually leading the workshop as its primary organizer, while also lending his expertise as a coach and co-organizer at Django Girls events across Ghana. Recognizing that sustainable community growth starts with welcoming total beginners, Kafui introduced a coding bootcamp initiative for his user group that has broadened participation and helped new learners find their footing in Python.
Kafui’s landmark achievement came with the organization of PyHo, the first-ever regional Python conference in Ho, which drew attendees from diverse backgrounds across the country. His impact has also extended well beyond Ghana, most recently stepping into the role of remote chair on the PyCascades organizing team.
Kalyan Prasad
Kalyan Prasad's journey in the Python community began in 2019 as a volunteer with the Hyderabad Python User Group (HydPy), one of India's largest Python communities, and he has grown steadily into one of its most consequential leaders. His dedication to PyConf Hyderabad has been especially remarkable–contributing across the CFP, program, and sponsorship teams, serving as co-chair in 2022, and stepping up as chair in both 2025 and 2026, representing four consecutive years of conference leadership at the regional and national level.
At the national scale, Kalyan also served as co-chair for PyCon India 2023. Kalyan's commitment extends well beyond India, as he actively contributes to the broader Python ecosystem as a reviewer, mentor, and program committee member for conferences around the world. His care for community safety is further reflected in two years of service on the NumFOCUS Code of Conduct squad, ensuring that Python spaces remain welcoming and respectful for everyone. Kalyan has also joined the PSF Diversity & Inclusion Working Group this year, contributing to inclusion efforts.
Maria Jose Molina Contreras
Maria Jose Molina Contreras has been a dedicated and wide-ranging contributor to the Python community, with deep roots in both Spanish-language and PyLadies initiatives. She has been a core organizer of PyLadiesCon since its inaugural edition in 2023, serving as co-chair in 2024 and 2025, and her tireless leadership helped make the most recent edition the most successful in the conference's history, raising over $55,000 in funds to support PyLadies members and chapters around the world.
Maria’s commitment to Spanish-speaking Pythonistas is equally impressive: she contributes to the Python Docs ES initiative, coordinates events for Python en Español on Discord, and co-founded the PyLadies en Español initiative, including leading the PyLadies presence at PyCon US. At EuroPython, Maria has volunteered since 2023 and taken on growing responsibility, leading community booths, PyLadies events, and community organizer efforts in 2024 and 2025. She has also served as a reviewer for PyCon US Charlas since 2020 and has been a speaker at numerous conferences including PyCon US, EuroPython, and PyConES, sharing her expertise with audiences across the global community.
Paul Everitt
Paul Everitt's relationship with Python stretches back to the very beginning! Paul was present at the early PyCons and played a foundational role as an incorporating member and director on the PSF's first Board of Directors, helping to establish the organization that supports Python to this day. Decades later, his commitment to the community remains as strong as ever, demonstrated through his long tenure as a Developer Advocate at JetBrains/PyCharm, where he has championed the company's sustained investment in Python open source.
Paul’s advocacy extends beyond any one project, as he has provided support to smaller but important ecosystem projects like HTMX and remained a regular, encouraging presence at Python conferences and on podcasts. Most recently, Paul proved that his contributions are not merely historical–he co-authored PEP 750, introducing template strings (t-strings) as a significant new feature in Python 3.14, demonstrating a continued willingness to roll up his sleeves and shape the language itself. Whether writing PEPs, giving conference talks, or simply championing the people who make Python great, Paul’s generous and enthusiastic spirit is an invaluable gift to the Python community.
Bob Belderbos
A Race Condition Rust Wouldn't Have Let Me Write
A two-agent Python service ran fine in tests. Two concurrent users hit it and one user's search results showed up in the other user's response. The pattern looked safe. The Rust port doesn't compile.
The pattern that looked fine
A former student walked me through this one. It's another case of module-level globals biting in concurrent code.
The agent in this service had a tool-call budget per query. Five tool calls, then stop. The implementation was the kind of thing I see in a lot of Python codebases:
import threading

# assumed setup, not shown in the article's snippet:
MAX_CALLS = 5  # the per-query tool-call budget described above

class ToolCallLimitExceeded(Exception):
    pass

_call_count = 0
_sources: list[str] = []
_lock = threading.Lock()
def reset() -> None:
global _call_count, _sources
with _lock:
_call_count = 0
_sources = []
def _check_and_increment() -> int:
global _call_count
with _lock:
if _call_count >= MAX_CALLS:
raise ToolCallLimitExceeded()
_call_count += 1
return _call_count
def _add_source(source: str) -> None:
with _lock:
_sources.append(source)
Every operation locks. Looks safe. Locally it is.
The orchestrator runs the Cypher and Mongo agents in parallel via asyncio.gather. A single user's request is fine because the two agents touch different modules. Streamlit puts each session on its own thread, so when two users query at the same time, both threads share the same _call_count, _sources, and _lock. Because Python modules are cached in sys.modules, _call_count isn't just a variable; it's a piece of memory shared by every thread in that process.
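The caching claim is easy to verify: a second import hands back the very same cached module object, so every importer, in every thread, sees the same globals. A minimal sketch using the stdlib json module as a stand-in:

```python
import sys

import json
import json as second_import  # "importing again" just reads the sys.modules cache

# both names point at the one cached module object
print(json is second_import)        # True
print(json is sys.modules["json"])  # True
```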
The race
Two users, each plans four tool calls (within their own five-call budget). Output from the repro:
[userA] DONE: made=2 expected=4 | sources: 2 own + 2 foreign
[userA] LEAKED foreign sources: ['userB:q0', 'userB:q1']
[userB] DONE: made=3 expected=4 | sources: 3 own + 2 foreign
[userB] LEAKED foreign sources: ['userA:q0', 'userA:q1']
Two failures at once.
The shared counter hits 5 before either user finishes, so each one gets their budget eaten. And get_sources() returns whatever happens to be in the shared list, mixed across users.
A timeline makes the leak obvious:
T+0 userA: lock, count 0->1, unlock # userA's call 1 of 4
T+1 userB: lock, count 1->2, unlock # userB's call 1 of 4
T+2 userA: lock, count 2->3, unlock # userA's call 2 of 4
T+3 userB: lock, count 3->4, unlock # userB's call 2 of 4
T+4 userA: lock, count 4->5, unlock # userA's call 3 of 4
T+5 userB: lock, sees 5 >= MAX, raises # userB barely started, budget gone
userA looks at the counter after two of its own increments and sees 4. "Wait, why is my count already 4?" Because userB has been incrementing the same number.
The locks were doing their job. Each individual op is atomic. They don't give per-request isolation, because there is no per-request anything. The data is one global.
The fix: contextvars
contextvars.ContextVar was built for this. Each thread, and each asyncio Task, gets its own copy. Default values give every fresh context a clean slate.
This matters more in asyncio than in threads. threading.local would catch the threaded case, but every asyncio task runs on the same thread — they all share one threading.local. Picture two tasks on one event loop: task A sets foo = 2, hits await, the loop runs task B, B reads foo and sees 2. There's no isolation, because there's no separate thread to key off. ContextVar keys on context instead, and asyncio.Task copies the context when it's created, so each Task gets its own slot. A's set() is invisible to B even though they're on the same thread.
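A minimal sketch of that Task-level isolation, using a hypothetical user variable rather than anything from the article's codebase:

```python
import asyncio
import contextvars

user: contextvars.ContextVar[str] = contextvars.ContextVar("user", default="nobody")

async def handle(name: str) -> str:
    user.set(name)          # only mutates this task's copied context
    await asyncio.sleep(0)  # yield so the two tasks interleave
    return user.get()       # still our own value, not the other task's

async def main():
    # gather() wraps each coroutine in a Task, and each Task copies the context
    return await asyncio.gather(handle("userA"), handle("userB"))

print(asyncio.run(main()))  # ['userA', 'userB']
```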
import contextvars
_call_count: contextvars.ContextVar[int] = contextvars.ContextVar(
"call_count", default=0
)
_sources: contextvars.ContextVar[tuple[str, ...]] = contextvars.ContextVar(
"sources", default=()
)
def reset() -> None:
_call_count.set(0)
_sources.set(())
def _check_and_increment() -> int:
n = _call_count.get()
if n >= MAX_CALLS:
raise ToolCallLimitExceeded()
_call_count.set(n + 1)
return n + 1
def _add_source(source: str) -> None:
_sources.set(_sources.get() + (source,))
Same demo, fixed:
[userA] DONE: made=4 expected=4 | sources: 4 own + 0 foreign
[userB] DONE: made=4 expected=4 | sources: 4 own + 0 foreign
One subtle point: _sources is a tuple, not a list. With ContextVar(default=[]), every context that hasn't called set() shares the same default list object. A stray cv.get().append(x) would silently leak across contexts, mutating the default that every other context still points at. Tuples make that mistake non-expressible, which is close to Rust where immutable data is the default and mutable state has to be explicitly marked (mut).
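The mutable-default trap is easy to demonstrate: every context that hasn't called set() hands back the same default list object, so mutating it in one context is visible in all the others.

```python
import contextvars

bad: contextvars.ContextVar[list] = contextvars.ContextVar("bad", default=[])

def taint():
    # .get() returns the single shared default list; append mutates it in place
    bad.get().append("leak")

# mutate "inside" a fresh context...
contextvars.copy_context().run(taint)

# ...and the outer context sees it anyway: it's the same default object
print(bad.get())  # ['leak']
```

With a tuple default, the only way to "modify" the value is to set() a new tuple, which lands in the current context only.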
What would Rust make of this?
If you mostly write Python, the gist of Rust's model is: by default everything is immutable, and the type system tracks who is allowed to read or write each piece of memory. That tracking is what blocks the bug shape from existing. Porting the Python pattern naively, the compiler refuses it four different ways.
1. Module globals can't just exist.
The Python _call_count = 0 at module scope has no clean Rust equivalent. The closest thing is static mut CALL_COUNT: u32 = 0 (static is the Rust word for a true module-level value, mut opts into mutability), and every read or write of it requires an unsafe { ... } block. The compiler is flagging the same risk we hit in Python (module-level mutable state is shared by every thread), but it forces you to acknowledge it in the syntax. You cannot accidentally write the buggy pattern.
2. Two threads cannot share a &mut reference.
In Python you pass an object reference into a thread and trust that locks will sort it out at runtime. Rust tracks references at compile time. The rule is aliasing XOR mutability: a value is either readable by many or writable by one, never both at once. &mut T is the "writable by one" case — while it exists, no other reference of any kind is allowed. That single rule is what blocks the Python bug; two threads writing the same counter is exactly the case it forbids. Moving the same Tracker into two thread::spawn closures doesn't compile:
error[E0382]: use of moved value
The exact aliasing the Python bug relied on, two threads writing to one shared counter, is not a thing the type system will let you express.
3. Shared global state must be wrapped in a lock.
Try a global without a lock and the compiler refuses with a different error:
error: `Tracker` cannot be shared between threads safely
Rust calls this the Sync trait, "safe to access from multiple threads at once". Tracker doesn't qualify because mutating its fields would race. To opt in, you wrap it: Mutex<Tracker>, similar to a threading.Lock in Python but with a critical difference. The lock wraps the data, not the operations. In our Python version we had a _lock and three free functions that called it; nothing prevented a fourth function from forgetting. In Rust, the only way to read the counter is to call .lock() on the mutex first, because the counter lives inside it. The bug class of "I forgot to take the lock here" is structurally absent.
4. The idiomatic version doesn't share state at all.
The cleanest Rust port doesn't go anywhere near a global. Each thread owns its own Tracker on its own stack:
fn run_agent(user_id: String) {
let mut tracker = Tracker::new();
for i in 0..CALLS_PER_USER {
call_tool(&mut tracker, &user_id, &format!("q{i}"));
}
}
There is no global to reset. The Python reset() race, where userA's reset zeroes userB's mid-flight counter, has no syntax in this design. This is the kind of explicitness that I described in what Rust structs taught me about state ownership: the compiler refuses to let you store state in places it shouldn't live.
What Rust doesn't save you from
Worth pinning down two terms that get conflated:
| Bug class | What it is | Does Rust prevent it? |
|---|---|---|
| Data race | Two threads touching the same memory without synchronization | Yes, won't compile |
| Race condition | Logic that breaks because operations interleave in an unexpected order | No, even with Arc<Mutex<T>> |
Wrap the tracker in Arc<Mutex<Tracker>> and share it across users, and the compiler is satisfied. Two pieces of jargon there, but they map directly onto Python ideas:
Arc<T> is Python's reference counting, made explicit. When you write data = [] in Python and pass it to two threads, both hold the same list — CPython tracks how many references exist and frees it when the count hits zero. That bookkeeping is automatic and invisible. Rust doesn't do it for you by default. When you genuinely want "many owners, last one out cleans up" across threads, you opt in with Arc (Atomically Reference Counted). Same model as Python; you just ask for it by name.
Mutex<T> is threading.Lock, except the lock owns the data. In Python you write:
lock = threading.Lock()
data = []
# somewhere else, hopefully:
with lock:
data.append(x)
Two separate objects, held together by convention. Nothing stops a caller from touching data without the with. In Rust the data lives inside the mutex:
let mutex = Mutex::new(Vec::new());
let mut guard = mutex.lock().unwrap();
guard.push(x);
The only way to reach the vec is to call .lock(), which hands back a guard that auto-releases when it goes out of scope. "I forgot the with lock:" doesn't compile.
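One can approximate the "lock owns the data" shape in Python too. A hypothetical Guarded wrapper, not from the article, just a sketch of the idea:

```python
import threading
from contextlib import contextmanager

class Guarded:
    """Rough Python analogue of Rust's Mutex<T>: the value lives inside,
    and the obvious way to reach it is through lock()."""

    def __init__(self, value):
        self._lock = threading.Lock()
        self._value = value

    @contextmanager
    def lock(self):
        with self._lock:
            yield self._value  # released on exit, like Rust's guard going out of scope

shared = Guarded([])
with shared.lock() as data:
    data.append("x")
```

Unlike Rust, nothing stops a caller from poking at shared._value directly, so this is a convention rather than a guarantee; but it at least makes the locked path the path of least resistance.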
Arc<Mutex<T>> is the two combined. Think of it as the Python idiom (threading.Lock(), shared_data) welded into one type, with the compiler enforcing that you never use the data without the lock.
No data race. But you have reintroduced the Python bug at a higher level, because userA's reset() still clobbers userB's counter under that same lock. Rust rules out memory unsafety. Per-request isolation is still your design decision.
The fix is the same in both languages: one tracker per request. Rust rules out the data-corruption variant at compile time.
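A sketch of that shared fix in Python: one tracker per request, nothing at module scope. The names here are hypothetical, echoing the article's snippets:

```python
from dataclasses import dataclass, field

MAX_CALLS = 5  # per-request tool-call budget, as in the article

class ToolCallLimitExceeded(Exception):
    pass

@dataclass
class Tracker:
    call_count: int = 0
    sources: list[str] = field(default_factory=list)

    def check_and_increment(self) -> int:
        if self.call_count >= MAX_CALLS:
            raise ToolCallLimitExceeded()
        self.call_count += 1
        return self.call_count

def run_agent(user_id: str) -> Tracker:
    tracker = Tracker()  # owned by this request; no reset(), nothing shared
    for i in range(4):
        tracker.check_and_increment()
        tracker.sources.append(f"{user_id}:q{i}")
    return tracker

print(run_agent("userA").call_count)  # 4
```

There is no lock because there is nothing to contend for: each request's state lives and dies with the request, which is the same ownership story as the idiomatic Rust version.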
Keep reading
- What Rust Structs Taught Me About State Ownership
- Learning Rust Made Me a Better Python Developer
- The Rust Compiler as an AI Coding Agent Guardrail
The bug here is hard to see in tests. It needs concurrent traffic to fire, and that's the painful kind. Lesson: in Python you can guard against it, but it takes knowledge and discipline. In Rust, the compiler does more of the work: it makes illegal states unrepresentable. The more bugs you can design out of the syntax, the fewer you debug at runtime.
May 11, 2026
PyCon
Introducing the 8 Companies on Startup Row at PyCon US 2026
Each year at PyCon US, Startup Row highlights a select group of early-stage companies building ambitious products with Python at their core. The 2026 cohort reflects a rapidly evolving landscape, where advances in AI, data infrastructure, and developer tooling are reshaping how software is built, deployed, and secured.
This year’s companies aim to solve an evolving set of problems facing independent developers and large-scale organizations alike: securing AI-driven applications, managing multimodal data, orchestrating autonomous agents, automating complex workflows, and extracting insight from increasingly unstructured information. Across these domains, Python continues to serve as a unifying layer: encouraging experimentation, enabling systems built to scale, and connecting open-source innovation with real-world impact.
Startup Row brings these emerging teams into direct conversation with the Python community at PyCon US. Throughout the conference, attendees can meet founders, explore new tools, and see firsthand how these companies are applying Python to solve meaningful problems. For the startups in attendance, it’s an opportunity to share their work, connect with users and collaborators, and contribute back to the ecosystem that helped shape them. Register now to experience Startup Row and much more at PyCon US 2026.
Supporting Startups at PyCon US
There are many ways to support Startup Row companies, during PyCon US and long after the conference wraps:

- Stop by Startup Row: Spend a few minutes with each team, ask what they’re building, and see their products in action.
- Try their tools: Whether it’s an open-source library or a hosted service, hands-on usage (alongside constructive feedback) is one of the most valuable forms of support. If a startup seems compelling, consider a pilot project and become a design partner.
- Share feedback: Early-stage teams benefit enormously from thoughtful questions, real-world use cases, and honest perspectives from the community.
- Contribute to their open source projects: Many Startup Row companies are deeply rooted in open source and welcome bug reports, documentation improvements, and pull requests. Contributions and constructive feedback are always appreciated.
- Help spread the word: If you find something interesting, tell a friend, post about it, or share it with your team. (And if you're posting to social media, consider using tags like #PyConUS and #StartupRow to share the love.)
- Explore opportunities to work together: Many of these companies are hiring, looking for design partners, or open to collaborations; don’t hesitate to ask.
- But, most importantly, be supportive. Building a startup is hard, and every team is learning in real time. Curiosity, patience, and encouragement make a meaningful difference.
Meet Startup Row at PyCon US 2026
We’re excited to introduce the companies selected for Startup Row at PyCon US 2026.

Arcjet
Embedding security directly into application code is fast becoming as indispensable as logging, especially as AI services open new attack surfaces. Arcjet offers a developer‑first platform that lets teams add bot detection, rate limiting and data‑privacy checks right where the request is processed.

The service ships open‑source JavaScript and Python SDKs that run a WebAssembly module locally before calling Arcjet’s low‑latency decision API, ensuring full application context informs every security verdict. Both SDKs are released under a permissive open‑source license, letting developers integrate the primitives without vendor lock‑in while scaling usage through Arcjet’s SaaS tiered pricing.
The JavaScript SDK alone has earned ≈1.7 k GitHub stars and the combined libraries have attracted over 1,000 developers protecting more than 500 production applications. Arcjet offers a free tier and usage‑based paid plans, mirroring Cloudflare’s model to serve startups and enterprises alike.
Arcjet is rolling out additional security tools and deepening integrations with popular frameworks such as FastAPI and Flask, aiming to broaden adoption across AI‑enabled services. In short, Arcjet aims to be the security‑as‑code layer every modern app ships with.
CapiscIO
As multi‑agent AI systems become the backbone of emerging digital workflows, developers lack a reliable way to verify agent identities and enforce governance. CapiscIO steps into that gap, offering an open‑core trust layer built for the nascent agent economy.

CapiscIO offers cryptographic Trust Badges, policy enforcement, and tamper‑evident chain‑of‑custody wrapped in a Python SDK. Released under Apache 2.0, it ships a CLI, LangChain integration, and an MCP SDK that let agents prove identity without overhauling existing infrastructure.
The capiscio‑core repository on GitHub hosts the open‑source core and SDKs under Apache 2.0, drawing early contributors building agentic pipelines.
Beon de Nood, Founder & CEO, brings two decades of enterprise development experience and a prior successful startup to the table. “AI governance should be practical, not bureaucratic. Organizations need visibility into what they have, confidence in what they deploy, and control over how agents behave in production,” he says.
CapiscIO is continuously adding new extensions, expanding its LangChain and MCP SDKs, and preparing a managed agent‑identity registry for enterprises. In short, CapiscIO aims to be the passport office of the agent economy, handing each autonomous component an unspoofable ID and clear permissions.
Chonkie
The explosion of retrieval‑augmented generation (RAG) is unlocking AI’s ability to reason over ever‑larger knowledge bases. Yet the first step of splitting massive texts into meaningful pieces still lags behind.

Chonkie offers an open‑core suite centered on Memchunk, a Python library with Cython acceleration that delivers up to 160 GB/s throughput and ten chunking strategies under a permissive license. It also ships Catsu, a unified embeddings client for nine providers, and a lightweight ingestion layer; the commercial Chonkie Labs service combines them into a SaaS that monitors the web and synthesises insights.
Co‑founder and CEO Shreyash Nigam, who grew up in India and met his business partner in eighth grade, reflects the team’s open‑source ethos, saying “It’s fun to put a project on GitHub and see a community of developers crowd around it.” That enthusiasm underpins Chonkie’s decision to release its core tooling openly while building a commercial deep‑research service.
Backed by Y Combinator’s Summer 2025 batch, Chonkie plans to grow from four to six engineers and launch the next version of Chonkie Labs later this year, adding real‑time web crawling and multi‑modal summarization. In short, Chonkie aims to be the Google of corporate intelligence.
Phemeral
Running production‑grade Python services used to mean wrestling with containers, VMs, or complex CI pipelines.
Phemeral, launched in April 2026, offers Python developers a managed hosting platform that turns a GitHub repo into an instantly deployable, scale‑to‑zero backend.
Phemeral provides builds for popular frameworks (like Django, Flask, and FastAPI), integrations with popular package managers (e.g. uv, Pip, and Poetry), as well as continuous deployment on every push while charging only for actual request execution under a usage‑based model.
Founder & CEO Chinmaya Joshi says, "Building with Python is easier than ever, but hosting and deployment remain a pain. Phemeral is building the easiest way to deploy Python web apps."
Joshi is focused on expanding framework support and refining the platform so that Python developers (from vibe-coders and solo devs, to agencies and enterprises) can enjoy the same zero‑config experience modern front‑end platforms provide.
Pixeltable
Multimodal generative AI is turning simple datasets into sprawling collections of video, images, audio and text, forcing engineers to stitch together ad‑hoc pipelines just to keep data flowing. That complexity has created a new bottleneck for teams trying to move from prototype to production. The open‑source Python library from Pixeltable offers a declarative table API that lets developers store, query and version multimodal assets side by side while embedding custom Python functions. Built with incremental update capabilities, combined lineage and schema tracking, and a development‑to‑production mirror, the platform also provides orchestration capabilities that keep pipelines reproducible without rewriting code.
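The core idea of a declarative, incrementally updated table can be sketched in a few lines of plain Python. This is hypothetical code, not Pixeltable's actual API: a computed column is declared once as a function, existing rows are backfilled, and from then on only newly inserted rows run the computation.

```python
class Table:
    """Toy table with declaratively computed columns (illustration only)."""

    def __init__(self):
        self.rows = []
        self.computed = {}  # column name -> function of a row

    def add_computed_column(self, name, fn):
        self.computed[name] = fn
        for row in self.rows:       # backfill rows that already exist
            row[name] = fn(row)

    def insert(self, **values):
        row = dict(values)
        for name, fn in self.computed.items():
            row[name] = fn(row)     # incremental: only the new row runs
        self.rows.append(row)

t = Table()
t.insert(path="cat.jpg", width=640)
t.add_computed_column("is_wide", lambda r: r["width"] > 600)
t.insert(path="dog.jpg", width=480)
print([r["is_wide"] for r in t.rows])  # [True, False]
```

A real system layers storage, versioning, and lineage tracking on top, but the contract is the same: you declare *what* each column is, and the engine decides *when* to recompute it.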
The project has earned ≈1.6k GitHub stars and a growing contributor base, closed a $5.5 million seed round in December 2024, and is already used by early adopters such as Obvio and Variata to streamline computer‑vision workflows.
Co‑founder and CTO Marcel Kornacker, who previously founded Apache Impala and co-founded Apache Parquet, says “Just as relational databases revolutionized web development, Pixeltable is transforming AI application development.”
The company's roadmap centers on launching Pixeltable Cloud, a serverless managed service that will extend the open core with collaborative editing, auto‑scaling storage and built‑in monitoring. In short, Pixeltable aims to be the relational database of multimodal AI data.
SubImage
The sheer complexity of modern multi‑cloud environments turns security visibility into a labyrinth, and SubImage offers a graph‑first view that cuts through the noise. It builds an infrastructure graph using the open‑source Cartography library (Apache‑2.0, Python), then highlights exploit chains as attack paths and applies AI models to prioritize findings based on ownership and contextual risk.
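Finding attack paths in an infrastructure graph boils down to graph search. The sketch below uses a hypothetical miniature graph (the node names and edge semantics are invented, not Cartography's schema) and breadth‑first search to surface the shortest chain from an internet‑exposed asset to a sensitive one.

```python
from collections import deque

# Hypothetical mini infrastructure graph: an edge A -> B means
# "A can reach or assume B". Invented for illustration.
graph = {
    "internet": ["load_balancer"],
    "load_balancer": ["web_vm"],
    "web_vm": ["iam_role"],
    "iam_role": ["s3_secrets_bucket"],
    "admin_laptop": ["iam_role"],
}

def attack_path(graph, source, target):
    """Shortest chain of reachable assets from source to target (BFS)."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # target unreachable from source

print(attack_path(graph, "internet", "s3_secrets_bucket"))
# ['internet', 'load_balancer', 'web_vm', 'iam_role', 's3_secrets_bucket']
```

Seeing the environment "the same way an attacker would", as Chantavy puts it, is essentially asking which of these paths exist from the outside in.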
Cartography, originally developed at Lyft and now a Cloud Native Computing Foundation sandbox project, has ≈3.7k GitHub stars and is used by over 70 organizations. SubImage’s managed service already protects security teams at Veriff and Neo4j, and the company closed a $4.2 million seed round in November 2025.
Co‑founder Alex Chantavy, an offensive‑security engineer, says “The most important tool was our internal cloud knowledge graph because it showed us a map of the easiest attack paths … One of the most effective ways to defend an environment is to see it the same way an attacker would.”
The startup is focusing on scaling its managed service and deepening AI integration as it targets larger enterprise customers. In short, SubImage aims to be the map of the cloud for defenders.
Tetrix
Private‑market data pipelines still rely on manual downloads and spreadsheet gymnastics, leaving analysts chasing yesterday’s numbers. Tetrix’s AI investment intelligence platform is part of a wave that brings automation to this lagging workflow. Built primarily in Python, Tetrix automates document collection from fund portals and other sources, extracts structured data from PDFs and other unstructured documents using tool‑using language models, then presents exposures, cash flows, and benchmarks through an interactive dashboard that also accepts natural‑language queries.
The company is growing quickly, doubling revenue quarter over quarter while, at least so far, maintaining an impressive record of zero customer churn. In the coming year or so, Tetrix plans to triple its headcount from fifteen to forty‑five employees.
TimeCopilot
Time‑series forecasting has long been a tangled mix of scripts, dashboards, and domain expertise, and the recent surge in autonomous agents is finally giving it a unified voice. Enter TimeCopilot, an open‑source framework that brings agentic reasoning to the heart of forecasting. The platform, built in Python under a permissive open‑source license, lets users request forecasts in plain English. It automatically orchestrates more than thirty models from seven families, including Chronos and TimesFM, while weaving large language model reasoning into each prediction. Its declarative API was born from co‑founder Azul Garza‑Ramírez’s economics background and her earlier work on TimeGPT for Nixtla (featured SR'23), evolving from a weekend experiment started nearly seven years ago.
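Orchestrating many forecasting models and blending their outputs can be sketched with a toy ensemble. This is illustrative only, not TimeCopilot's API: three deliberately simple forecasters (last value, historical mean, and drift) are averaged point by point, standing in for the thirty‑plus real models the framework coordinates.

```python
def naive_last(series, horizon):
    """Repeat the last observed value."""
    return [series[-1]] * horizon

def mean_forecast(series, horizon):
    """Repeat the historical mean."""
    return [sum(series) / len(series)] * horizon

def drift(series, horizon):
    """Extend the average step between first and last observations."""
    step = (series[-1] - series[0]) / (len(series) - 1)
    return [series[-1] + step * (h + 1) for h in range(horizon)]

def ensemble(series, horizon, models=(naive_last, mean_forecast, drift)):
    """Average the forecasts of all models at each horizon step."""
    forecasts = [m(series, horizon) for m in models]
    return [sum(vals) / len(vals) for vals in zip(*forecasts)]

series = [10, 12, 13, 15, 16]
print(ensemble(series, horizon=3))
```

In an agentic system, a language model would additionally pick which forecasters to run and explain the result in prose, but the numeric core is this kind of orchestration.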
The TimeCopilot/timecopilot repository has amassed roughly 420 stars on GitHub, with the release of OpenClaw marking a notable spike in community interest.
Upcoming plans include a managed SaaS offering with enterprise‑grade scaling and support, the rollout of a benchmarking suite to measure agentic forecast quality, and targeted use cases such as predicting cloud‑compute expenses for AI workloads.
Thank You's and Acknowledgements
Startup Row is a volunteer-driven program, co-led by Jason D. Rowley and Shea Tate-Di Donna (SR'15; Zana, acquired Startups.com), in collaboration with the PyCon US organizing team. Thanks to everyone who makes PyCon US possible. We also extend a gracious thank-you to all startup founders who submitted applications to Startup Row at PyCon US this year. Thanks again for taking the time to share what you're building. We hope to help out in whatever way we can.
Good luck to everyone, and see you in Long Beach, CA!
Talk Python to Me
#548: Event Sourcing Design Pattern
What if your database worked more like Git? Every change captured as an immutable event you can replay, instead of a single mutating row that quietly forgets its own history. That's event sourcing, and Chris May is back on Talk Python, fresh off our Datastar panel, to walk us through what it actually looks like in Python. We'll cover the core patterns, the libraries to reach for, when not to use it, and why event sourcing turns out to be a surprisingly good fit for AI-assisted coding.<br/> <br/> <strong>Episode sponsors</strong><br/> <br/> <a href='https://talkpython.fm/sentry'>Sentry Error Monitoring, Code talkpython26</a><br> <a href='https://talkpython.fm/temporal'>Temporal</a><br> <a href='https://talkpython.fm/training'>Talk Python Courses</a><br/> <br/> <h2 class="links-heading mb-4">Links from the show</h2> <div><strong>Guest</strong><br/> <strong>Chris May</strong>: <a href="https://everydaysuperpowers.dev?featured_on=talkpython" target="_blank" >everydaysuperpowers.dev</a><br/> <br/> <strong>Intro to event sourcing e-book</strong>: <a href="https://everydaysuperpowers.gumroad.com/l/es_intro?featured_on=talkpython" target="_blank" >everydaysuperpowers.gumroad.com</a><br/> <br/> <strong>Domain-Driven Design: The Power of CQRS and Event Sourcing: How CQRS/ES Redefine Building Scalable System</strong>: <a href="https://ricofritzsche.me/cqrs-event-sourcing-projections/?featured_on=talkpython" target="_blank" >ricofritzsche.me</a><br/> <strong>DDD</strong>: <a href="https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215?featured_on=talkpython" target="_blank" >www.amazon.com</a><br/> <strong>Understanding Eventsourcing (Martin Dilger)</strong>: <a 
href="https://www.amazon.com/Understanding-Eventsourcing-Planning-Implementing-Eventmodeling/dp/B0DNXQJM9Z/ref=sr_1_1?dib=eyJ2IjoiMSJ9.LqdaOIXJSPbgGuz_Akil-snFyMZVys1Y2IhnqvPv_CGK3R6Vwvu6AN1PHBi6twz-c3bPG5mdbhLJQyYs30LXh2pT6wiqXPrz0RKmfeYzq_sT18tc2UAWVG8rFBN1C-H46AHiiDqusp6SyDm2W15n4ZBKn11xW4yNvazjq3pg369c53KDFONnWqe9AB4xzAF2VeQ4n64hOk30-GmG_1K6_zIPBw4PXkVX9UDYq0QDIAQ.0Kvsl2V8aqDO4Av47g881GGoRPCpF0gCrbF6GJZbjRE&dib_tag=se&keywords=understanding+event+sourcing&qid=1777078561&sbo=RZvfv%2F%2FHxDF%2BO5021pAnSA%3D%3D&sr=8-1&featured_on=talkpython" target="_blank" >www.amazon.com</a><br/> <strong>Event Sourcing Explained using Football Video</strong>: <a href="https://www.youtube.com/watch?v=xPmQxYIi5fA" target="_blank" >www.youtube.com</a><br/> <strong>Why I finally embraced event sourcing and why you should too article</strong>: <a href="https://everydaysuperpowers.dev/articles/why-i-finally-embraced-event-sourcingand-why-you-should-too/?featured_on=talkpython" target="_blank" >everydaysuperpowers.dev</a><br/> <strong>valkey</strong>: <a href="https://valkey.io/?featured_on=talkpython" target="_blank" >valkey.io</a><br/> <strong>diskcache</strong>: <a href="https://talkpython.fm/episodes/show/534/diskcache-your-secret-python-perf-weapon" target="_blank" >talkpython.fm</a><br/> <strong>eventsourcing package</strong>: <a href="https://github.com/pyeventsourcing/eventsourcing?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>eventsourcing docs</strong>: <a href="https://eventsourcing.readthedocs.io/en/stable/topics/tutorial/part1.html?featured_on=talkpython" target="_blank" >eventsourcing.readthedocs.io</a><br/> <strong>John Bywater</strong>: <a href="https://github.com/johnbywater?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>Datastar</strong>: <a href="https://data-star.dev/?featured_on=talkpython" target="_blank" >data-star.dev</a><br/> <strong>Microconf</strong>: <a href="https://microconf.com/?featured_on=talkpython" 
target="_blank" >microconf.com</a><br/> <strong>Event Modeling & Event Sourcing Podcast</strong>: <a href="https://podcast.eventmodeling.org?featured_on=talkpython" target="_blank" >podcast.eventmodeling.org</a><br/> <strong>Python Package Guides for AI Agents</strong>: <a href="https://github.com/mikeckennedy/python-package-guides-for-agents?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>Iodine tablets AI joke</strong>: <a href="https://x.com/pr0grammerhum0r/status/2046650199930458334?s=46&featured_on=pythonbytes" target="_blank" >x.com</a><br/> <strong>KurrentDb</strong>: <a href="https://www.kurrent.io?featured_on=talkpython" target="_blank" >www.kurrent.io</a><br/> <br/> <strong>Watch this episode on YouTube</strong>: <a href="https://www.youtube.com/watch?v=s37d6yN2P70" target="_blank" >youtube.com</a><br/> <strong>Episode #548 deep-dive</strong>: <a href="https://talkpython.fm/episodes/show/548/event-sourcing-design-pattern#takeaways-anchor" target="_blank" >talkpython.fm/548</a><br/> <strong>Episode transcripts</strong>: <a href="https://talkpython.fm/episodes/transcript/548/event-sourcing-design-pattern" target="_blank" >talkpython.fm</a><br/> <br/> <strong>Theme Song: Developer Rap</strong><br/> <strong>🥁 Served in a Flask 🎸</strong>: <a href="https://talkpython.fm/flasksong" target="_blank" >talkpython.fm/flasksong</a><br/> <br/> <strong>---== Don't be a stranger ==---</strong><br/> <strong>YouTube</strong>: <a href="https://talkpython.fm/youtube" target="_blank" ><i class="fa-brands fa-youtube"></i> youtube.com/@talkpython</a><br/> <br/> <strong>Bluesky</strong>: <a href="https://bsky.app/profile/talkpython.fm" target="_blank" >@talkpython.fm</a><br/> <strong>Mastodon</strong>: <a href="https://fosstodon.org/web/@talkpython" target="_blank" ><i class="fa-brands fa-mastodon"></i> @talkpython@fosstodon.org</a><br/> <strong>X.com</strong>: <a href="https://x.com/talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> 
@talkpython</a><br/> <br/> <strong>Michael on Bluesky</strong>: <a href="https://bsky.app/profile/mkennedy.codes?featured_on=talkpython" target="_blank" >@mkennedy.codes</a><br/> <strong>Michael on Mastodon</strong>: <a href="https://fosstodon.org/web/@mkennedy" target="_blank" ><i class="fa-brands fa-mastodon"></i> @mkennedy@fosstodon.org</a><br/> <strong>Michael on X.com</strong>: <a href="https://x.com/mkennedy?featured_on=talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @mkennedy</a><br/></div>
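The "database that works like Git" idea from the episode can be sketched in a few lines. This is a minimal illustration of the event sourcing pattern, not the `eventsourcing` package's API: state is never mutated directly; instead, immutable events are appended to a log, and the current state (or any historical state) is derived by replaying them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """An immutable fact about something that happened."""
    name: str
    amount: int

class Account:
    def __init__(self):
        self.log = []       # append-only event log: the source of truth
        self.balance = 0    # derived state, rebuildable from the log

    def apply(self, event):
        if event.name == "deposited":
            self.balance += event.amount
        elif event.name == "withdrawn":
            self.balance -= event.amount

    def record(self, event):
        self.log.append(event)  # events are appended, never edited
        self.apply(event)

    @classmethod
    def replay(cls, log):
        """Rebuild state from history, like checking out a Git commit."""
        account = cls()
        for event in log:
            account.apply(event)
        account.log = list(log)
        return account

acct = Account()
acct.record(Event("deposited", 100))
acct.record(Event("withdrawn", 30))
print(acct.balance)  # 70

rebuilt = Account.replay(acct.log)
print(rebuilt.balance)  # 70
```

Because the log is the source of truth, nothing "quietly forgets its own history": you can replay to any point, add new projections over old events, and audit every change.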
