
Planet Python

Last update: April 09, 2026 01:43 PM UTC

April 09, 2026


Rodrigo Girão Serrão

uv skills for coding agents

This article shares two skills you can add to your coding agents so they use uv workflows.

I have fully adopted uv in my own workflows, and most of the time I want my coding agents to use uv as well, whether they're running arbitrary Python code or creating and managing scripts that may or may not have dependencies.

To make this more convenient for me, I created two SKILL.md files for two of the most common workflows that the coding agents get wrong on the first few tries:

  1. python-via-uv: this skill tells the agent that it should use uv whenever it wants to run any piece of Python code, be it one-liners or scripts. This is relevant because I don't even have the command python/python3 in the shell path, so whenever the LLM tries running something with python ..., it fails.
  2. uv-script-workflow: this skill is specifically for when the agent wants to create and run a script. It instructs the LLM to initialise the script with uv init --script ... and then tells it about the relevant commands to manage the script's dependencies.
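For reference, the script workflow the second skill describes boils down to a few uv commands. The script name `fetch_data.py` and the `requests` dependency here are just illustrative examples, not from the skill itself, and the commands assume uv is installed:

```shell
# Guarded so the sketch is harmless on machines without uv installed.
if command -v uv >/dev/null 2>&1; then
  # Create a standalone script with an inline metadata block (PEP 723).
  uv init --script fetch_data.py

  # Record a dependency in the script's inline metadata.
  uv add --script fetch_data.py requests

  # Run it; uv resolves and installs the script's dependencies on the fly.
  uv run fetch_data.py
fi
```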

The two skills also add a note about sandboxing, since uv's default cache directory will be outside your sandbox. When that's the case, the agent is already instructed to use a valid temporary location for the uv cache.

Installing a skill usually just means dropping a Markdown file in the correct folder, but you should check the documentation for the tools you use.

Here are the two skills for you to download:

  1. Skill for python-via-uv
  2. Skill for uv-script-workflow

I also included the skills verbatim here, for your convenience:

Skill for python-via-uv
---
name: python-via-uv
description: Enforce Python execution through `uv` instead of direct interpreter calls. Use when Codex needs to run Python scripts, modules, one-liners, tools, test runners, or package commands in a workspace and should avoid invoking `python` or `python3` directly.
---

# Python Via Uv

Use `uv` for every Python command.

Do not run `python`.
Do not run `python3`.
Do not suggest `python` or `python3` in instructions unless the user explicitly requires them and the constraint must be called out as a conflict.

## Execution Rules

When sandboxed, set `UV_CACHE_DIR` to a temporary directory the agent can write to before running `uv` commands.

Prefer these patterns:

- Run a script: `UV_CACHE_DIR=/tmp/uv-cache uv run path/to/script.py`
- Run a module: `UV_CACHE_DIR=/tmp/uv-cache uv run -m package.module`
- Run a one-liner: `UV_CACHE_DIR=/tmp/uv-cache uv run python -c "print('hello')"`
- Run a tool exposed by dependencies: `UV_CACHE_DIR=/tmp/uv-cache uv run tool-name`
- Add a dependency for an ad hoc command: `UV_CACHE_DIR=/tmp/uv-cache uv run --with <package> python -c "..."`

## Notes

Using `python` inside `uv run ...` is acceptable because `uv` is still the entrypoint controlling interpreter selection and environment setup.

If the workspace already defines a project-specific temporary cache directory, prefer that over `/tmp/uv-cache`.

If a command example or existing documentation uses `python` or `python3` directly, translate it to the closest `uv` form before executing it.

April 09, 2026 12:19 PM UTC


Ahmed Bouchefra

Let’s be honest. There’s a huge gap between writing code that works and writing code that’s actually good. It’s the number one thing that separates a junior developer from a senior, and it’s something a surprising number of us never really learn.

If you’re serious about your craft, you’ve probably felt this. You build something, it functions, but deep down you know it’s brittle. You’re afraid to touch it a year from now.

Today, we’re going to bridge that gap. I’m going to walk you through eight design principles that are the bedrock of professional, production-level code. This isn’t about fancy algorithms; it’s about a mindset. A way of thinking that prepares your code for the future.

And hey, if you want a cheat sheet with all these principles plus the code examples I’m referencing, you can get it for free. Just sign up for my newsletter from the link in the description, and I’ll send it right over.

Ready? Let’s dive in.

1. Cohesion & Single Responsibility

This sounds academic, but it’s simple: every piece of code should have one job, and one reason to change.

High cohesion means you group related things together. A function does one thing. A class has one core responsibility. A module contains related classes.

Think about a UserManager class. A junior dev might cram everything in there: validating user input, saving the user to the database, sending a welcome email, and logging the activity. At first glance, it looks fine. But what happens when you want to change your database? Or swap your email service? You have to rip apart this massive, god-like class. It’s a nightmare.

The senior approach? Break it up. You'd have one class for validating input, one for database persistence, one for sending emails, and one for logging activity.

Then, your main UserService class delegates the work to these other, specialized classes. Yes, it’s more files. It looks like overkill for a small project. I get it. But this is systems-level thinking. You’re anticipating future changes and making them easy. You can now swap out the database logic or the email provider without touching the core user service. That’s powerful.
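Here's a minimal sketch of that split. The class and method names (UserValidator, UserRepository, and so on) are illustrative choices, and the "database" and "email" are stand-ins:

```python
class UserValidator:
    """One job: validate raw user input."""
    def validate(self, data: dict) -> None:
        if not data.get("email") or "@" not in data["email"]:
            raise ValueError("invalid email")

class UserRepository:
    """One job: persistence. Swap this class to change databases."""
    def __init__(self):
        self._users = []  # stand-in for a real database

    def save(self, data: dict) -> None:
        self._users.append(data)

class EmailService:
    """One job: outbound email. Swap this to change providers."""
    def send_welcome(self, email: str) -> str:
        return f"welcome sent to {email}"  # stand-in for a real send

class UserService:
    """Coordinates the specialists; holds no validation/storage/email detail."""
    def __init__(self, validator, repo, mailer):
        self.validator, self.repo, self.mailer = validator, repo, mailer

    def register(self, data: dict) -> str:
        self.validator.validate(data)
        self.repo.save(data)
        return self.mailer.send_welcome(data["email"])
```

Because UserService only delegates, replacing the repository or the mailer never touches it.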

2. Encapsulation & Abstraction

This is all about hiding the messy details. You want to expose the behavior of your code, not the raw data.

Imagine a simple BankAccount class. The naive way is to just have public attributes like balance and transactions. What could go wrong? Well, another developer (or you, on a Monday morning) could accidentally set the balance to a negative number. Or set the transactions list to a string. Chaos.

The solution is to protect your internal state. In Python, we use a leading underscore (e.g., _balance) as a signal: “Hey, this is internal. Please don’t touch it directly.”

Instead of letting people mess with the data, you provide methods: deposit(), withdraw(), get_balance(). Inside these methods, you can add protective logic. The deposit() method can check for negative amounts. The withdraw() method can check for sufficient funds.

The user of your class doesn’t need to know how it all works inside. They just need to know they can call deposit(), and it will just work. You’ve hidden the complexity and provided a simple, safe interface.
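A bare-bones version of that BankAccount might look like this (the exact checks are illustrative):

```python
class BankAccount:
    """Internal state lives behind methods that enforce the rules."""

    def __init__(self, opening_balance: float = 0.0):
        self._balance = opening_balance   # leading underscore: internal, hands off
        self._transactions = []

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount
        self._transactions.append(("deposit", amount))

    def withdraw(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("withdrawal must be positive")
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount
        self._transactions.append(("withdraw", amount))

    def get_balance(self) -> float:
        return self._balance
```

No caller can ever push `_balance` negative or replace `_transactions` with a string, because the only way in is through methods that refuse bad input.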

3. Loose Coupling & Modularity

Coupling is how tightly connected your code components are. You want them to be as loosely coupled as possible. A change in one part shouldn’t send a ripple effect of breakages across the entire system.

Let’s go back to that email example. A tightly coupled OrderProcessor might create an instance of EmailSender directly inside itself. Now, that OrderProcessor is forever tied to that specific EmailSender class. What if you want to send an SMS instead? You have to change the OrderProcessor code.

The loosely coupled way is to rely on an “interface,” or what Python calls an Abstract Base Class (ABC). You define a generic Notifier class that says, “Anything that wants to be a notifier must have a send() method.”

Then, your OrderProcessor just asks for a Notifier object. It doesn’t care if it’s an EmailNotifier or an SmsNotifier or a CarrierPigeonNotifier. As long as the object you give it has a send() method, it will work. You’ve decoupled the OrderProcessor from the specific implementation of the notification. You can swap them in and out interchangeably.
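Sketched with Python's `abc` module (the message formats here are made up for the example):

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Anything that wants to be a notifier must have a send() method."""
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"email: {message}"

class SmsNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"sms: {message}"

class OrderProcessor:
    """Depends only on the Notifier interface, never on a concrete sender."""
    def __init__(self, notifier: Notifier):
        self.notifier = notifier

    def process(self, order_id: int) -> str:
        # ...real order-handling logic would go here...
        return self.notifier.send(f"order {order_id} processed")
```

Swapping email for SMS is now a one-line change at the call site; OrderProcessor's code never moves.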



4. Reusability & Extensibility

This one’s a question you should always ask yourself: Can I add new functionality without editing existing code?

Think of a ReportGenerator function that has a giant if/elif/else block to handle different formats: if format == 'text', elif format == 'csv', elif format == 'html'. To add a JSON format, you have to go in and add another elif. This is not extensible.

The better way is, again, to use an abstract class. Create a ReportFormatter interface with a format() method. Then create separate classes: TextFormatter, CsvFormatter, HtmlFormatter, each with their own format() logic.

Your ReportGenerator now just takes any ReportFormatter object and calls its format() method. Want to add JSON support? You just create a new JsonFormatter class. You don’t have to touch the ReportGenerator at all. It’s extensible without being modified.
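Here's the shape of that design; the formatting details are illustrative:

```python
import json
from abc import ABC, abstractmethod

class ReportFormatter(ABC):
    @abstractmethod
    def format(self, data: dict) -> str: ...

class TextFormatter(ReportFormatter):
    def format(self, data: dict) -> str:
        return "\n".join(f"{k}: {v}" for k, v in data.items())

class CsvFormatter(ReportFormatter):
    def format(self, data: dict) -> str:
        return "\n".join(f"{k},{v}" for k, v in data.items())

class ReportGenerator:
    """Never changes when a new format is added."""
    def generate(self, data: dict, formatter: ReportFormatter) -> str:
        return formatter.format(data)

# Adding JSON support later needs only a new class -- nothing above is edited:
class JsonFormatter(ReportFormatter):
    def format(self, data: dict) -> str:
        return json.dumps(data)
```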

5. Portability

This is the one everyone forgets. Will your code work on a different machine? On Linux instead of Windows? Without some weird version of C++ installed?

The most common mistake I see is hardcoding file paths. If you write C:\Users\Ahmed\data\input.txt, that code is now guaranteed to fail on every other computer in the world.

The solution is to use libraries like Python’s os and pathlib to build paths dynamically. And for things like API keys, database URLs, and other environment-specific settings, use environment variables. Don’t hardcode them! Create a .env file and load them at runtime. This makes your code portable and secure.
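In practice that looks something like this. The variable names API_KEY and DATABASE_URL are illustrative, and in a real module you'd more likely anchor on `Path(__file__).resolve().parent` than the working directory:

```python
import os
from pathlib import Path

# Build paths dynamically instead of hardcoding C:\Users\Ahmed\data\input.txt.
BASE_DIR = Path.cwd()
input_file = BASE_DIR / "data" / "input.txt"  # correct separators on any OS

# Environment-specific settings come from the environment, with safe defaults.
api_key = os.environ.get("API_KEY", "")
database_url = os.environ.get("DATABASE_URL", "sqlite:///local.db")
```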

6. Defensibility

Write your code as if an idiot is going to use it. Because someday, that idiot will be you.

This means validating all inputs. Sanitizing data. Setting safe default values. Ask yourself, “What’s the worst that could happen if someone provides bad input?” and then guard against it.

In a payment processor, don’t have debug_mode=True as the default. Don’t set the maximum retries to 100. Don’t forget a timeout. These are unsafe defaults.

And for the love of all that is holy, validate your inputs! Don’t just assume the amount is a number or that the account_number is valid. Check it. Raise clear errors if it’s wrong. Protect your system from bad data.
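A defensively written payment processor, as a sketch (the ten-digit account format and the specific defaults are invented for the example):

```python
class PaymentProcessor:
    """Safe defaults up front, explicit validation on every input."""

    def __init__(self,
                 debug_mode: bool = False,    # safe default: off
                 max_retries: int = 3,        # not 100
                 timeout_seconds: float = 10.0):  # never forget a timeout
        self.debug_mode = debug_mode
        self.max_retries = max_retries
        self.timeout_seconds = timeout_seconds

    def charge(self, account_number: str, amount) -> str:
        # Never assume the amount is a number...
        if not isinstance(amount, (int, float)) or isinstance(amount, bool):
            raise TypeError("amount must be a number")
        if amount <= 0:
            raise ValueError("amount must be positive")
        # ...or that the account number is valid.
        if not (account_number.isdigit() and len(account_number) == 10):
            raise ValueError("account number must be 10 digits")
        return f"charged {amount:.2f} to {account_number}"
```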

7. Maintainability & Testability

The most expensive part of software isn’t writing it; it’s maintaining it. And you can’t maintain what you can’t test.

Code that is easy to test is, by default, more maintainable.

Look at a complex calculate function that parses an expression, performs the math, handles errors, and writes to a log file all at once. How do you even begin to test that? There are a million edge cases.

The answer is to break it down. Have a separate OperationParser. Have simple add, subtract, multiply functions. Each of these small, pure components is incredibly easy to test. Your main calculate function then becomes a simple coordinator of these tested components.
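A toy version of that decomposition, supporting a hypothetical "a op b" expression format:

```python
# Small, pure pieces that are trivial to test in isolation.
def add(a, b): return a + b
def subtract(a, b): return a - b
def multiply(a, b): return a * b

class OperationParser:
    """One job: turn 'a op b' into its parts."""
    def parse(self, expression: str):
        left, op, right = expression.split()
        return float(left), op, float(right)

def calculate(expression: str) -> float:
    """A thin coordinator over already-tested components."""
    operations = {"+": add, "-": subtract, "*": multiply}
    left, op, right = OperationParser().parse(expression)
    if op not in operations:
        raise ValueError(f"unsupported operator: {op}")
    return operations[op](left, right)
```

Each helper can be tested with two or three assertions; `calculate` itself barely needs any.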

8. Simplicity (KISS, DRY, YAGNI)

Finally, after all that, the highest goal is simplicity: keep it simple (KISS), don't repeat yourself (DRY), and don't build things until you actually need them (YAGNI).

Phew, that was a lot. But these patterns are what it takes to level up. It’s a shift from just getting things done to building things that last.

If you enjoyed this, let me know. I’d love to make more advanced videos like this one. See you in the next one.

April 09, 2026 12:14 PM UTC


Real Python

Quiz: Reading Input and Writing Output in Python

In this quiz, you’ll test your understanding of Reading Input and Writing Output in Python.

By working through this quiz, you’ll revisit taking keyboard input with input(), showing results with print(), formatting output, and handling basic input types.

This quiz helps you practice building simple interactive scripts and reinforces best practices for clear console input and output.



April 09, 2026 12:00 PM UTC


James Bennett

Let’s talk about LLMs

Everybody seems to agree we’re in the middle of something, though what, exactly, seems to be up for debate. It might be an unprecedented revolution in productivity and capabilities, perhaps even the precursor to a technological “singularity” beyond which it’s impossible to guess what the world might look like. It might be just another vaporware hype cycle that will blow over. It might be a dot-com-style bubble that will lead to a big crash but still leave us with something useful (the way the dot-com bubble drove mass adoption of the web). It might be none of those things.

Many thousands of words have already been spent arguing variations of these positions. So of course today I’m going to throw a few thousand more words at it, because that’s what blogs are for. At least all the ones you’ll read here were written by me (and you can pry my em-dashes from my cold, dead hands).

Terminology, and picking a lane

But first, a couple quick notes:

I’m going to be using the terms “LLM” and “LLMs” almost exclusively in this post, because I think the precision is useful. “AI” is a vague and overloaded term, and it’s too easy to get bogged down in equivocations and debates about what exactly someone means by “AI”. And virtually everything that’s contentious right now about programming and “AI” is really traceable specifically to the advent of large language models. I suppose a slightly higher level of precision might come from saying “GPT” instead, but OpenAI keeps trying to claim that one as their own exclusive term, which is a different sort of unwelcome baggage. So “LLMs” it is.

And when I talk about “LLM coding”, I mean use of an LLM to generate code in some programming language. I use this as an umbrella term for all such usage, whether done under human supervision or not, whether used as the sole producer of code (with no human-generated code at all) or not, etc.

I’m also going to try to limit my comments here to things directly related to technology and to programming as a profession, because that’s what I know (I have a degree in philosophy, so I’m qualified to comment on some other aspects of LLMs, but I’m deliberately staying away from them in this post because I find a lot of those debates tedious and literally sophomoric, as in reminding me of things I was reading and discussing when I was a sophomore).

If you’re using an LLM in some other field, well, I probably don’t know that field well enough to usefully comment on it. Having seen some truly hot takes from people who didn’t follow this principle, I’ve thought several times that we really need some sort of cute portmanteau of “LLM” and “Gell-Mann Amnesia” for the way a lot of LLM-related discourse seems to be people expecting LLMs to take over every job and field except their own.

No silver bullet

A few years ago I wrote about Fred Brooks’ No Silver Bullet, and said I think it may have been the best thing Brooks ever wrote. If you’ve never read No Silver Bullet, I strongly recommend you do so, and I recommend you read the whole thing for yourself (rather than just a summary of it).

No Silver Bullet was published at a time when computing hardware was advancing at an incredible rate, but our ability to build software was not even close to keeping up. And so Brooks made a bold prediction about software:

There is no single development, in either technology or management technique, which by itself promises even a single order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

To support this he looked at sources of difficulty in software development, and assigned them to two broad categories (emphasis as in the original):

Following Aristotle, I divide them into essence—the difficulties inherent in the nature of the software—and accidents—those difficulties that today attend its production but that are not inherent.

A classic example is memory management: some programming languages require the programmer to manually allocate, keep track of, and free memory, which is a source of difficulty. And this is accidental difficulty, because there’s nothing which inherently requires it; plenty of other programming languages have automatic memory management.

But other sources of difficulty are different, and seem to be inherent to software development itself. Here’s one of the ways Brooks summarizes it (emphasis matches what’s in my copy of No Silver Bullet):

The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions. This essence is abstract, in that the conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed.

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared to the conceptual errors in most systems.

If this is true, building software will always be hard. There is inherently no silver bullet.

And to drive the point home, he also explains the diminishing returns of only addressing accidental difficulty:

How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.

This is a straightforward mathematical argument. If its two empirical premises—that the accidental/essential distinction is real and that the accidental difficulty remaining today does not represent 90%+ of the total—are true, then the conclusion which rules out an order-of-magnitude gain from reducing accidental difficulty follows automatically.

I think most programmers believe the first premise, at least implicitly, and once the first premise is accepted it becomes very difficult to argue against the second. In fact, I’d personally go further than the minimum required for Brooks’ argument. His math holds up as long as accidental difficulty doesn’t reach that 90%+ mark, since anything lower makes a 10x improvement from eliminating accidental difficulty impossible. But I suspect accidental difficulty, today, is a vastly smaller proportion of the total than that. In a lot of mature domains of programming I’d be surprised if there’s even a doubling of productivity still available from a complete elimination of remaining accidental difficulty.

There’s also a section in No Silver Bullet about potential “hopes for the silver” which addresses “AI”, though what Brooks considered to be “AI” (and there is a tangent about clarifying exactly what the term means) was significantly different from what’s promoted today as “AI”. The most apt comparison to LLMs in No Silver Bullet is actually not the discussion of “AI”, it’s the discussion of automatic programming, which has meant a lot of different things over the years, but was defined by Brooks at the time as “the generation of a program for solving a problem from a statement of the problem specifications”. That’s pretty much the task for which LLMs are currently promoted to programmers.

But Brooks quotes David Parnas on the topic: “automatic programming always has been a euphemism for programming with a higher-level language than was presently available to the programmer.” And Brooks did not believe higher-level languages on their own could be a silver bullet. As he put it in a discussion of the Ada language:

It is, after all, just another high-level language, and the biggest payoff from such languages came from the first transition, up from the accidental complexities of the machine into the more abstract statement of step-by-step solutions. Once those accidents have been removed, the remaining ones are smaller, and the payoff from their removal will surely be less.

Many people are currently promoting LLMs as a revolutionary step forward for software development, but are doing so based almost exclusively on claims about LLMs’ ability to generate code at high speed. The No Silver Bullet argument poses a problem for these claims, since it sets a limit on how much we can gain from merely generating code more quickly.

In chapter 2 of The Mythical Man-Month, Brooks suggested as a scheduling guideline that five-sixths (83%) of time on a “software task” would be spent on things other than coding, which puts a pretty low cap on productivity gains from speeding up just the coding. And even if we assume LLMs reduce coding time to zero, and go with the more generous No Silver Bullet formulation which merely predicts no order-of-magnitude gain from a single development, that’s still less than the gain Brooks himself believed could come from hiring good human programmers. From chapter 3 of The Mythical Man-Month:

Programming managers have long recognized wide productivity variations between good programmers and poor ones. But the actual measured magnitudes have astounded all of us. In one of their studies, Sackman, Erikson, and Grant were measuring performances of a group of experienced programmers. Within just this group the ratios between best and worst performances averaged about 10:1 on productivity measurements and an amazing 5:1 on program speed and space measurements!

(although I’m personally skeptical of the “10x programmer” concept, the software industry overall does seem to accept it as true)

Anecdote time: much of what I’ve done over my career as a professional programmer is building database-backed web applications and services, and I don’t see much of a gain from LLMs. I suppose it looks impressive, if you’re not familiar with this field of programming, to auto-generate the skeleton of an entire application and the basic create/retrieve/update/delete HTTP handlers from no more than a description of the data you want to work with. But that capability predates LLMs: Rails’ scaffolding, for example, could do it twenty years ago.

And not just raw code generation, but also the abstractions available to work with, have progressed to the point where I basically never feel like the raw speed of production of code is holding me back. Just as Fred Brooks would have predicted, the majority of my time is spent elsewhere: talking to people who want new software (or who want existing software to be changed); finding out what it is they want and need; coming up with an initial specification; breaking it down into appropriately-sized pieces for programmers (maybe me, maybe someone else) to work on; testing the first prototype and getting feedback; preparing the next iteration; reviewing or asking for review, etc. I haven’t personally tracked whether it matches Brooks’ five-sixths estimate, but I wouldn’t be at all surprised if it did.

Given all that, just having an LLM churn out code faster than I would have myself is not going to offer me an order of magnitude improvement, or anything like it. Or as a recent popular blog post by the CEO of Tailscale put it:

AI’s direct impact on this problem is minimal. Okay, so Claude can code it in 3 minutes instead of 30? That’s super, Claude, great work.

Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself. Little of value was gained.

More simply: throwing more patches into the review queue, when the review queue still drains at the same rate as before, is not a recipe for increased velocity. Real software development involves not just a review queue but all the other steps and processes I outlined above, and more, and having an LLM generate code more quickly does not increase the speed or capacity of all those other things.

So as someone who accepts Brooks’ argument in No Silver Bullet, I am committed to believe on theoretical grounds that LLMs cannot offer “even a single order-of-magnitude improvement … in productivity, in reliability, in simplicity”. And my own experience matches up with that prediction.

Practice makes (im)perfect

But enough theory. What about the empirical actual reality of LLM coding?

Every fan of LLMs for coding has an anecdote about their revolutionary qualities, but the non-anecdotal data points we have are a lot more mixed. For example, several times now I’ve been linked to and asked to read the DORA report on the “State of AI-assisted Software Development”. And initially it certainly seems like it’s declaring the effects of LLMs are settled, in favor of the LLMs. From its executive summary (page 3):

[T]he central question for technology leaders is no longer if they should adopt AI, but how to realize its value.

And elsewhere it makes claims like (page 34) “AI is the new normal in software development”.

But then, going back to the executive summary, things start sounding less uniformly positive:

The research reveals a critical truth: AI’s primary role in software development is that of an amplifier. It magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones.

And then (still on page 3):

The greatest returns on AI investment come not from the tools themselves, but from a strategic focus on the underlying organizational system: the quality of the internal platform, the clarity of workflows, and the alignment of teams. Without this foundation, AI creates localized pockets of productivity that are often lost to downstream chaos.

Continuing on to page 4:

AI adoption now improves software delivery throughput, a key shift from last year. However, it still increases delivery instability. This suggests that while teams are adapting for speed, their underlying systems have not yet evolved to safely manage AI-accelerated development.

“Delivery instability” is defined (page 13) in terms of two factors:

Later parts of the report get into more detail on this. Page 38 charts the increase in delivery instability, for example. And elsewhere in the section containing that chart, there’s a discussion of whether increases in throughput (defined by DORA as a combination of lead time for changes, deployment frequency, and failed deployment recovery time) are enough to offset or otherwise make up for this increase in instability (page 41, emphasis added by me):

Some might argue that instability is an acceptable trade-off for the gains in development throughput that AI-assisted development enables.

The reasoning is that the volume and speed of AI-assisted delivery could blunt the detrimental effects of instability, perhaps by enabling such rapid bug fixes and updates that the negative impact on the end-user is minimized.

However, when we look beyond pure software delivery metrics, this argument does not hold up. To assess this claim, we checked whether AI adoption weakens the harms of instability on our outcomes which have been hurt historically by instability.

We found no evidence of such a moderating effect. On the contrary, instability still has significant detrimental effects on crucial outcomes like product performance and burnout, which can ultimately negate any perceived gains in throughput.

And the chart on page 38 appears to show the increase in instability as quite a bit larger than the increase in throughput, in any case.

Curiously, that chart also claims a significant increase in “code quality”, and other parts of the report (page 30, for example) claim a significant increase in “productivity”, alongside the significant increase in delivery instability, which seems like it ought to be a contradiction. As far as I can tell, DORA’s source for both “productivity” and “code quality” is perceived impact as self-reported by survey respondents. Other studies and reports have designed less subjective and more quantitative ways to measure these things. For example, this much-discussed study on adoption of the Cursor LLM coding tool used the results of static analysis of the code to measure quality and complexity. And self-reported productivity impacts, in particular, ought to be a deeply suspect measure. From (to pick one relevant example) the METR early-2025 study (emphasis added by me):

This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.

LLM coding advocates have often criticized this particular study’s finding of slower development for being based on older generations of LLMs (more on that argument in a bit), but as far as I’m aware nobody’s been able to seriously rebut the finding that developers are not very effective at self-estimating their productivity. So to see DORA relying on self-estimated productivity is disappointing.

The DORA report goes on to provide a seven-part “AI capabilities model” for organizations (begins on page 49), which consists of recommendations like: strong version control practices, working in small batches, quality internal platforms, user-centric focus… all of which feel like they should be table stakes for any successful organization regardless of whether they also happen to be using LLMs.

Suppose, for sake of a silly example, that someone told you a new technology is revolutionizing surgery, but the gains are not uniformly distributed, and the best overall outcomes are seen in surgical teams where in addition to using the new thing, team members also wash their hands prior to operating. That’s not as extreme a comparison as it might sound: the sorts of practices recommended for maximizing LLM-related gains in the DORA report, and in many other similar whitepapers and reports and studies, are or ought to be as fundamental to software development as hand-washing is to surgery. The Joel Test was recommending quite a few of these practices a quarter-century ago, the Agile Manifesto implied several of them, and even back then they weren’t really new; if you dig into the literature on effective software development you can find variations of much of the DORA advice going all the way back to the 1970s and even earlier.

For a more recent data point, I’ve seen a lot of people talking about and linking me to CircleCI’s 2026 “State of Software Delivery” which, like the DORA report, claims an uneven distribution of benefits from LLM adoption, and even says (page 8) “the majority of teams saw little to no increase in overall throughput”. The CircleCI report also raises a worrying point that echoes the increase in “delivery instability” seen in the DORA report (CircleCI executive summary, page 3):

Key stability indicators show that AI-driven changes are breaking more often and taking teams longer to fix, making validation and integration the primary bottleneck.

CircleCI further reports (page 11) that, year-over-year, they see a 13% increase in recovery time for a broken main branch, and a 25% increase for broken feature branches. And (page 12) they also say failures are increasing:

[S]uccess rates on the main branch fell to their lowest level in over 5 years, to 70.8%. In other words, attempts at merging changes into production code bases now fail 30% of the time.

For comparison, their own recommended benchmark of success for main branches is 90%.

The cost of these increasing failures and the increasing time to resolve them is quantified (emphasis matches the report, page 14):

For a team pushing 5 changes to the main branch per day, going from a 90% success rate to 70% is the difference between one showstopping breakage every two days to 1.5 every single day (a 3x increase).

At just 60 minutes recovery time per failure, you’re looking at an additional 250 hours in debugging and blocked deployments every year. And that’s at a relatively modest scale. Teams pushing 500 changes per day would lose the equivalent of 12 full-time engineers.
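The report's arithmetic checks out, assuming the conventional ~250 working days per year (my assumption; the report doesn't state its day count in the quoted passage):

```python
changes_per_day = 5

failures_90 = changes_per_day * (1 - 0.90)  # 0.5/day -> one every two days
failures_70 = changes_per_day * (1 - 0.70)  # 1.5/day

# The "3x increase" in showstopping breakages.
assert abs(failures_70 / failures_90 - 3.0) < 1e-9

# One extra failure per day, at 60 minutes (1 hour) of recovery each,
# over ~250 working days: roughly 250 extra hours per year.
extra_hours_per_year = (failures_70 - failures_90) * 1 * 250
assert abs(extra_hours_per_year - 250.0) < 1e-6
```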

The usual response to reports like these is to claim they’re based on people using older LLMs, and the models coming out now are the truly revolutionary ones, which won’t have any of those problems. For example, this is the main argument that’s been leveled against the METR study I mentioned above. But that argument was flimsy to begin with (since it’s rarely accompanied by the kind of evidence needed to back up the claim), and its repeated usage is self-discrediting: if the people claiming “this time is the world-changing revolutionary leap, for sure” were wrong all the prior times they said that (as they have to have been, since if any prior time had actually been the revolutionary leap they wouldn’t need to say this time will be), why should anyone believe them this time?

Also, I’ve read a lot of studies and reports on LLM coding, and these sorts of findings—uneven or inconsistent impact, quality/stability declines, etc.—seem to be remarkably stable, across large numbers of teams using a variety of different models and different versions of those models, over an extended period of time (DORA does have a bit of a messy situation with contradictory claims that “code quality” is increasing while “delivery instability” is increasing even more, but as noted above that seems to be a methodological problem). The two I’ve quoted most extensively in this post (the DORA and CircleCI reports) were chosen specifically because they’re often recommended to me by advocates of LLM coding, and seem to be reasonably pro-LLM in their stances.

The other expected response to these findings is a claim that it’s not necessarily older models but older workflows which have been obsoleted, that the state of the art is no longer to just prompt an LLM and accept its output directly, but rather involves one LLM (or LLM-powered agent) generating code while one or more layers of “adversarial” ones review and fix up the code and also review each other’s reviews and responses and fixes, thus introducing a mechanism by which the LLM(s) will automatically improve the quality of the output.

I’m unaware of rigorous studies on these approaches (yet), but several well-publicized early examples do not inspire confidence. I’ll pick on Cloudflare here since they’ve been prominent advocates for using LLMs in this fashion. In their LLM rebuild of Next.js:

We wired up AI agents for code review too. When a PR was opened, an agent reviewed it. When review comments came back, another agent addressed them. The feedback loop was mostly automated.

But their public release of it, vetted through this process and, apparently, some amount of human review on top, was initially unable to run even the basic default Next.js application, and also was apparently riddled with security issues. From one disclosure post (emphasis added by me):

AI is now very good at getting a system to the point where it looks complete.

One specific problem cited was that the LLM rebuild simply did not pull in all the original tests, and therefore could miss security-critical cases those tests were checking. From the same disclosure post:

The process was feature-first: decide which viNext features existed, then port the corresponding Next.js tests. That is a sensible way to move quickly. It gives you broad happy-path coverage.

But it does not guarantee that you bring over the ugly regression tests, missing-export cases, and fail-open behavior checks that mature frameworks accumulate over years.

So middleware could look “covered” while the one test that proves it fails safely never made it over.

For example, Next.js has a dedicated test directory (test/e2e/app-dir/proxy-missing-export/) that validates what happens when middleware files lack required exports. That test was never ported because middleware was already considered “covered” by other tests.

On the whole, that post is somewhat optimistic, but considering that the Next.js rebuild was carried out by presumably knowledgeable people who presumably were following good modern practices and prompting good modern LLMs to perform a type of task those LLMs are supposed to be extremely good at—a language and framework well-represented in training data, well-documented, with a large existing test suite written in the target language to assist automated verification—I have a hard time being that optimistic.

And though I haven’t personally read through the recent alleged leak of the Claude Code source, I’ve read some commentary and analysis from people who have, and again it seems like a team that should be as well-positioned as anyone to take maximum advantage of the allegedly revolutionary capabilities of LLM coding isn’t managing to do so.

So the consistent theme here, in the studies and reports and in more recent public examples, is that being able to generate code much more quickly than before, even in 2026 with modern LLMs and modern practices, is still no guarantee of being able to deliver software much more quickly than before. As the CircleCI report puts it (page 3):

The data points to a clear conclusion: success in the AI era is no longer determined by how fast code can be written. The decisive factor is the ability to validate, integrate, and recover at scale.

And if that sounds like the kind of thing Fred Brooks used to say, that’s because it is the kind of thing Fred Brooks used to say. Raw speed of generating code is not and was not the bottleneck in software development, and speeding that up or even reducing the time to generate code to effectively zero does not have the effect of making all the other parts of software development go away or go faster.

So at this point it seems clear to me that in practice as well as in theory LLM coding does not represent a silver bullet, and it seems highly unlikely to transform into one at any point in the near future.

On being left behind

When expressing skepticism about LLM coding, a common response is that not adopting it, or even just delaying slightly in adopting it, will inevitably result in being “left behind”, or even stronger effects (for example, words like “obliterated” have been used, more than once, by acquaintances of mine who really ought to know better). LLMs are the future, it’s going to happen whether you like it or not, so get with the program before it’s too late!

I said I’ll stick to the technical mode here, but I’ll just mention in passing that the “it’s going to happen whether you like it or not” framing is something I’ve encountered a lot and found to be pretty disturbing and off-putting, and not at all conducive to changing my mind. And milder forms like “It’s undeniable that…” are rhetorically suspect. The burden of proof ought to be on the person making the claim that LLMs truly are revolutionary, but framing like this tries to implicitly shift that burden and is a rare example of literally begging the question: it assumes as given the conclusion (LLMs are in fact revolutionary) that it needs to prove.

Meanwhile, I see two possible outcomes:

  1. The skeptical position wins. LLM coding tools do not achieve revolutionary silver-bullet status. Perhaps they become another tool in the toolbox, like TDD or pair programming, where some people and companies are really into them. Perhaps they become just another feature of IDEs, providing functionality like boilerplate generators to bootstrap a new project (if your favorite library/framework doesn’t provide its own bootstrap anyway).
  2. The skeptical position loses. LLM coding tools do achieve true revolutionary silver-bullet status or beyond (consistently delivering one or more orders of magnitude improvement in software development productivity), and truly become a mandatory part of every working programmer’s tools and workflows, taking over all or nearly all generation of code.

In the first case, delayed adoption has no downside unless someone happens to be working at one of the companies that decide to mandate LLM use. And they can always pick it up at that point, if they don’t mind or if they don’t feel like looking for a new job.

As to the second case: based on what I’ve argued above about the status and prospects of LLMs up to now, I obviously think that continuing the type of progress in models and practices that’s been seen to date does not offer any viable path to a silver bullet. Which means a truly revolutionary breakthrough will have to be something sufficiently different from the current state of the art that it will necessarily invalidate many (or perhaps even all) prior LLM-based workflows in addition to invalidating non-LLM-based workflows.

And even if that doesn’t result in a completely clean-slate starting point with everyone equal—even if experience with older LLM workflows is still an advantage in the post-silver-bullet world—I don’t think it can ever be the sort of insurmountable advantage it’s often assumed to be. For one thing, even with vastly higher average productivity, there likely would not be sufficient people with sufficient pre-existing LLM experience to fill the vastly expanded demand for software that would result (this is why a lot of LLM advocates, across many fields, spend so much time talking about the Jevons paradox). For another, any true silver-bullet breakthrough would have to attack and reduce the essential difficulty of building software, rather than the accidental difficulty. Let us return once again to Brooks:

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation.

Much of the skill required of human LLM users today consists of exactly this: specifying and designing the software as a “conceptual construct”, albeit in specific ways that can be placed into an LLM’s context window in order to have it generate code. In any true silver-bullet world, much or all of that skillset would have to be rendered obsolete, which significantly reduces the penalty for late adoption if and when the silver bullet is finally achieved.

Power to the people?

Aside from impact on professional programmers and professional software-development teams, another claim often made in favor of LLM coding is that it will democratize access to software development. With LLM coding tools, people who aren’t experienced professional programmers can produce software that solves problems they face in their day-to-day jobs and lives. Surely that’s a huge societal benefit, right? And it’s tons of fun, too!

Setting aside that the New York Times piece linked above was written by someone who is an experienced professional, I’m not convinced of this use case either.

Mostly I think this is a situation where you can’t have it both ways. It seems to be widely agreed among advocates of LLM coding that it’s a skill which requires significant understanding, practice, and experience before one is able to produce consistent useful results (this is the basis of the “adopt now or be left behind” claim dealt with in the previous section); strong prior knowledge of how to design and build good software is also generally recommended or assumed. But that’s very much at odds with the democratized-software claim: that someone with no prior programming knowledge or experience will simply pick up an LLM, ask it in plain non-technical natural language to build something, and receive a sufficiently functional result.

I think the most likely result is that a non-technical user will receive something that’s obviously not fit for purpose, since they won’t have the necessary knowledge to prompt the LLM effectively. They won’t know how to set up directories of Markdown files containing instructions and skill definitions and architectural information for their problem. They won’t have practice at writing technical specifications (whether for other humans or for LLMs) to describe what they want in sufficient detail. They won’t know how to design and architect good software. They won’t know how to orchestrate multiple LLMs or LLM-powered agents to adversarially review each other. In short, they won’t have any of the skills that are supposed to be vital for successful LLM coding use.

There’s also the possibility that “natural” human language alone will never be sufficient to specify programs, even to much more advanced LLMs or other future “AI” systems, due to inherent ambiguity and lack of precision. In that case, some type of specialized formal language for specifying programs would always be necessary. Edsger W. Dijkstra, for example, took this position and famously derided what he called “the foolishness of ‘natural language programming’”, which is worth reading for some classic Dijkstra-isms like:

When all is said and told, the “naturalness” with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious.

Another possible outcome for LLM coding by non-programmers is the often-mentioned analogy to 3D printing, which also was hyped up as a great democratizer that would let anyone design and make anything, but never delivered on that promise and, at the individual level, became a niche hobby for the small number of enthusiasts who were willing and able to put in the time, money, and effort to get moderately good at it.

But the nightmare result is that non-programmer LLM users will receive something that seems to work, and only reveals its shortcomings much later on. Given how often I see it argued that LLMs will democratize coding and write utility programs for people working in fields where privacy and confidentiality are both vital and legally mandated, I’m terrified by that potential failure mode. And I think one of the worst possible things that could happen for advocates of LLM adoption is to have the news full of stories of well-meaning non-technical people who had their lives ruined by, say, accidentally enabling a data breach with their LLM-coded helper programs, or even “just” turning loose a subtly-incorrect financial model on their business. So even if I were an advocate of LLM coding, I’d be very wary of pushing it to non-programmers.

But ultimately, the only situation in which LLMs could meaningfully democratize access to software development is one where they achieve a true silver bullet, by significantly reducing or removing essential difficulty from the software development process. And as noted above, LLM advocates seem to believe that even in the silver-bullet situation there would still be such a gap between those with pre-existing LLM usage skills and those without, that those without could never meaningfully catch up. Although I happen to disagree with that belief, it remains the case that advocates can’t have it both ways: either LLM coding will be an exclusive club for those who built up the necessary skills, XOR it will be a great democratizer and do away with the need for those skills.

Takeaways

I’m already over 6,000 words into this post, and though I could easily write many more, I should probably wrap it up.

If I had to summarize my position on LLM coding in one sentence, it would be “Please go read No Silver Bullet”. I think Brooks’ argument there is both theoretically correct and validated by empirical results, and sets some pretty strong limits on the impact LLM coding, or any other tool or technique which solely or primarily attacks accidental difficulty, can have.

Of course, limits on what we can do or gain aren’t necessarily the end of the world. Many of the foundations of computer science, from On Computable Numbers to Rice’s theorem and beyond, place inflexible limits on what we can do, but we still write software nonetheless, and we still work to advance the state of our art. So the No Silver Bullet argument is not the same as arguing that LLMs are necessarily useless, or that no gains can possibly be realized from them. But it is an argument that any gains we do realize are likely going to be incremental and evolutionary, rather than the world-changing revolution many people seem to be expecting.

Correspondingly, I think there is not a huge downside, right now, to slow or delayed adoption of LLM coding. Very few organizations have the strong fundamentals needed to absorb even a relatively moderate, incremental increase in the amount of code they generate, which I suspect is why so many studies and reports find mixed results and lots of broken CI pipelines. Not only is there no silver bullet, there especially is no quick or magical gain to be had from rushing to adopt LLM coding without first working on those fundamentals. In fact, the evidence we have says you’re more likely to hurt than help your productivity by doing so.

I also don’t think LLMs are going to meaningfully democratize coding any time soon; even if they become indispensable tools for programmers, they are likely to continue requiring users to “think like a programmer” when specifying and prompting. We would be much better served by teaching many more people how to think rigorously and reason about abstractions (and they would be much better served, too) than we would by just plopping them as-is in front of LLMs.

As for what you should be doing instead of rushing to adopt LLM coding out of fear that you’ll be left behind: I think you should be listening to what all those whitepapers and reports and studies are actually telling you, and working on fundamentals. You should be adopting and perfecting solid foundational software development practices like version control, comprehensive test suites, continuous integration, meaningful documentation, fast feedback cycles, iterative development, focus on users, small batches of work… things that have been known and proven for decades, but are still far too rare in actual real-world software shops.

If the skeptical position is wrong and it turns out LLMs truly become indispensable coding tools in the long term, well, the available literature says you’ll be set up to take the greatest possible advantage of them. And if it turns out they don’t, you’ll still be in much better shape than you were, and you’ll have an advantage over everyone who chased after wild promises of huge productivity gains by ordering their teams to just chew through tokens and generate code without working on fundamentals, and who likely wrecked their development processes by doing so.

Or as Fred Brooks put it:

The first step toward the management of disease was replacement of demon theories and humours theories by the germ theory. That very step, the beginning of hope, in itself dashed all hopes of magical solutions. It told workers that progress would be made stepwise, at great effort, and that a persistent, unremitting care would have to be paid to a discipline of cleanliness. So it is with software engineering today.

April 09, 2026 06:27 AM UTC

April 08, 2026


Real Python

Dictionaries in Python

Python dictionaries are a powerful built-in data type that allows you to store key-value pairs for efficient data retrieval and manipulation. Learning about them is essential for developers who want to process data efficiently. In this tutorial, you’ll explore how to create dictionaries using literals and the dict() constructor, as well as how to use Python’s operators and built-in functions to manipulate them.

By learning about Python dictionaries, you’ll be able to access values through key lookups and modify dictionary content using various methods. This knowledge will help you in data processing, configuration management, and dealing with JSON and CSV data.

By the end of this tutorial, you’ll understand that:

  • A dictionary in Python is a mutable collection of key-value pairs that allows for efficient data retrieval using unique keys.
  • Both dict() and {} can create dictionaries in Python. Use {} for concise syntax and dict() for dynamic creation from iterable objects.
  • dict() is a class used to create dictionaries. However, it’s commonly called a built-in function in Python.
  • .__dict__ is a special attribute in Python that holds an object’s writable attributes in a dictionary.
  • Python dict is implemented as a hashmap, which allows for fast key lookups.
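The first two points can be seen side by side in a minimal sketch, using only built-in syntax:

```python
# Literal syntax: concise, good when you know the contents up front
literal = {"name": "Ada", "year": 1815}

# dict() constructor: handy for dynamic creation from an iterable of
# key-value pairs, or from keyword arguments
from_pairs = dict([("name", "Ada"), ("year", 1815)])
from_kwargs = dict(name="Ada", year=1815)

print(literal == from_pairs == from_kwargs)  # True
```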

To get the most out of this tutorial, you should be familiar with basic Python syntax and concepts such as variables, loops, and built-in functions. Some experience with basic Python data types will also be helpful.

Get Your Code: Click here to download the free sample code that you’ll use to learn about dictionaries in Python.

Take the Quiz: Test your knowledge with our interactive “Dictionaries in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:



Getting Started With Python Dictionaries

Dictionaries are one of Python’s most important and useful built-in data types. They provide a mutable collection of key-value pairs that lets you efficiently access and mutate values through their corresponding keys:

Python
>>> config = {
...     "color": "green",
...     "width": 42,
...     "height": 100,
...     "font": "Courier",
... }

>>> # Access a value through its key
>>> config["color"]
'green'

>>> # Update a value
>>> config["font"] = "Helvetica"
>>> config
{
    'color': 'green',
    'width': 42,
    'height': 100,
    'font': 'Helvetica'
}

A Python dictionary consists of a collection of key-value pairs, where each key corresponds to its associated value. In this example, "color" is a key, and "green" is the associated value.

Dictionaries are a fundamental part of Python. You’ll find them behind core concepts like scopes and namespaces as seen with the built-in functions globals() and locals():

Python
>>> globals()
{
    '__name__': '__main__',
    '__doc__': None,
    '__package__': None,
    ...
}

The globals() function returns a dictionary containing key-value pairs that map names to objects that live in your current global scope.

Python also uses dictionaries to support the internal implementation of classes. Consider the following demo class:

Python
>>> class Number:
...     def __init__(self, value):
...         self.value = value
...

>>> Number(42).__dict__
{'value': 42}

The .__dict__ special attribute is a dictionary that maps attribute names to their corresponding values in Python classes and objects. This implementation makes attribute and method lookup fast and efficient in object-oriented code.

You can use dictionaries to approach many programming tasks in your Python code. They come in handy when processing CSV and JSON files, working with databases, loading configuration files, and more.

Python’s dictionaries have the following characteristics:

  • Mutable: The dictionary values can be updated in place.
  • Dynamic: Dictionaries can grow and shrink as needed.
  • Efficient: They’re implemented as hash tables, which allows for fast key lookup.
  • Ordered: Starting with Python 3.7, dictionaries keep their items in the same order they were inserted.
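A short interactive session demonstrates the mutable, dynamic, and ordered points:

```python
d = {}
d["b"] = 2
d["a"] = 1
print(list(d))  # ['b', 'a'] -- insertion order is preserved, not sorted

del d["b"]      # dictionaries shrink in place...
d["c"] = 3      # ...and grow in place
print(d)        # {'a': 1, 'c': 3}
```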

The keys of a dictionary have a couple of restrictions. They need to be:

  • Hashable: This means that you can’t use unhashable objects like lists as dictionary keys.
  • Unique: This means that your dictionaries won’t have duplicate keys.

In contrast, the values in a dictionary aren’t restricted. They can be of any Python type, including other dictionaries, which makes it possible to have nested dictionaries.
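Both key restrictions, and the lack of restriction on values, are easy to see in practice:

```python
# Values can be anything, including another dictionary (a nested dictionary)
person = {"name": "Ada", "address": {"city": "London", "country": "UK"}}
print(person["address"]["city"])  # 'London'

# Keys must be hashable: an immutable tuple works, a mutable list doesn't
grid = {(0, 0): "origin"}
try:
    bad = {[0, 0]: "origin"}
except TypeError as error:
    print(error)  # unhashable type: 'list'
```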

Dictionaries are collections of pairs, so you can’t insert a key without its corresponding value, or vice versa: items always go in as a key together with its value.

Note: In some situations, you may want to add keys to a dictionary without deciding what the associated value should be. In those cases, you can use the .setdefault() method to create keys with a default or placeholder value.
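For example:

```python
config = {"color": "green"}

# .setdefault() only inserts the key if it's missing; existing keys are untouched
config.setdefault("font", "Courier")  # "font" is absent, so it's added
config.setdefault("color", "red")     # "color" exists, so it keeps "green"

print(config)  # {'color': 'green', 'font': 'Courier'}
```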

Read the full article at https://realpython.com/python-dicts/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 08, 2026 02:00 PM UTC

Quiz: Implementing the Factory Method Pattern in Python

In this quiz, you’ll test your understanding of Factory Method Pattern.

This quiz guides you through the Factory Method pattern: how it separates object creation from use, the roles of clients and products, when to apply it, and how to implement flexible, maintainable Python classes.

Test your ability to spot opportunities for the pattern and build reusable, decoupled object creation solutions.



April 08, 2026 12:00 PM UTC


Armin Ronacher

Mario and Earendil

Today I’m very happy to share that Mario Zechner is joining Earendil.

First things first: I think you should read Mario’s post. This is his news more than it is ours, and he tells his side of it better than I could. What I want to do here is add a more personal note about why this matters so much to me, how the last months led us here, and why I am so excited to have him on board.

Last year changed the way many of us thought about software. It certainly changed the way I did. I spent much of 2025 building, probing, and questioning how to build software and, more broadly, what I want to do. If you are a regular reader of this blog you were along for the ride. I wrote a lot, experimented a lot, and tried to get a better sense for what these systems can actually do and what kinds of companies make sense to build around them. There was, and continues to be, a lot of excitement in the air, but also a lot of noise. It has become clear to me that it’s not a question of whether AI systems can be useful but what kind of software and human-machine interactions we want to bring into the world with them.

That is one of the reasons I have been so drawn to Mario’s work and approaches.

Pi is, in my opinion, one of the most thoughtful coding agents and agent infrastructure libraries in this space. Not because it is trying to be the loudest or the fastest, but because it is clearly built by someone who cares deeply about software quality, taste, extensibility, and design. In a moment where much of the industry is racing to ship ever more quickly, often at the cost of coherence and craft, Mario kept insisting on making something solid. That matters to me a great deal.

I have known Mario for a long time, and one of the things I admire most about him is that he does not confuse velocity with progress. He has a strong sense for what good tools should feel like. He cares about details. He cares about whether something is well made. And he cares about building in a way that can last. Mario has been running Pi in a rather unusual way. He exerts back-pressure on the issue tracker and the pull requests through OSS vacations and other means.

The last year has also made something else clearer to me: these systems are not only exciting, they are also capable of producing a great deal of damage. Sometimes that damage is obvious; sometimes it looks like low-grade degradation everywhere at once. More slop, more noise, more disingenuous emails in my inbox. There is a version of this future that makes people more distracted, more alienated, and less careful with one another.

That is not a future I want to help build.

At Earendil, Colin and I have been trying to think very carefully about what a different path might look like. That is a big part of what led us to Lefos.

Lefos is our attempt to build a machine entity that is more thoughtful and more deliberate by design. Not an agent whose main purpose is to make everything a little more efficient so that we can produce even more forgettable output, but one that can help people communicate with more care, more clarity, and joy.

Good software should not aim to optimize every minute of your life, but should create room for better and more joyful experiences, better relationships, and better ways of relating to one another. Especially in communication and software engineering, I think we should be aiming for more thought rather than more throughput. We should want tools that help people be more considerate, more present, and more human. If all we do is use these systems to accelerate the production of slop, we will have missed the opportunity entirely.

This is also why Mario joining Earendil feels so meaningful to me. Pi and Lefos come from different starting points. There was a year of distance collaboration, but they are animated by a similar instinct: that quality matters, that design matters, and that trust is earned through care rather than captured through hype.

I am very happy that Pi is coming along for the ride. Colin and I care a lot about it, and we want to be good stewards of it. It has already played an important role in our own work over the last months, and I continue to believe it is one of the best foundations for building capable agents. We will have more to say soon about how we think about Pi’s future and its relationship to Lefos, but the short version is simple: we want Pi to continue to exist as a high-quality, open, extensible piece of software, and we want to invest in making that future real. As for our thoughts on Pi’s license, read more here and our company post here.

April 08, 2026 12:00 AM UTC

April 07, 2026


PyCoder’s Weekly

Issue #729: NumPy Music, Ollama, Iterables, and More (April 7, 2026)

#729 – APRIL 7, 2026
View in Browser »



NumPy as Synth Engine

Kenneth has “recorded” a song in a Python script. The catch? No sampling, no recording, no pre-recorded sound. Everything was done through generating wave functions in NumPy. Learn how to become a mathematical musician.
KENNETH REITZ

How to Use Ollama to Run Large Language Models Locally

Learn how to use Ollama to run large language models locally. Install it, pull models, and start chatting from your terminal without needing API keys.
REAL PYTHON

Ship AI Agents With Accurate, Fresh Web Search Data


Stop building scrapers just to feed your AI app web search data. SerpApi returns structured JSON from Google and 100+ search engines via a simple GET request. No proxy management, no CAPTCHAs. Power product research, price tracking, or agentic search in minutes. Used by Shopify, NVIDIA, and Uber →
SERPAPI sponsor

Indexable Iterables

Learn how objects are automatically iterable if you implement integer indexing.
RODRIGO GIRÃO SERRÃO

Claude Code for Python Developers (Live Course)

“This is one of the best training sessions I’ve joined in the last year across multiple platforms.” Two-day course where you build a complete Python project with an AI agent in your terminal. Next session April 11–12.
REAL PYTHON

PEP 803: "abi3t": Stable ABI for Free-Threaded Builds (Accepted)

PYTHON.ORG

PEP 829: Structured Startup Configuration via .site.toml Files (Added)

PYTHON.ORG

Articles & Tutorials

Fire and Forget at Textual

In this follow-up to a previous article (Fire and forget (or never) with Python’s asyncio), Michael discusses a similar article by Will McGugan as it relates to Textual. He found the problematic pattern in over 500K GitHub files.
MICHAEL KENNEDY

pixi: One Package Manager for Python Libraries

uv is great for pure Python projects, but it can’t install compiled system libraries like GDAL or CUDA. pixi fills that gap by managing both PyPI and conda-forge packages in one tool, with fast resolution, automatic lockfiles, and project-level environments.
CODECUT.AI • Shared by Khuyen Tran

Dignified Python: Pytest for Agent-Generated Code


Learn how to define clear pytest patterns for agent-generated tests: separate fast unit vs integration, use fakes, constrain generation, and avoid brittle patterns to keep tests reliable and maintainable →
DAGSTER LABS sponsor

Learning Rust Made Me a Better Python Developer

Bob thinks that learning Rust made him a better Python developer. Not because Rust is better, but because it made him think differently about how he has been writing Python. The compiler forced him to confront things he’d been ignoring.
BOB BELDERBOS • Shared by Bob Belderbos

Django bulk_update Memory Issue

Recently, Anže had to write a Django migration to update hundreds of thousands of database objects. With some paper-napkin math he calculated it could fit in memory, but that turned out not to be the case. Read on to find out why.
ANŽE'S BLOG

Catching Up With the Python Typing Council

Talk Python interviews Carl Meyer, Jelle Zijlstra, and Rebecca Chen, three members of the Python Typing Council. They talk about how the typing system is governed and just how much is the right amount of type hinting in your code.
TALK PYTHON podcast

Python 3.3: The Version That Quietly Rewired Everything

yield from, venv, and namespace packages are three features from Python 3.3 that looked minor when they came out in 2012, but turned out to be the scaffolding modern Python is built on.
TUREK SENTURK
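As a quick refresher on one of those features, `yield from` (PEP 380, new in Python 3.3) delegates iteration to a subgenerator:

```python
def inner():
    yield 1
    yield 2

def outer():
    # yield from hands control to the subgenerator until it's exhausted
    yield 0
    yield from inner()

assert list(outer()) == [0, 1, 2]
```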

Incident Report: LiteLLM/Telnyx Supply-Chain Attacks

This post from the PyPI blog outlines two recent supply chain attacks, how they were different, and how you can protect yourself from future incidents.
PYPI.ORG

Python Classes: The Power of Object-Oriented Programming

Learn how to define and use Python classes to implement object-oriented programming. Dive into attributes, methods, inheritance, and more.
REAL PYTHON

Timesliced Reservoir Sampling for Profilers

Reservoir sampling lets you pick a sample from an unlimited stream of events; learn how it works, and a new variant useful for profilers.
ITAMAR TURNER-TRAURING
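The classic version of the technique (Algorithm R) keeps each incoming item with probability k/(i+1); a minimal sketch of the base algorithm, before the timesliced variant the article introduces:

```python
import random

def reservoir_sample(stream, k):
    # Keep a uniform random sample of size k from a stream of unknown length
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # Item i survives with probability k / (i + 1)
            j = random.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(10_000), 5)
```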

Adding Python to PATH

Learn how to add Python to your PATH environment variable on Windows, macOS, and Linux so you can run Python from the command line.
REAL PYTHON course

Projects & Code

OracleTrace: Visualize Function Flows

GITHUB.COM/KAYKCAPUTO • Shared by Kayk Aparecido de Paula Caputo

pywho: Explain Your Python Environment and Detect Shadows

GITHUB.COM/AHSANSHERAZ • Shared by Ahsan Sheraz

asyncstdlib: The Missing Toolbox for an Async World

GITHUB.COM/MAXFISCHER2781

nitro-pandas: pandas-Compatible Polars Wrapper

GITHUB.COM/WASSIM17LABDI

django-mail-auth: Django Auth via Login URLs

GITHUB.COM/CODINGJOE

Events

Weekly Real Python Office Hours Q&A (Virtual)

April 8, 2026
REALPYTHON.COM

PyCon Lithuania 2026

April 8 to April 11, 2026
PYCON.LT

Python Atlanta

April 9 to April 10, 2026
MEETUP.COM

DFW Pythoneers 2nd Saturday Teaching Meeting

April 11, 2026
MEETUP.COM

PyCon DE & PyData 2026

April 14 to April 18, 2026
PYCON.DE

DjangoCon Europe 2026

April 15 to April 20, 2026
DJANGOCON.EU

PyTexas 2026

April 17 to April 20, 2026
PYTEXAS.ORG


Happy Pythoning!
This was PyCoder’s Weekly Issue #729.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

April 07, 2026 07:30 PM UTC


Python Engineering at Microsoft

Write SQL Your Way: Dual Parameter Style Benefits in mssql-python


Reviewed by: Sumit Sarabhai

If you’ve been writing SQL in Python, you already know the debate: positional parameters (?) or named parameters (%(name)s)? Some developers swear by the conciseness of positional. Others prefer the clarity of named. With mssql-python, you no longer need to choose: we support both.
 
We’ve added dual parameter style support to mssql-python, enabling both qmark and pyformat parameter styles in Python applications that interact with SQL Server and Azure SQL. This feature is especially useful if you’re building complex queries, dynamically assembling filters, or migrating existing code that already uses named parameters with other DBAPI drivers.

Try it here

You can install the driver with pip install mssql-python.

Calling all Python + SQL developers! We invite the community to try out mssql-python and help us shape the future of high-performance SQL Server connectivity in Python!

What Are Parameter Styles? 

The DB-API 2.0 specification (PEP 249) defines several ways to pass parameters to SQL queries. The two most popular are: 

# qmark style 
cursor.execute("SELECT * FROM users WHERE id = ? AND status = ?", (42, "active")) 
 
# pyformat style 
cursor.execute("SELECT * FROM users WHERE id = %(id)s AND status = %(status)s", 
               {"id": 42, "status": "active"}) 

Business Requirement 

Previously, mssql-python only supported qmark. It works fine for simple queries, but as parameters multiply, tracking their order becomes error-prone: 

# Which ? corresponds to which value? 
cursor.execute( 
    "UPDATE users SET name=?, email=?, age=? WHERE id=? AND status=?", 
    (name, email, age, user_id, status) 
) 

Mix up the order and it’s easy to introduce subtle, hard to spot bugs. 

Why Named Parameters? 

# qmark — 6 parameters, which is which?
cursor.execute(
    """INSERT INTO employees
           (first_name, last_name, email, department, salary, hire_date)
       VALUES (?, ?, ?, ?, ?, ?)""",
    ("Jane", "Doe", "jane.doe@company.com", "Engineering", 95000, "2025-03-01")
)

# pyformat — every value is labeled
cursor.execute(
    """INSERT INTO employees
           (first_name, last_name, email, department, salary, hire_date)
       VALUES (%(first_name)s, %(last_name)s, %(email)s,
               %(dept)s, %(salary)s, %(hire_date)s)""",
    {"first_name": "Jane", "last_name": "Doe", "email": "jane.doe@company.com",
     "dept": "Engineering", "salary": 95000, "hire_date": "2025-03-01"}
)

# Audit log: record who made the change and when
cursor.execute(
    """UPDATE orders
       SET status = %(new_status)s,
           modified_by = %(user)s, approved_by = %(user)s,
           modified_at = %(now)s, approved_at = %(now)s
       WHERE order_id = %(order_id)s""",
    {"new_status": "approved", "user": "admin@company.com",
     "now": datetime.now(), "order_id": 5042}
)
# 3 unique values, used 5 times — no duplication needed
def search_orders(customer=None, status=None, min_total=None, date_from=None): 
    query_parts = ["SELECT * FROM orders WHERE 1=1"] 
    params = {} 
  
    if customer: 
        query_parts.append("AND customer_id = %(customer)s") 
        params["customer"] = customer 
  
    if status: 
        query_parts.append("AND status = %(status)s") 
        params["status"] = status 
  
    if min_total is not None: 
        query_parts.append("AND total >= %(min_total)s") 
        params["min_total"] = min_total 
  
    if date_from: 
        query_parts.append("AND order_date >= %(date_from)s") 
        params["date_from"] = date_from 
  
    query_parts.append("ORDER BY order_date DESC") 
    cursor.execute(" ".join(query_parts), params) 
    return cursor.fetchall() 
  
# Callers use only the filters they need 
recent_big_orders = search_orders(min_total=500, date_from="2025-01-01") 
pending_for_alice = search_orders(customer=42, status="pending") 

The same parameter dictionary can drive multiple queries:

report_params = {"region": "West", "year": 2025, "status": "active"} 
  
# Summary count 
cursor.execute( 
    """SELECT COUNT(*) FROM customers 
       WHERE region = %(region)s AND status = %(status)s""", 
    report_params 
) 
total = cursor.fetchone()[0] 
  
# Revenue breakdown 
cursor.execute( 
    """SELECT department, SUM(revenue) 
       FROM sales 
       WHERE region = %(region)s AND fiscal_year = %(year)s 
       GROUP BY department 
       ORDER BY SUM(revenue) DESC""", 
    report_params 
) 
breakdown = cursor.fetchall() 
  
# Top performers 
cursor.execute( 
    """SELECT name, revenue 
       FROM sales_reps 
       WHERE region = %(region)s AND fiscal_year = %(year)s AND status = %(status)s 
       ORDER BY revenue DESC""", 
    report_params 
) 
top_reps = cursor.fetchall() 
# Same dict, three different queries — change the filters once, all queries update 

The Solution: Automatic Detection 

mssql-python now detects which style you’re using based on the parameter type: 

No configuration needed. Existing qmark code requires zero changes. 

from mssql_python import connect 
 
# qmark - works exactly as before 
cursor.execute("SELECT * FROM users WHERE id = ?", (42,)) 
 
# pyformat - just pass a dict! 
cursor.execute("SELECT * FROM users WHERE id = %(id)s", {"id": 42})

How It Works 

When you pass a dict to execute(), the driver: 

  1. Scans the SQL for %(name)s placeholders (context-aware – skips string literals, comments, and bracketed identifiers). 
  2. Validates that every placeholder has a matching key in the dict. 
  3. Builds a positional tuple in placeholder order (duplicating values for reused parameters). 
  4. Replaces each %(name)s with ? and sends the rewritten query to ODBC. 
User Code                                  ODBC Layer 
─────────                                  ────────── 
cursor.execute(                            SQLBindParameter(1, "active") 
  "WHERE status = %(status)s               SQLBindParameter(2, "USA") 
   AND country = %(country)s",      →      SQLExecute( 
  {"status": "active",                       "WHERE status = ? 
   "country": "USA"}                          AND country = ?" 
)                                          ) 

The ODBC layer always works with positional ? placeholders. The pyformat conversion is purely a developer-facing convenience with zero overhead to database communication. 
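That rewriting step can be sketched in a few lines of plain Python. This is a simplified illustration only; the driver's real scanner is context-aware and also handles string literals, comments, and bracketed identifiers:

```python
import re

def to_qmark(sql, params):
    # Find %(name)s placeholders in order of appearance
    names = re.findall(r"%\((\w+)\)s", sql)
    missing = set(names) - set(params)
    if missing:
        raise KeyError(f"Missing required parameter(s): {sorted(missing)}")
    # Build the positional tuple, duplicating values for reused parameters
    values = tuple(params[name] for name in names)
    # Replace each named placeholder with a positional ?
    return re.sub(r"%\(\w+\)s", "?", sql), values

sql, values = to_qmark(
    "WHERE status = %(status)s AND country = %(country)s",
    {"status": "active", "country": "USA"},
)
# sql    -> "WHERE status = ? AND country = ?"
# values -> ("active", "USA")
```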

Clear Error Messages 

Mismatched styles or missing parameters produce actionable errors – not cryptic database exceptions: 

cursor.execute("WHERE id = %(id)s AND name = %(name)s", {"id": 42}) 
# KeyError: Missing required parameter(s): 'name'. 
 
cursor.execute("WHERE id = ?", {"id": 42}) 
# TypeError: query uses positional placeholders (?), but dict was provided. 
 
cursor.execute("WHERE id = %(id)s", (42,)) 
# TypeError: query uses named placeholders (%(name)s), but tuple was provided.

Real-World Examples 

Example 1: Web Application

def add_user(name, email): 
    with connect(connection_string) as conn: 
        with conn.cursor() as cursor: 
            cursor.execute( 
                "INSERT INTO users (name, email) VALUES (%(name)s, %(email)s)", 
                {"name": name, "email": email} 
            ) 

Example 2: Batch Operations 

cursor.executemany( 
    "INSERT INTO users (name, age) VALUES (%(name)s, %(age)s)", 
    [{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}] 
) 

Example 3: Financial Transactions 

def transfer_funds(from_acct, to_acct, amount): 
    with connect(connection_string) as conn: 
        with conn.cursor() as cursor: 
            cursor.execute( 
                "UPDATE accounts SET balance = balance - %(amount)s WHERE id = %(id)s", 
                {"amount": amount, "id": from_acct} 
            ) 
            cursor.execute( 
                "UPDATE accounts SET balance = balance + %(amount)s WHERE id = %(id)s", 
                {"amount": amount, "id": to_acct} 
            ) 
    # Automatic commit on success, rollback on failure 

Things to Keep in Mind 

# Mixing styles - raises TypeError
cursor.execute(
    "SELECT * FROM users WHERE id = ? AND name = %(name)s",
    {"name": "Alice"}  # Driver finds %(name)s but also sees unmatched ?
)
# ODBC error: parameter count mismatch (2 placeholders, 1 value)

# Pick one style and use it consistently
cursor.execute(
    "SELECT * FROM users WHERE id = %(id)s AND name = %(name)s",
    {"id": 42, "name": "Alice"}
)
cursor.execute( 
    "SELECT * FROM users WHERE name LIKE %(pattern)s", 
    {"pattern": "%alice%"}  # The % inside the VALUE is fine 
) 
 
# But if you need a literal %(...)s in SQL text itself, use %% 
cursor.execute( 
    "SELECT '%%(example)s' AS literal WHERE id = %(id)s", 
    {"id": 42} 
)  

Compatibility at a Glance 

Feature                                     qmark (?)   pyformat (%(name)s)
cursor.execute()                            ✅           ✅
cursor.executemany()                        ✅           ✅
connection.execute()                        ✅           ✅
Parameter reuse                             ❌           ✅
Stored procedures                           ✅           ✅
All SQL data types                          ✅           ✅
Backward compatible with qmark paramstyle   ✅           N/A (new)

Takeaway 

Use ? for quick, simple queries. Use %(name)s for complex, multi-parameter queries where clarity and reuse matter. You don’t have to pick a side – use whichever fits the situation. The driver handles the rest. 

Whether you’re building dynamic queries or simply want more readable SQL, dual paramstyle support makes mssql-python work the way you already think.

Try It and Share Your Feedback! 

We invite you to:

  1. Check out the mssql-python driver and integrate it into your projects.
  2. Share your thoughts: Open issues, suggest features, and contribute to the project.
  3. Join the conversation: GitHub Discussions | SQL Server Tech Community.

Use Python Driver with Free Azure SQL Database

You can use the Python Driver with the free version of Azure SQL Database!

✅ Deploy Azure SQL Database for free

✅ Deploy Azure SQL Managed Instance for free

Perfect for testing, development, or learning scenarios without incurring costs.

 

The post Write SQL Your Way: Dual Parameter Style Benefits in mssql-python appeared first on Microsoft for Python Developers Blog.

April 07, 2026 04:12 PM UTC


Django Weblog

Django security releases issued: 6.0.4, 5.2.13, and 4.2.30

In accordance with our security release policy, the Django team is issuing releases for Django 6.0.4, Django 5.2.13, and Django 4.2.30. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.

Django 4.2 has reached the end of extended support

Note that with this release, Django 4.2 has reached the end of extended support. All Django 4.2 users are encouraged to upgrade to Django 5.2 or later to continue receiving fixes for security issues.

See the downloads page for a table of supported versions and the future release schedule.

CVE-2026-3902: ASGI header spoofing via underscore/hyphen conflation

ASGIRequest normalizes header names following WSGI conventions, mapping hyphens to underscores. As a result, even in configurations where reverse proxies carefully strip security-sensitive headers named with hyphens, such a header could be spoofed by supplying a header named with underscores.

Under WSGI, it is the responsibility of the server or proxy to avoid ambiguous mappings. (Django's runserver was patched in CVE-2015-0219.) But under ASGI, there is not the same uniform expectation, even if many proxies protect against this under default configuration (including nginx via underscores_in_headers off;).

Headers containing underscores are now ignored by ASGIRequest, matching the behavior of Daphne, the reference server for ASGI.
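The conflation is easy to see from the WSGI-style normalization rule itself (an illustration of the mapping, not Django's actual code):

```python
def wsgi_key(header_name):
    # WSGI-style normalization: uppercase, hyphens become underscores
    return "HTTP_" + header_name.upper().replace("-", "_")

# Both spellings collapse to the same key, so a proxy that strips only
# the hyphenated form can be bypassed with the underscore form
assert wsgi_key("X-Forwarded-For") == wsgi_key("X_Forwarded_For")
```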

This issue has severity "low" according to the Django Security Policy.

Thanks to Tarek Nakkouch for the report.

CVE-2026-4277: Privilege abuse in GenericInlineModelAdmin

Add permissions on inline model instances were not validated on submission of forged POST data in GenericInlineModelAdmin.

This issue has severity "low" according to the Django Security Policy.

Thanks to N05ec@LZU-DSLab for the report.

CVE-2026-4292: Privilege abuse in ModelAdmin.list_editable

Admin changelist forms using ModelAdmin.list_editable incorrectly allowed new instances to be created via forged POST data.

This issue has severity "low" according to the Django Security Policy.

CVE-2026-33033: Potential denial-of-service vulnerability in MultiPartParser via base64-encoded file upload

When using django.http.multipartparser.MultiPartParser, multipart uploads with Content-Transfer-Encoding: base64 that include excessive whitespace may trigger repeated memory copying, potentially degrading performance.

This issue has severity "moderate" according to the Django Security Policy.

Thanks to Seokchan Yoon for the report.

CVE-2026-33034: Potential denial-of-service vulnerability in ASGI requests via memory upload limit bypass

ASGI requests with a missing or understated Content-Length header could bypass the DATA_UPLOAD_MAX_MEMORY_SIZE limit when reading HttpRequest.body, potentially loading an unbounded request body into memory and causing service degradation.

This issue has severity "low" according to the Django Security Policy.

Thanks to Superior for the report.

Affected supported versions

  • Django main
  • Django 6.0
  • Django 5.2
  • Django 4.2

Resolution

Patches to resolve the issue have been applied to Django's main, 6.0, 5.2, and 4.2 branches. The patches may be obtained from the following changesets.

CVE-2026-3902: ASGI header spoofing via underscore/hyphen conflation

CVE-2026-4277: Privilege abuse in GenericInlineModelAdmin

CVE-2026-4292: Privilege abuse in ModelAdmin.list_editable

CVE-2026-33033: Potential denial-of-service vulnerability in MultiPartParser via base64-encoded file upload

CVE-2026-33034: Potential denial-of-service vulnerability in ASGI requests via memory upload limit bypass

The following releases have been issued

The PGP key ID used for this release is Jacob Walls: 131403F4D16D8DC7

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.

April 07, 2026 02:00 PM UTC


Real Python

Using Loguru to Simplify Python Logging

Logging is a vital programming practice that helps you track, understand, and debug your application’s behavior. Loguru is a Python library that provides simpler, more intuitive logging compared to Python’s built-in logging module.

Good logging gives you insights into your program’s execution, helps you diagnose issues, and provides valuable information about your application’s health in production. Without proper logging, you risk missing critical errors, spending countless hours debugging blind spots, and potentially undermining your project’s overall stability.

By the end of this video course, you’ll understand that:

After watching this course, you’ll be able to quickly implement better logging in your Python applications. You’ll spend less time wrestling with logging configuration and more time using logs effectively to debug issues. This will help you build production-ready applications that are easier to troubleshoot when problems occur.

To get the most from this course, you should be familiar with Python concepts like functions, decorators, and context managers. You might also find it helpful to have some experience with Python’s built-in logging module, though this isn’t required.

Don’t worry if you’re new to logging in Python. This course will guide you through everything you need to know to get started with Loguru and implement effective logging in your applications.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 07, 2026 02:00 PM UTC


Django Weblog

Could you host DjangoCon Europe 2027? Call for organizers

We are looking for the next group of organizers to own and lead the 2027 DjangoCon Europe conference. Could your town's football stadium, theatre, cinema, city hall, circus tent or a private island host this wonderful community event?

DjangoCon Europe is a major pillar of the Django community, as people from across the world meet and share. Many qualities make it a unique event: Unconventional and conventional venues, creative happenings, a feast of talks and a dedication to inclusion and diversity.

Hosting a DjangoCon is an ambitious undertaking. It's hard work, but each year it has been successfully run by a team of community volunteers, not all of whom have had previous experience - more important is enthusiasm, organizational skills, the ability to plan and manage budgets, time and people - and plenty of time to invest in the project.

For 2027, rest assured that we will be there to answer questions and put you in touch with previous organizers through the brand new DSF Events Support Working Group (a reboot of the previous DjangoCon Europe Support Working Group).

Step 1: Submit your expression of interest

If you're considering organizing DjangoCon Europe (🙌 great!), fill in our DjangoCon Europe 2027 expression of interest form with your contact details. No need to fill in all the information at this stage if you don't have it all already, we'll reach out and help you figure it out.

Express your interest in organizing

Step 2: We're here to help!

We've set up a DjangoCon Europe support working group of previous organizers that you can reach out to with questions about organizing and running a DjangoCon Europe.

The group will be in touch with everyone submitting the expression of interest form, or you can reach out to them directly: events-support@djangoproject.com

We'd love to hear from you as soon as possible, so your proposal can be finalized and sent to the DSF board by June 1st 2026.

Step 3: Submitting the proposal

The more detailed and complete your final proposal is, the better. Basic details include:

We also like to see:

Have a look at our proposed (draft, feedback welcome) DjangoCon Europe 2027 Licensing Agreement for the fine print on contractual requirements and involvement of the Django Software Foundation.

Submit your completed proposal by June 1st 2026 via our DjangoCon Europe 2027 expression of interest form, this time filling in as many fields as possible. We look forward to reviewing great proposals that continue the excellence the whole community associates with DjangoCon Europe.

Q&A

Can I organize a conference alone?

We strongly recommend that a team of people submit an application.

Depending on your jurisdiction, this is usually not a problem. But please share your plans about the entity you will use or form in your application.

Do I/we need experience with organizing conferences?

The support group is here to help you succeed. From experience, we know that many core groups of 2-3 people have been able to run a DjangoCon with guidance from previous organizers and help from volunteers.

What is required in order to announce an event?

Ultimately, a contract with the venue confirming the dates is crucial, since announcing a conference leads people to block calendars, book holidays, and buy transportation and accommodation. This, however, would only be relevant after the DSF board has concluded the application process. Naturally, the application itself cannot contain any guarantees, but it's good to check concrete dates with your venues to ensure they are actually open and currently available, before suggesting these dates in the application.

Do we have to do everything ourselves?

No. You will definitely be offered lots of help by the community. Typically, conference organizers will divide responsibilities into different teams, making it possible for more volunteers to join. Local organizers are free to choose which areas they want to invite the community to help out with, and a call will go out through a blog post announcement on djangoproject.com and social media.

What kind of support can we expect from the Django Software Foundation?

The DSF regularly provides grant funding to DjangoCon organizers, to the extent of $6,000 in recent editions. We also offer support via specific working groups:

In addition, a lot of Individual Members of the DSF regularly volunteer at community events. If your team aren't Individual Members, we can reach out to them on your behalf to find volunteers.

What dates are possible in 2027?

For 2027, DjangoCon Europe should happen between January 4th and April 26th, or June 3rd and June 27th. This is to avoid the following community events' provisional dates:

We also want to avoid the following holidays:

What cities or countries are possible?

Any city in Europe. This can be a city or country where DjangoCon Europe has happened in the past (Athens, Vigo, Edinburgh, Porto, Copenhagen, Heidelberg, Florence, Budapest, Cardiff, Toulon, Warsaw, Zurich, Amsterdam, Berlin), or a new locale.

References

Past calls

April 07, 2026 01:06 PM UTC


Real Python

Quiz: Building a Python GUI Application With Tkinter

In this quiz, you’ll test your understanding of Building a Python GUI Application With Tkinter.

Test your Tkinter knowledge by identifying core widgets, managing layouts, handling text with Entry and Text widgets, and connecting buttons to Python functions.

This quiz also covers event loops, widget sizing, and file dialogs, helping you solidify the essentials for building interactive, cross-platform Python GUI apps.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 07, 2026 12:00 PM UTC

Quiz: Using Loguru to Simplify Python Logging

In this quiz, you’ll test your understanding of Using Loguru to Simplify Python Logging.

By working through this quiz, you’ll revisit key concepts like the pre-configured logger, log levels, format placeholders, adding context with .bind() and .contextualize(), and saving logs to files.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 07, 2026 12:00 PM UTC


PyCharm

How to Train Your First TensorFlow Model in PyCharm

This is a guest post from Iulia Feroli, founder of the Back To Engineering community on YouTube.


TensorFlow is a powerful open-source framework for building machine learning and deep learning systems. At its core, it works with tensors (a.k.a. multi-dimensional arrays) and provides high-level libraries (like Keras) that make it easy to transform raw data into models you can train, evaluate, and deploy.

TensorFlow helps you handle the full pipeline: loading and preprocessing data, assembling models from layers and activations, training with optimizers and loss functions, and exporting for serving or even running on edge devices (including lightweight TensorFlow Lite models on Raspberry Pi and other microcontrollers). 

If you want to build data-driven applications, prototype neural networks, or ship models to production or to devices, learning TensorFlow gives you a consistent, well-supported toolkit to go from idea to deployment.

If you’re brand new to TensorFlow, start by watching the short overview video where I explain tensors, neural networks, layers, and why TensorFlow is great for taking data → model → deployment, and how all of this can be explained with a LEGO-style piece-sorting example.

In this blog post, I’ll walk you through a first, stripped-down TensorFlow implementation notebook so we can get started with some practical experience. You can also watch the walkthrough video to follow along.

We’ll be exploring a very simple use case today: load the Fashion MNIST dataset, build two very simple Keras models, train and compare them, then dig into visualizations (predictions, confidence bars, confusion matrix). I kept the code minimal and readable so you can focus on the ideas – and you’ll see how PyCharm helps along the way.

Training TensorFlow models step by step

Getting started in PyCharm

We’ll be leveraging PyCharm’s native Notebook integration to build out our project. This way, we can inspect each step of the pipeline and use some supporting visualization along the way. We’ll create a new project and generate a virtual environment to manage our dependencies. 

If you’re running the code from the attached repo, you can install directly from the requirements file. If you wish to expand this example with additional visualizations for further models, you can easily add more packages to your requirements as you go by using the PyCharm package manager helpers for installing and upgrading.

Load Fashion MNIST and inspect the data

Fashion MNIST is a great starter because the images are small (28×28 pixels), visually meaningful, and easy to interpret. They represent various garment types as pixelated black-and-white images, and provide the relevant labels for a well-contained classification task. We can first take a look at our data sample by printing some of these images with various matplotlib functions:

```
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import fashion_mnist

# Load the dataset and define human-readable labels for the ten classes
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

fig, axes = plt.subplots(2, 5, figsize=(10, 4))
for i, ax in enumerate(axes.flat):
    ax.imshow(x_train[i], cmap='gray')
    ax.set_title(class_names[y_train[i]])
    ax.axis('off')
plt.show()
```
Two simple models (a quick experiment)

```
from tensorflow.keras import layers, models

model1 = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

model2 = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])
```

Compile and train your first model

From here, we can compile and train our first TensorFlow model(s). With PyCharm’s code completion features and documentation access, you can get instant suggestions for building out these simple code blocks.

For a first try at TensorFlow, this allows us to spin up a working model with just a few presses of Tab in our IDE. We’re using the recommended standard optimizer and loss function, and we’re tracking for accuracy. We can choose to build multiple models by playing around with the number or type of layers, along with the other parameters. 

```
model1.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
model1.fit(x_train, y_train, epochs=10)
model2.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
model2.fit(x_train, y_train, epochs=15)
```

Evaluate and compare your TensorFlow model performance

```
loss1, accuracy1 = model1.evaluate(x_test, y_test)
print(f'Accuracy of model1: {accuracy1:.2f}')
loss2, accuracy2 = model2.evaluate(x_test, y_test)
print(f'Accuracy of model2: {accuracy2:.2f}')
```

Once the models are trained (and you can see the epochs progressing visually as each cell is run), we can immediately evaluate the performance of the models.

In my experiment, model1 sits around ~0.88 accuracy, and while model2 is a little higher than that, it took 50% longer to train. That’s the kind of trade‑off you should be thinking about: Is a tiny accuracy gain worth the additional compute and complexity? 

We can dive further into the results of the model run by generating a DataFrame instance of our new prediction dataset. Here we can also leverage built-in functions like `describe` to quickly get some initial statistical impressions:

```
import pandas as pd

predictions = model1.predict(x_test)
df_pred = pd.DataFrame(predictions, columns=class_names)
df_pred.describe()
```

However, the most useful statistics will compare our model’s prediction with the ground truth “real” labels of our dataset. We can also break this down by item category:

```
from sklearn.metrics import confusion_matrix, classification_report
import matplotlib.pyplot as plt
import seaborn as sns

y_pred = model1.predict(x_test).argmax(axis=1)
cm = confusion_matrix(y_test, y_pred)
plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=class_names, yticklabels=class_names)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.title('Confusion Matrix')
plt.show()

print('Classification report:')
print(classification_report(y_test, y_pred, target_names=class_names))
```

From here, we can notice that the accuracy differs quite a bit by type of garment. A possible interpretation of this is that trousers are quite a distinct type of clothing from, say, t-shirts and shirts, which can be more commonly confused. 

This is, of course, the type of nuance that, as humans, we can pick up by looking at the images, but the model only has access to a matrix of pixel values. The data does seem, however, to confirm our intuition. We can further build a more comprehensive visualization to test this hypothesis. 

```
import numpy as np
import matplotlib.pyplot as plt
# pick 8 wrong examples
y_pred = predictions.argmax(axis=1)
wrong_idx = np.where(y_pred != y_test)[0][:8]  # first 8 mistakes
n = len(wrong_idx)
fig, axes = plt.subplots(n, 2, figsize=(10, 2.2 * n), constrained_layout=True)
for row, idx in enumerate(wrong_idx):
    p = predictions[idx]
    pred = int(np.argmax(p))
    true = int(y_test[idx])
    axes[row, 0].imshow(x_test[idx], cmap="gray")
    axes[row, 0].axis("off")
    axes[row, 0].set_title(
        f"WRONG  P:{class_names[pred]} ({p[pred]:.2f})  T:{class_names[true]}",
        color="red",
        fontsize=10
    )
    bars = axes[row, 1].bar(range(len(class_names)), p, color="lightgray")
    bars[pred].set_color("red")
    axes[row, 1].set_ylim(0, 1)
    axes[row, 1].set_xticks(range(len(class_names)))
    axes[row, 1].set_xticklabels(class_names, rotation=90, fontsize=8)
    axes[row, 1].set_ylabel("conf", fontsize=9)
plt.show()
```

This visualization gives us a view where we can explore the confidence our model had in each prediction: by examining the weight given to each class, we can see where there was doubt (i.e. multiple classes with higher weights) versus where the model was certain (only one strong guess). These examples further confirm our intuition: top-type garments appear to be more commonly confused by the model.

Conclusion

And there we have it! We were able to set up and train our first model and already derive some data science insights from our data and model results. Using some of the PyCharm functionalities at this point can speed up the experimentation process by providing access to our documentation and applying code completion directly in the cells. We can even use AI Assistant to help generate some of the graphs we’ll need to further evaluate the TensorFlow model performance and investigate our results.

You can try out this notebook yourself, or better yet, try to generate it with these same tools for a more hands-on learning experience.

Where to go next

This notebook is a minimal, teachable starting point. Here are some practical next steps to try afterwards:

Frequently asked questions

When should I use TensorFlow?

TensorFlow is best used when building machine learning or deep learning models that need to scale, go into production, or run across different environments (cloud, mobile, edge devices). 

TensorFlow is particularly well-suited for large-scale models and neural networks, including scenarios where you need strong deployment support (TensorFlow Serving, TensorFlow Lite). For research prototypes, TensorFlow is viable, but it’s more commonplace to use lightweight frameworks for easier experimentation.

Can TensorFlow run on a GPU?

Yes, TensorFlow can run on GPUs and TPUs. Using a GPU can significantly speed up training, especially for deep learning models with large datasets. The best part is, TensorFlow will automatically use an available GPU if it’s properly configured.

What is loss in TensorFlow?

Loss (computed by a loss function) measures how far a model’s predictions are from the actual target values: in TensorFlow it is a single numerical value that training tries to minimize. A few examples include: 
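For intuition, here is a toy sketch in plain Python of two common losses (TensorFlow provides these as `tf.keras.losses.MeanSquaredError` and `tf.keras.losses.SparseCategoricalCrossentropy`; this is not the actual API, just the underlying arithmetic):

```python
import math

def mse(y_true, y_pred):
    """Mean squared error: average squared distance (typical for regression)."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(true_index, predicted_probs):
    """Sparse categorical cross-entropy for one example (typical for classification)."""
    return -math.log(predicted_probs[true_index])

print(mse([1.0, 2.0], [1.0, 3.0]))          # 0.5
print(cross_entropy(0, [0.9, 0.05, 0.05]))  # ~0.105: confident and correct = low loss
```

Note how cross-entropy rewards confident correct predictions: a probability of 0.9 on the true class gives a loss near 0.1, while a probability of 0.1 would give a loss near 2.3.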

How many epochs should I use?

There’s no set number of epochs to use, as it depends on your dataset and model. Typical approaches cover: 

An epoch is one full pass through your training data. Too few passes lead to underfitting, and too many can cause overfitting. The sweet spot is where your model generalizes best to unseen data. 
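In practice, that sweet spot is often found automatically with early stopping: watch the validation loss and stop once it stops improving. Keras has a built-in `EarlyStopping` callback for this; the core logic boils down to a toy sketch like the following:

```python
def best_epoch(val_losses, patience=3):
    """Return the epoch with the lowest validation loss, stopping once the
    loss hasn't improved for `patience` consecutive epochs."""
    best, best_i, waited = float("inf"), 0, 0
    for i, loss in enumerate(val_losses):
        if loss < best:
            best, best_i, waited = loss, i, 0
        else:
            waited += 1
            if waited >= patience:
                break  # overfitting territory: stop training
    return best_i

# Validation loss dips, then rises as the model starts overfitting:
losses = [0.9, 0.6, 0.45, 0.40, 0.42, 0.47, 0.55, 0.70]
print(best_epoch(losses))  # 3
```

With patience set too low you may stop during a temporary plateau; too high and you waste compute, which is exactly the trade-off the callback's `patience` parameter controls.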

About the author

Iulia Feroli

Iulia’s mission is to make tech exciting, understandable, and accessible to the new generation.

With a background spanning data science, AI, cloud architecture, and open source, she brings a unique perspective on bridging technical depth with approachability.

She’s building her own brand, Back To Engineering, through which she creates a community for tech enthusiasts, engineers, and makers. From YouTube videos on building robots from scratch, to conference talks and keynotes about real, grounded AI, to technical blogs and tutorials, Iulia shares her message worldwide on how to turn complex concepts into tools developers can use every day.

April 07, 2026 10:36 AM UTC


PyCon

Stories from the PyCon US Hotels

Friendships, collaborations, and breakthroughs 

The fun, the learning, and the inspiration don't stop when you walk out of the convention center. Some of the most memorable moments from PyCon US happen in the lobby at 10 pm, laughing with someone you only knew as a username until an hour ago; over breakfast, where a casual conversation turns into a collaboration that lasts years; and on the walks to and from the conference. PyCon US hotels have their own lore.

We asked people about their experiences and were overwhelmed by the responses: it turns out that everyone has a story!

"One story stands out to me beyond getting to know each other and sharing ideas. When I was getting ready to give my first PyCon talk in Montreal, Selena Deckelmann offered to help review my slides and listen to me practice. We spent a few hours on the floor of her hotel room prepping while her very young daughter crawled around on the floor and chewed on my PyCon badge since she was teething. It's still one of my favorite PyCon and PyLadies memories.” - Carol Willing, Willing Consulting

“The hotel lobby last year turned into a makeshift meetup after the PyLadies Auction. People were having a great time at the auction and kept the energy going in the lobby afterward. Everyone was there, even those who hadn't attended the auction. Luckily, the hotel also sold my favorite chocolate milk in the lobby, so I got to end my evening drinking milk and chatting with Python friends.” - Cheuk Ting Ho (the PyLady who loves the auction and karaoke)

"In Pittsburgh a couple of years ago I was having breakfast at the hotel, when a guy I didn't know spotted my Python T-shirt and introduced himself. It was his first PyCon and my 21st, and we ended up having breakfast together. I gave him a few tips on enjoying a PyCon, but it turned out he was also a guitarist, so we spent most of breakfast talking about music and playing guitar.” - Naomi Ceder, former board chair and loooong time PyCon goer

"I ran into Trey Hunner during my first PyCon US in the hotel lobby as a PSF employee. He was running a Cabo game. He immediately welcomed me and showed me how to play. (He’s a great teacher, so I won three rounds in a row!) I also met a bunch of lovely people who have been attending PyCon US for years and years, and I learned that there is almost always a Cabo game in the hotel lobby." - Deb Nicholson (PSF Executive Director & resident Cabo shark)

“One of my most memorable hotel lobby moments was a chance encounter with Thomas Wouters. We fell into a natural conversation about his work and his deep, genuine pride in the Python Software Foundation community. He spoke warmly about the people who make the community what it is and what it means to him to be part of it. What I didn’t know at the time was that just three days later, he would be called up on stage and announced as a Distinguished Service Award recipient — one of the highest honors the Python Software Foundation gives.” - Abigail Dogbe, PSF Board Member

“Juggling in the hotel lobby turned into an unexpected highlight of the conference. We had started teaching each other — my fault entirely for bringing the juggling balls — when a teenager and his mom wandered through on their way to see Pearl Jam. The kid's eyes lit up the moment he saw us, so I waved them over and started teaching him. Turns out they'd booked that very hotel hoping to cross paths with the band. He was excited about everything, and she was right there with him, every bit as thrilled.” - Ned Batchelder, Python Core Team and Netflix, Software Engineer

And this year, instead of sitting in LA-to-Long Beach traffic, consider staying in the official conference hotel block because there's too much to miss if you're too far away.

Real Talk: Why booking a room via PyCon US matters 

If you're planning to attend PyCon US, please consider booking your stay within the official conference hotel block.

When attendees reserve rooms through the block, it helps the conference meet its contractual commitments with the venue, which directly impacts the overall cost of hosting the event.

Strong participation in the hotel block helps PyCon US:

Keep registration prices as low as possible while continuing to invest in programs that support our community, like travel grants, accessibility services, and community events.

When rooms go unfilled in the block, the conference incurs major financial penalties that ultimately make the event more expensive to run for everyone.

By booking in the hotel block, you are giving back and helping keep PyCon US sustainable and affordable for the entire Python community.


PSST! Exclusive swag when you book a room. We can't say more.

Attendees who book within the official hotel block this year will receive a special mystery swag item. We can't tell you what it is. That's why it's called mystery swag. But we can tell you the only way to get it is to book in the official PyCon US hotel block. 

Where to stay: official PyCon US 2026 hotel block

All hotels are in Long Beach, within easy reach of the Long Beach Convention Center.

The Westin Long Beach: Spacious rooms and great amenities, and the block still has availability. Book here

Hyatt Regency Long Beach: The conference headquarters hotel, closest to the convention center (just about connected). Book here

Marriott Long Beach Downtown: A solid choice with easy access to the convention center and the waterfront. Book here

Courtyard by Marriott Long Beach Downtown: A comfortable, more affordable option still within the block. Book here

April 07, 2026 10:30 AM UTC


Stéphane Wirtel

Ce livre Python que je voulais juste mettre à jour (That Python book I just wanted to update)

Last August, I announced the relaunch of this book with a certain naivety: I had dug up my 2014 PDF, extracted the Markdown with Docling, and assembled a Longform → Pandoc → Typst pipeline. I figured it would be a matter of a few weeks: update the versions, add a few chapters, wrap it up.

Eight months later, the scope has tripled, the toolchain has been rewritten, and the way I work has completely changed. It’s not what I had planned. It’s better.

April 07, 2026 12:00 AM UTC

April 06, 2026


ListenData

How to Use Gemini API in Python

Integrating Gemini API with Python

In this tutorial, you will learn how to use Google's Gemini AI model through its API in Python.

Updated (April 3, 2026): This tutorial has been updated for the latest Gemini models, including Gemini 3.1 Flash and Gemini 3.1 Pro. It now supports real-time search, multimodal generation, and the latest Flash/Pro model aliases such as gemini-flash-latest and gemini-pro-latest.
Steps to Access Gemini API

Follow the steps below to access the Gemini API and then use it in Python.

  1. Visit Google AI Studio website.
  2. Sign in using your Google account.
  3. Create an API key.
  4. Install the Google AI Python library for the Gemini API using the command below:
    pip install google-genai
To read this article in full, please click here

April 06, 2026 06:14 PM UTC


PyCon

Python and the Future of AI: Agents, Inference, and Edge AI

Finding AI insights and education at PyCon US 2026

While AI content is sprinkled throughout the event (how could it not be?), PyCon US features a dedicated The Future of AI with Python track, new this year and programmed by Elaine Wong, PyCon US Chair; Jon Banafato, PyCon US Co-Chair; and Philip Gagnon, Program Committee Chair. According to JetBrains' State of Developer Ecosystem 2025 report, 85% of developers now regularly use AI tools for coding and development (which tells us that you are probably doing that too), and 62% rely on at least one AI coding assistant, agent, or code editor. Looking ahead, nearly half of all developers (49%) plan to try AI coding agents in the coming year. The eight sessions in this track map onto those priorities, covering everything from running LLMs on your laptop to building real-time voice agents. Take a look at the big themes and the sessions and tutorials you won't want to miss, in our new track and throughout the event. 

Let’s start with newbies: if you or someone on your team is just getting started with ML, Corey Wade's Wednesday tutorial Your First Machine Learning Models: How to Build Them in Scikit-learn is the perfect entry point, a hands-on introduction to the building blocks that underpin so much of what's discussed in the talks.

LLMs Are Moving to the Edge

One of the most significant shifts in AI right now is the move toward running models locally on laptops, browsers, and devices, rather than in centralized cloud infrastructure. Want to know more? Check out: Running Large Language Models on Laptops: Practical Quantization Techniques in Python from Aayush Kumar JVS, a hands-on look at how quantization makes large models practical on consumer hardware. Fabio Pliger takes a look at the role of the browser with Distributing AI with Python in the Browser: Edge Inference and Flexibility Without Infrastructure, exploring how Python-powered inference can run client-side with no server required. If you've been watching the open-weights model explosion and wondering how to actually deploy these things, these two talks are for you.

Want to go deeper before the conference even starts? On Wednesday, May 13th, Isabel Michel's tutorial Implementing RAG in Python: Build a Retrieval-Augmented Generation System gives you hands-on experience building a retrieval-augmented generation pipeline from scratch, the practical foundation underneath a lot of modern LLM applications.

AI Agents and Async Python

Agentic AI, systems that take multi-step actions autonomously, is one of the defining developments of 2025 and continues to take the world by storm in 2026. But building agents that actually work in production requires getting async right. Aditya Mehra's Don't Block the Loop: Python Async Patterns for AI Agents digs into the concurrency pitfalls that trip up so many teams when they move from demo to deployment. This talk bridges a gap that many tutorials leave open: the gap between "I have a working agent" and "my agent works reliably at scale."

If you want a running start, Pamela Fox's Wednesday tutorial Build Your First MCP Server in Python is the perfect on-ramp: MCP (Model Context Protocol) is quickly becoming the standard way to give AI agents access to tools and data, and building one yourself is the fastest way to understand how agentic systems actually work under the hood.

AI and Open Source Sustainability

AI-Assisted Contributions and Maintainer Load by Paolo Melchiorre tackles a genuinely thorny question: as AI tools make it easier to generate pull requests, what happens to the maintainers on the receiving end? Drawing on real examples from projects like GNOME, OCaml, Python, and Django, Melchiorre examines how AI-generated contributions are shifting workload onto already time-constrained maintainers and what the open source community is doing about it. 

High-Performance Inference in Python

Python performance engineering is no longer optional for AI workloads. Yineng Zhang's High-Performance LLM Inference in Pure Python with PyTorch Custom Ops walks through the techniques for squeezing real speed out of inference pipelines without leaving the Python ecosystem (This one isn’t in the track, but it is so on point, I had to add it). Paired with Santosh Appachu Devanira Poovaiah's What Python Developers Need to Know About Hardware: A Practical Guide to GPU Memory, Kernel Scheduling, and Execution Models, Friday's track offers a practical hardware-to-application view of the performance stack that's increasingly essential for anyone building production AI systems.

Also, Catherine Nelson and Robert Masson's Thursday tutorial Going from Notebooks to Production Code is a great complement, bridging the gap between exploratory AI work and the kind of reliable, maintainable code that actually makes it into production systems.

Explainability and Responsible AI

As AI systems make more consequential decisions, the demand for explainability is only growing from regulators, from users, and from the developers building these systems. Jyoti Yadav's Building AI That Explains Itself: Why Your Card Got Declined uses a familiar real-world example to demonstrate how Python developers can build transparency into AI-driven decisions. It's a topic at the heart of current conversations about AI trust, and one that every practitioner should be thinking about.

Two tutorials round this theme out nicely: Neha's Wednesday session Causal Inference with Python teaches you how to move beyond correlation and reason about cause and effect in your data, a foundational skill for anyone building AI systems that need to explain why they made a decision. And on Thursday, Juliana Ferreira Alves' When KPIs Go Weird: Anomaly Detection with Python gives you practical tools for catching when your AI-powered systems go off the rails before your users do.

Voice AI and Multimodal Interfaces

Real-time voice is one of the fastest-moving areas in applied AI, and Camila Hinojosa Añez and Elizabeth Fuentes close out the AI track Friday evening with How to Build Your First Real-Time Voice Agent in Python (Without Losing Your Mind). This practical session covers the building blocks of voice agents in Python, a skill set that's quickly becoming table stakes for developers building consumer-facing AI products.

AI and Education

Sonny Mupfuni's AI-Powered Python Education: Towards Adaptive and Inclusive Learning explores how Python can power learning that adapts to the student, and Gift Ojeabulu's Making African Languages Visible: A Python-Based Guide to Low-Resource Language ID takes on one of NLP's most persistent blind spots, the languages that dominant datasets routinely leave out. Don't skip these. These sessions represent Python's role not just in building AI products, but in democratizing access to AI's benefits.

Friday's AI track is a rare chance to hear from practitioners who are building real things in production, not just demoing prototypes. Whether you're a Python developer who's been watching the AI wave from the sidelines or a team already shipping AI features who wants to sharpen your craft, clear your schedule and pull up a chair. And a big THANK YOU to Anaconda and NVIDIA for sponsoring this track!

Register for PyCon US 2026

We'll see you in Long Beach.


PyCon US 2026 takes place May 13–19 in Long Beach, California. The Future of AI with Python talk track runs Friday, May 15th.


April 06, 2026 05:36 PM UTC


Trey Hunner

Using a ~/.pdbrc file to customize the Python Debugger

Did you know that you can customize the Python debugger (PDB) by creating custom aliases within a .pdbrc file in your home directory or Python’s current working directory?

I recently learned this and I’d like to share a few helpful aliases that I now have access to in my PDB sessions thanks to my new ~/.pdbrc file.

The aliases in my ~/.pdbrc file

Here’s my new ~/.pdbrc file:

# Custom PDB aliases

# dir obj: print non-dunder attributes and methods
alias dir print(*(f"%1.{n} = {v!r}" for n, v in __import__('inspect').getmembers(%1) if not n.startswith("__")), sep="\n")

# attrs obj: print non-dunder data attributes
alias attrs import inspect as __i ;; print(*(f"%1.{n} = {v!r}" for n, v in __i.getmembers(%1, lambda v: not __i.isroutine(v)) if not n.startswith("__")), sep="\n") ;; del __i

# vars obj: print instance variables (object must have __dict__)
alias vars print(*(f"%1.{k} = {v!r}" for k, v in vars(%1).items()), sep="\n")

# src obj: print source file, line number, and code where a class/function was defined
alias src import inspect as __i;; print(f"{__i.getsourcefile(%1)} on line {__i.getsourcelines(%1)[1]}:\n{''.join(__i.getsource(%1))}") ;; del __i

# loc: print local variables from current frame
alias loc print(*(f"{name} = {value!r}" for name, value in vars().items() if not name.startswith("__")), sep="\n")

This allows me to use:

You might wonder “Can’t I use dir(x) instead of dir x and vars(x) instead of vars x and locals() instead of loc?”

You can!… but those aliases print things out in a nicer format.

A demo of these 5 aliases

Let’s use -m pdb -m calendar to launch Python’s calendar module from the command line while dropping into PDB immediately:

$ python -m pdb -m calendar
> /home/trey/.local/share/uv/python/cpython-3.15.0a3-linux-x86_64-gnu/lib/python3.15/calendar.py(1)<module>()
-> """Calendar printing functions
(Pdb)

Then we’ll set a breakpoint after lots of stuff has been defined and continue to that breakpoint:

(Pdb) b 797
Breakpoint 1 at /home/trey/.local/share/uv/python/cpython-3.15.0a3-linux-x86_64-gnu/lib/python3.15/calendar.py:797
(Pdb) c
> /home/trey/.local/share/uv/python/cpython-3.15.0a3-linux-x86_64-gnu/lib/python3.15/calendar.py(797)<module>()
-> firstweekday = c.getfirstweekday
(Pdb) l
792
793
794     # Support for old module level interface
795     c = TextCalendar()
796
797 B-> firstweekday = c.getfirstweekday
798
799     def setfirstweekday(firstweekday):
800         if not MONDAY <= firstweekday <= SUNDAY:
801             raise IllegalWeekdayError(firstweekday)
802         c.firstweekday = firstweekday

The string representation for that c variable doesn’t tell us much:

(Pdb) !c
<__main__.TextCalendar object at 0x7416de93af90>

If we use the dir alias, we’ll see every attribute that’s accessible on c printed in a pretty friendly format:

(Pdb) dir c
c._firstweekday = 0
c.firstweekday = 0
c.formatday = <bound method TextCalendar.formatday of <__main__.TextCalendar object at 0x7416de93af90>>
c.formatmonth = <bound method TextCalendar.formatmonth of <__main__.TextCalendar object at 0x7416de93af90>>
c.formatmonthname = <bound method TextCalendar.formatmonthname of <__main__.TextCalendar object at 0x7416de93af90>>
c.formatweek = <bound method TextCalendar.formatweek of <__main__.TextCalendar object at 0x7416de93af90>>
c.formatweekday = <bound method TextCalendar.formatweekday of <__main__.TextCalendar object at 0x7416de93af90>>
c.formatweekheader = <bound method TextCalendar.formatweekheader of <__main__.TextCalendar object at 0x7416de93af90>>
c.formatyear = <bound method TextCalendar.formatyear of <__main__.TextCalendar object at 0x7416de93af90>>
c.getfirstweekday = <bound method Calendar.getfirstweekday of <__main__.TextCalendar object at 0x7416de93af90>>
c.itermonthdates = <bound method Calendar.itermonthdates of <__main__.TextCalendar object at 0x7416de93af90>>
c.itermonthdays = <bound method Calendar.itermonthdays of <__main__.TextCalendar object at 0x7416de93af90>>
c.itermonthdays2 = <bound method Calendar.itermonthdays2 of <__main__.TextCalendar object at 0x7416de93af90>>
c.itermonthdays3 = <bound method Calendar.itermonthdays3 of <__main__.TextCalendar object at 0x7416de93af90>>
c.itermonthdays4 = <bound method Calendar.itermonthdays4 of <__main__.TextCalendar object at 0x7416de93af90>>
c.iterweekdays = <bound method Calendar.iterweekdays of <__main__.TextCalendar object at 0x7416de93af90>>
c.monthdatescalendar = <bound method Calendar.monthdatescalendar of <__main__.TextCalendar object at 0x7416de93af90>>
c.monthdays2calendar = <bound method Calendar.monthdays2calendar of <__main__.TextCalendar object at 0x7416de93af90>>
c.monthdayscalendar = <bound method Calendar.monthdayscalendar of <__main__.TextCalendar object at 0x7416de93af90>>
c.prmonth = <bound method TextCalendar.prmonth of <__main__.TextCalendar object at 0x7416de93af90>>
c.prweek = <bound method TextCalendar.prweek of <__main__.TextCalendar object at 0x7416de93af90>>
c.pryear = <bound method TextCalendar.pryear of <__main__.TextCalendar object at 0x7416de93af90>>
c.setfirstweekday = <bound method Calendar.setfirstweekday of <__main__.TextCalendar object at 0x7416de93af90>>
c.yeardatescalendar = <bound method Calendar.yeardatescalendar of <__main__.TextCalendar object at 0x7416de93af90>>
c.yeardays2calendar = <bound method Calendar.yeardays2calendar of <__main__.TextCalendar object at 0x7416de93af90>>
c.yeardayscalendar = <bound method Calendar.yeardayscalendar of <__main__.TextCalendar object at 0x7416de93af90>>

If we use attrs we’ll see just the non-method attributes:

(Pdb) attrs c
c._firstweekday = 0
c.firstweekday = 0

And vars will show us just the attributes that live as proper instance attributes in that object’s __dict__ dictionary:

(Pdb) vars c
c._firstweekday = 0

The src alias can be used to see the source code for a given method:

(Pdb) src c.prmonth
/home/trey/.local/share/uv/python/cpython-3.15.0a3-linux-x86_64-gnu/lib/python3.15/calendar.py on line 404:
    def prmonth(self, theyear, themonth, w=0, l=0):
        """
        Print a month's calendar.
        """
        print(self.formatmonth(theyear, themonth, w, l), end='')

And the loc alias will show us all the local variables defined in the current scope:

(Pdb) loc
APRIL = __main__.APRIL
AUGUST = __main__.AUGUST
Calendar = <class '__main__.Calendar'>
DECEMBER = __main__.DECEMBER
Day = <enum 'Day'>
FEBRUARY = __main__.FEBRUARY
FRIDAY = __main__.FRIDAY
HTMLCalendar = <class '__main__.HTMLCalendar'>
IllegalMonthError = <class '__main__.IllegalMonthError'>
IllegalWeekdayError = <class '__main__.IllegalWeekdayError'>
IntEnum = <enum 'IntEnum'>
JANUARY = __main__.JANUARY
JULY = __main__.JULY
JUNE = __main__.JUNE
LocaleHTMLCalendar = <class '__main__.LocaleHTMLCalendar'>
LocaleTextCalendar = <class '__main__.LocaleTextCalendar'>
MARCH = __main__.MARCH
MAY = __main__.MAY
MONDAY = __main__.MONDAY
Month = <enum 'Month'>
NOVEMBER = __main__.NOVEMBER
OCTOBER = __main__.OCTOBER
SATURDAY = __main__.SATURDAY
SEPTEMBER = __main__.SEPTEMBER
SUNDAY = __main__.SUNDAY
THURSDAY = __main__.THURSDAY
TUESDAY = __main__.TUESDAY
TextCalendar = <class '__main__.TextCalendar'>
WEDNESDAY = __main__.WEDNESDAY
_CLIDemoCalendar = <class '__main__._CLIDemoCalendar'>
_CLIDemoLocaleCalendar = <class '__main__._CLIDemoLocaleCalendar'>
_get_default_locale = <function _get_default_locale at 0x7416de7c0930>
_locale = <module 'locale' from '/home/trey/.local/share/uv/python/cpython-3.15.0a3-linux-x86_64-gnu/lib/python3.15/locale.py'>
_localized_day = <class '__main__._localized_day'>
_localized_month = <class '__main__._localized_month'>
_monthlen = <function _monthlen at 0x7416de7c0720>
_nextmonth = <function _nextmonth at 0x7416de7c0880>
_prevmonth = <function _prevmonth at 0x7416de7c07d0>
_validate_month = <function _validate_month at 0x7416de7c05c0>
c = <__main__.TextCalendar object at 0x7416de93af90>
datetime = <module 'datetime' from '/home/trey/.local/share/uv/python/cpython-3.15.0a3-linux-x86_64-gnu/lib/python3.15/datetime.py'>
day_abbr = <__main__._localized_day object at 0x7416de941090>
day_name = <__main__._localized_day object at 0x7416de93acf0>
different_locale = <class '__main__.different_locale'>
error = <class 'ValueError'>
global_enum = <function global_enum at 0x7416dee17480>
isleap = <function isleap at 0x7416dea92770>
leapdays = <function leapdays at 0x7416de7c0460>
mdays = [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
month_abbr = <__main__._localized_month object at 0x7416de9411d0>
month_name = <__main__._localized_month object at 0x7416de93ae40>
monthrange = <function monthrange at 0x7416de7c0670>
repeat = <class 'itertools.repeat'>
standalone_month_abbr = <__main__._localized_month object at 0x7416de954fc0>
standalone_month_name = <__main__._localized_month object at 0x7416de941310>
sys = <module 'sys' (built-in)>
weekday = <function weekday at 0x7416de7c0510>

~/.pdbrc isn’t as powerful as PYTHONSTARTUP

I also have a custom PYTHONSTARTUP file, which is launched every time I start a new Python REPL (see Handy Python REPL modifications). A PYTHONSTARTUP file is just Python code, which makes it easy to customize.

A ~/.pdbrc file is not Python code… it’s a very limited custom file format.

You may notice that every alias line defined in my ~/.pdbrc file is a bunch of code shoved all on one line. That’s because there’s no way to define an alias over multiple lines.

Also any variables assigned in an alias will leak into the surrounding scope… so I have a del statement in a couple of those aliases to clean up a stray variable assignment (from an import).

See the documentation on alias and the top of the debugger commands for more on how ~/.pdbrc files work.

April 06, 2026 02:30 PM UTC


Real Python

D-Strings Could End Your textwrap.dedent() Days and Other Python News for April 2026

If you’ve ever wrapped a multiline string in textwrap.dedent() and wondered why Python can’t just handle that for you, then your PEP has arrived. PEP 822 proposes d-strings, a new d"""...""" prefix that automatically strips leading indentation. It’s one of those small quality-of-life ideas that make you wonder why it didn’t exist already. The PEP is currently a draft proposal.

March also delivered Python 3.15.0 alpha 7 with lazy imports you can finally test and security patches across three older branches. On the ecosystem side, GPT-5.4 landed with a tool search feature that changes agentic workflows. Meanwhile, the Python Insider blog migration moved 307 posts to a new home without breaking a single URL. It’s time to get into the biggest Python news from the past month.

Join Now: Click here to join the Real Python Newsletter and you’ll never miss another Python tutorial, course, or news update.

Python Releases and PEP Highlights

March brought the penultimate alpha of Python 3.15 with a long-awaited feature that finally lets Python developers defer imports cleanly. On top of that, security patches landed for three older branches, and a fresh PEP proposal showed up that could clean up your multiline strings for good.

Python 3.15.0 Alpha 7: Lazy Imports Land

Python 3.15.0a7 dropped on March 10, the second-to-last alpha before the beta freeze on May 5. The headline feature you can finally test is PEP 810, explicit lazy imports. The Steering Council accepted PEP 810 back in November, but this is the first alpha where the implementation is available to try.

The idea is straightforward: prefix any import statement with lazy, and the module won’t actually load until you first access an attribute on it:

Python
lazy import json
lazy from datetime import timedelta

# The json module isn't loaded yet, so no startup cost

# Later, when you actually use it:
data = json.loads(payload)  # Now it loads

The PEP authors note that 17 percent of standard library imports are already placed inside functions to defer loading. Tools like Django’s management commands, Click-based CLIs, and codebases heavy on type checking often spend hundreds of milliseconds on imports they might never use. Lazy imports make that optimization explicit and clean, without scattering imports deep inside function bodies.
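For comparison, the optimization that lazy imports make explicit typically looks like this today: an import tucked inside the function that needs it (a minimal sketch; `pretty` is a hypothetical helper, not from the PEP):

```python
def pretty(payload):
    # Deferred import: json only loads the first time pretty() runs,
    # keeping module import time (and CLI startup) fast.
    import json
    return json.dumps(json.loads(payload), indent=2)

print(pretty('{"a": 1}'))
```

With PEP 810, the same deferral would be a single `lazy import json` at the top of the module, keeping the import visible where readers expect it.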

Note: Alpha 7 also continues to ship the JIT compiler improvements from earlier alphas, with 3–4 percent geometric mean gains on x86-64 Linux and 7–8 percent on AArch64 macOS. Alpha 8 is scheduled for April 7, with the beta phase starting May 5.

Security Releases: Python 3.12.13, 3.11.15, and 3.10.20

On March 3, Thomas Wouters released security-only patches across three older Python branches. The updates fix several CVEs, including two XML parsing vulnerabilities (CVE-2026-24515 and CVE-2026-25210), patched by upgrading the bundled libexpat to 2.7.4. Additional fixes cover an XML memory amplification bug and the rejection of control characters in HTTP headers and URL parsing.
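You can check which expat your interpreter bundles with the standard-library pyexpat module (a quick sketch; the 2.7.4 threshold comes from the release notes above):

```python
import pyexpat

# pyexpat exposes the bundled expat version as both a string
# and a (major, minor, micro) tuple of integers.
print(pyexpat.EXPAT_VERSION)  # e.g. "expat_2.7.4" on a patched build
print(pyexpat.version_info)

patched = pyexpat.version_info >= (2, 7, 4)
```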

If you’re still running Python 3.12 or older in production, applying these patches is highly recommended. Python 3.12 is now in security-fixes-only mode, so no binary installers are provided. You’ll need to build from source.

PEP 822: Dedented Multiline Strings (D-Strings)

PEP 822, authored by Inada Naoki, proposes a new d"""...""" string prefix that automatically strips leading indentation from multiline strings, using the same algorithm as textwrap.dedent().

Anyone who’s written a multiline SQL query or help text inside a function and battled with indentation knows the pain:

Python
import textwrap

# Before: awkward indentation or textwrap.dedent() wrapper
def get_query():
    return textwrap.dedent("""\
        SELECT name, email
        FROM users
        WHERE active = true
    """)

# With d-strings: clean and readable
def get_query():
    return d"""
        SELECT name, email
        FROM users
        WHERE active = true
    """

The d prefix combines with f, r, b, and even the upcoming t (template strings) prefixes. PEP 822 was submitted to the Steering Council on March 9 and targets Python 3.15, though a decision hasn’t landed yet. If you’ve ever wished Python strings would just handle indentation for you, this one’s worth keeping an eye on.
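Until a decision lands, textwrap.dedent() remains the way to get this behavior, and its rule is worth knowing since d-strings would use the same algorithm: only the longest whitespace prefix common to all non-blank lines is removed (a small sketch):

```python
import textwrap

text = "    SELECT name\n        FROM users\n"
# Both lines share a four-space prefix, so only that is stripped;
# the deeper indentation of the second line is preserved.
dedented = textwrap.dedent(text)
print(dedented)
```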

Other PEPs in Progress

Read the full article at https://realpython.com/python-news-april-2026/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 06, 2026 02:00 PM UTC

Quiz: For Loops in Python (Definite Iteration)

Test your understanding of For Loops in Python (Definite Iteration).

You’ll revisit Python loops, iterables, and how iterators behave. You’ll also explore set iteration order and the effects of the break and continue statements.



April 06, 2026 12:00 PM UTC


Python Bytes

#476 Common themes

<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://pydevtools.com/blog/migrating-from-mypy-to-ty-lessons-from-fastapi/?featured_on=pythonbytes">Migrating from mypy to ty: Lessons from FastAPI</a></strong></li> <li><strong><a href="https://oxyde.fatalyst.dev/latest/?featured_on=pythonbytes">Oxyde ORM</a></strong></li> <li><strong><a href="https://guoci.github.io/typeshedded_CPython_docs/library/functions.html?featured_on=pythonbytes">Typeshedded CPython docs</a></strong></li> <li><strong><a href="https://mkennedy.codes/posts/raw-dc-a-retrospective/?featured_on=pythonbytes">Raw+DC Database Pattern: A Retrospective</a></strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=tOM8fOhcNbI' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="476">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a></li> </ul> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> 
(bsky)</li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually <strong>Monday</strong> at 11am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</p> <p><strong>Brian #1: <a href="https://pydevtools.com/blog/migrating-from-mypy-to-ty-lessons-from-fastapi/?featured_on=pythonbytes">Migrating from mypy to ty: Lessons from FastAPI</a></strong></p> <ul> <li>Tim Hopper</li> <li>I saw this post by Sebastián Ramírez about all of his projects <a href="https://bsky.app/profile/tiangolo.com/post/3milnufxpcs2h?featured_on=pythonbytes">switching to ty</a> <ul> <li>FastAPI, Typer, SQLModel, Asyncer, FastAPI CLI</li> </ul></li> <li>SqlModel is already ty only - mypy removed</li> <li>This signals that ty is ready to use</li> <li>Tim lists some steps to apply ty to your own projects <ul> <li>Add ty alongside mypy</li> <li>Set <code>error-on-warning = true</code></li> <li>Accept the double-ignore comments</li> <li>Pick a smaller project to cut over first</li> <li>Drop mypy when the noise exceeds the signal</li> </ul></li> <li>Related anecdote: <ul> <li>I had tried out ty with <a href="https://github.com/okken/pytest-check?featured_on=pythonbytes">pytest-check</a> in the past with difficulty</li> <li>Tried it again this morning, only a few areas where mypy was happy but ty reported issues</li> <li>At least one ty warning was a potential problem for people running pre-releases of pytest</li> <li>Not really related: <a href="https://packaging.pypa.io/en/latest/version.html?featured_on=pythonbytes">packaging.version.parse</a> is awesome</li> </ul></li> </ul> <p><strong>Michael #2: <a
href="https://oxyde.fatalyst.dev/latest/?featured_on=pythonbytes">Oxyde ORM</a></strong></p> <ul> <li><strong>Oxyde ORM</strong> is a type-safe, Pydantic-centric asynchronous ORM with a high-performance Rust core.</li> <li>Note: Oxyde is a young project under active development. The API may evolve between minor versions.</li> <li>No sync wrappers or thread pools. Oxyde is async from the ground up</li> <li>Includes <a href="https://github.com/mr-fatalyst/oxyde-admin?featured_on=pythonbytes"><strong>oxyde-admin</strong></a></li> <li>Features <ul> <li><strong>Django-style API</strong> - Familiar <code>Model.objects.filter()</code> syntax</li> <li><strong>Pydantic v2 models</strong> - Full validation, type hints, serialization</li> <li><strong>Async-first</strong> - Built for modern async Python with <code>asyncio</code></li> <li><strong>Rust performance</strong> - SQL generation and execution in native Rust</li> <li><strong>Multi-database</strong> - PostgreSQL, SQLite, MySQL support</li> <li><strong>Transactions</strong> - <code>transaction.atomic()</code> context manager with savepoints</li> <li><strong>Migrations</strong> - Django-style <code>makemigrations</code> and <code>migrate</code> CLI</li> </ul></li> </ul> <p><strong>Brian #3:</strong> <a href="https://guoci.github.io/typeshedded_CPython_docs/library/functions.html?featured_on=pythonbytes">Typeshedded CPython docs</a></p> <ul> <li><a href="https://bsky.app/profile/emmatyping.dev/post/3mfhxrttu2s22?featured_on=pythonbytes"><strong>Thanks emmatyping for the suggestion</strong></a></li> <li>Documentation for Python with typeshed types</li> <li>Source: <a href="https://github.com/guoci/typeshedding_cpython_docs?featured_on=pythonbytes"><strong>typeshedding_cpython_docs</strong></a></li> </ul> <p><strong>Michael #4:</strong> <a href="https://mkennedy.codes/posts/raw-dc-a-retrospective/?featured_on=pythonbytes">Raw+DC Database Pattern: A Retrospective</a></p> <ul> <li>A new design pattern I’m seeing gain traction 
in the software space: <a href="https://mkennedy.codes/posts/raw-dc-the-orm-pattern-of-2026/?featured_on=pythonbytes">Raw+DC: The ORM pattern of 2026</a></li> <li>I’ve had a chance to migrate three of my most important web apps.</li> <li>Thrilled to report that yes, <strong>the web apps are much faster using Raw+DC</strong></li> <li>Plus, this was part of the journey to move from 1.3 GB memory usage to 0.45 GB (more on this next week)</li> </ul> <p><img src="https://cdn.mkennedy.codes/posts/raw-dc-a-retrospective/raw-dc-vs-mongoengine-graph.webp" alt="" /></p> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li><a href="https://courses.pythontest.com/lean-tdd/?featured_on=pythonbytes">Lean TDD 0.5 update</a> <ul> <li>Significant rewrite and focus</li> </ul></li> </ul> <p>Michael:</p> <ul> <li><a href="https://github.com/databooth/pytest-just?featured_on=pythonbytes">pytest-just</a> (for <a href="https://github.com/casey/just?featured_on=pythonbytes">just command file</a> testing), by Michael Booth</li> <li>Something going on with Encode <ul> <li><strong>httpx</strong>: <a href="https://www.reddit.com/r/Python/comments/1rl5kuq/anyone_know_whats_up_with_httpx/?featured_on=pythonbytes">Anyone know what's up with HTTPX?</a> And <a href="https://tildeweb.nl/~michiel/httpxyz.html?featured_on=pythonbytes">forked</a></li> <li><strong>starlette</strong> and <strong>uvicorn</strong>: <a href="https://github.com/Kludex/starlette/discussions/2997?featured_on=pythonbytes">Transfer of Uvicorn &amp; Starlette</a></li> <li><strong>mkdocs</strong>: <a href="https://fpgmaas.com/blog/collapse-of-mkdocs/?featured_on=pythonbytes">The Slow Collapse of MkDocs</a></li> <li><strong>django-rest-framework:</strong> <a href="https://github.com/django-commons/membership/issues/188#issue-3070631761">Move to django commons?</a></li> </ul></li> <li><a href="https://talkpython.fm/blog/posts/announcing-course-completion-certificates/?featured_on=pythonbytes">Certificates at Talk Python
Training</a></li> </ul> <p><strong>Joke:</strong> </p> <ul> <li><a href="https://x.com/PR0GRAMMERHUM0R/status/2021509552504525304?featured_on=pythonbytes"><strong>Neue Rich</strong></a></li> </ul>

April 06, 2026 08:00 AM UTC

April 05, 2026


EuroPython

Humans of EuroPython: George Zisopoulos

Behind every flawless talk, engaging workshop, and perfectly timed coffee break at EuroPython is a crew of unsung heroes—our volunteers! 🌟 Not just organizers, but dream enablers: printer ninjas, registration magicians, social butterflies, and even salsa instructors (yeah, that happened!) 

We’re the quiet force turning chaos into community, one sprint at a time. 💻✨ 

Curious who really makes the magic happen? Today we’d like to introduce George Zisopoulos, member of the Operations team at EuroPython 2025. 

George Zisopoulos, member of the Operations Team at EuroPython 2025

EP: What first inspired you to volunteer for EuroPython? And which edition of the conference was it?

I was inspired because I gave a presentation in 2020, and after that I wanted to experience the conference from the other side, as part of the volunteers. It was amazing to see how much work all these people had done for us as attendees, and I wanted to be a part of that.

So I applied and became an online volunteer in 2022 in Dublin, and the following year I joined EuroPython 2023 as an on-site volunteer. Once you start, you can’t stop doing it.

EP: Have you learned new skills while contributing to EuroPython? If so, which ones?

It’s less about learning new skills and more about discovering the ones you already have. With guidance and a supportive team, you feel confident using them and even pushing a bit past your comfort zone.

EP: What's your favorite memory from volunteering at the conference?

My favorite part is walking into the conference and unexpectedly running into someone you met at previous years’ editions. It’s like a little déjà vu. They hug you like you just saw them yesterday, even if it’s been a whole year.

EP: Did you make any lasting friendships or professional connections through volunteering?

Yes, I’ve made a few lasting friendships. We stay in touch all year, even though we live in different cities or countries. We visit each other, and often end up meeting in other countries while traveling.

EP: Any unexpected or funny experiences during the conference which you’d like to share?

I love coffee, so during the conference I’m usually wandering around with a cup in hand. Two years ago, thanks to some playful hits from friends, I ended up destroying three t-shirts with coffee during the conference! Now every year they wonder… How many shirts will I sacrifice this time?

EP: Would you volunteer again, and why?

I would say what I used to say last year: Summer without EuroPython just doesn’t really feel like a summer 😉 See you all there!

EP: Thank you for your contribution, George!

April 05, 2026 02:18 PM UTC

April 04, 2026


Marcos Dione

Correcting OpenStreetMap wrong tag values

As a hobbyist consumer of OSM data to render maps, I find wrong tags annoying. Bad values mean the resulting map is wrong or incomplete, and so less useful. I decided to attack the most egregious ones, which include typos, street names instead of highway types, and some other errors. The idea is to start with the long tail, so I'm not blocked when the next batch of errors (objects with exactly the same error) looks too big (yes, OCD).

So I hacked together a small Python script to help me find and edit them:

#! /usr/bin/env python3

import os

import psycopg2


def main():
    db = psycopg2.connect(dbname='europe')
    cursor = db.cursor()

    cursor.execute('''
        SELECT
            count(*) AS count,
            highway
        FROM planet_osm_line
        WHERE
            highway IS NOT NULL
        GROUP BY highway
        ORDER BY count ASC
    ''')
    data = cursor.fetchall()

    for count, highway in data:
        print(f"next {count}: {highway}")

        cursor.execute('''
            SELECT osm_id
            FROM planet_osm_line
            WHERE
                highway = %s
        ''', (highway, ))

        for (osm_id, ) in cursor.fetchall():
            if osm_id < 0:
                # in rendering DBs, this is a relation
                os.system(f"librewolf -P default 'https://www.openstreetmap.org/edit?relation={-osm_id}'")
            else:
                os.system(f"librewolf -P default 'https://www.openstreetmap.org/edit?way={osm_id}'")


if __name__ == '__main__':
    main()

It is quite inefficient, but what I want is to edit the errors, not to write a script :) This requires a rendering database, which I already have locally :)
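As an aside, the negative-id convention from the script can be factored into a small helper (a sketch; build_edit_url is a hypothetical name, and the stdlib webbrowser module could stand in for the hard-coded librewolf call):

```python
import webbrowser


def build_edit_url(osm_id):
    # osm2pgsql-style rendering databases store relations with negated
    # ids, so a negative osm_id maps to a relation, a positive to a way.
    if osm_id < 0:
        return f"https://www.openstreetmap.org/edit?relation={-osm_id}"
    return f"https://www.openstreetmap.org/edit?way={osm_id}"


# webbrowser.open(build_edit_url(-123))  # would open the relation editor
```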

From here the workflow is:

On my machine, finding the long tail and fetching each set of errors takes about a minute, so I was launching two at the same time. One thing to notice: if the object you try to edit doesn't exist anymore, you get an edit view of the whole planet.

April 04, 2026 10:16 AM UTC