Planet Python
Last update: January 27, 2026 09:44 PM UTC
January 27, 2026
PyCoder’s Weekly
Issue #719: Django Tasks, Dictionaries, Ollama, and More (Jan. 27, 2026)
#719 – JANUARY 27, 2026
View in Browser »
Migrating From Celery to Django Tasks
Django 6 introduced the new tasks framework, a general interface for asynchronous tasks. This article shows you how to move from Celery-specific code to the new general-purpose mechanism.
PAUL TRAYLOR
The Hidden Cost of Python Dictionaries
Learn why Python dicts cause silent bugs and how NamedTuple, dataclass, and Pydantic catch errors earlier with better error messages.
CODECUT.AI • Shared by Khuyen Tran
Python Errors? Fix ‘em Fast for FREE with Honeybadger
If you support web apps in production, you need intelligent logging with error alerts and de-duping. Honeybadger filters out the noise and transforms Python logs into contextual issues so you can find and fix errors fast. Get your FREE account →
HONEYBADGER sponsor
How to Integrate Local LLMs With Ollama and Python
Learn how to integrate your Python projects with local large language models (LLMs) using Ollama for enhanced privacy and cost efficiency.
REAL PYTHON
Articles & Tutorials
Nothing to Declare: From NaN to None via null
Explore the key differences between NaN, null, and None in numerical data handling using Python. While all signal “no meaningful value,” they behave differently. Learn about the difference and how to correctly handle the data using Pydantic models and JSON serialization.
FMULARCZYK.PL • Shared by Filip Mularczyk
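A quick sketch of the behavioral differences the article explores, using only the standard library:

```python
import json
import math

nan = float("nan")
assert nan != nan          # NaN is never equal to anything, including itself
assert math.isnan(nan)     # so you must test for it explicitly

assert None is None        # None is a singleton, compared by identity

# JSON only has null; Python's json module maps None <-> null,
# while NaN serializes to the non-standard token NaN by default.
assert json.dumps({"x": None}) == '{"x": null}'
assert json.dumps({"x": nan}) == '{"x": NaN}'
```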
Continuing to Improve the Learning Experience at Real Python
If you haven’t visited the Real Python website lately, then it’s time to check out a great batch of updates on realpython.com! Dan Bader returns to the show this week to discuss improvements to the site and more ways to learn Python.
REAL PYTHON podcast
The Ultimate Guide to Docker Build Cache
Docker builds feel slow because cache invalidation is working against you. Depot explains how BuildKit’s layer caching works, when to use bind mounts vs cache mounts, and how to optimize your Dockerfile so Gradle dependencies don’t rebuild on every code change →
DEPOT sponsor
The State of WebAssembly: 2025 and 2026
A comprehensive look at WebAssembly in 2025 and 2026, covering browser support, Safari updates, WebAssembly 3.0, WASI, .NET, Kotlin, debugging improvements, and growing adoption across edge computing and embedded devices.
GERARD GALLANT
Asyncio Is Neither Fast Nor Slow
There are many misconceptions about asyncio, and as a result many misleading benchmarks out there. This article looks at how to analyse a benchmark result and come up with more relevant conclusions.
CHANGS.CO.UK • Shared by Jamie Chang
Expertise Is the Art of Ignoring
Kevin says that trying to “master” a programming language is a trap. Real expertise comes from learning what you need, when you need it, and ignoring the rest on purpose.
KEVIN RENSKERS
uv vs pip: Python Packaging and Dependency Management
Choosing between uv vs pip? This video course compares speed, reproducible environments, compatibility, and dependency management to help you pick the right tool.
REAL PYTHON course
Ee Durbin Departing the Python Software Foundation
Ee Durbin is a long-time contributor to Python and was heavily involved in the community even before becoming a staff member at the PSF. Ee is now moving on.
PYTHON SOFTWARE FOUNDATION
Self-Concatenation
Strings and other sequences can be multiplied by numbers to self-concatenate them. You need to be careful with mutable sequences though.
TREY HUNNER
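The gotcha the blurb alludes to can be sketched in a few lines:

```python
# Multiplying a sequence by an integer self-concatenates it:
assert "ab" * 3 == "ababab"
assert (1, 2) * 2 == (1, 2, 1, 2)

# The catch with mutable sequences: the repeated elements are the
# SAME objects, so a grid built this way shares its rows.
grid = [[0] * 3] * 2
grid[0][0] = 9
assert grid == [[9, 0, 0], [9, 0, 0]]  # both rows changed!

# A list comprehension creates independent rows instead:
grid = [[0] * 3 for _ in range(2)]
grid[0][0] = 9
assert grid == [[9, 0, 0], [0, 0, 0]]
```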
Python, Is It Being Killed by Incremental Improvements?
This opinion piece asks whether Python’s recent focus on concurrency is a misstep and whether efforts should be focused elsewhere.
STEFAN-MARR.DE
How to Parametrize Exception Testing in PyTest?
A quick TIL-style article on how to provide different input data and test different exceptions being raised in pytest.
BORUTZKI
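A minimal sketch of the idea (assuming pytest is installed; the test case and inputs here are illustrative, not from the article):

```python
import pytest

@pytest.mark.parametrize(
    "value, expected_exception",
    [
        ("abc", ValueError),  # not a number
        (None, TypeError),    # wrong type entirely
    ],
)
def test_int_conversion_raises(value, expected_exception):
    # pytest.raises fails the test if the expected exception is NOT raised
    with pytest.raises(expected_exception):
        int(value)
```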
Projects & Code
django-nis2-shield: NIS2 Compliance Middleware
GITHUB.COM/NIS2SHIELD • Shared by Fabrizio Di Priamo
pfst: AST Manipulation That Preserves Formatting
GITHUB.COM/TOM-PYTEL • Shared by Tomasz Pytel
Events
Weekly Real Python Office Hours Q&A (Virtual)
January 28, 2026
REALPYTHON.COM
Python Devroom @ FOSDEM 2026
January 31 to February 1, 2026
FOSDEM.ORG
Melbourne Python Users Group, Australia
February 2, 2026
J.MP
PyBodensee Monthly Meetup
February 2, 2026
PYBODENSEE.COM
STL Python
February 5, 2026
MEETUP.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #719.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
death and gravity
DynamoDB crash course: part 1 – philosophy
This is part one of a series covering core DynamoDB concepts and patterns, from the data model and features all the way up to single-table design.
The goal is to get you to understand what idiomatic usage looks like and what the trade-offs are in under an hour, providing entry points to detailed documentation.
(Don't get me wrong, the AWS documentation is comprehensive, but can be quite complex, and DynamoDB being a relatively low level product with lots of features added over the years doesn't really help with that.)
Today, we're looking at what DynamoDB is and why it is the way it is.
What is DynamoDB? #
Quoting Wikipedia:
Amazon DynamoDB is a managed NoSQL database service provided by AWS. It supports key-value and document data structures and is designed to handle a wide range of applications requiring scalability and performance.
See also
This definition should suffice for now; for a more detailed refresher, see:
The DynamoDB data model can be summarized as follows:
A table is a collection of items, and an item is a collection of named attributes. Items are uniquely identified by a partition key attribute and an optional sort key attribute. The partition key determines where (i.e. on what computer) an item is stored. The sort key is used to get ordered ranges of items from a specific partition.
That's it, that's the whole data model. Sure, there are indexes and transactions and other features, but at its core, this is it. Put another way:
A DynamoDB table is a hash table of B-trees1 – partition keys are hash table keys, and sort keys are B-tree keys. Because of this, any access not based on partition and sort key is expensive, since you end up doing a full table scan.
If you were to implement this model in Python, it'd look something like this:
from collections import defaultdict
from sortedcontainers import SortedDict

class Table:

    def __init__(self, pk_name, sk_name):
        self._pk_name = pk_name
        self._sk_name = sk_name
        self._partitions = defaultdict(SortedDict)

    def put_item(self, item):
        pk, sk = item[self._pk_name], item[self._sk_name]
        old_item = self._partitions[pk].setdefault(sk, {})
        old_item.clear()
        old_item.update(item)

    def get_item(self, pk, sk):
        return dict(self._partitions[pk][sk])

    def query(self, pk, minimum=None, maximum=None, inclusive=(True, True), reverse=False):
        # in the real DynamoDB, this operation is paginated
        partition = self._partitions[pk]
        for sk in partition.irange(minimum, maximum, inclusive, reverse):
            yield dict(partition[sk])

    def scan(self):
        # in the real DynamoDB, this operation is paginated
        for partition in self._partitions.values():
            for item in partition.values():
                yield dict(item)

    def update_item(self, item):
        pk, sk = item[self._pk_name], item[self._sk_name]
        old_item = self._partitions[pk].setdefault(sk, {})
        old_item.update(item)

    def delete_item(self, pk, sk):
        del self._partitions[pk][sk]
>>> table = Table('Artist', 'Song')
>>>
>>> table.put_item({'Artist': '1000mods', 'Song': 'Vidage', 'Year': 2011})
>>> table.put_item({'Artist': '1000mods', 'Song': 'Claws', 'Album': 'Vultures'})
>>> table.put_item({'Artist': 'Kyuss', 'Song': 'Space Cadet'})
>>>
>>> table.get_item('1000mods', 'Claws')
{'Artist': '1000mods', 'Song': 'Claws', 'Album': 'Vultures'}
>>> [i['Song'] for i in table.query('1000mods')]
['Claws', 'Vidage']
>>> [i['Song'] for i in table.query('1000mods', minimum='Loose')]
['Vidage']
Philosophy #
One can't help but feel this kind of simplicity would be severely limiting.
A consequence of DynamoDB being this low level is that, unlike with most relational databases, query planning and sometimes index management happen at the application level, i.e. you have to do them yourself in code. In turn, this means you need to have a clear, upfront understanding of your application's access patterns, and accept that changes in access patterns will require changes to the application.
In return, you get a fully managed, highly-available database that scales infinitely:2 there are no servers to take care of, there's almost no downtime, and there are no limits on table size or the number of items in a table; where limits do exist, they are clearly documented, allowing for predictable performance.
This highlights an intentional design decision that is essentially DynamoDB's main proposition to you as its user: data modeling complexity is always preferable to complexity coming from infrastructure maintenance, availability, and scalability (what AWS marketing calls "undifferentiated heavy lifting").
To help manage this complexity, a number of design patterns have arisen, covered extensively by the official documentation, and which we'll discuss in a future article. Even so, the toll can be heavy – by AWS's own admission, the prime disadvantage of single table design, the fundamental design pattern, is that:
[the] learning curve can be steep due to paradoxical design compared to relational databases
As this walkthrough puts it:
a well-optimized single-table DynamoDB layout looks more like machine code than a simple spreadsheet
...which, admittedly, sounds pretty cool, but also why would I want that? After all, most useful programming most people do is one or two abstraction levels above assembly, itself one over machine code.
See also
- NoSQL design
- (unofficial) The DynamoDB philosophy of limits
A bit of history #
Perhaps it's worth having a look at where DynamoDB comes from.
Amazon.com used Oracle databases for a long time. To cope with the increasing scale, they first adopted a database-per-service model, and then sharding, with all the architectural and operational overhead you would expect. At its 2017 peak (five years after DynamoDB was released in AWS, and over ten years after some version of it was available internally), they still had 75 PB of data in nearly 7500 Oracle databases, owned by 100+ teams, with thousands of applications, for OLTP workloads alone. That sounds pretty traumatic – it was definitely bad enough to allegedly ban OLTP relational databases internally, and require that teams get VP approval to use one.
Yeah, coming from that, it's hard to argue DynamoDB adds complexity.
That is not to say relational databases cannot be as scalable as DynamoDB, just that Amazon doesn't believe in them – distributed SQL databases like Google's Spanner and CockroachDB have existed for a while now, and even AWS seems to be warming up to the idea.
This might also explain why the design patterns are so slow to make their way into SDKs, or even better, into DynamoDB itself; when you have so many applications and so many experienced teams, the cost of yet another bit of code to do partition key sharding just isn't that great.
See also
- (paper) Amazon DynamoDB: A Scalable, Predictably Performant, and Fully Managed NoSQL Database Service (2022)
- (paper) Dynamo: Amazon’s Highly Available Key-value Store (2007)
Anyway, that's it for now.
In the next article, we'll have a closer look at the DynamoDB data model and features.
Learned something new today? Share it with others, it really helps!
Want to know when new articles come out? Subscribe here to get new stuff straight to your inbox!
Or any other sorted data structure that allows fast searches, sequential access, insertions, and deletions. [return]
As the saying goes, the cloud is just someone else's computers. Here, "infinitely" means it scales horizontally, and you'll run out of money before AWS runs out of computers. [return]
Real Python
Create Callable Instances With Python's .__call__()
In Python, a callable is any object that you can call using a pair of parentheses and, optionally, a series of arguments. Functions, classes, and methods are all common examples of callables in Python. Besides these, you can also create custom classes that produce callable instances. To do this, you can add the .__call__() special method to your class.
Instances of a class with a .__call__() method behave like functions, providing a flexible and handy way to add functionality to your objects. Understanding how to create and use callable instances is a valuable skill for any Python developer.
In this video course, you’ll:
- Understand the concept of callable objects in Python
- Create callable instances by adding a .__call__() method to your classes
- Compare .__init__() and .__call__() and understand their distinct roles
- Build practical examples that use callable instances to solve real-world problems
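The core idea can be illustrated in a few lines (the Multiplier class here is a minimal example, not from the course):

```python
class Multiplier:
    """Instances remember a factor and can be called like functions."""

    def __init__(self, factor):
        self.factor = factor

    def __call__(self, value):
        # Runs when the instance is used with call syntax: instance(arg)
        return self.factor * value

double = Multiplier(2)
assert callable(double)
assert double(21) == 42
```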
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
PyBites
The missing 66% of your skillset
Bob and I have spent many years as Python devs, and six years coaching with Pybites, and we can safely say that being a Senior Developer is only about one-third Python knowledge.
The other two-thirds is the ecosystem. It's the tooling. It's all of the tech around Python that makes you stand out from the rest.
This is the biggest blind spot keeping developers stuck in Tutorial Hell. You spend hours memorising obscure library features, but you crumble when asked to configure a CI/CD pipeline. (That’s not just made up by the way – many of you in dev roles will have seen this with colleagues at some point or another!)
These are the elements of the Python ecosystem you should absolutely be building experience with if you want to move from being a scripter to an engineer:
- Dependency Management: Stop using pip freeze. Look at uv.
- Git: Not just add/commit. Learn branching strategies and how to fix a merge conflict without panicking.
- Testing: print() is not a test. Learn pytest and how to write good tests.
- Quality Control: Set up Linters (Ruff) so you stop arguing about formatting, and ty for type checking.
- Automation: Learn GitHub Actions (CI/CD). Make the robots run your tests for you.
- Deployment: How does your code get to a server? Learn basic Docker and Cloud.
- The CLI: Stop clicking buttons and get comfortable in the terminal. Learn Makefiles and create a make install or make test command to save your sanity.
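To illustrate the last point, a minimal Makefile along these lines might look like this (the targets and commands are hypothetical; adapt them to your project):

```make
# Hypothetical minimal Makefile; assumes uv is installed.
install:
	uv sync

test:
	uv run pytest
```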
It looks like a lot. It is a lot. But this is the difference between a hobbyist and a professional.
Does this make you feel overwhelmed? Or does it give you a roadmap of what to do this year?
I’m curious! Feel free to hit me up in the Community with your thoughts.
And yes, these are all things we coach people on in PDM. Use the link below to have a chat.
Julian
This note was originally sent to our email list. Join here: https://pybit.es/newsletter
Seth Michael Larson
Use “\A...\z”, not “^...$” with Python regular expressions
Two years ago I discovered a potential foot-gun with the Python standard library "re" module. I blogged about this behavior, and it turns out I wasn't the only one who didn't know about it: the article was #1 on Hacker News and the most-read article on my blog in 2024. In short, the unexpected behavior is that the pattern "^Hello$" matches both "Hello" and "Hello\n", and sometimes you don't intend to match a trailing newline. This article serves as a follow-up!
Back in 2024 I created a table showing that \z was a partially viable alternative to $ for matching end-of-string without matching a trailing newline... for every regular expression implementation EXCEPT Python and ECMAScript.
But that is no longer true: Python 3.14 now supports \z! This means \z is one step closer to being the universal recommendation for matching the end of a string without matching a newline.
Obviously no one is upgrading their Python version just for this new feature, but it's good to know that the gap is being closed. Thanks to David Wheeler for doing deeper research in the OpenSSF Best Practices WG and publishing this report.
Until Python 3.13 is deprecated and long gone, using \Z (as an alias for \z) works fine for Python regular expressions. Just note that this behavior isn't the same across regular expression implementations: for example, ECMAScript, Golang, and Rust don't support \Z, and for PHP, Java, and .NET, \Z actually matches trailing newlines!
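The difference is easy to demonstrate in a quick sketch, using \Z (the alias that works on current Python versions):

```python
import re

# "$" also matches just before a trailing newline, so this pattern
# accepts "Hello\n" as well as "Hello":
assert re.match(r"^Hello$", "Hello") is not None
assert re.match(r"^Hello$", "Hello\n") is not None

# "\A...\Z" anchors to the true start and end of the string;
# in Python, \Z does NOT match before a trailing newline:
assert re.fullmatch(r"\AHello\Z", "Hello") is not None
assert re.fullmatch(r"\AHello\Z", "Hello\n") is None
```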
Thanks for keeping RSS alive! ♥
Armin Ronacher
Colin and Earendil
Regular readers of this blog will know that I started a new company. We have put out just a tiny bit of information today, and some keen folks have discovered and reached out by email with many thoughtful responses. It has been delightful.
Colin and I met here, in Vienna. We started sharing coffees, ideas, and lunches, and soon found shared values despite coming from different backgrounds and different parts of the world. We are excited about the future, but we’re equally vigilant of it. After traveling together a bit, we decided to plunge into the cold water and start a company together. We want to be successful, but we want to do it the right way and we want to be able to demonstrate that to our kids.
Vienna is a city of great history, two million inhabitants and a fascinating vibe that is nothing like San Francisco. In fact, Vienna is in many ways the polar opposite to the Silicon Valley, both in mindset, in opportunity and approach to life. Colin comes from San Francisco, and though I’m Austrian, my career has been shaped by years working with California companies and people from there who used my Open Source software. Vienna is now our shared home. Despite Austria being so far away from California, it is a place of tinkerers and troublemakers. It’s always good to remind oneself that society consists of more than just your little bubble. It also creates the necessary counter balance to think in these times.
The world that is emerging in front of our eyes is one of change. We incorporated as a PBC with a founding charter to craft software and open protocols, strengthen human agency, bridge division and ignorance and to cultivate lasting joy and understanding. Things we believe in deeply.
I have dedicated 20 years of my life in one way or another creating Open Source software. In the same way as artificial intelligence calls into question the very nature of my profession and the way we build software, the present day circumstances are testing society. We’re not immune to these changes and we’re navigating them like everyone else, with a mixture of excitement and worry. But we share a belief that right now is the time to stand true to one’s values and principles. We want to take an earnest shot at leaving the world a better place than we found it. Rather than reject the changes that are happening, we look to nudge them towards the right direction.
If you want to follow along you can subscribe to our newsletter, written by humans not machines.
January 26, 2026
Real Python
GeoPandas Basics: Maps, Projections, and Spatial Joins
GeoPandas extends pandas to make working with geospatial data in Python intuitive and powerful. If you’re looking to do geospatial tasks in Python and want a library with a pandas-like API, then GeoPandas is an excellent choice. This tutorial shows you how to accomplish four common geospatial tasks: reading in data, mapping it, applying a projection, and doing a spatial join.
By the end of this tutorial, you’ll understand that:
- GeoPandas extends pandas with support for spatial data. This data typically lives in a geometry column and allows spatial operations such as projections and spatial joins, while Folium focuses on richer interactive web maps after data preparation.
- You inspect CRS with .crs and reproject data using .to_crs() with an authority code like EPSG:4326 or ESRI:54009.
- A geographic CRS stores longitude and latitude in degrees, while a projected CRS uses linear units like meters or feet for area and distance calculations.
- Spatial joins use .sjoin() with predicates like "within" or "intersects", and both inputs must share the same CRS or the relationships will be computed incorrectly.
Here’s how GeoPandas compares with alternative libraries:
| Use Case | Pick pandas | Pick Folium | Pick GeoPandas |
|---|---|---|---|
| Tabular data analysis | ✅ | - | ✅ |
| Mapping | - | ✅ | ✅ |
| Projections, spatial joins | - | - | ✅ |
GeoPandas builds on pandas by adding support for geospatial data and operations like projections and spatial joins. It also includes tools for creating maps. Folium complements this by focusing on interactive, web-based maps that you can customize more deeply.
Get Your Code: Click here to download the free sample code for learning how to work with GeoPandas maps, projections, and spatial joins.
Take the Quiz: Test your knowledge with our interactive “GeoPandas Basics: Maps, Projections, and Spatial Joins” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
GeoPandas Basics: Maps, Projections, and Spatial Joins
Test GeoPandas basics for reading, mapping, projecting, and spatial joins to handle geospatial data confidently.
Getting Started With GeoPandas
You’ll first prepare your environment and load a small dataset that you’ll use throughout the tutorial. In the next two subsections, you’ll install the necessary packages and read in a sample dataset of New York City borough boundaries. This gives you a concrete GeoDataFrame to explore as you learn the core concepts.
Installing GeoPandas
This tutorial uses two packages: geopandas for working with geographic data and geodatasets for loading sample data. It’s a good idea to install these packages inside a virtual environment so your project stays isolated from the rest of your system and you can manage its dependencies cleanly.
Once your virtual environment is active, you can install both packages with pip:
$ python -m pip install "geopandas[all]" geodatasets
Using the [all] option ensures you have everything needed for reading data, transforming coordinate systems, and creating plots. For most readers, this will work out of the box.
If you do run into installation issues, the project’s maintainers provide alternative installation options on the official installation page.
Reading in Data
Most geospatial datasets come in GeoJSON or shapefile format. The read_file() function can read both, and it accepts either a local file path or a URL.
In the example below, you’ll use read_file() to load the New York City Borough Boundaries (NYBB) dataset. The geodatasets package provides a convenient path to this dataset, so you don’t need to download anything manually. You’ll also drop unnecessary columns:
>>> import geopandas as gpd
>>> import matplotlib.pyplot as plt
>>> from geodatasets import get_path
>>> path_to_data = get_path("nybb")
>>> nybb = gpd.read_file(path_to_data)
>>> nybb = nybb[["BoroName", "Shape_Area", "geometry"]]
>>> nybb
BoroName Shape_Area geometry
0 Staten Island 1.623820e+09 MULTIPOLYGON (((970217.022 145643.332, ....
1 Queens 3.045213e+09 MULTIPOLYGON (((1029606.077 156073.814, ...
2 Brooklyn 1.937479e+09 MULTIPOLYGON (((1021176.479 151374.797, ...
3 Manhattan 6.364715e+08 MULTIPOLYGON (((981219.056 188655.316, ....
4 Bronx 1.186925e+09 MULTIPOLYGON (((1012821.806 229228.265, ...
>>> type(nybb)
<class 'geopandas.geodataframe.GeoDataFrame'>
>>> type(nybb["geometry"])
<class 'geopandas.geoseries.GeoSeries'>
nybb is a GeoDataFrame. A GeoDataFrame has rows, columns, and all the methods of a pandas DataFrame. The difference is that it typically includes a special geometry column, which stores geographic shapes instead of plain numbers or text.
The geometry column is a GeoSeries. It behaves like a normal pandas Series, but its values are spatial objects that you can map and run spatial queries against. In the nybb dataset, each borough’s geometry is a MultiPolygon—a shape made of several polygons—because every borough consists of multiple islands. Soon you’ll use these geometries to make maps and run spatial operations, such as finding which borough a point falls inside.
Mapping Data
Once you’ve loaded a GeoDataFrame, one of the quickest ways to understand your data is to visualize it. In this section, you’ll learn how to create both static and interactive maps. This allows you to inspect shapes, spot patterns, and confirm that your geometries look the way you expect.
Creating Static Maps
Read the full article at https://realpython.com/geopandas/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Kushal Das
replyfast: a Python module for Signal
replyfast is a Python module to receive and send messages on Signal.
You can install it via
python3 -m pip install replyfast
or
uv pip install replyfast
I still have to add Windows builds to CI, though.
I have a script to help you register as a device, and then you can send and receive messages.
I have a demo bot which shows both sending and receiving messages, and also how to schedule work using the crontab syntax.
scheduler.register(
    "*/5 * * * *",
    send_disk_usage,
    args=(client,),
    name="disk-usage",
)
This is all possible due to the presage library written in Rust.
Real Python
Quiz: GeoPandas Basics: Maps, Projections, and Spatial Joins
In this quiz, you’ll test your understanding of GeoPandas.
You’ll review coordinate reference systems, GeoDataFrames, interactive maps, and spatial joins with .sjoin(). You’ll also explore how projections affect maps and learn best practices for working with geospatial data.
This quiz helps you confirm that you can prepare, visualize, and analyze geospatial data accurately using GeoPandas.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python Software Foundation
Your Python. Your Voice. Join the Python Developers Survey 2026!
This year marks the ninth iteration of the official Python Developers Survey. We intentionally launched the survey in January (later than years prior) so that data collection and results can be completed and shared within the same calendar year. The survey aims to capture the current state of the Python language and its surrounding ecosystem. By comparing the results with last year’s, the community can identify emerging trends and gain deeper insight into how Python continues to evolve.
We encourage you to contribute to our community’s knowledge by sharing your experience and perspective. Your participation is valued! The survey should only take you about 10-15 minutes to complete.
Contribute to the Python Developers Survey 2026!
This year we aim to reach even more of our community and ensure accurate global representation by highlighting our localization efforts:
- The survey is translated into Chinese, French, German, Japanese, Korean, Portuguese, Russian, Spanish.
- To assist individuals in promoting the survey and encouraging their local communities and professional networks we have created a Promotion Kit with images and social media posts translated into a variety of languages. We hope this promotion kit empowers folks to spread the invitation to respond to the survey within their local communities.
- We’d love it if you’d share one or more of the posts in the Promotion Kit to your social media or any community accounts you manage, as well as share the information in Python related discords, mailing lists, or chats you participate in.
- If you would like to help out with translations you see are missing, please request edit access to the doc and share what language you will be translating to. Translations for promotions into languages the survey may not be translated to is also welcome!
If you have ideas about what else we can do to get the word out and encourage a diversity of responses, please comment on the corresponding Discuss thread.
The survey is organized in partnership between the Python Software Foundation and JetBrains. After the survey is over, JetBrains will publish the aggregated results and randomly choose 20 winners (among those who complete the survey in its entirety), who will each receive a $100 Amazon Gift Card or a local equivalent.
Python Bytes
#467 Toads in my AI
<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://check.labs.greynoise.io?featured_on=pythonbytes">GreyNoise IP Check</a></strong></li> <li><strong><a href="https://pypi.org/project/tprof/?featured_on=pythonbytes">tprof: a targeting profiler</a></strong></li> <li><strong><a href="https://github.com/batrachianai/toad?featured_on=pythonbytes">TOAD is out</a></strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=24gBkjE8tOU' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="467">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a></li> </ul> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> (bsky)</li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually <strong>Monday</strong> at 11am PT. 
Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</p> <p><strong>Michael #1:</strong> <a href="https://check.labs.greynoise.io?featured_on=pythonbytes">GreyNoise IP Check</a></p> <ul> <li>GreyNoise watches the internet's background radiation—the constant storm of scanners, bots, and probes hitting every IP address on Earth.</li> <li>Is your computer sending out bot or other bad-actor traffic? What about the myriad of devices and IoT things on your local IP?</li> <li>Heads up: If your IP has recently changed, it might not be you (false positive).</li> </ul> <p>Brian #2: <a href="https://pypi.org/project/tprof/?featured_on=pythonbytes">tprof: a targeting profiler</a></p> <ul> <li>Adam Johnson</li> <li>Intro blog post: <a href="https://adamj.eu/tech/2026/01/14/python-introducing-tprof/?featured_on=pythonbytes"><strong>Python: introducing tprof, a targeting profiler</strong></a></li> </ul> <p><strong>Michael #3: <a href="https://github.com/batrachianai/toad?featured_on=pythonbytes">TOAD is out</a></strong></p> <ul> <li>Toad is a unified experience for AI in the terminal</li> <li>Front-end for AI tools such as <a href="https://openhands.dev/?featured_on=pythonbytes">OpenHands</a>, <a href="https://www.claude.com/product/claude-code?featured_on=pythonbytes">Claude Code</a>, <a href="https://geminicli.com/?featured_on=pythonbytes">Gemini CLI</a>, and many more.</li> <li>Better TUI experience (e.g. 
@ for file context uses fuzzy search and dropdowns)</li> <li>Better prompt input (mouse, keyboard, even colored code and markdown blocks)</li> <li>Terminal within terminals (for TUI support)</li> </ul> <p><strong>Brian #4</strong>: <a href="https://github.com/fastapi/fastapi/pull/14706/files?featured_on=pythonbytes">FastAPI adds Contribution Guidelines around AI usage</a></p> <ul> <li>Docs commit: <a href="https://github.com/fastapi/fastapi/pull/14706/files?featured_on=pythonbytes"><strong>Add contribution instructions about LLM generated code and comments and automated tools for PRs</strong></a></li> <li>Docs section: <a href="https://fastapi.tiangolo.com/contributing/?h=contributin#automated-code-and-ai">Development - Contributing : Automated Code and AI</a></li> <li>Great inspiration and example of how to deal with this for popular open source projects <ul> <li>“If the <strong>human effort</strong> put in a PR, e.g. writing LLM prompts, is <strong>less</strong> than the <strong>effort we would need to put</strong> to <strong>review it</strong>, please <strong>don't</strong> submit the PR.”</li> </ul></li> <li>With sections on <ul> <li>Closing Automated and AI PRs</li> <li>Human Effort Denial of Service</li> <li>Use Tools Wisely</li> </ul></li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li><a href="https://techcrunch.com/2026/01/14/digg-launches-its-new-reddit-rival-to-the-public/?featured_on=pythonbytes">Apparently Digg is back</a> and there’s a <a href="https://digg.com/python?featured_on=pythonbytes">Python Community</a> there</li> <li><a href="https://marijkeluttekes.dev/blog/articles/2026/01/21/why-light-weight-websites-may-one-day-save-your-life/?featured_on=pythonbytes">Why light-weight websites may one day save your life</a> - Marijke Luttekes</li> </ul> <p>Michael:</p> <ul> <li>Blog posts about Talk Python AI Integrations <ul> <li><a 
href="https://talkpython.fm/blog/posts/announcing-talk-python-ai-integrations/?featured_on=pythonbytes">Announcing Talk Python AI Integrations</a> on Talk Python’s Blog</li> <li><a href="https://mkennedy.codes/posts/why-hiding-from-ai-crawlers-is-a-bad-idea/?featured_on=pythonbytes">Blocking AI crawlers might be a bad idea</a> on Michael’s Blog</li> </ul></li> <li>Already using the compile flag for faster app startup on the containers: <ul> <li><code>RUN --mount=type=cache,target=/root/.cache uv pip install --compile-bytecode --python /venv/bin/python</code></li> <li>I think it’s speeding startup by about 1s / container.</li> </ul></li> <li><a href="https://blobs.pythonbytes.fm/big-prompt-or-what-2026-01.png">Biggest prompt yet?</a> <strong>72 pages</strong>, 11,000</li> </ul> <p><strong>Joke: <a href="https://www.reddit.com/r/ProgrammerHumor/comments/1q2tznx/forgotthebasecase/?featured_on=pythonbytes">A date</a></strong></p> <ul> <li>via Pat Decker</li> </ul>
Reuven Lerner
What’s new in Pandas 3?
Pandas 3 is out!
As of last week, saying “pip install pandas” or “uv add pandas” gives you the latest version.
What’s new? What has changed?
I’ve got a whole YouTube playlist, explaining what you need to know: https://www.youtube.com/playlist?list=PLbFHh-ZjYFwFWHVT0qeg9Jz1TBD0TlJJT
The post What’s new in Pandas 3? appeared first on Reuven Lerner.
January 25, 2026
Ned Batchelder
Testing: exceptions and caches
Two testing-related things I found recently.
Unified exception testing
Kacper Borucki blogged about parameterizing exception testing, and linked to pytest docs and a StackOverflow answer with similar approaches.
The common way to test exceptions is to use pytest.raises as a context manager, and have separate tests for the cases that succeed and those that fail. Instead, this approach lets you unify them.
I tweaked it to this, which I think reads nicely:
from contextlib import nullcontext as produces

import pytest
from pytest import raises

@pytest.mark.parametrize(
    "example_input, result",
    [
        (3, produces(2)),
        (2, produces(3)),
        (1, produces(6)),
        (0, raises(ZeroDivisionError)),
        ("Hello", raises(TypeError)),
    ],
)
def test_division(example_input, result):
    with result as e:
        assert (6 / example_input) == e
One parameterized test that covers both good and bad outcomes. Nice.
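The trick hinges on contextlib.nullcontext: given an argument, this no-op context manager hands that argument back from __enter__, so aliasing it as produces binds the expected value to e. Meanwhile, pytest.raises catches the expected exception before the assert line ever runs. A quick illustration of the nullcontext half:

```python
from contextlib import nullcontext

# nullcontext(value) does nothing on entry and exit, but its
# __enter__ returns the value, so "as" binds it.
with nullcontext(6) as expected:
    assert expected == 6
```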
AntiLRU
The @functools.lru_cache decorator (and its convenience cousin @cache) are good ways to save the result of a function so that you don’t have to compute it repeatedly. But, they hide an implicit global in your program: the dictionary of cached results.
This can interfere with testing. Your tests should all be isolated from each other. You don’t want a side effect of one test to affect the outcome of another test. The hidden global dictionary will do just that. The first test calls the cached function, then the second test gets the cached value, not a newly computed one.
Ideally, lru_cache would only be used on pure functions: the result only depends on the arguments. If it’s only used for pure functions, then you don’t need to worry about interactions between tests because the answer will be the same for the second test anyway.
But lru_cache is used on functions that pull information from the environment, perhaps from a network API call. The tests might mock out the API to check the behavior under different API circumstances. Here’s where the interference is a real problem.
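To make the interference concrete, here is a minimal sketch (get_setting and the CURRENT dict are invented stand-ins for an environment lookup or API call): the second test patches the data source, but still sees the value cached by the first.

```python
import functools

# Stand-in for the environment or a network API response.
CURRENT = {"mode": "real"}

@functools.lru_cache(maxsize=None)
def get_setting():
    # The result depends on external state, not on the arguments,
    # so the cache silently freezes whichever value it saw first.
    return CURRENT["mode"]

def test_one():
    assert get_setting() == "real"

def test_two():
    CURRENT["mode"] = "mocked"  # simulate mocking the API for this test
    # Surprise: the cached "real" from test_one leaks in here.
    assert get_setting() == "real"
```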
The lru_cache decorator makes a .cache_clear method available on each decorated function. I had some code that explicitly called that method on the cached functions. But then I added a new cached function, forgot to update the conftest.py code that cleared the caches, and my tests were failing.
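The manually maintained approach might look roughly like this sketch (the cached functions are hypothetical, and in a real suite clear_lru_caches would be called from an autouse fixture in conftest.py):

```python
import functools

@functools.lru_cache(maxsize=None)
def fetch_plans():
    return ["basic", "pro"]

@functools.lru_cache(maxsize=None)
def fetch_limits():
    return {"basic": 10, "pro": 100}

# Every cached function must be listed here by hand -- forget one,
# and its cache silently leaks between tests.
CACHED_FUNCTIONS = [fetch_plans, fetch_limits]

def clear_lru_caches():
    # Run between tests, e.g. from an autouse pytest fixture.
    for func in CACHED_FUNCTIONS:
        func.cache_clear()  # note: the method is cache_clear(), not clear_cache()
```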
A more convenient approach is provided by pytest-antilru: it’s a pytest plugin that monkeypatches @lru_cache to track all of the cached functions, and clears them all between tests. The caches are still in effect during each test, but can’t interfere between them.
It works great. I was able to get rid of all of the manually maintained cache clearing in my conftest.py.
January 23, 2026
Talk Python to Me
#535: PyView: Real-time Python Web Apps
Building on the web is like working with the perfect clay. It’s malleable and can become almost anything. But too often, frameworks try to hide the web’s best parts away from us. Today, we’re looking at PyView, a project that brings the real-time power of Phoenix LiveView directly into the Python world. I'm joined by Larry Ogrodnek to dive into PyView.<br/> <br/> <strong>Episode sponsors</strong><br/> <br/> <a href='https://talkpython.fm/training'>Talk Python Courses</a><br> <a href='https://talkpython.fm/devopsbook'>Python in Production</a><br/> <br/> <h2 class="links-heading mb-4">Links from the show</h2> <div><strong>Guest</strong><br/> <strong>Larry Ogrodnek</strong>: <a href="https://hachyderm.io/@ogrodnek?featured_on=talkpython" target="_blank" >hachyderm.io</a><br/> <br/> <strong>pyview.rocks</strong>: <a href="https://pyview.rocks?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>Phoenix LiveView</strong>: <a href="https://github.com/phoenixframework/phoenix_live_view?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>this section</strong>: <a href="https://pyview.rocks/getting-started/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>Core Concepts</strong>: <a href="https://pyview.rocks/core-concepts/liveview-lifecycle/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>Socket and Context</strong>: <a href="https://pyview.rocks/core-concepts/socket-and-context/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>Event Handling</strong>: <a href="https://pyview.rocks/core-concepts/event-handling/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>LiveComponents</strong>: <a href="https://pyview.rocks/core-concepts/live-components/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>Routing</strong>: <a href="https://pyview.rocks/core-concepts/routing/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> 
<strong>Templating</strong>: <a href="https://pyview.rocks/templating/overview/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>HTML Templates</strong>: <a href="https://pyview.rocks/templating/html-templates/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>T-String Templates</strong>: <a href="https://pyview.rocks/templating/t-string-templates/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>File Uploads</strong>: <a href="https://pyview.rocks/features/file-uploads/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>Streams</strong>: <a href="https://pyview.rocks/streams-usage/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>Sessions & Authentication</strong>: <a href="https://pyview.rocks/features/authentication/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>Single-File Apps</strong>: <a href="https://pyview.rocks/single-file-apps/?featured_on=talkpython" target="_blank" >pyview.rocks</a><br/> <strong>starlette</strong>: <a href="https://starlette.dev?featured_on=talkpython" target="_blank" >starlette.dev</a><br/> <strong>wsproto</strong>: <a href="https://github.com/python-hyper/wsproto?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>apscheduler</strong>: <a href="https://github.com/agronholm/apscheduler?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>t-dom project</strong>: <a href="https://github.com/t-strings/tdom?featured_on=talkpython" target="_blank" >github.com</a><br/> <br/> <strong>Watch this episode on YouTube</strong>: <a href="https://www.youtube.com/watch?v=g0RDxN71azs" target="_blank" >youtube.com</a><br/> <strong>Episode #535 deep-dive</strong>: <a href="https://talkpython.fm/episodes/show/535/pyview-real-time-python-web-apps#takeaways-anchor" target="_blank" >talkpython.fm/535</a><br/> <strong>Episode transcripts</strong>: <a 
href="https://talkpython.fm/episodes/transcript/535/pyview-real-time-python-web-apps" target="_blank" >talkpython.fm</a><br/> <br/> <strong>Theme Song: Developer Rap</strong><br/> <strong>🥁 Served in a Flask 🎸</strong>: <a href="https://talkpython.fm/flasksong" target="_blank" >talkpython.fm/flasksong</a><br/> <br/> <strong>---== Don't be a stranger ==---</strong><br/> <strong>YouTube</strong>: <a href="https://talkpython.fm/youtube" target="_blank" ><i class="fa-brands fa-youtube"></i> youtube.com/@talkpython</a><br/> <br/> <strong>Bluesky</strong>: <a href="https://bsky.app/profile/talkpython.fm" target="_blank" >@talkpython.fm</a><br/> <strong>Mastodon</strong>: <a href="https://fosstodon.org/web/@talkpython" target="_blank" ><i class="fa-brands fa-mastodon"></i> @talkpython@fosstodon.org</a><br/> <strong>X.com</strong>: <a href="https://x.com/talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @talkpython</a><br/> <br/> <strong>Michael on Bluesky</strong>: <a href="https://bsky.app/profile/mkennedy.codes?featured_on=talkpython" target="_blank" >@mkennedy.codes</a><br/> <strong>Michael on Mastodon</strong>: <a href="https://fosstodon.org/web/@mkennedy" target="_blank" ><i class="fa-brands fa-mastodon"></i> @mkennedy@fosstodon.org</a><br/> <strong>Michael on X.com</strong>: <a href="https://x.com/mkennedy?featured_on=talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @mkennedy</a><br/></div>
Real Python
The Real Python Podcast – Episode #281: Continuing to Improve the Learning Experience at Real Python
If you haven't visited the Real Python website lately, then it's time to check out a great batch of updates on realpython.com! Dan Bader returns to the show this week to discuss improvements to the site and more ways to learn Python.
January 22, 2026
The Python Coding Stack
The Orchestra Conductor, The Senior Programmer, and AI • [Club]
I spent a few years learning to play the piano when I was a child. It was always clear I would never be a concert pianist. Or a pianist of any description. This is to say that I don’t know much about music. I still struggle to understand why an orchestra needs a conductor: don’t the musicians all have the score, which they can play nearly perfectly?
And many people who comment about programming and AI know as much about programming as I know about music…and probably even less about AI.
But the orchestra conductor analogy seems a good one. Let me explore it further.
Reuven Lerner
Learn to code with AI — not just write prompts
The AI revolution is here. Engineers at major companies are now using AI instead of writing code directly.
But there’s a gap: Most developers know how to write code OR how to prompt AI, but not both. When working with real data, vague AI prompts produce code that might work on sample datasets but creates silent errors, performance issues, or incorrect analyses with messy, real-world data that requires careful handling.
I’ve spent 30 years teaching Python at companies like Apple, Intel, and Cisco, plus at conferences worldwide. I’m adapting my teaching for the AI era.
Specifically: I’m launching AI-Powered Python Practice Workshops. These are hands-on sessions where you’ll solve real problems using Claude Code, then learn to critically evaluate and improve the results.
Here’s how it works:
- I present a problem
- You solve it using Claude Code
- We compare prompts, discuss what worked (and what didn’t)
- I provide deep-dives on both the Python concepts AND the AI collaboration techniques
In 3 hours, we’ll cover 3-4 exercises, giving you a chance to learn two skills: Python/Pandas AND effective AI collaboration. That’ll make you more effective at coding, and at the data analysis techniques that actually work with messy, real-world datasets.
Each workshop costs $200 for LernerPython members. Not a member? Total cost is $700 ($500 annual membership + $200 workshop fee). Want both workshops? $900 total ($500 membership + $400 for both workshops). Plus you get 40+ courses, 500+ exercises, office hours, Discord, and personal mentorship.
AI-Powered Python Practice Workshop
- Focus is on the Python language, standard library, and common packages
- Monday, February 2nd
- 10 a.m. – 1 p.m. Eastern / 3 p.m. – 6 p.m. London / 5 p.m. – 8 p.m. Israel
- Sign up here: https://lernerpython.com/product/ai-python-workshop-1/
AI-Powered Pandas Practice Workshop
- Focus is on data analysis with Pandas
- Monday, February 9th
- 10 a.m. – 1 p.m. Eastern / 3 p.m. – 6 p.m. London / 5 p.m. – 8 p.m. Israel
- Sign up here: https://lernerpython.com/product/ai-pandas-workshop-1/
I want to encourage lots of discussion and interactions, so I’m limiting the class to 20 total participants. Both sessions will be recorded, and will be available to all participants.
Questions? Just e-mail me at reuven@lernerpython.com.
The post Learn to code with AI — not just write prompts appeared first on Reuven Lerner.
Python Software Foundation
Announcing Python Software Foundation Fellow Members for Q4 2025! 🎉
The PSF is pleased to announce its fourth batch of PSF Fellows for 2025! Let us welcome the new PSF Fellows for Q4! The following people continue to do amazing things for the Python community:
Chris Brousseau
Website, LinkedIn, GitHub, Mastodon, X, PyBay, PyBay GitHub
Dave Forgac
Website, Mastodon, GitHub, LinkedIn
Inessa Pawson
James Abel
Website, LinkedIn, GitHub, Bluesky
Karen Dalton
Mia Bajić
Tatiana Andrea Delgadillo Garzofino
Website, GitHub, LinkedIn, Instagram
Thank you for your continued contributions. We have added you to our Fellows Roster.
The above members help support the Python ecosystem by being phenomenal leaders, sustaining the growth of the Python scientific community, maintaining virtual Python communities, maintaining Python libraries, creating educational material, organizing Python events and conferences, starting Python communities in local regions, and overall being great mentors in our community. Each of them continues to help make Python more accessible around the world. To learn more about the new Fellow members, check out their links above.
Let's continue recognizing Pythonistas all over the world for their impact on our community. The criteria for Fellow members is available on our PSF Fellow Membership page. If you would like to nominate someone to be a PSF Fellow, please send a description of their Python accomplishments and their email address to psf-fellow at python.org. We are accepting nominations for Quarter 1 of 2026 through February 20th, 2026.
Are you a PSF Fellow and want to help the Work Group review nominations? Contact us at psf-fellow at python.org.
January 21, 2026
Django Weblog
Djangonaut Space - Session 6 Accepting Applications
We are thrilled to announce that Djangonaut Space, a mentorship program for contributing to Django, is open for applicants for our next cohort! 🚀
Djangonaut Space is holding a sixth session! This session will start on March 2nd, 2026. We are currently accepting applications until February 2nd, 2026, Anywhere on Earth. More details can be found on the website.
Djangonaut Space is a free, 8-week group mentoring program where individuals will work self-paced in a semi-structured learning environment. It seeks to help members of the community who wish to level up their current Django code contributions and potentially take on leadership roles in Django in the future.
“I'm so grateful to have been a part of the Djangonaut Space program. It's a wonderfully warm, diverse, and welcoming space, and the perfect place to get started with Django contributions. The community is full of bright, talented individuals who are making time to help and guide others, which is truly a joy to experience. Before Djangonaut Space, I felt as though I wasn't the kind of person who could become a Django contributor; now I feel like I found a place where I belong.” - Eliana, Djangonaut Session 1
Enthusiastic about contributing to Django but wondering what we have in store for you? No worries, we have got you covered! 🤝
Python Software Foundation
Departing the Python Software Foundation (Staff)
This week will be my last as the Director of Infrastructure at the Python Software Foundation and my last week as a staff member. Supporting the mission of this organization with my labor has been unbelievable in retrospect and I am filled with gratitude to every member of this community, volunteer, sponsor, board member, and staff member of this organization who have worked alongside me and entrusted me with root@python.org for all this time.
But, it is time for me to do something new. I don’t believe there would ever be a perfect time for this transition, but I do believe that now is one of the best. The PSF has built out a team that shares the responsibilities I carried across our technical infrastructure, the maintenance and support of PyPI, relationships with our in-kind sponsors, and the facilitation of PyCon US. I’m also not “burnt-out” or worse; I knew that one day I would move on, “dead or alive,” and it is so good to feel alive in this decision, literally and figuratively.
“The PSF and the Python community are very lucky to have had Ee at the helm for so many years. Ee’s approach to our technical needs has been responsive and resilient as Python, PyPI, PSF staff and the community have all grown, and their dedication to the community has been unmatched and unwavering. Ee is leaving the PSF in fantastic shape, and I know I join the rest of the staff in wishing them all the best as they move on to their next endeavor.”
- Deb Nicholson, Executive Director
The health and wellbeing of the PSF and the Python community is of utmost importance to me, and was paramount as I made decisions around this transition. Given that, I am grateful to be able to commit 20% of my time over the next six months to the PSF to provide support and continuity. Over the past few weeks we’ve been working internally to set things up for success, and I look forward to meeting the new staff and seeing what they accomplish with the team at the PSF!
My participation in the Python community and contributions to the infrastructure began long before my role as a staff member. As I transition out of participating as PSF staff I look forward to continuing to participate in and contribute to this community as a volunteer, as long as I am lucky enough to have the chance.
Reuven Lerner
We’re all VCs now: The skills developers need in the AI era
Many years ago, a friend of mine described how software engineers solve problems:
- When you’re starting off, you solve problems with code.
- When you get more experienced, you solve problems with people.
- When you get even more experienced, you solve problems with money.
In other words: You can be the person writing the code, and solving the problem directly. Or you can manage people, specifying what they should do. Or you can invest in teams, telling them about the problems you want to solve, but letting them set specific goals and managing the day-to-day work.
Up until recently, I was one of those people who said, “Generative AI is great, but it’s not nearly ready to write code on our behalf.” I spoke and wrote about how AI presents an amazing learning opportunity, and how I’ve integrated AI-based learning into my courses.
Things have changed… and are still changing
I’ve recently realized that my perspective is oh-so-last-year. Because in 2026, many companies and individuals are using AI to write code on their behalf. In just the last two weeks, I’ve spoken with developers who barely touch code, having AI develop it for them. And in case you’re wondering whether this only applies to freelancers, I’ve spoken with people from several large, well-known companies who have said something similar.
And it’s not just me: Gergely Orosz, who writes the Pragmatic Engineer newsletter, recently wrote that AI-written code is a “mega-trend set to hit the tech industry,” and that a growing number of companies are already relying on AI to specify, write, and test code (https://newsletter.pragmaticengineer.com/p/when-ai-writes-almost-all-code-what).
And Simon Willison, who has been discussing and evaluating AI models in great depth for several years, has seen a sea change in model-generated code quality in just the last few months. He predicts that within six years, it’ll be as quaint for a human to type code as it is to use punch cards (https://simonwillison.net/2026/Jan/8/llm-predictions-for-2026/#6-years-typing-code-by-hand-will-go-the-way-of-punch-cards).
An inflection point in the tech industry
This is mind-blowing. I still remember taking an AI course during my undergraduate years at MIT, learning about cutting-edge AI research… and finding it quite lacking. I did a bit of research at MIT’s AI Lab, and saw firsthand how hard language recognition was. To think that we can now type or talk to an AI model, and get coherent, useful results, continues to astound me, in part because I’ve seen just how far this industry has come.
When ChatGPT first came out, it was breathtaking to see that it could code. It didn’t code that well, and often made mistakes, but that wasn’t the point. It was far better than nothing at all. In some ways, it was like the old saw about dancing bears: the marvel wasn’t that it danced well, but that it danced at all.
Over the last few years, GenAI companies have been upping their game, slowly but surely. They still get things wrong, and still give me bad coding advice and feedback. But for the most part, they’re doing an increasingly impressive job. And from everything I’m seeing, hearing, and reading, this is just the beginning.
Whether the current crop of AI companies survives their cash burn is another question entirely. But the technology itself is here to stay, much like how the dot-com crash of 2000 didn’t stop the Internet.
We’re at an inflection point in the computer industry, one that is increasingly allowing one person to create a large, complex software system without writing it directly. In other words: Over the coming years, programmers will spend less and less time writing code. They’ll spend more and more time partnering with AI systems — specifying what the code should do, what is considered success, what errors will be tolerated, and how scalable the system will be.
This is both exciting and a bit nerve-wracking.
Engineering >> Coding
The shift from “coder” to “engineer” has been going on for years. We abstracted away machine code, then assembly, then manual memory management. AI represents the biggest abstraction leap yet. Instead of abstracting away implementation details, we’re abstracting away implementation itself.
But software engineering has long been more than just knowing how to code. It’s about problem solving, about critical thinking, and about considering not just how to build something, but how to maintain it. It’s true that coding might go away as an individual discipline, much as there’s no longer much of a need for professional scribes in a world where everyone knows how to write.
However, it does mean that to succeed in the software world, it’ll no longer be enough to understand how computers work, and how to effectively instruct them with code. You’ll have to have many more skills, skills which are almost never taught to coders, because there were already so many fundamentals you needed to learn.
In this new age, creating software will be increasingly similar to being an investor. You’ll need to have a sense of the market, and what consumers want. You’ll need to know what sorts of products will potentially succeed in the market. You’ll need to set up a team that can come up with a plan, and execute on it. And then you’ll need to be able to evaluate the results. If things succeed, then great! And if not, that’s OK — you’ll invest in a number of other ventures, hoping that one or more will get the 10x you need to claim success.
If that seems like science fiction, it isn’t. I’ve seen and heard about amazing success with Claude Code from other people, and I’ve started to experience it myself, as well. You can have it set up specifications. You can have it set up tests. You can have it set up a list of tasks. You can have it work through those tasks. You can have it consult with other GenAI systems, to bring in third-party advice. And this is just the beginning.
Programming in English?
When ChatGPT was first released, many people quipped that the hottest programming language is now English. I laughed at that then, less because of the quality of AI coding, and more because most people, even given a long time, don’t have the experience and training to specify a programming project. I’ve been to too many meetings in which developers and project managers exchange harsh words because they interpreted vaguely specified features differently. And that’s with humans, who presumably understand the specifications better!
As someone said to me many years ago, computers do what you tell them to do, not what you want them to do. Engineers still make plenty of mistakes, even with their training and experience. But non-technical people, attempting to specify a software system to a GenAI model, will almost certainly fail much of the time.
So yes, technical chops will still be needed! But just as modern software engineers don’t think too much about the object code emitted by a compiler, assuming that it’ll be accurate and useful, future software engineers won’t need to check the code emitted by AI systems. (We still have some time before that happens, I expect.) The ability to break a problem into small parts, think precisely, and communicate clearly, will be more valuable than ever.
Even when AI is writing code for us, we’ll still need developers. But the best, most successful developers won’t be the ones who have mastered Python syntax. Rather, they’ll be the best architects, the clearest communicators, and the most critical thinkers.
Preparing yourself: We’re all VCs now
So, how do you prepare for this new world? How can you acquire this VC mindset toward creating software?
Learn to code: You can only use these new AI systems if you have a strong understanding of the underlying technology. AI is like a chainsaw, in that it does wonders for people with experience, but is super dangerous for the untrained. So don’t believe the hype, that you don’t need to learn to program, because we’re now in an age of AI. You still need to learn it. The language doesn’t matter nearly as much as the underlying concepts. For the time being, you will also need to inspect the code that GenAI produces, and that requires coding knowledge and experience.
Communication is key: You need to learn to communicate clearly. AI uses text, which means that the better you are at articulating your plans and thoughts, the better off you’ll be. Remember “Let me Google that for you,” the snarky way that many techies responded to people who asked for help searching the Web? Well, guess what: Searching on the Internet is a skill that demands some technical understanding. People who can’t search well aren’t dumb; they just don’t have the needed skills. Similarly, working with GenAI is a skill, one that requires far more lengthy, detailed, and precise language than Google searches ever did. Improving your writing skills will make you that much more powerful as a modern developer.
High-level problem solving: An engineering education teaches you (often the hard way) how to break problems apart into small pieces, solve each piece, and then reassemble them. But how do you do that with AI agents? That’s especially where the VC mindset comes into play: Given a budget, what is the best team of AI agents you can assemble to solve a particular problem? What role will each agent play? What skills will they need? How will they communicate with one another? How do you do so efficiently, so that you don’t burn all of your tokens in one afternoon?
Push back: When I was little, people would sometimes say that something must be true, because it was in the newspaper. That mutated to: It must be true, because I read it online. Today, people believe that because Gemini is AI, it must be true. Or unbiased. Or smart. But of course, that isn’t the case; AI tools regularly make mistakes, and you need to be willing to push back, challenge them, and bring counter-examples. Sadly, people don’t do this enough. I call this “AI-mposter syndrome,” when people believe that the AI must be smarter than they are. Just today, while reading up on the Model Context Protocol, Claude gave me completely incorrect information about how it works. Only providing counter-examples got Claude to admit that actually, I was right, and it was wrong. But it would have been very easy for me to say, “Well, Claude knows better than I do.” Confidence and skepticism will go a long way in this new world.
The more checking, the better: I’ve been using Python for a long time, but I’ve spent no small amount of time with other dynamic languages, such as Ruby, Perl, and Lisp. We’ve already seen that you can only use Python in serious production environments with good testing, and even more so with type hints. When GenAI is writing your code for you, there’s zero room for compromise on these fronts. (Heck, if it’s writing the code, and the tests, then why not go all the way with test-driven development?) If you aren’t requiring a high degree of safety checks and testing, you’re asking for trouble — and potentially big trouble. Not everyone will be this serious about code safety. There will be disasters – code that seemed fine until it wasn’t, corners that seemed reasonable to cut until they weren’t. Don’t let that be you.
Learn how to learn: This has always been true in the computer industry; the faster you can learn new things and synthesize them into your existing knowledge, the better. But the pace has sped up considerably in the last few years. Things are changing at a dizzying pace. It’s hard to keep up. But you really have no choice but to learn about these new technologies, and how to use them effectively. It has long been common for me to learn about something one month, and then use it in a project the next month. Lately, though, I’ve been using newly learned ideas just days after coming across them.
What about juniors?
A big question over the last few years has been: If AI makes senior engineers 100x more productive, then why would companies hire juniors? And if juniors can’t find work, then how will they gain the experience to make them attractive, AI-powered seniors?
This is a real problem. I attended conferences in five countries in 2025, and young engineers in all of them were worried about finding a job, or keeping their current one. There aren’t any easy answers, especially for people who were looking forward to graduating, joining a company, gradually gaining experience, and finally becoming a senior engineer or hanging out their own shingle.
I can say that AI provides an excellent opportunity for learning, and the open-source world offers many opportunities for professional development, as well as interpersonal connections. Perhaps the age in which junior engineers gained their experience on the job is fading, and participating in open-source projects will need to become part of the university curriculum, or something people do in their spare time. And pairing with an AI tool can be extremely rewarding and empowering. Much as Waze doesn’t scold you for missing a turn, AI systems are extremely polite and patient when you make a mistake or need to debug a problem. Learning to work with such tools, alongside working with people, might be a good way for many to improve their skills.
Standards and licensing
Beyond skill development, AI-written code raises some other issues. For example: Software is one of the few aspects of our lives that has no official licensing requirements. Doctors, nurses, lawyers, and architects, among others, can’t practice without appropriate education and certification. They’re often required to take courses throughout their career, and to get re-certified along the way.
No doubt, part of the reason for this type of certification is to maintain the power (and profits) of those inside the system. But it also helps to ensure quality and accountability. As we transition to a world of AI-generated software, part of me wonders whether we’ll eventually need to feed the AI system a set of government-mandated codes that will ensure user safety and privacy. Or whether only certified software engineers will be allowed to write the specifications fed into AI to create software.
After all, during most of human history, you could just build a house. There weren’t any standards or codes you needed to follow. You used your best judgment — and if it fell down one day, then that kinda happened, and what can you do? Nowadays, of course, there are codes that restrict how you can build, and only someone who has been certified and licensed can try to implement those codes.
I can easily imagine the pushback that a government would get for trying to impose such restrictions on software people. But as AI-generated code becomes ubiquitous in safety-critical systems, we’ll need some mechanism for accountability. Whether that’s licensing, industry standards, or something entirely new remains to be seen.
Conclusions
The last few weeks have been among the most head-spinning in my 30-year career. I see that my future as a Python trainer isn’t in danger, but is going to change — and potentially quite a bit — even in the coming months and years. I’m already rolling out workshops in which people solve problems not using Python and Pandas, but using Claude Code to write Python and Pandas on their behalf. Learning to use Claude Code won’t be enough on its own, but neither will learning only Python and Pandas. Both skills will be needed, at least for the time being. But the trend seems clear and unstoppable, and I’m both excited and nervous to see what comes down the pike.
But for now? I’m doubling down on learning how to use AI systems to write code for me. I’m learning how to get them to interact, to help one another, and to critique one another. I’m thinking of myself as a VC, giving “smart money” to a bunch of AI agents that have assembled to solve a particular problem.
And who knows? In the not-too-distant future, an updated version of my friend’s statement might look like this:
- When you’re starting off, you solve problems with code.
- When you get more experienced, you solve problems with an AI agent.
- When you get even more experienced, you solve problems with teams of AI agents.
The post We’re all VCs now: The skills developers need in the AI era appeared first on Reuven Lerner.
Real Python
How to Integrate Local LLMs With Ollama and Python
Integrating local large language models (LLMs) into your Python projects using Ollama is a great strategy for improving privacy, reducing costs, and building offline-capable AI-powered apps.
Ollama is an open-source platform that makes it straightforward to run modern LLMs locally on your machine. Once you’ve set up Ollama and pulled the models you want to use, you can connect to them from Python using the ollama library.
In this tutorial, you’ll integrate local LLMs into your Python projects using the Ollama platform and its Python SDK.
You’ll first set up Ollama and pull a couple of LLMs. Then, you’ll learn how to use chat, text generation, and tool calling from your Python code. These skills will enable you to build AI-powered apps that run locally, improving privacy and cost efficiency.
Get Your Code: Click here to download the free sample code that you’ll use to integrate LLMs With Ollama and Python.
Take the Quiz: Test your knowledge with our interactive “How to Integrate Local LLMs With Ollama and Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Integrate Local LLMs With Ollama and Python: Check your understanding of using Ollama with Python to run local LLMs, generate text, chat, and call tools for private, offline apps.
Prerequisites
To work through this tutorial, you’ll need the following resources and setup:
- Ollama installed and running: You’ll need Ollama to use local LLMs. You’ll get to install it and set it up in the next section.
- Python 3.8 or higher: You’ll be using Ollama’s Python software development kit (SDK), which requires Python 3.8 or higher. If you haven’t already, install Python on your system to fulfill this requirement.
- Models to use: You’ll use llama3.2:latest and codellama:latest in this tutorial. You’ll download them in the next section.
- Capable hardware: You need relatively powerful hardware to run Ollama’s models locally, as they may require considerable resources, including memory, disk space, and CPU power. You may not need a GPU for this tutorial, but local models will run much faster if you have one.
With these prerequisites in place, you’re ready to connect local models to your Python code using Ollama.
Step 1: Set Up Ollama, Models, and the Python SDK
Before you can talk to a local model from Python, you need Ollama running and at least one model downloaded. In this step, you’ll install Ollama, start its background service, and pull the models you’ll use throughout the tutorial.
Get Ollama Running
To get started, navigate to Ollama’s download page and grab the installer for your current operating system. You’ll find installers for Windows 10 or newer and macOS 14 Sonoma or newer. Run the appropriate installer and follow the on-screen instructions. For Linux users, the installation process differs slightly, as you’ll learn soon.
On Windows, Ollama will run in the background after installation, and the CLI will be available to you. If this doesn’t happen automatically, then go to the Start menu, search for Ollama, and run the app.
On macOS, the app manages the CLI and setup details, so you just need to launch Ollama.app.
If you’re on Linux, install Ollama with the following command:
$ curl -fsSL https://ollama.com/install.sh | sh
Once the process is complete, you can verify the installation by running:
$ ollama -v
If this command works, then the installation was successful. Next, start Ollama’s service by running the command below:
$ ollama serve
That’s it! You’re now ready to start using Ollama on your local machine. In some Linux distributions, such as Ubuntu, this final command may not be necessary, as Ollama may start automatically when the installation is complete. In that case, running the command above will result in an error.
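With the service running, connecting from Python takes only a few lines. Below is a minimal sketch, assuming you’ve installed the SDK with `pip install ollama` and already pulled llama3.2:latest; the `build_messages` and `chat_once` helper names are my own, not part of the tutorial.

```python
# Minimal sketch: chatting with a local model via the ollama Python SDK.
# Assumes `pip install ollama`, the Ollama service running locally, and
# the llama3.2:latest model already pulled.

def build_messages(user_prompt, system_prompt=None):
    """Assemble the message list that ollama.chat() expects."""
    messages = []
    if system_prompt is not None:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

def chat_once(user_prompt, model="llama3.2:latest"):
    """Send a single prompt and return the model's reply as a string."""
    import ollama  # imported here so build_messages() works without the SDK
    response = ollama.chat(model=model, messages=build_messages(user_prompt))
    return response["message"]["content"]

if __name__ == "__main__":
    print(chat_once("Explain Python's GIL in one sentence."))
```

The message format mirrors the familiar chat-completions shape (a list of role/content dicts), so swapping models is just a matter of changing the `model` string.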
Read the full article at https://realpython.com/ollama-python/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: How to Integrate Local LLMs With Ollama and Python
In this quiz, you’ll test your understanding of How to Integrate Local LLMs With Ollama and Python.
By working through this quiz, you’ll revisit how to set up Ollama, pull models, and use chat, text generation, and tool calling from Python.
You’ll connect to local models through the ollama Python library and practice sending prompts and handling responses. You’ll also see how local inference can improve privacy and cost efficiency while keeping your apps offline-capable.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Reuven Lerner
Build YOUR data dashboard — join my next 8-week HOPPy studio cohort

Want to analyze data? Good news: Python is the leading language in the data world. Libraries like NumPy and Pandas make it easy to load, clean, analyze, and visualize your data.
But wait: If your colleagues aren’t coders, how can they explore your data?
The answer: A data dashboard, which uses UI elements (e.g., sliders, text fields, and checkboxes). Your colleagues get a custom, dynamic app, rather than static graphs, charts, and tables.
One of the newest and hottest ways to create a data dashboard in Python is Marimo. Among other things, Marimo offers UI widgets, real-time updating, and easy distribution. This makes it a great choice for creating a data dashboard.
In the upcoming (4th) cohort of HOPPy (Hands-On Projects in Python), you’ll learn to create a data dashboard. You’ll make all of the important decisions, from the data set to the design. But you’ll do it all under my personal mentorship, along with a small community of other learners.
The course starts on Sunday, February 1st, and will meet every Sunday for eight weeks. When you’re done, you’ll have a dashboard you can share with colleagues, or just add to your personal portfolio.
If you’ve taken Python courses, but want to sink your teeth into a real-world project, then HOPPy is for you. Among other things:
- Go beyond classroom learning: You’ll learn by doing, creating your own personal product
- Live instruction: Our cohort will meet, live, for two hours every Sunday to discuss problems you’ve had and provide feedback.
- You decide what to do: This isn’t a class in which the instructor dictates what you’ll create. You can choose whatever data set you want. But I’ll be there to support and advise you every step of the way.
- Learn about Marimo: Get experience with one of the hottest new Python technologies.
- Learn about modern distribution: Use Molab and WASM to share your dashboard with others
Want to learn more? Join me for an info session on Monday, January 26th. You can register here: https://us02web.zoom.us/webinar/register/WN_YbmUmMSgT2yuOqfg8KXF5A
Ready to join right now? Get full details, and sign up, at https://lernerpython.com/hoppy-4.
Questions? Just reply to this e-mail. It’ll go straight to my inbox, and I’ll answer you as quickly as I can.
I look forward to seeing you in HOPPy 4!
The post Build YOUR data dashboard — join my next 8-week HOPPy studio cohort appeared first on Reuven Lerner.
Seth Michael Larson
mGBA → Dolphin not working? You need a GBA BIOS
The GBA emulator “mGBA” supports emulating the Game Boy Advance Link Cable (not to be confused with the Game Boy Advance /Game/ Link Cable) and connecting to a running Dolphin emulator instance. I am interested in this functionality for Legend of Zelda: Four Swords Adventures, specifically the “Navi Trackers” game mode that was announced for all regions but was only released in Japan and Korea. In the future I want to explore the English language patches.
After reading the documentation to connect the two emulators I configured the controllers to be “GBA (TCP)” in Dolphin and ensured that Dolphin had the permissions it needed to do networking (Dolphin is installed as a Flatpak). I selected “Connect” on mGBA from the “Connect to Dolphin” popup screen and there was zero feedback... no UI changes, errors, or success messages. Hmmm...
I found out in a random Reddit comment section that a GBA BIOS was needed to connect to Dolphin, so I set off to legally obtain the BIOSes from my hardware. I opted to use the BIOS-dump ROM developed by the mGBA team to dump the BIOS from my Game Boy Advance SP and DS Lite.
Below is a guide on how to build the BIOS ROM from source on Ubuntu 24.04, and then dump GBA BIOSes. Please note you'll likely need a GBA flash cartridge for running homebrew on your Game Boy Advance. I used an EZ-Flash Omega flash cartridge, but I've heard Everdrive GBA is also popular.
Installing devKitARM on Ubuntu 24.04
To build this ROM from source you'll need devKitARM.
If you already have devKitARM installed you can skip these steps.
The devKitPro team supplies an easy script for installing
devKitPro toolsets, but unfortunately the apt.devkitpro.org domain
appears to be behind an aggressive “bot” filter right now
so their instructions to use wget are not working as written.
Instead, download their GPG key with a browser and then run the commands yourself:
apt-get install apt-transport-https
if ! [ -f /usr/local/share/keyring/devkitpro-pub.gpg ]; then
mkdir -p /usr/local/share/keyring/
mv devkitpro-pub.gpg /usr/local/share/keyring/
fi
if ! [ -f /etc/apt/sources.list.d/devkitpro.list ]; then
echo "deb [signed-by=/usr/local/share/keyring/devkitpro-pub.gpg] https://apt.devkitpro.org stable main" > /etc/apt/sources.list.d/devkitpro.list
fi
apt-get update
apt-get install devkitpro-pacman
Once you've installed devKitPro pacman (for Ubuntu: dkp-pacman)
you can install the GBA development tools package group:
dkp-pacman -S gba-dev
After this you can set the DEVKITARM environment variable
within your shell profile to /opt/devkitpro/devkitARM.
Now you should be ready to build the GBA BIOS dumping ROM.
Building the bios-dump ROM
Once the devKitARM toolkit is installed, the next step is much easier.
You basically download the source, run make with the DEVKITARM environment variable
set properly, and if all the tools are installed you'll quickly have
your ROM:
apt-get install build-essential curl unzip
curl -L -o bios-dump.zip \
https://github.com/mgba-emu/bios-dump/archive/refs/heads/master.zip
unzip bios-dump.zip
cd bios-dump-master
export DEVKITARM=/opt/devkitpro/devkitARM/
make
You should end up with a GBA ROM file titled bios-dump.gba.
Add this .gba file to your microSD card for the flash cartridge.
Boot the flash cartridge in the device whose BIOS you want
to dump, and after boot-up the screen should quickly show a success message
along with the checksums of the BIOS file. As noted in the mGBA bios-dump README, there are two GBA BIOSes:
- sha256:fd2547: GBA, GBA SP, GBA SP “AGS-101”, GBA Micro, and Game Boy Player
- sha256:782eb3: DS, DS Lite, and all 3DS variants
I own a GBA SP, a Game Boy Player, and a DS Lite, so I was able to dump three different GBA BIOSes, two of which are identical:
sha256sum *.bin
fd2547... gba_sp_bios.bin
fd2547... gba_gbp_bios.bin
782eb3... gba_ds_bios.bin
From here I was able to configure mGBA with a GBA BIOS file (Tools→Settings→BIOS) and successfully connect to Dolphin running four instances of mGBA; one for each of the Links!


💚❤️💙💜
mGBA probably could have shown an error message when the “connecting” phase requires a BIOS. It looks like this behavior has been known since 2021.
Thanks for keeping RSS alive! ♥

