Planet Python
Last update: October 12, 2025 09:43 PM UTC
October 12, 2025
Anwesha Das
ssh version output in stderr
Generally, Linux commands print their version on stdout, for example git --version or python --version. But not ssh: ssh -V prints its output to stderr.
To test it you can do the following:
git version on stdout
> git --version 2> error 1> output
> cat output
git version 2.51.0
ssh version on stderr
> ssh -V 2>> error 1>> output
> cat error
OpenSSH_9.9p1, OpenSSL 3.2.4 11 Feb 2025
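The same distinction can be checked from Python with subprocess. A minimal sketch (the helper name is mine; it assumes ssh is on PATH and returns None otherwise):

```python
import shutil
import subprocess

def ssh_version():
    """Return ssh's version string, reading stderr where ssh -V prints it."""
    if shutil.which("ssh") is None:
        return None  # ssh not installed
    result = subprocess.run(["ssh", "-V"], capture_output=True, text=True)
    # Note: the version string lands in result.stderr, not result.stdout
    return (result.stderr or result.stdout).strip()

print(ssh_version())
```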
Hope this will be helpful.
October 11, 2025
Paolo Melchiorre
My Django On The Med 2025
A summary of my experience at Django On The Med 2025 told through the posts I published on Mastodon during the conference.
Hugo van Kemenade
Releasing Python 3.14.0
Prologue #
I livetooted the release of Python 3.14.0. Here it is in blogpost form!
One week #
Only one week left until the release of Python 3.14.0 final!
What are you looking forward to?
#Python
#Python314
Tue, Sep 30, 2025, 15:19 EEST
Three days #
Three days until release and a bug in the Linux kernel has turned a dozen buildbots red…
It’s already been fixed in the kernel, but will take some time to bubble up. We’ll skip that test for relevant kernel versions in the meantime.
#Python
#Python314
Sat, Oct 4, 2025, 16:15 EEST
Green #
And back to green!
#Python
#Python314
Sun, Oct 5, 2025, 16:58 EEST
Release day! #
First off, check blockers and buildbots.
A new release-blocker appeared yesterday (because of course) but it can wait until 3.14.1.
Three deferred-blockers are also waiting until 3.14.1.
A new tier-2 buildbot failure appeared yesterday (because of course) but it had previously been offline for a month and will need some reconfiguration. Can ignore.
OK, let’s make a Python!
#Python
#Python314
#release
Tue, Oct 7, 2025, 11:40 EEST
run_release.py #
Next up, merge and backport the final change to What’s New in Python 3.14 to declare it latest stable.
Now start run_release.py, the main release automation script, which does a bunch of pre-checks, runs blurb to create a merged changelog, bumps some numbers, and pushes a branch and tag to my fork. It’ll go upstream at the end of a successful build.
Then kick off the CI to build source zips, docs and Android binaries.
#Python
#Python314
#release
Tue, Oct 7, 2025, 12:43 EEST
Installers #
(That’s actually the second CI attempt, we had to update some script arguments following an Android test runner update.)
This build takes about half an hour.
I’ve also informed the Windows and macOS release managers about the tag and they will start up installer builds.
This takes a few hours, so I’ve got time to finish up the release notes.
PEP 101 is the full process, but much is automated and we don’t need to follow it all manually.
#Python
#Python314
#release
Tue, Oct 7, 2025, 12:52 EEST
Windows #
The Windows build has been started.
The jobs with profile-guided optimisation (PGO) build once, collect a profile by running the tests, and then build again using that profile, optimising for how ‘real’ code executes.
Meanwhile, the docs+source+Android build has finished and the artifacts have been copied to where they need to go with SBOMs created.
#Python
#Python314
#release
Tue, Oct 7, 2025, 13:50 EEST
macOS #
The Windows build is ready and macOS is underway.
#Python
#Python314
#release
Tue, Oct 7, 2025, 15:36 EEST
Final steps #
macOS installer done, next on to the final publishing and announcing steps.
#Python
#Python314
#release
Tue, Oct 7, 2025, 17:02 EEST
It’s out! #
🥧 Please install and enjoy Python 3.14!
#Python
#Python314
#release
Tue, Oct 7, 2025, 17:27 EEST
Finally #
And the last few tasks: announce also on the blog & mailing lists, update the PEP & downloads landing page, fix Discourse post links, unlock the 3.14 branch for the core team to start landing PRs that didn’t need to be in the RC, and eat the pie.
A HUGE thanks to the @sovtechfund Fellowship for allowing me to dedicate my time to getting this out.
#Python
#Python314
#release
Tue, Oct 7, 2025, 19:28 EEST
Django Weblog
2026 DSF Board Nominations
Nominations are open for the elections of the 2026 Django Software Foundation Board of Directors. The Board guides the direction of the marketing, governance and outreach activities of the Django community. We provide funding, resources, and guidance to Django events on a global level.
The Board of Directors consists of seven volunteers who are elected to two-year terms. This is an excellent opportunity to help advance Django. We can't do it without volunteers such as yourself. Anyone, including current Board members, DSF Members, or the public at large, can apply to the Board. It is open to all.
How to apply
If you are interested in helping to support the development of Django, we'd enjoy receiving your application for the Board of Directors. Please fill out the 2026 DSF Board Nomination form by 23:59 on October 31, 2025, Anywhere on Earth to be considered.
Submit your nomination for the 2026 Board
If you have any questions about applying, the work, or the process in general, please don't hesitate to reach out on the Django forum or via email to foundation@djangoproject.com.
Thank you for your time, and we look forward to working with you in 2026!
The 2025 DSF Board of Directors.
Brett Cannon
Why it took 4 years to get a lock files specification
(This is the blog post version of my keynote from EuroPython 2025 in Prague, Czechia.)
We now have a lock file format specification. That might not sound like a big deal, but for me it took 4 years of active work to get us that specification. Part education, part therapy, this post is meant to help explain what makes creating a lock file difficult and why it took so long to reach this point.
What goes into a lock file
A lock file is meant to record all the dependencies your code needs to work along with how to install those dependencies.
The "how" is source trees, source distributions (aka sdists), and wheels. With all of these forms, the trick is recording the right details in order to know how to install code in any of those three forms. Luckily we already had the direct_url.json specification that just needed translation into TOML for source trees. As for sdists and wheels, it's effectively recording what an index server provides you when you look at a project's release.
The much trickier part is figuring out what to install when. For instance, let's consider where your top-level, direct dependencies come from. In pyproject.toml there's project.dependencies for dependencies you always need for your code to run, project.optional-dependencies (aka extras) for when you want to offer your users the option to install additional dependencies, and then there's dependency-groups for dependencies that are not meant for end-users (e.g. listing your test dependencies).
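For illustration, those three tables might look like this in a pyproject.toml (the project and dependency names here are made up):

```toml
[project]
name = "my-app"
version = "1.0"
dependencies = ["httpx"]  # always required to run

[project.optional-dependencies]
cli = ["rich"]  # an extra users can opt into

[dependency-groups]
test = ["pytest"]  # not meant for end-users
```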
But letting users control what is (not) installed isn't the end of things. There are also the specifiers you can add to any of your listed dependencies. They allow you to not only restrict what versions of things you want (i.e. setting a lower bound and, if you can help it, not setting an upper bound), but also when the dependency actually applies (e.g. is it specific to Windows?).
Put that all together and you end up with a graph of dependencies whose edges dictate whether a dependency applies on some platform. If you manage to write it all out then you have multi-use lock files, which are portable across platforms and whatever options the installing user selects, compared to single-use lock files that have a specific applicability due to only supporting a single platform and set of input dependencies.
Oh, and even getting the complete list of dependencies in either case is an NP-complete problem.
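To see the shape of the problem, here's a toy backtracking resolver over a made-up index (all project names and version numbers are invented for illustration): candidates are tried newest-first, and a dead end forces the resolver to back up and try older versions.

```python
# Toy index: versions are ints; each (name, version) lists its
# dependencies as (name, allowed-versions) constraints.
INDEX = {
    "app": {1: [("lib", {1, 2})]},
    "lib": {1: [("base", {1})], 2: [("base", {2})]},
    "base": {1: [], 2: []},
}

def resolve(requirements, pins=None):
    """Return a {name: version} solution satisfying all constraints, or None."""
    pins = dict(pins or {})  # copy so backtracking can't corrupt the caller
    if not requirements:
        return pins
    (name, allowed), rest = requirements[0], requirements[1:]
    if name in pins:  # already pinned: must be compatible
        return resolve(rest, pins) if pins[name] in allowed else None
    for version in sorted(INDEX[name], reverse=True):  # prefer newest
        if version not in allowed:
            continue
        pins[name] = version
        solution = resolve(rest + INDEX[name][version], pins)
        if solution is not None:
            return solution
        del pins[name]  # backtrack and try an older candidate
    return None

print(resolve([("app", {1})]))  # → {'app': 1, 'lib': 2, 'base': 2}
```

Real resolvers (e.g. the resolvelib library mentioned below) face the same search, but with markers, extras, and metadata that must be fetched per candidate, which is what makes the general problem NP-complete.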
And to make things "interesting", I also wanted the file format to be written by software but readable by people, secure by default, fast to install, and to allow the locker that writes the lock file to be different from the installer that performs the install (with either potentially written in a language other than Python).
In the end, it all worked out (luckily); you can read the spec for all the nitty-gritty details about pylock.toml or watch the keynote where I go through the spec. But it sure did take a while to get to this point.
Why it took (over) 4 years
I'm not sure if this qualifies as the longest single project I have ever taken on for Python (rewriting the import system might still hold that record for me), but it definitely felt the most intense over a prolonged period of time.
The oldest record I have that I was thinking about this problem is a tweet from Feb 2019:

2019
That year there were 106 posts on discuss.python.org about a requirements.txt v2 proposal. It didn't come to any specific conclusion that I can recall, but it at least got the conversation started.
2020
The next year, the conversation continued and generated 43 posts. I was personally busy with PEP 621 and the [project] table in pyproject.toml.
2021
In January of 2021, Tzu-Ping Chung, Pradyun Gedam, and I began researching how other language ecosystems did lock files. It culminated in us writing PEP 665 and posting it in July. That led to 359 posts that year.
The goal of PEP 665 was a very secure lock file, which it partially achieved by only supporting wheels. With no source trees or sdists to contend with, installation didn't involve executing a build back-end, which can be slow, indeterminate, and a security risk simply due to running more code. We wrote the PEP with the idea that any source trees or sdists would be built into wheels out-of-band so you could then lock against those wheels.
2022
In the end, PEP 665 was rejected in January of 2022, generating 106 posts on the subject both before and after the rejection. It turns out enough people had workflows dependent on sdists that they balked at having the added step of building wheels out-of-band. There was also a desire to lock the build back-end dependencies.
2023
After the failure of PEP 665, I decided to try to tackle the problem again entirely on my own. I didn't want to drag other poor souls into this again, and I thought that being opinionated might make things a bit easier (compromising to please everyone can lead to bad outcomes when a spec is large and complicated, as I knew this one would be).
I also knew I was going to need a proof-of-concept. That meant I needed code that could get metadata from an index server, resolve all the dependencies some set of projects needed (at least from a wheel), and at least know what I would install on any given platform. Unfortunately a lot of that didn't exist as some library on PyPI, so I had to write a bunch of it myself. Luckily I had already started the journey before with my mousebender project, but that only covered the metadata from an index server. I still needed to be able to read METADATA files from a wheel and do the resolution. Donald Stufft had taken a stab at the former, which I picked up and completed, leading to packaging.metadata. I then used resolvelib to create a resolver.
As such, there were only 54 posts of general discussion about lock files that year. The key outcome was that trying to lock build back-ends confused people too much, and so I dropped that feature request from my thinking.
2024
Come 2024, I was getting enough pieces together to actually have a proof-of-concept. And then uv came out in February. That complicated things a bit, as it did or planned to do things I had planned to do to help entice people to care about lock files. I also knew I couldn't keep up with the folks at Astral, as I didn't get to work on this full-time as a job (although I did get a lot more time starting in September of 2024).
I also became a parent in April, which initially gave me a chunk of time (babies sleep a lot for the first couple of months, so it gives you a bit of time). And so in July I posted the first draft of PEP 751. It was based on pdm.lock (which itself is based on poetry.lock). It covered sdists and wheels and was multi-use, all by recording the projects to install as a set, which made installation fast.
But uv's popularity was growing, and they had extra needs that PDM and Poetry (the other major participants in the PEP discussions) didn't. And so I wrote another draft where I pivoted from a set of projects to a graph of projects. But otherwise the original feature set was all there.
And then Hynek came by with what seemed like an innocuous request about making the version of a listed project optional instead of required (it had been required because the version is required in PKG-INFO in sdists and METADATA in wheels).

Unfortunately, the back-and-forth on that was enough to cause the Astral folks to want to scale the whole project back all the way to the requirements.txt v2 solution.

While I understood their reasoning and motivation, I would be lying if I said it wasn't disappointing. I felt we had been extremely close to reaching an agreement on the PEP up to that point, and having to walk back so much work and so many features did not exactly make me happy.
This was covered by 974 posts on discuss.python.org.
2025
But to get consensus among uv, Poetry, and PDM, I did a third draft of PEP 751. This went back to the set of projects to install, but was single-use only. I also became extremely stringent with timelines on when people could provide feedback as well as what would be required to add/remove anything. At this point I was fighting burn-out on this subject and my own wife had grown tired of the subject and seeing me feel dejected every time there was a setback. And so I set a deadline of the end of March to get things done, even if I had to drop features to make it happen.
And in February I thought we had reached an agreement on this third draft. But then Frost Ming, the maintainer of PDM, asked why we dropped multi-use lock files when they thought the opposition wasn't that strong.

And so, with another 150 posts and some very strict deadlines for feedback, we managed to bring back multi-use lock files and get PEP 751 accepted (with no changes!) on March 31.
2 PEPs and 6 years later ...
If you add in some ancillary discussions, the total number of posts on the subject of lock files since 2019 comes to over 1.8K. But as I write this post, less than 7 months since PEP 751 was accepted, PDM has already been updated to allow users to opt into using pylock.toml over pdm.lock (which shows that the lock file format works and meets the needs of at least one of the three key projects I tried to make happy). uv and pip also have some form of support.
I will say, though, that I think I'm done with major packaging projects (work has also had me move on from working on packaging since April, so any time at this point would be my free time, which is scant when you have a toddler). Between pyproject.toml and pylock.toml, I'm ready to move on to the next area of Python where I think I could be the most useful.
October 10, 2025
Python Engineering at Microsoft
Python in Visual Studio Code – October 2025 Release
We’re excited to announce that the October 2025 release of the Python extensions for Visual Studio Code is now available!
This release includes the following announcements:
- Python Environments extension improvements
- Enhanced testing workflow with Copy Test ID in gutter menu
- Shell startup improvements for Python environment activation
If you’re interested, you can check the full list of improvements in our changelogs for the Python and Pylance extensions.
Python Environments Extension Improvements
The Python Environments extension received several fixes and updates to enhance your Python development experience in VS Code. Highlights include improved performance and reliability when working with conda environments (now launching code directly without conda run), a smoother environment flow with Python versions now sorted in descending order for easier access to the latest releases, fixes for crashes when running Python files that use input(), and improvements to false-positive environment configuration warnings.
The extension also now automatically refreshes environment managers when expanding tree nodes, keeping your environment list up to date without any extra steps.
We appreciate the community feedback that helped identify and prioritize these improvements. Please continue to share your thoughts, suggestions and bug reports on the Python Environments GitHub repository as we continue rolling out this extension.
Enhanced Testing Workflow with Copy Test ID
We’ve improved the testing experience by adding a “Copy Test ID” option to the gutter icon context menu for test functions. This feature allows you to quickly copy test identifiers in pytest format directly from the editor gutter, making it easier to run specific tests from the command line or share test references with teammates.
Improved Shell Startup for Python Environment Activation
We have made improvements to shell startup to reduce issues where terminals created by GitHub Copilot weren’t properly activating Python virtual environments. With the new shell startup approach, you’ll get more reliable environment activation across terminal creation methods while reducing redundant permission prompts.
Additionally, virtual environment prompts such as (.venv) now appear correctly when PowerShell is activated through shell integration, and we have resolved issues with activation in WSL. To benefit from these improvements, set python-envs.terminal.autoActivationType to shellStartup in your VS Code settings.
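In settings.json, that one-line change looks like:

```json
{
  "python-envs.terminal.autoActivationType": "shellStartup"
}
```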
Other Changes and Enhancements
We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python and Jupyter Notebooks in Visual Studio Code. Some notable changes include:
- Enhanced contributor experience with new Copilot Chat instruction files that provide guidance for testing features and understanding VS Code components when contributing to the Python extension (#25473, #25477)
- Updated debugpy to version 1.8.16 (#795)
We would also like to extend special thanks to this month’s contributors:
- Morikko: Upgraded jedi-language-server to 0.45.1 (#25450)
- cnaples79: Fixed mypy diagnostics parsing from stderr in non-interactive mode (#375)
- renan-r-santos: Display activate button when a terminal is moved to the editor window (#764)
- lev-blit: Added py.typed to debugpy distributed package (#1960)
Try out these new improvements by downloading the Python extension and the Jupyter extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.
The post Python in Visual Studio Code – October 2025 Release appeared first on Microsoft for Python Developers Blog.
Peter Bengtsson
In Python, you have to specify the type and not rely on inference
In TypeScript, if you give a variable a default value, that variable is inferred to always have the type of the default. That's not the case with mypy and ty.
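A minimal sketch of the kind of case this refers to (the function names are invented, assuming mypy's default settings): a parameter default alone doesn't give the parameter a type, so only the annotated version is actually checked.

```python
def greet(name="world"):
    # Without an annotation, mypy treats this def as untyped:
    # the default "world" does not imply name: str, and the body
    # is not checked, so greet(42) slips through.
    return "Hello, " + name

def greet_typed(name: str = "world") -> str:
    # With the annotation, a call like greet_typed(42) is flagged.
    return "Hello, " + name

print(greet_typed())  # → Hello, world
```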
Brian Okken
Installing Python 3.14 on Mac or Windows
The easiest way to install Python 3.14 (or 3.13, 3.12, 3.11, 3.10, …)
I originally wrote this post in 2022 for Python 3.11.
From 2022 through 2024, I remained of the belief that installing from python.org was the best option for most people.
However, 2025 changed that for me, with uv and uv python supporting the installation of Python versions. It’s a really pleasant and clean way to keep Python versions up to date and install new versions.
Real Python
The Real Python Podcast – Episode #269: Python 3.14: Exploring the New Features
Python 3.14 is here! Christopher Trudeau returns to discuss the new version with Real Python team member Bartosz Zaczyński. This year, Bartosz coordinated the series of preview articles with members of the Real Python team and wrote the showcase tutorial, "Python 3.14: Cool New Features for You to Try." Christopher's video course, "What's New in Python 3.14", covers the topics from the article and shows the new features in action.
Brian Okken
Testing against Python 3.14
Python 3.14 is here.
If you haven’t done so, it’s time to update your projects to test against 3.14.
The following procedure is what I’m following for a handful of projects. Your process of course may be different if you use different tools.
Honestly, I’m partly writing this down so I don’t have to remember it all in a year when 3.15 rolls around.
Grab a local version of Python 3.14
Installing with uv
uv self update
uv python install 3.14
While it’s true that creating a virtual environment with uv venv .venv --python 3.14 will install 3.14 if it isn’t already there, you still gotta run uv self update. So I just usually install it while I’m at it.
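Once the interpreter is available, CI usually needs just one more matrix entry. An illustrative GitHub Actions snippet (the job name and the `.[test]` extra are assumptions; adapt them to your project):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.11", "3.12", "3.13", "3.14"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: python -m pip install ".[test]" && pytest
```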
Django Weblog
2025 Malcolm Tredinnick Memorial Prize awarded to Tim Schilling
We are pleased to announce that the 2025 Malcolm Tredinnick Memorial Prize has been awarded to Tim Schilling!
Tim embodies the values that define the Django community: generosity, respect, thoughtfulness, and a deep commitment to supporting others. He is a tireless community leader who creates spaces where newcomers thrive ❤️, exactly in the spirit of our prize and Malcolm Tredinnick's work.
About Tim
As a co-founder of Djangonaut Space, Tim has encouraged countless people to take their first steps as contributors, through the overall program but also specific initiatives like co-writing sessions, Space Reviewers, and Cosmic Contributors. Many community members trace their involvement in Django back to Tim's encouragement and support.
Beyond Djangonaut Space, Tim serves on the Django Steering Council, is one of the founders of Django Commons, and is an active member of DEFNA, supporting DjangoCon US. He is known for thoughtful feedback, amplifying others' work, and encouraging people to step forward for leadership roles.
Quotes
Here is some of what people said about Tim's involvement with the community:
Tim exemplifies all the values the Django community is known for. He is incredibly supportive of newcomers, respectful, and generous. Always ready to give constructive feedback and lend a hand where needed, be it through a pull request review or the many Django-related forums he participates in, Tim is a natural leader, someone that the community looks up to.
â Felipe Villegas
Every time he spots a chance to help, he doesn't need to think twice. He's a welcoming person not only with newcomers, as in Djangonaut Space, but also with maintainers through Django Commons. Tim is also very creative, finding different ways to contribute. For example, inside the Djangonaut Space community, the "Space Reviewers" team was formed to host a live stream to help people become reviewers by sharing the process and also actually reviewing a ticket that needs some attention. The Django community is much more than blessed to have Tim, who exemplifies dedication, respect, and support for others.
â Raffaella
Tim just has this way of making sure newcomers feel welcome and get the support they need. He doesn't just talk about community building - he actually does the work to make it happen.
â Abe Hanoka
Tim is a thoughtful and caring community leader. He engages with newcomers in a warm and welcoming manner. In his roles as the admin for Djangonaut Space, the admin of Django Commons, and a member of the Steering Council, he strategically identifies the gaps in the community, collaborates with other members to develop an action plan, and follows through with the execution. He's doing some of the hardest work out there. Not only is Tim nurturing newcomers, he's also growing the community by bridging the gap between newcomers and experienced open source contributors. Tim's actions speak louder than words.
â Lilian
Other nominees
Other nominations for this year included:
- Adam Hill and Sangeeta Jadoonanan
- Anna Makarudze
- Baptiste Mispelon
- Bhuvnesh Sharma
- Carlton Gibson
- David Smith
- Ester Beltrami
- Jake Howard
- Lilian
- Mike Edmunds
- Mike Edwards
- Noah Maina
- Raymond Penners
- Simon Charette
- Will Vincent
Malcolm would be very proud of the legacy he has fostered in our community. Each year we receive many nominations, and it is always hard to pick the winner. If your nominee didn't make it this year, you can always nominate them again next year!
Congratulations Tim on the well-deserved honor!
October 09, 2025
Everyday Superpowers
Why I switched from HTMX to Datastar
In 2022, David Guillot delivered an inspiring DjangoCon Europe talk, showcasing a web app that looked and felt as dynamic as a React app. Yet he and his team had done something bold. They converted it from React to HTMX, cutting their codebase by almost 70% while significantly improving its capabilities.
Since then, teams everywhere have discovered the same thing: turning a single-page app into a multi-page hypermedia app often slashes lines of code by 60% or more while improving both developer and user experience.
I saw similar results when I switched my projects from HTMX to Datastar. It was exciting to reduce my code while building real-time, multi-user applications without needing WebSockets or complex frontend state management.
While preparing my FlaskCon 2025 talk, I hit a wall. I was juggling HTMX and AlpineJS to keep pieces of my UI in sync, but they fell out of step. I lost hours debugging why my component wasn't updating. Neither library communicates with the other. Since they are different libraries created by different developers, you are the one responsible for making them work together.
Managing the dance to initialize components at various times and orchestrating events was causing me to write more code than I wanted to and spend more time than I could spare to complete tasks.
Knowing that Datastar had the capability of both libraries with a smaller download, I thought Iâd give it a try. It handled it without breaking a sweat, and the resulting code was much easier to understand.
I appreciate that thereâs less code to download and maintain. Having a library handle all of this in under 11 KB is great for improving page load performance, especially for users on mobile devices. The less you need to download, the better off you are.
But that's just the starting point.
As I incorporated Datastar into my project at work, I began to appreciate Datastarâs API. It feels significantly lighter than HTMX. I find that I need to add fewer attributes to achieve the desired results.
For example, most interactions with HTMX require you to create an attribute to define the URL to hit, what element to target with the response, and then you might need to add more to customize how HTMX behaves, like this:
<a hx-target="#rebuild-bundle-status-button"
hx-select="#rebuild-bundle-status-button"
hx-swap="outerHTML"
hx-trigger="click"
hx-get="/rebuild/status-button"></a>
One doesn't always need all of these, but I find it common to have two or three attributes every time[2]{And then there are the times I need to remember to look up the ancestry chain to see if any attribute changes the way I'm expecting things to work. Those are confusing bugs when they happen!}.
With Datastar, I regularly use just one attribute, like this:
<a data-on-click="@get('/rebuild/status-button')"></a>
This gives me less to think about when I return months later and need to recall how this works.
The primary difference between HTMX and Datastar is that HTMX is a front-end library that advances the HTML specification, while Datastar is a server-driven library that aims to create high-performance, web-native, live-updating web applications.
In HTMX, you describe its behavior by adding attributes to the element that triggers the request, even if it updates something far away on the page. That's powerful, but it means your logic is scattered across multiple layers. Datastar flips that: the server decides what should change, keeping all your update logic in one place.
To cite an example from HTMX's documentation:
<div>
<div id="alert"></div>
<button hx-get="/info"
hx-select="#info-details"
hx-swap="outerHTML"
hx-select-oob="#alert">
Get Info!
</button>
</div>
When the button is pressed, it sends a GET request to `/info`, replaces the button with the element in the response that has the ID 'info-details', and then retrieves the element in the response with the ID 'alert', replacing the element with the same ID on the page.
This is a lot for that button element to know. To author this code, you need to know what information you're going to return from the server, which is decided outside of editing the HTML. This is where HTMX loses the "locality of behavior" I like so much.
Datastar, on the other hand, expects the server to define the behavior, and it works better.
To replicate the behavior above, you have options. The first option keeps the HTML similar to above:
<div>
<div id="alert"></div>
<button id="info-details"
data-on-click="@get('/info')">
Get Info!
</button>
</div>
In this case, the server can return an HTML string with two root elements that have the same IDs as the elements they're updating:
<p id="info-details">These are the details you are looking for…</p>
<div id="alert">Alert! This is a test.</div>
I love this option because itâs simple and performant.
A better option would change the HTML to treat it as a component.
What is this component? It appears to be a way for the user to get more information about a specific item.
What happens when the user clicks the button? It seems like either the information appears or there is no information to appear, and instead we render an error. Either way, the component becomes static.
Maybe we could split the component into each state, first, the placeholder:
<!-- info-component-placeholder.html -->
<div id="info-component">
<button data-on-click="@get('/product/{{product.id}}/info')">
Get Info!
</button>
</div>
Then the server could render the information the user requests…
<!-- info-component-get.html -->
<div id="info-component">
{% if alert %}<div id="alert">{{ alert }}</div>{% endif %}
<p>{{product.additional_information}}</p>
</div>
…and Datastar will update the page to reflect the changes.
This particular example is a little wonky, but I hope you get the idea. Thinking at a component level is better as it prevents you from entering an invalid state or losing track of the user's state.
One of the amazing things from David Guillot's talk is how his app updated the count of favored items even though that element was very far away from the component that changed the count.
David's team accomplished that by having HTMX trigger a JavaScript event, which in turn triggered the remote component to issue a GET request to update itself with the most up-to-date count.
With Datastar, you can update multiple components at once, even in a synchronous function.
If we have a component that allows someone to add an item to a shopping cart:
<form id="purchase-item"
      data-on-submit="@post('/add-item', {contentType: 'form'})">
<input type=hidden name="cart-id" value="{{cart.id}}">
<input type=hidden name="item-id" value="{{item.id}}">
<fieldset>
<button type=button data-on-click="$quantity -= 1">-</button>
<label>Quantity
<input name=quantity type=number data-bind-quantity value=1>
</label>
<button type=button data-on-click="$quantity += 1">+</button>
</fieldset>
<button type=submit>Add to cart</button>
{% if msg %}
<p class=message>{{msg}}</p>
{% endif %}
</form>
And another one that shows the current count of items in the cart:
<div id="cart-count">
<svg viewBox="0 0 10 10" xmlns="http://www.w3.org/2000/svg">
<use href="#shoppingCart" />
</svg>
{{count}}
</div>
Then a developer can update them both in the same request. This is one way it could look in Django:
from datastar_py.consts import ElementPatchMode
from datastar_py.django import (
    DatastarResponse,
    ServerSentEventGenerator as SSE,
)
from django.template.loader import render_to_string


def add_item(request):
    # skipping all the important state updates
    return DatastarResponse([
        SSE.patch_elements(
            render_to_string('purchase-item.html', context=dict(cart=cart, item=item, msg='Item added!'))
        ),
        SSE.patch_elements(
            render_to_string('cart-count.html', context=dict(count=item_count))
        ),
    ])
Being a part of the Datastar Discord, I appreciate that Datastar isn't just a helper script. It's a philosophy about building apps with the web's own primitives, letting the browser and the server do what they're already great at.
Where HTMX is trying to push the HTML spec forward, Datastar is more interested in promoting the adoption of web-native features, such as CSS view transitions, Server-Sent Events, and web components, where appropriate.
This has been a massive eye-opener for me, as I've long wanted to leverage each of these technologies, and now I'm seeing the benefits.
One of the biggest wins I achieved with Datastar was refactoring a complicated AlpineJS component and extracting a simple web component that I reused in multiple places. (I'll talk more about this in an upcoming post.)
I especially appreciate this because there are times when it's best to rely on JavaScript to accomplish a task. But it doesn't mean you have to reach for a tool like React to achieve it. Creating custom HTML elements is a great pattern to accomplish tasks with high locality of behavior and the ability to reuse them across your app.
However, Datastar provides you with even more capabilities.
Apps built with collaboration as a first-class feature stand out from the rest, and Datastar is up to the challenge.
Most HTMX developers accomplish this either by "pulling" information from the server (polling every few seconds) or by writing custom WebSocket code, which adds complexity.
Datastar uses a simple web technology called Server-Sent Events (SSE) to allow the server to "push" updates to connected clients. When something changes, such as a user adding a comment or a status change, the server can immediately update browsers with minimal additional code.
You can now build live dashboards, admin panels, and collaborative tools without crafting custom JavaScript. Everything flows from the server, through HTML.
Additionally, if a client's connection is interrupted, the browser will automatically attempt to reconnect without requiring additional code, and it can even tell the server, "This is the last event I received." It's wonderful.
Being a part of the Datastar community on Discord has helped me appreciate the Datastar vision of building web apps: push-based UI updates, reduced complexity, and tools like web components to handle more complex situations locally. It's common for community members to help newcomers realize they're overcomplicating things.
Here are some of the tips I've picked up:
- Don't be afraid to re-render the whole component and send it down the pipe. It's easier, it probably won't affect performance much, you get better compression ratios, and browsers are incredibly fast at parsing HTML strings.
- The server is the source of truth and is more powerful than the browser. Let it handle the majority of the state. You probably don't need reactive signals as much as you think you do.
- Web components are great for encapsulating logic into a custom element with high locality of behavior. A great example of this is the star field animation in the header of the Datastar website. The `<ds-starfield>` element encapsulates all the code to animate the star field and exposes three attributes to change its internal state. Datastar drives the attributes whenever the range input changes or the mouse moves over the element.
But what I'm most excited about are the possibilities that Datastar enables. The community is routinely creating projects that push well beyond the limits developers hit with other tools.
The examples page includes a database monitoring demo that leverages Hypermedia to significantly improve the speed and memory footprint of a demo presented at a JavaScript conference.
The one million checkbox experiment was too much for the server it started on. Anders Murphy used Datastar to create one billion checkboxes on an inexpensive server.
But the one that most inspired me was a web app that displayed data from every radar station in the United States. When a blip changed on a radar, the corresponding dot in the UI would change within 100 milliseconds. This means that *over 800,000 points are being updated per second*. Additionally, the user could scrub back in time for up to an hour (with under a 700 millisecond delay). Can you imagine this as a Hypermedia app? This is what Datastar enables.
I'm still in what I consider my discovery phase of Datastar. Replacing the standard HTMX functionality of ajaxing updates to a UI was quick and easy to implement. Now I'm learning and experimenting with different patterns to use Datastar to achieve more and more.
For decades, I've been interested in ways to provide better user experiences with real-time updates, and I love that Datastar enables me to do push-based updates, even in synchronous code.
HTMX filled me with so much joy when I started using it. But I haven't felt like I lost anything since switching to Datastar. In fact, I feel like I've gained so much more.
If you've ever felt the joy of using HTMX, I bet you'll feel the same leap again with Datastar. It's like discovering what the web was meant to do all along.
Read more...
Mike Driscoll
An Intro to Python 3.14's New Features
Python 3.14 came out this week and has many new features and improvements. For the full details behind the release, the documentation is the best source. However, you will find a quick overview of the major changes here.
As with most Python releases, backwards compatibility is rarely broken. However, there has been a push to clean up the standard library, so be sure to check out what was removed and what has been deprecated. In general, most of the items in these lists are things the majority of Python users do not use anyway.
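If you want to audit your own code against those removal and deprecation lists, one stdlib-only approach is to escalate `DeprecationWarning` to an error during test runs (the deprecated function below is made up for illustration):

```python
import warnings

def uses_deprecated_api():
    # Stand-in for a call into an API slated for removal.
    warnings.warn("this API is deprecated", DeprecationWarning, stacklevel=2)
    return "result"

# In a test run, escalate deprecations so they can't be silently ignored.
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        uses_deprecated_api()
        outcome = "no deprecations found"
    except DeprecationWarning as exc:
        outcome = f"deprecated usage found: {exc}"

print(outcome)
```

The command-line equivalent is running your suite with `python -W error::DeprecationWarning`.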
But enough with that. Let’s learn about the big changes!
Release Changes in 3.14
The biggest change to come to Python in a long time is the free-threaded build of Python. While free-threaded Python existed in 3.13, it was considered experimental at that time. Now in 3.14, free-threading is officially supported, but still optional.
Free-threaded Python is a build option: you can turn it on when you build Python. There is still debate about enabling free-threading by default, but that had not been decided at the time of writing.
Another new change in 3.14 is an experimental just-in-time (JIT) compiler for macOS and Windows release binaries. The JIT compiler is currently NOT recommended for production. If you'd like to test it out, you can set PYTHON_JIT=1
as an environment variable. When running with the JIT enabled, you may see Python perform 10% slower or up to 20% faster, depending on the workload.
Note that native debuggers and profilers (gdb and perf) are not able to unwind JIT frames, although Python's own pdb and profile modules work fine with them. Free-threaded builds do not support the JIT compiler, though.
The last item of note is that PGP (Pretty Good Privacy) signatures are no longer provided for Python 3.14 or newer versions. Instead, users must use Sigstore verification materials. Releases have been signed with Sigstore since Python 3.11.
Python Interpreter Improvements
There are a slew of new improvements to the Python interpreter in 3.14. Here is a quick listing along with links:
- PEP 649 and PEP 749: Deferred evaluation of annotations
- PEP 734: Multiple interpreters in the standard library
- PEP 750: Template strings
- PEP 758: Allow except and except* expressions without brackets
- PEP 765: Control flow in finally blocks
- PEP 768: Safe external debugger interface for CPython
- A new type of interpreter
- Free-threaded mode improvements
- Improved error messages
- Incremental garbage collection
Let’s talk about the top three a little. Deferred evaluation of annotations refers to type annotations. In the past, the type annotations that are added to functions, classes, and modules were evaluated eagarly. That is no longer the case. Instead, the annotations are stored  in special-purpose annotate functions and evaluated only when necessary with the exception of if from __future__ import annotations
is used at the top of the module.
the reason for this change it to improve performance and usability of type annotations in Python. You can use the new annotationlib
module to inspect deferred annotations. Here is an example from the documentation:
>>> from annotationlib import get_annotations, Format
>>> def func(arg: Undefined):
...     pass
>>> get_annotations(func, format=Format.VALUE)
Traceback (most recent call last):
...
NameError: name 'Undefined' is not defined
>>> get_annotations(func, format=Format.FORWARDREF)
{'arg': ForwardRef('Undefined', owner=<function func at 0x...>)}
>>> get_annotations(func, format=Format.STRING)
{'arg': 'Undefined'}
Another interesting change is the addition of multiple interpreters in the standard library. The complete formal definition of this new feature can be found in PEP 734. This feature has been available in Python for more than 20 years, but only through the C API. Starting in Python 3.14, you can now use the new concurrent.interpreters
module.
Why would you want to use multiple Python interpreters?
- They support a more human-friendly concurrency model
- They provide true multi-core parallelism
These interpreters provide isolated “processes” that run in parallel with no sharing by default.
Another feature to highlight is template string literals (t-strings). Full details can be found in PEP 750. Brett Cannon, a core developer of the Python language, posted a good introductory article about these new t-strings on his blog. A template string, or t-string, is a new mechanism for custom string processing. However, unlike an f-string, a t-string returns an object that represents both the static and the interpolated parts of the string.
Here’s a quick example from the documentation:
>>> variety = 'Stilton'
>>> template = t'Try some {variety} cheese!'
>>> type(template)
<class 'string.templatelib.Template'>
>>> list(template)
['Try some ', Interpolation('Stilton', 'variety', None, ''), ' cheese!']
You can use t-strings to sanitize SQL, improve logging, implement custom, lightweight DSLs, and more!
Standard Library Improvements
Python’s standard library has several significant improvements. Here are the ones highlighted by the Python documentation:
- PEP 784: Zstandard support in the standard library
- Asyncio introspection capabilities
- Concurrent safe warnings control
- Syntax highlighting in the default interactive shell, and color output in several standard library CLIs
If you do much compression in Python, then you will be happy that Python has added Zstandard support in addition to the zip and tar archive support that has been there for many years.
Compressing a string using Zstandard can be accomplished with only a few lines of code:
from compression import zstd
import math

data = str(math.pi).encode() * 20
compressed = zstd.compress(data)
ratio = len(compressed) / len(data)
print(f"Achieved compression ratio of {ratio}")
Another neat addition to the Python standard library is asyncio introspection via a new command-line interface. You can now use the following command to introspect:
- python -m asyncio ps PID
- python -m asyncio pstree PID
The ps
sub-command will inspect the given process ID and display information about the current asyncio tasks. You will see a task table as output, which contains a listing of all tasks, their names and coroutine stacks, and which tasks are awaiting them.
The pstree
sub-command will fetch the same information, but it renders it as a visual async call tree instead, which shows the coroutine relationships in a hierarchical format. The pstree
command is especially useful for debugging stuck or long-running async programs.
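Those commands inspect a separate, running process. For a rough in-process equivalent, you can snapshot the live tasks yourself with asyncio.all_tasks(), which has been available as a module-level function since Python 3.7 (the task names below are made up for the demo):

```python
import asyncio

async def worker(delay: float) -> None:
    await asyncio.sleep(delay)

async def main() -> list[str]:
    # Named tasks show up nicely in any task listing.
    tasks = [
        asyncio.create_task(worker(0.01), name="fetch-user"),
        asyncio.create_task(worker(0.01), name="fetch-cart"),
    ]
    # Snapshot every live task, similar in spirit to `python -m asyncio ps PID`,
    # excluding the task running this coroutine.
    names = sorted(
        t.get_name() for t in asyncio.all_tasks() if t is not asyncio.current_task()
    )
    await asyncio.gather(*tasks)
    return names

names = asyncio.run(main())
print(names)
```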
One other neat update to Python is that the default REPL shell now highlights Python syntax. You can change the color theme using an experimental API, _colorize.set_theme(), which can be called interactively or in the PYTHONSTARTUP
script. The REPL also supports import auto-completion, which means you can start typing the name of a module and then hit Tab to complete it.
Wrapping Up
Python 3.14 looks to be an exciting release with many performance improvements. The core developers have also laid more groundwork for continuing to improve Python's speed.
The latest version of Python has many other improvements to modules that aren't listed here. To see all the nitty-gritty details, check out the What's New in Python 3.14 page in the documentation.
Drop a comment to let us know what you think of Python 3.14 and what you are excited to see in upcoming releases!
The post An Intro to Python 3.14's New Features appeared first on Mouse Vs Python.
Python Bytes
#452 pi py-day (or is it py pi-day?)
Topics covered in this episode:
- Python 3.14
- Free-threaded Python Library Compatibility Checker (https://ft-checker.com)
- Claude Sonnet 4.5 (https://www.anthropic.com/news/claude-sonnet-4-5)
- Python 3.15 will get Explicit lazy imports
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=j74eRaSEvIo

About the show
Sponsored by DigitalOcean: pythonbytes.fm/digitalocean-gen-ai. Use code DO4BYTES and get $200 in free credit.
Connect with the hosts:
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.
Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Brian #1: Python 3.14
- Released on Oct 7
- What's new in Python 3.14: https://docs.python.org/3.14/whatsnew/3.14.html
- Just a few of the changes:
  - PEP 750: Template string literals
  - PEP 758: Allow except and except* expressions without brackets
  - Improved error messages
  - Default interactive shell now highlights Python syntax and supports auto-completion
  - argparse: better support for python -m module, and a new suggest_on_error parameter for "maybe you meant ..." support
  - python -m calendar now highlights today's date
- Plus so much more

Michael #2: Free-threaded Python Library Compatibility Checker (https://ft-checker.com)
- by Donghee Na (https://github.com/corona10)
- The app checks compatibility of top PyPI libraries with CPython 3.13t and 3.14t, helping developers understand how the Python ecosystem adapts to upcoming Python versions.
- It's still pretty red, let's get in the game everyone!

Michael #3: Claude Sonnet 4.5 (https://www.anthropic.com/news/claude-sonnet-4-5)
- Top programming model (even above Opus 4.1)
- Shows large improvements in reducing concerning behaviors like sycophancy, deception, power-seeking, and the tendency to encourage delusional thinking
- Anthropic is releasing the Claude Agent SDK, the same infrastructure that powers Claude Code, making it available for developers to build their own agents, along with major upgrades including checkpoints, a VS Code extension, and new context editing features
- And Claude Sonnet 4.5 is available in PyCharm too (https://blog.jetbrains.com/ai/2025/09/introducing-claude-agent-in-jetbrains-ides/)

Brian #4: Python 3.15 will get Explicit lazy imports (https://pep-previews--4622.org.readthedocs.build/pep-0810/)
- Discussion on discuss.python.org: https://discuss.python.org/t/pep-810-explicit-lazy-imports/104131
- This PEP introduces syntax for lazy imports as an explicit language feature:
  lazy import json
  lazy from json import dumps
- BTW, lazy loading in fixtures is a super easy way to speed up test startup times.

Extras
Brian:
- Music video made in Python (https://www.youtube.com/watch?v=4hGCwTqRz0w), from Patrick of the band "Friends in Real Life"
- Source code: https://gitlab.com/low-capacity-music/r9-legends/
Michael:
- New article: Thanks AI (https://mkennedy.codes/posts/goodbye-wordpress-thanks-ai/)
- Lots of updates for content-types (https://github.com/mikeckennedy/content-types)
- Dramatically improved search on Python Bytes (example: https://pythonbytes.fm/search?q=wheel, use the filter toggle to see top hits)
- Talk Python in Production is out and for sale: https://talkpython.fm/books/python-in-production/buy

Joke: You do estimates? (https://x.com/pr0grammerhum0r/status/1971424514966683683)
Trey Hunner
Handy Python REPL Modifications
I find myself in the Python REPL a lot.
I open up the REPL to play with an idea, to use Python as a calculator or quick and dirty text parsing tool, to record a screencast, to come up with a code example for an article, and (most importantly for me) to teach Python. My Python courses and workshops are based largely around writing code together to guess how something works, try it out, and repeat.
As I’ve written about before, you can add custom keyboard shortcuts to the new Python REPL (since 3.13) and customize the REPL syntax highlighting (since 3.14). If you spend time in the Python REPL and wish it behaved a little more like your favorite editor, these tricks can come in handy.
I have added custom keyboard shortcuts to my REPL and other modifications to help me more quickly write and edit code in my REPL. I’d like to share some of the modifications that I’ve found helpful in my own Python REPL.
Creating a PYTHONSTARTUP file
If you want to run Python code every time an interactive prompt (a REPL) starts, you can make a PYTHONSTARTUP file.
When Python launches an interactive prompt, it checks for a PYTHONSTARTUP
environment variable.
If it finds one, it treats it as a filename that contains Python code and it runs all the code in that file, as if you had copy-pasted the code into the REPL.
So all of the modifications I have made to my Python REPL rely on this PYTHONSTARTUP
variable in my ~/.zshenv
file:
export PYTHONSTARTUP=~/.startup.py
If you use bash, you’ll put that in your ~/.bashrc
file.
If you’re on Windows you’ll need to set an environment variable the Windows way.
With that variable set, I can now create a ~/.startup.py
file that has Python code in it.
That code will automatically run every time I launch a new Python REPL, whether within a virtual environment or outside of one.
My REPL keyboard shortcuts
The quick summary of my current modifications are:
- Pressing Home moves to the first character in the code block
- Pressing End moves to the last character in the code block
- Pressing Alt+M moves to the first character on the current line
- Pressing Shift+Tab removes common indentation from the code block
- Pressing Alt+Up swaps the current line with the line above it
- Pressing Alt+Down swaps the current line with the line below it
- Pressing Ctrl+N inserts a specific list of numbers
- Pressing Ctrl+F inserts a specific list of strings
If you’ve read my Python REPL shortcuts article, you know that we can use Ctrl+A to move to the beginning of the line and Ctrl+E to move to the end of the line. I already use those instead of the Home and End keys, so I decided to rebind Home and End to do something different.
The Alt+M key combination is essentially the same as Alt+M
in Emacs or ^
in Vim. I usually prefer to move to the beginning of the non-whitespace in a line rather than to the beginning of the entire line.
The Shift+Tab functionality is basically a fancy wrapper around using textwrap.dedent
: it dedents the current code block while keeping the cursor over the same character it was at before.
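If you haven't used it, textwrap.dedent simply strips the largest common leading whitespace from every line, which is exactly what pasted-in code blocks usually need:

```python
import textwrap

# A code block as it might appear indented inside a REPL paste.
block = """\
    for i in range(3):
        print(i)
"""

print(textwrap.dedent(block))  # common 4-space indent removed
```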
The Ctrl+N and Ctrl+F shortcuts make it easier for me to grab an example data structure to work with when teaching.
In addition to the above changes, I also modify my color scheme to work nicely with my Solarized Light color scheme in Vim.
I created a pyrepl-hacks library for this
My PYTHONSTARTUP file became so messy that I ended up creating a pyrepl-hacks library to help me with these modifications.
My PYTHONSTARTUP file now looks pretty much like this:
[37-line startup file not captured in this syndicated copy]
That’s pretty short!
But wait… won’t this fail unless pyrepl-hacks is installed in every virtual environment and installed globally for every Python version on my machine?
That’s where that sys.path.append
trick comes in handy…
Wait… let’s acknowledge the dragons 🐉
At this point I’d like to pause to note that all of this relies on using an implementation detail of Python that is deliberately undocumented because it is not designed to be used by end users.
The above code all relies on the _pyrepl
module that was added in Python 3.13 (and optionally the _colorize
module that was added in Python 3.14).
When I run a new future version of Python (for example Python 3.15) this solution may break.
I’m willing to take that risk, as I know that I can always unset my shell’s PYTHONSTARTUP
variable or clear out my startup file.
So, just be aware… here be (private undocumented implementation detail) dragons.
Monkey patching sys.path
to allow importing pyrepl_hacks
I didn’t install pyrepl-hacks the usual way. Instead, I installed it in a very specific location.
I created a ~/.pyhacks
directory and then installed pyrepl-hacks
into that directory:
mkdir -p ~/.pyhacks
python3 -m pip install --target ~/.pyhacks pyrepl-hacks
In order for the pyrepl_hacks
Python package to work, it needs to be available within every Python REPL I might launch.
Normally that would mean that it needs to be installed in every virtual environment that Python runs within.
This trick avoids that constraint.
When Python tries to import a module, it iterates through the sys.path
directory list.
Any Python packages found within any of the sys.path
directories may be imported.
So monkey patching sys.path
within my PYTHONSTARTUP file allows pyrepl_hacks
to be imported in every Python interpreter I launch:
import sys
from pathlib import Path
sys.path.append(str(Path.home() / ".pyhacks"))
With those 3 lines (or something like them) placed in my PYTHONSTARTUP file, all interactive Python interpreters I launch will be able to import modules that are located in my ~/.pyhacks
directory.
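The underlying mechanism is easy to demonstrate end to end: any directory appended to sys.path becomes importable (the module name and message here are made up for the demo):

```python
import sys
import tempfile
from pathlib import Path

# Write a throwaway module into a directory Python knows nothing about...
extra_dir = Path(tempfile.mkdtemp())
(extra_dir / "greeting_mod.py").write_text("MESSAGE = 'hello from a sys.path directory'\n")

# ...then put that directory on sys.path, just like the PYTHONSTARTUP trick.
sys.path.append(str(extra_dir))

import greeting_mod  # found via the appended directory

print(greeting_mod.MESSAGE)
```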
Creating your own custom REPL commands
That’s pretty neat. But what if you want to invent your own REPL commands?
Well, the bind
utility I’ve created in the pyrepl_hacks
module can be used as a decorator for that.
This will make Ctrl+X followed by Ctrl+R insert import subprocess
followed by subprocess.run("", shell=True)
with the cursor positioned in between the double quotes after it’s all inserted:
[10-line bind example not captured in this syndicated copy]
You can read more about the ins and outs of the pyrepl-hacks package in the readme file.
pyrepl-hacks is just a fancy wrapper
The pyrepl-hacks package is really just a fancy wrapper around Python’s _pyrepl
and _colorize
modules.
Why did I make a whole package and then modify my sys.path
to use it, when I could have just used _pyrepl
directly?
Three reasons:
- To make creating new commands a bit easier (functions can be used instead of classes)
- To make the key bindings look a bit nicer (I prefer "Ctrl+M" over r"\C-M")
- To hide my hairy hacks behind a shiny API ✨
Before I made pyrepl-hacks, I implemented these commands directly within my PYTHONSTARTUP file by reaching into the internals of _pyrepl
directly.
My PYTHONSTARTUP file before pyrepl-hacks was over 100 lines longer.
Try pyrepl-hacks and leave feedback
My hope is that the pyrepl-hacks library will be obsolete one day.
Eventually the _pyrepl
module might be renamed to pyrepl
(or maybe just repl
?) and it will have a well-documented friendly-ish public interface.
In the meantime, I plan to maintain pyrepl-hacks. As Python 3.15 is developed, I’ll make sure it continues to work. And I may add more useful commands if I think of any.
If you hack your own REPL, I’d love to hear what modifications you come up with. And if you have thoughts on how to improve pyrepl-hacks, please open an issue or get in touch.
Also, if you use Windows, please help me confirm whether certain keys work on Windows. Thanks!
Contributions and ideas welcome!
October 08, 2025
Real Python
Python 3.14: Cool New Features for You to Try
Python 3.14 was released on October 7, 2025. While many of its biggest changes happen under the hood, there are practical improvements you'll notice right away. This version sharpens the language's tools, boosts ergonomics, and opens doors to new capabilities without forcing you to rewrite everything.
In this tutorial, you'll explore features like:
- A smarter, more colorful REPL experience
- Error messages that guide you toward fixes
- Safer hooks for live debugging
- Template strings (t-strings) for controlled interpolation
- Deferred annotation evaluation to simplify typing
- New concurrency options like subinterpreters and a free-threaded build
If you want to try out the examples, make sure you run Python 3.14 or a compatible preview release.
Note: On Unix systems, when you create a new virtual environment with the new Python 3.14, you'll spot a quirky alias:
(venv) $ 𝜋thon
Python 3.14.0 (main, Oct 7 2025, 17:32:06) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
This feature is exclusive to the 3.14 release as a tribute to the mathematical constant π (pi), whose rounded value, 3.14, is familiar to most people.
As you read on, you'll find detailed examples and explanations for each feature. Along the way, you'll get tips on how they can streamline your coding today and prepare you for what's coming next.
Get Your Code: Click here to download the free sample code that you'll use to learn about the new features in Python 3.14.
Take the Quiz: Test your knowledge with our interactive "Python 3.14: Cool New Features for You to Try" quiz. You'll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Python 3.14: Cool New Features for You to Try
In this quiz, you'll test your understanding of the new features introduced in Python 3.14. By working through this quiz, you'll review the key updates and improvements in this version of Python.
Developer Experience Improvements
Python 3.14 continues the trend of refining the language's ergonomics. This release enhances the built-in interactive shell with live syntax highlighting and smarter autocompletion. It also improves syntax and runtime error messages, making them clearer and more actionable. While these upgrades don't change the language itself, they boost your productivity as you write, test, and debug code.
Even Friendlier Python REPL
Python's interactive interpreter, also known as the REPL, has always been the quickest way to try out a snippet of code, debug an issue, or explore a third-party library. It can even serve as a handy calculator or a bare-bones data analysis tool. Although your mileage may vary, you typically start the REPL by running the python
command in your terminal without passing any arguments:
$ python
Python 3.14.0 (main, Oct 7 2025, 17:32:06) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
The humble prompt, which consists of three chevrons (>>>
), invites you to type a Python statement or an expression for immediate evaluation. As soon as you press Enter, you'll instantly see the computed result without having to create any source files or configure a project workspace. After each result, the familiar prompt returns, ready to accept your next command:
>>> 2 + 2
4
>>>
For years, the stock Python REPL remained intentionally minimal. It was fast and reliable, but lacked the polish of alternative shells built by the community, like IPython, ptpython, or bpython.
That started to change in Python 3.13, which adopted a modern REPL based on PyREPL borrowed from the PyPy project. This upgrade introduced multiline editing, smarter history browsing, and improved Tab completion, while keeping the simplicity of the classic REPL.
Python 3.14 takes the interactive shell experience to the next level, introducing two new features:
- Syntax highlighting: Real-time syntax highlighting with configurable color themes
- Code completion: Autocompletion of module names inside import statements
Together, these improvements make the built-in REPL feel closer to a full-fledged code editor while keeping it lightweight and always available. The Python REPL now highlights code as you type. Keywords, strings, comments, numbers, and operators each get their own color, using ANSI escape codes similar to those that already color prompts and tracebacks in Python 3.13:
Python 3.14 Syntax Highlighting in the REPL
Notice how the colors shift as you type, once the interactive shell has enough context to parse your input. In particular, tokens such as the underscore (_) are recognized as soft keywords only in the context of pattern matching, and Python highlights them in a distinct color to set them apart. This colorful output also shows up in the Python debugger (pdb) when you set a breakpoint() on a given line of code, for example.
Additionally, a few of the standard-library modules can now take advantage of this new syntax-coloring capability of the Python interpreter:
Colorful Output in Python 3.14's Standard-Library Modules
The `argparse` module displays a colorful help message, the `calendar` module highlights the current day, and the `json` module pretty-prints and colorizes JSON documents. Finally, the `unittest` module provides colorful output for failed assertions to make reading and diagnosing them easier.
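A quick way to try the colorized standard-library output (assuming a Python 3.14 install available as `python` on your PATH) is to pipe JSON through `json.tool`. Color only appears when output goes to a terminal, and the usual `PYTHON_COLORS=0` or `NO_COLOR` environment variables turn it off:

```shell
# Pretty-print a JSON document; on a Python 3.14 terminal the output is colorized.
echo '{"name": "Python", "version": "3.14"}' | python -m json.tool

# Disable color explicitly (also useful when capturing output in scripts):
echo '[1, 2, 3]' | PYTHON_COLORS=0 python -m json.tool
```

The same environment variables control the REPL and traceback coloring introduced in Python 3.13.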
Read the full article at https://realpython.com/python314-new-features/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Python 3.14: Cool New Features for You to Try
In this quiz, you’ll test your understanding of Python 3.14: Cool New Features for You to Try. By working through this quiz, you’ll review the key updates and improvements in this version of Python.
October 07, 2025
PyCoder's Weekly
Issue #703: PEP 8, Error Messages in Python 3.14, splitlines(), and More (Oct. 7, 2025)
#703 – OCTOBER 7, 2025
View in Browser »
Python Violates PEP 8
PEP 8 outlines the preferred coding style for Python. It often gets wielded as a cudgel in online conversations. This post talks about what PEP 8 says and where it often gets ignored.
AL SWEIGART
Python 3.14 Preview: Better Syntax Error Messages
Python 3.14 includes ten improvements to error messages, which help you catch common coding mistakes and point you in the right direction.
REAL PYTHON
Free Course: Build a Durable AI Agent with Temporal and Python
Curious about how to build an AI agent that actually works in production? This free hands-on course shows you how with Python and Temporal. Learn to orchestrate workflows, recover from failures, and deliver a durable chatbot agent that books trips and generates invoices. Explore Tutorial →
TEMPORAL sponsor
Why `splitlines()` Instead of `split("\n")`?
To split text into lines in Python you should use the `splitlines()` method, not the `split()` method, and this post shows you why.
TREY HUNNER
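The core difference is easy to demonstrate: `split("\n")` keeps a trailing empty string and only understands one separator, while `splitlines()` drops the trailing entry and handles `\r\n`, `\r`, and other line boundaries:

```python
text = "first\nsecond\n"

print(text.split("\n"))   # ['first', 'second', ''] -- note the trailing empty string
print(text.splitlines())  # ['first', 'second']

# splitlines() also understands Windows and old-Mac line endings:
print("a\r\nb\rc".splitlines())  # ['a', 'b', 'c']
```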
Python Jobs
Senior Python Developer (Houston, TX, USA)
Articles & Tutorials
Advice on Beginning to Learn Python
What’s changed about learning Python over the last few years? What new techniques and updated advice should beginners have as they start their journey? This week on the show, Stephen Gruppetta and Martin Breuss return to discuss beginning to learn Python.
REAL PYTHON podcast
Winning a Bet About `six`
In 2020, Seth Larson and Andrey Petrov made a bet about whether `six`, the Python 2 compatibility shim, would still be in the top 20 PyPI downloads. Seth won, but probably only because of a single library still using it.
SETH LARSON
Show Off Your Python Chops: Win the 2025 Table & Plotnine Contests
Showcase your Python data skills! Submit your best Plotnine charts and table summaries to the 2025 Contests. Win swag, boost your portfolio, and get recognized by the community. Deadline: Oct 17, 2025. Submit now!
POSIT sponsor
Durable Python Execution With Temporal
Talk Python interviews Mason Egger to discuss Temporal, a durable execution platform that enables developers to build scalable applications without sacrificing productivity or reliability.
KENNEDY & EGGER podcast
Astral's `ty`: A New Blazing-Fast Type Checker for Python
Learn to use ty, an ultra-fast Python type checker written in Rust. Get setup instructions, run type checks, and fine-tune custom rules in personal projects.
REAL PYTHON
What Is “Good Taste” in Software Engineering?
This opinion piece talks about the difference between skill and taste when writing software. What “clean code” means to one developer may not be the same as to others.
SEAN GOEDECKE
Modern Python Linting With Ruff
Ruff is a blazing-fast, modern Python linter with a simple interface that can replace Pylint, isort, and Black, and it's rapidly becoming popular.
REAL PYTHON course
Introducing `tdom`: HTML Templating With t-strings
Python 3.14 introduces t-strings, and this article shows you `tdom`, a new HTML DOM toolkit that takes advantage of them to produce safer output.
DAVE PECK
Full Text Search With Django and SQLite
A walkthrough of how to build full-text search to power the search functionality of a blog using Django and SQLite.
TIMO ZIMMERMANN
Projects & Code
subprocesslib: Like `pathlib` for the `subprocess` Module
PYPI.ORG • Shared by Antoine Cezar
Python Implementation of the Cap’n Web Protocol
GITHUB.COM/ABILIAN • Shared by Stefane Fermigier
Events
Weekly Real Python Office Hours Q&A (Virtual)
October 8, 2025
REALPYTHON.COM
PyCon Africa 2025
October 8 to October 13, 2025
PYCON.ORG
Wagtail Space 2025
October 8 to October 11, 2025
ZOOM.US
PyCon Hong Kong 2025
October 11 to October 13, 2025
PYCON.HK
PyCon NL 2025
October 16 to October 17, 2025
PYCON-NL.ORG
PyCon Thailand 2025
October 17 to October 19, 2025
PYCON.ORG
PyCon Finland 2025
October 17 to October 18, 2025
PLONECONF.ORG
PyConES 2025
October 17 to October 20, 2025
PYCON.ORG
Happy Pythoning!
This was PyCoder’s Weekly Issue #703.
View in Browser »
[ Subscribe to PyCoder's Weekly – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Python Morsels
Python 3.14's best new features
Python 3.14 includes syntax highlighting, improved error messages, enhanced support for concurrency and parallelism, t-strings, and more!

Table of contents
- Very important but not my favorites
- Python 3.14: now in color!
- My tiny contribution
- Beginner-friendly error messages
- Tab completion for import statements
- Standard library improvements
- Cleaner multi-exception catching
- Concurrency improvements
- External debugger interface
- T-strings (template strings)
- Try out Python 3.14 yourself
Very important but not my favorites
I'm not going to talk about the experimental free-threading mode, the just-in-time compiler, or other performance improvements. I'm going to focus on features that you can use right after you upgrade.
Python 3.14: now in color!
One of the most immediately …
Read the full article: https://www.pythonmorsels.com/python314/
Real Python
What's New in Python 3.14
Python 3.14 was published on October 7, 2025. While many of its biggest changes happen under the hood, there are practical improvements you’ll notice right away. This version sharpens the language’s tools, boosts ergonomics, and opens doors to new capabilities without forcing you to rewrite everything.
In this video course, you’ll explore features like:
- A smarter, more colorful REPL experience
- Error messages that guide you toward fixes
- Safer hooks for live debugging
- Template strings (t-strings) for controlled interpolation
- Deferred annotation evaluation to simplify typing
- New concurrency options like subinterpreters and a free-threaded build
Seth Michael Larson
Is the "Nintendo Classics" collection a good value?
Nintendo Classics is a collection of hundreds of retro video games from Nintendo (and Sega) consoles from the NES to the GameCube. Nintendo Classics is included with the Nintendo Switch Online (NSO) subscription, which starts at $20/year (~$1.66/month) for individual users.
Looking at the prices of retro games these days, this seems like an incredible value for players who want to play these games. This post shares a dataset that I've curated about Nintendo Classics games, mapping their value to actual physical prices of the same games, with some interesting queries.
For example, here's a graph showing the total value (in $USD) of Nintendo Classics over time:
The dataset was generated from the tables provided on Wikipedia (CC-BY-SA). The dataset doesn't contain pricing information; instead, it only links to corresponding Pricecharting pages. This page only shares approximate aggregate price information, not prices of individual games. This page will be automatically updated over time as Nintendo announces more games coming to Nintendo Classics. This page was last updated October 7th, 2025.
How many games and value per platform?
There are 8 unique platforms on Nintendo Classics, each with its own collection of games. The table below includes the value of both added and announced-but-not-added games. You can see that the total value of games in Nintendo Classics would be many thousands of dollars if genuine physical copies were purchased instead. Here's a graph showing the total value of each platform changing over time:
And here's the data for all published and announced games as a table:
Platform | Games | Total Value | Value per Game |
---|---|---|---|
NES | 91 | $1980 | $21 |
SNES | 83 | $3600 | $43 |
Game Boy (GB/GBC) | 41 | $1615 | $39 |
Nintendo 64 (N64) | 42 | $1130 | $26 |
Sega Genesis | 51 | $2910 | $57 |
Game Boy Advance (GBA) | 30 | $930 | $31 |
GameCube | 9 | $640 | $71 |
Virtual Boy | 14 | $2580 | $184 |
All Platforms | 361 | $15385 | $42 |
View SQL query
SELECT platform, COUNT(*), SUM(price), SUM(price)/COUNT(*)
FROM games
GROUP BY platform;
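For readers who want to poke at aggregations like this themselves, the same query shape can be reproduced with Python's built-in `sqlite3` module. The rows below are invented toy data, not the real dataset; only the query structure matches the post:

```python
import sqlite3

# Build a toy stand-in for the post's "games" table (hypothetical rows).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE games (platform TEXT, name TEXT, price INTEGER)")
conn.executemany(
    "INSERT INTO games VALUES (?, ?, ?)",
    [
        ("NES", "Game A", 20),
        ("NES", "Game B", 30),
        ("SNES", "Game C", 40),
    ],
)

# Games, total value, and value per game for each platform.
rows = list(
    conn.execute(
        "SELECT platform, COUNT(*), SUM(price), SUM(price)/COUNT(*) "
        "FROM games GROUP BY platform ORDER BY platform"
    )
)
for row in rows:
    print(row)
# ('NES', 2, 50, 25)
# ('SNES', 1, 40, 40)
```

Note that SQLite's `/` performs integer division when both operands are integers, which is why the value-per-game column comes out whole.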
How much value is in each Nintendo Classics tier?
There are multiple "tiers" of Nintendo Classics each with a different up-front price (for the console itself) and ongoing price for the Nintendo Switch Online (NSO) subscription.
Certain collections require specific hardware, such as Virtual Boy requiring either the recreation ($100) or cardboard ($30) Virtual Boy headset, and the GameCube collection requiring a Switch 2 ($450). All other collections work just fine with a Switch Lite ($100). All platforms beyond NES, SNES, Game Boy, and Game Boy Color require NSO + Expansion Pack.
Platforms | Requires | Price | Games | Games Value |
---|---|---|---|---|
NES, SNES, GB, GBC | Switch Lite & NSO * | $100 + $20/Yr | 215 | $7195 |
+N64, Genesis, GBA | Switch Lite & NSO+EP | $100 + $50/Yr | 338 | $12165 |
+Virtual Boy | Switch Lite, NSO+EP, & VB | $130 + $50/Yr | 352 | $14745 |
+GameCube | Switch 2 & NSO+EP | $450 + $50/Yr | 361 | $15385 |
* I wanted to highlight that Nintendo Switch Online (NSO) without the Expansion Pack has the option to pay $3 monthly rather than $20 yearly. This doesn't make sense if you're paying for a whole year anyway, but if you just want to play a game in the NES, SNES, GB, or GBC collections, you can pay $3 for a month of NSO and play games very cheaply.
How often are games added to Nintendo Classics?
Nintendo Classics tends to add a few games per platform every year. Usually, when a platform is first announced, a whole slew of games is added during the announcement, with a slow drip-feed of games coming later.
Here's the breakdown of how many games were added to each platform per year:
Platform | 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024 | 2025 |
---|---|---|---|---|---|---|---|---|
NES | 30 | 30 | 8 | 2 | 5 | 4 | 12 | |
SNES | 25 | 18 | 13 | 9 | 1 | 9 | 8 | |
N64 | 10 | 13 | 8 | 8 | 3 | |||
Genesis | 20 | 17 | 8 | 3 | 3 | |||
Game Boy | 19 | 16 | 6 | |||||
GBA | 13 | 12 | 5 | |||||
GameCube | 9 | |||||||
Virtual Boy | ||||||||
All Platforms | 30 | 55 | 26 | 55 | 43 | 53 | 60 | 34 |
View SQL query
SELECT platform, STRFTIME('%Y', added_date) AS year, COUNT(*)
FROM games
GROUP BY platform, year
ORDER BY platform, year DESC;
What are the rarest or most valuable games in Nintendo Classics?
There are a bunch of valuable and rare games available in Nintendo Classics. Here are the top 50 most expensive games available in the collection:
View SQL query
SELECT platform, name, price FROM games
ORDER BY price DESC LIMIT 50;
Who publishes their games to Nintendo Classics?
Nintendo Classics has more publishers than just Nintendo and Sega. Looking at which third-party publishers are publishing their games to Nintendo Classics can give you a hint at what future games might make their way to the collection:
Publisher | Games | Value |
---|---|---|
Capcom | 17 | $1055 |
Xbox Game Studios | 13 | $245 |
Koei Tecmo | 13 | $465 |
City Connection | 11 | $240 |
Konami | 10 | $505 |
Bandai Namco Entertainment | 9 | $190 |
Sunsoft | 7 | $155 |
Natsume Inc. | 7 | $855 |
G-Mode | 7 | $190 |
Arc System Works | 6 | $110 |
View SQL query
SELECT publisher, COUNT(*) AS num_games, SUM(price)
FROM games WHERE publisher NOT IN ('Nintendo', 'Sega')
GROUP BY publisher
ORDER BY num_games DESC LIMIT 20;
What games have been removed from Nintendo Classics?
There's only been one game that's been removed from Nintendo Classics so far. There likely will be more in the future:
Platform | Game | Added Date | Removed Date |
---|---|---|---|
SNES | Super Soccer | 2019-09-05 | 2025-03-25 |
View SQL query:
SELECT platform, name, added_date, removed_date
FROM games WHERE removed_date IS NOT NULL;
This site uses the MIT licensed ChartJS for the line chart visualization.
Thanks for keeping RSS alive! ♥
October 06, 2025
Ari Lamstein
Visualizing 25 Years of Border Patrol Data with Python
I recently had the chance to speak with a statistician at the Department of Homeland Security (DHS) about my Streamlit app that visualizes trends in US Immigration Enforcement data (link). Our conversation helped clarify a question I'd raised in an earlier post, one that emerged from a surprising pattern in the data.
A Surprising Pattern
The first graph in my post showed how the number of detainees in ICE custody has changed over time, broken down by the arresting agency: ICE (Immigration and Customs Enforcement) or CBP (Customs and Border Protection). The agency-level split revealed an unexpected trend.
As I noted in the post:
Equally interesting is the agency-level data: since Trump took office ICE detentions are sharply up, but CBP detentions are down. I am not sure why CBP detentions are down.
A Potential Answer
This person suggested that CBP arrests might reflect not just enforcement capacity, but the number of people attempting to cross the border illegally, a figure that could fluctuate based on how welcoming an administration appears to be toward immigration.
This was a new lens for me. I hadn't considered that attempted border crossings might rise or fall with shifts in presidential tone or policy. Given that one of Trump's central campaign promises in 2024 was to crack down on illegal immigration (link), it felt like a hypothesis worth exploring.
The Data: USBP Encounters
While we can't directly measure how many people attempt to cross the border illegally, DHS publishes a dataset that records each time the US Border Patrol (USBP) encounters a "removable alien," a term DHS uses for individuals subject to removal under immigration law. This dataset can serve as a rough proxy for attempted illegal crossings.
The data is available on this page and is published as an Excel workbook titled "CBP Encounters – USBP – November 2024." It covers October 1999 through November 2024, spanning five presidential administrations. While it doesn't include data from the current administration (which began in January 2025), it does offer a historical view of enforcement trends.
The workbook contains 16 sheets; this analysis focuses on the "Monthly Region" tab. In this sheet, "Region" refers to the part of the border where the encounter occurred: Coastal Border, Northern Land Border, or Southwest Land Border.
The Analysis
To support this analysis, I created a new Python module called `encounters`. It's available in my existing immigration_enforcement repo, along with the dataset and example workbooks. I've tagged the version of the code used in this post as usbp_encounters_post, so people will always be able to run the examples below, even if the repo evolves. You're welcome to clone it and use it as a foundation for your own analysis.
One important detail: this dataset records dates using fiscal years, which run from October 1 to September 30. For example, October of FY2020 corresponds to October 2019 on the calendar. To simplify analysis, the function `encounters.get_monthly_region_df` reads in the "Monthly Region" sheet and automatically converts all fiscal year dates to calendar dates.
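The fiscal-to-calendar rule itself is simple: months October through December of FY N fall in calendar year N - 1, and everything else falls in year N. Here's a hypothetical sketch of that mapping (not the repo's actual code):

```python
import datetime

def fiscal_to_calendar(fiscal_year: int, month: int) -> datetime.date:
    """Map a US fiscal-year month (fiscal year starts October 1) to a calendar date."""
    calendar_year = fiscal_year - 1 if month >= 10 else fiscal_year
    return datetime.date(calendar_year, month, 1)

print(fiscal_to_calendar(2020, 10))  # 2019-10-01: October of FY2020
print(fiscal_to_calendar(2020, 4))   # 2020-04-01: April of FY2020
```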
To preview the data, we can load the "Monthly Region" sheet using the `encounters` module like this:

import encounters

df = encounters.get_monthly_region_df()
df.head()
This returns:
| date | region | quantity |
---|---|---|---|
0 | 1999-10-01 | Coastal Border | 740 |
1 | 1999-10-01 | Northern Land Border | 1250 |
2 | 1999-10-01 | Southwest Land Border | 87820 |
3 | 1999-11-01 | Coastal Border | 500 |
4 | 1999-11-01 | Northern Land Border | 960 |
To visualize the data, we can use Plotly to create a time series of encounters by region:
import plotly.express as px

px.line(
    df,
    x="date",
    y="quantity",
    color="region",
    title="USBP Border Encounters Over Time",
    color_discrete_sequence=px.colors.qualitative.T10,
)
From this graph, a few patterns stand out:
- Encounters are overwhelmingly concentrated at the Southwest Land Border.
- Until around 2015, the data shows a strong seasonal rhythm, typically dipping in December and peaking in March.
- After 2015, variability increases sharply, with both the lowest (2017) and highest (2023) values occurring in this period.
A Better Graph
Since the overwhelming majority of encounters occur at the Southwest Land Border, it makes sense to focus the visualization there. To explore how encounter trends align with presidential transitions, we can annotate the graph to show when administrations changed. The function `encounters.get_monthly_encounters_graph` handles this:
encounters.get_monthly_encounters_graph(annotate_administrations=True)
This annotated graph appears to support what the DHS statistician suggested: encounter numbers sometimes shift dramatically between administrations. The change is especially pronounced for the Trump and Biden administrations:
- The lowest value (April 2017) occurred shortly after Trump took office.
- The transition from Trump to Biden marks one of the sharpest increases in the dataset.
- The highest value (December 2023) occurred during Biden's administration.
Potential Policy Link
While I’m not an expert on immigration policy, Wikipedia offers summaries of the immigration policies under both the Trump and Biden administrations.
It describes Trump's policies as aiming to reduce both legal and illegal immigration through travel bans, lower refugee admissions, and stricter enforcement measures. And the page on Biden's immigration policy begins:
"The immigration policy of Joe Biden initially focused on reversing many of the immigration policies of the previous Trump administration."
The contrast between these two approaches is stark, and it's at least plausible that the low number of encounters at the start of Trump's first term, and the spike in encounters at the start of Biden's term, reflect responses to these shifts.
Future Work
This post is just a first step in analyzing Border Patrol encounter data. Looking ahead, here are a few directions I'm excited to explore:
- Integrate this graph into my existing Immigration Enforcement Streamlit app (link).
- Incorporate more timely data. While this dataset is only published annually, DHS appears to release monthly updates here. Finding a way to surface those numbers in the app would make it more responsive to current trends.
- Explore other dimensions of the dataset. Beyond raw encounter counts, the data includes details like citizenship, family status, and where encounters happen. These facets could offer deeper insight into enforcement patterns and humanitarian implications.
While comments on my blog are disabled, I welcome hearing from readers. You can contact me here.
Real Python
It's Almost Time for Python 3.14 and Other Python News
Python 3.14 nears release with new features in sight, and Django 6.0 alpha hints at what's next for the web framework. Several PEPs have landed, including improvements to type annotations and support for the free-threaded Python effort.
Plus, the Python Software Foundation announced new board members, while Real Python dropped a bundle of fresh tutorials and updates. Read on to learn what's new in the world of Python this month!
Join Now: Click here to join the Real Python Newsletter and you'll never miss another Python tutorial, course, or news update.
Python 3.14 Reaches Release Candidate 3
Python 3.14.0rc3 was announced in September, bringing the next major version of Python one step closer to final release. This release candidate includes critical bug fixes, final tweaks to new features, and overall stability improvements.
Python 3.14 is expected to introduce new syntax options, enhanced standard-library modules, and performance boosts driven by internal C API changes. For the complete list of changes in Python 3.14, consult the official What's New in Python 3.14 documentation.
The release also builds upon ongoing work toward making CPython free-threaded, an effort that will eventually allow better use of multicore CPUs. Developers are encouraged to test their projects with the RC to help identify regressions or issues before the official release.
The final release, 3.14.0, is scheduled for October 7. Check out Real Python's series about the new features you can look forward to in Python 3.14.
Django 6.0 Alpha Released
Django 6.0 alpha 1 is out! This first public preview gives early access to the upcoming features in Django's next major version. Although not production-ready, the alpha includes significant internal updates and deprecations, setting the stage for future capabilities.
Some of the early changes include enhanced async support, continued cleanup of old APIs, and the groundwork for upcoming improvements in database backend integration. Now is a great time for Django developers to test their apps and provide feedback before Django 6.0 is finalized.
Django 5.2.6, 5.1.12, and 4.2.24 were released separately with important security fixes. If you maintain Django applications, then these updates are strongly recommended.
Read the full article at https://realpython.com/python-news-october-2025/ »
Brian Okken
pytest-check 2.6.0 release
There’s a new release of pytest-check. Version 2.6.0.
This is a cool contribution from the community.
The problem
In July, bluenote10 reported that `check.raises()` doesn't behave like `pytest.raises()` in that the `AssertionError` returned from `check.raises()` doesn't have a queryable `value`.
Example of `pytest.raises()`:
with pytest.raises(Exception) as e:
do_something()
assert str(e.value) == "<expected error message>"
We'd like `check.raises()` to act similarly:
with check.raises(Exception) as e:
do_something()
assert str(e.value) == "<expected error message>"
But that didn't work prior to 2.6.0. The issue was that the object returned from `check.raises()` didn't have a `.value` attribute.
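To see the mechanism the fix enables, here's a minimal, hypothetical sketch of a `raises`-style context manager that exposes the caught exception as `.value`. This is an illustration only, not pytest-check's actual implementation:

```python
class raises:
    """Minimal pytest.raises-style context manager (illustrative only)."""

    def __init__(self, expected):
        self.expected = expected
        self.value = None  # the caught exception, queryable after the block

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is None:
            raise AssertionError(f"{self.expected.__name__} was not raised")
        if not issubclass(exc_type, self.expected):
            return False  # unexpected exception type: let it propagate
        self.value = exc_value
        return True  # suppress the expected exception

with raises(ValueError) as e:
    int("not a number")

print(e.value)  # invalid literal for int() with base 10: 'not a number'
```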
Talk Python to Me
#522: Data Sci Tips and Tricks from CodeCut.ai
Today we're turning tiny tips into big wins. Khuyen Tran, creator of CodeCut.ai, has shipped hundreds of bite-size Python and data science snippets across four years. We dig into open-source tools you can use right now, cleaner workflows, and why notebooks and scripts don't have to be enemies. If you want faster insights with fewer yak-shaves, this one's packed with takeaways you can apply before lunch. Let's get into it.

Episode sponsors
- Sentry Error Monitoring, Code TALKPYTHON
- Agntcy
- Talk Python Courses

Links from the show
- Khuyen Tran (LinkedIn): linkedin.com
- Khuyen Tran (GitHub): github.com
- CodeCut: codecut.ai
- Production-ready Data Science Book (discount code TalkPython): codecut.ai
- Why UV Might Be All You Need: codecut.ai
- How to Structure a Data Science Project for Readability and Transparency: codecut.ai
- Stop Hard-coding: Use Configuration Files Instead: codecut.ai
- Simplify Your Python Logging with Loguru: codecut.ai
- Git for Data Scientists: Learn Git Through Practical Examples: codecut.ai
- Marimo (A Modern Notebook for Reproducible Data Science): codecut.ai
- Text Similarity & Fuzzy Matching Guide: codecut.ai
- Loguru (Python logging made simple): github.com
- Hydra: hydra.cc
- Marimo: marimo.io
- Quarto: quarto.org
- Show Your Work! Book: austinkleon.com
- Watch this episode on YouTube: youtube.com
- Episode #522 deep-dive: talkpython.fm/522
- Episode transcripts: talkpython.fm

Theme Song: Developer Rap, "Served in a Flask": talkpython.fm/flasksong

Don't be a stranger:
- YouTube: youtube.com/@talkpython
- Bluesky: @talkpython.fm
- Mastodon: @talkpython@fosstodon.org
- X.com: @talkpython
- Michael on Bluesky: @mkennedy.codes
- Michael on Mastodon: @mkennedy@fosstodon.org
- Michael on X.com: @mkennedy