Planet Python
Last update: November 04, 2024 01:43 AM UTC
November 04, 2024
Michael Foord
Python Knowledge Sharing Videos Online
I’ve been teaching Python in one-hour knowledge-sharing sessions, some of which I’ve put online on YouTube.
This is the link to the playlist of the sessions:
The slides for each of the sessions, along with some example code, can be found in this GitHub repository:
So far there are seven one-hour sessions (with more planned) on:
- Python Core Object Model
- Closures and decorators (functional programming)
- Generators and Iterators
- Unicode, Floats and regex
- Concurrency (async, threads, processes, the GIL)
- Testing with pytest
- Modules and Namespaces
Other Talks
A selection of some of the talks and interviews I’ve given on Python and software engineering across my career.
- UK Health Security Agency Software Development Practise Conf 2024
- PyCon MEA 2022 How Python Took Over the World
- Test and Code Podcast Episode 145: For Those About to Mock
- PyCon Belarus 2020 How Python Took Over the World
- PyLondinium 2019 The Python Object Model
- Interview on Podcast.__init__ on testing, Mock and the Python community (2018)
- The Role of Abstractions: Lightning Talk PyCon US 2018
- Best Practises for Software Development and Testing (2017)
- PyCon UK Panel 2015
- To the Clouds: EuroPython 2015
- Automated Deployments with Juju: PyCon UK 2014
- Python and Pythons: PyCon NZ 2013
- Testing with Mock: PyCon US 2011
- New and Improved unittest 2: PyCon US 2010
- Michael Foord on IronPython: Hanselminutes 2009
- Michael Foord on IronPython: TechEd 2007
November 04, 2024 12:00 AM UTC
Agile Alliance Scrummaster Certification
I’ve been a fan of Agile ever since my first programming job, with Resolver Systems back in 2006. I had taught myself programming, and it was there that I really learned engineering: how to build software products whilst caring about quality. I became passionate about testing as a way to ensure a minimal level of quality, and about agile processes that can respond quickly to change.
My only experience of Scrum was a year at a shop that layered Scrum for project management on top of heavily waterfall software development processes. It wasn’t a fun experience, but I still learned a great deal.
I’ve been working with Gigaclear, as team lead on backend API servers, including One Touch Switch, for about a year now. I enjoy our software development practices and processes; we use a combination of agile for software development, devops (devsecops, of course) for software deployment and maintenance, and Scrum for project management. It’s a very effective combination.
I recently attended a course with the Agile Alliance, led by John McFadyen, and became certified as a Scrummaster.
The course was fantastic and very inspiring. At Gigaclear we’re systematically evaluating all our systems, systems architecture, and processes. Both that review and the course have been hugely inspirational, and I have an article on software development processes coming shortly…
November 04, 2024 12:00 AM UTC
November 03, 2024
Glyph Lefkowitz
The Federation Deathmatch
It’s the weekend, and I have some Thoughts about federated social media. So, buckle up, I guess, it’s time to start some fights.
Recently there has been some discourse about Bluesky’s latest fundraising round. I’ve been participating in conversations about this on Mastodon, and I think I might sometimes come across as a Mastodon partisan, but my feelings are complex and I really don’t want to be boosting the ActivityPub Fediverse without qualification.
So here are some qualifications.
Bluesky Is Evil
To the extent that I am an ActivityPub partisan in the discourse between ActivityPub and ATProtocol, it is because I do not believe that Bluesky is a meaningfully decentralized social network. It is a social network, run by a company, which has a public API with some elements that might, one day, make it possible for it to be decentralized. But today, it is not, either practically or theoretically.
The Bluesky developers are putting in a ton of effort to maybe make it decentralized, hypothetically, someday. A lot of people think they will succeed. But ActivityPub (and, of course, Mastodon specifically) is already, today, meaningfully decentralized. As you can see on FediDB, there are instances with hundreds of thousands of people on them, before we even get to esoterica like the integrations Threads, Wordpress, Flipboard, and Ghost are doing.
The inciting incident for this post — that a lot of people are also angry about Bluesky raising millions of dollars from Evil Guys Doing Evil Stuff Capital — is indeed a serious concern. It lights the fuse that burns towards their eventual, inevitable incredible journey. ATProtocol is just an API, and that API will get shut off one day, whenever their funders get bored of the pretense of their network being “decentralized”.
At time of writing, it is also interesting that 3 of the 4 times that the CEO of Bluesky has even skeeted the word “blockchain” is to say “no blockchain”, to reassure users that the scam magnet of “Blockchain” is not actually near their product or protocol, which is a much harder position to maintain when your lead investor is “Blockchain Capital”.
I think these are all valid criticisms of Bluesky. But I also think that the actual engineers working on the product are aware of these issues, and are making a significant effort to address them or mitigate them in any way they can. All that work can still be easily incinerated by a slow quarter in terms of user growth numbers or a missed revenue forecast when the VCs are getting impatient, but it’s not nothing, it is a life’s work.
Really, who among us could not have our life’s ambitions trivially destroyed in an afternoon, simply because a billionaire decided that they should be? If you feel like you are safe from this, I have some bad news about how money works. So we are all doing our best in an imperfect system and maybe Bluesky is on to something here. That’s eminently possible. They’re certainly putting forth an earnest effort.
Mastodon Is Stupid
Meanwhile, not nearly as much has been made recently of Mastodon refusing funding from a variety of sources, even though all indications are that funding is low and plummeting, far below the level required to actually sustain the site. They haven’t published a financial transparency report for over a year, and that report was already nearly a year late.
Mastodon and the fediverse are not nearly in a position to claim moral superiority over Bluesky. Sure, taking blockchain VC money might seem like a rookie mistake, but going out of business because you are spurning every possible source of funding is not that wise either.
Some might think that, sure, Mastodon the company might die but at least the Fediverse as a whole will keep going strong, right? Lots of people run their own instances! I even find elements of this argument convincing, and I think there is probably some truth to it. But to really believe this argument as claimed, that it’s a fait accompli that the fediverse will survive in some form, that all those self-run servers will be a robust network that will self-repair, requires believing some obviously false stuff. It is frankly unprofitable to run a Fediverse instance. Realistically, if you want to operate a Mastodon server for yourself, it is going to cost at least $100/year once you include stuff like having a domain name, and managing the infrastructure costs is a complex problem that keeps getting harder as the software itself gets slower.
Cory Doctorow has recently argued that this is all worth it, because at least on Mastodon, you’re in control, not at the whims of centralized website operators like Bluesky. In his words,
On Mastodon (and other services based on Activitypub), you can easily leave one server and go to another, and everyone you follow and everyone who follows you will move over to the new server. If the person who runs your server turns out to be imperfect in a way that you can’t endure, you can find another server, spend five minutes moving your account over, and you’re back up and running on the new server
He concludes:
Any system where users can leave without pain is a system whose owners have high switching costs and whose users have none
(Emphasis mine).
This is a beautiful vision. It is, however, an incorrect assessment of the state of the Fediverse as it stands today. It’s not true in two important ways:
First, if you look at any account of a user’s fediverse account migration, like this one from Steve Bate or this one from the Ente project or this one from Erin Kissane, you will see that it is “painful for the foreseeable future” or “wasn’t as seamless as advertised”, and that “the best time to […] migrate instances […] is never”. This language does not presage a pleasant experience, as Doctorow puts it, “without pain”.
Second, migration is an active process that requires engagement from the instance that hosts you. If you have been blocked or banned, or had your account terminated, you are just out of luck. You do not have control over your data or agency over your online identity unless you’ve shelled out the relatively exorbitant amount of money to actually operate your own instance.
In short, ActivityPub is no panacea. A federated system is not really a “decentralized” system, as much as it is a bunch of smaller centralized systems that all talk to each other. You still need to know, and care, about your social and financial relationship to the operators of your instance. There is probably no getting away from this, like, just generally on the Internet, no matter how much peer-to-peer software we deploy, but there certainly isn’t in the incomplete mess that is ActivityPub.
JOIN, or DIE.
Neither Mastodon (or ActivityPub) nor Bluesky (or ATProtocol) has a comprehensive solution to the problem of decentralized social media. These companies, and these protocols, are both deeply flawed and if everything keeps bumping along as it is, I believe both are likely to fail. At different times, on different timelines, and for different reasons, but fail nonetheless.
However, these networks are both small and growing, and we are not yet in the phase of enshittification where margins are shrinking and audiences are captured and the screws must be tightened to juice revenue. There are still possibilities. Mastodon is crowdfunded and what they lack in resources they make up for in flexibility and scrappiness. Bluesky has money and while there will eventually be a need to monetize somehow, they have plenty of runway to come up with that answer, and a lot of sophisticated protocol work has been done. Not enough to make a complete circuit and allow users true, practical decentralization, but it’s not nothing, either.
Mastodon and Bluesky are both organizations with humans in them, and piles of data that is roughly schema-compatible even if the nuances and details are different. I know that there is a compatible model because, thanks to both platforms being relatively open, there is a functioning ActivityPub/ATProtocol bridge in the form of Brid.gy Fed. You can use it today, and I highly recommend that you do so, so that “choice of protocol” does not fully define your audience. If you’re on Bluesky, follow this account, and if you’re on Mastodon or elsewhere on the Fediverse, search for and follow @bsky.brid.gy@bsky.brid.gy.
The reality that fans of decentralized, independent social media must confront is that we are a tiny audience right now. Whichever site we are looking at, we are talking about a few million monthly active users at best, in a world where even the pathetic husk of Twitter still has hundreds of millions and Facebook has billions. Internecine fights are not going to get us anywhere. We need to build bridges and links and connect our networks as densely as possible. If I’m being honest, Bridgy Fed looks like a pretty janky solution, but it’s something, and we need to start doing something soon, so we do not collectively become a permanent minority that mass markets can safely ignore.
As users, we need to set an example, so that the developers of the respective platforms get their shit together and work together directly so that workarounds like Bridgy are not required. Frankly, this is mostly on the ActivityPub and Mastodon devs, as far as I can tell. Unfortunately, not a lot of this seems to be public, or at least I haven’t witnessed a lot of it directly, but I have heard repeatedly that the ActivityPub developers are prickly, and this is one high-profile public example where an ActivityPub partisan is incredibly, pointlessly hostile and borderline harassing towards someone — Mike Masnick, a long-time staunch advocate for open protocols and open patents, someone with a Mastodon account, and thus as good a prospective ally as the ActivityPub fediverse might reasonably find — explaining some of the relative benefits of Bluesky.
Most of us are technology nerds in one way or another. In that way we can look at signifiers like “ActivityPub” and “ATProtocol”, and feel like these are hard boundaries around different all-encompassing structures for the future, and thus tribes we must join and support.
A better way to look at this, however, is to see social entities like Mastodon gGmbH and Bluesky PBC — or, more to the point, Fosstodon, SFBA Social, Hachyderm (and maybe, one day, even an instance which isn’t fully just for software development nerds), as groups that deploy these protocols to access some data that they publish, just as they might publish their website over HTTP or their newsletters over SMTP. There are technical challenges involved in bridging between mutually unintelligible domain models, but that is, like, network software's whole deal. Most software is just some kind of translation from one format or context to another. The best possible future for the fediverse is the one where users care as much about the distinction between ATProtocol and ActivityPub as they do about the distinction between POP3 and IMAP.
To both developers and users of these systems, I say: get it together. Be nice to each other. Because the rest of the social media ecosystem is sure as shit not going to be nice to us if we ever see even a hint of success and start to actually cut into their user base.
November 03, 2024 09:49 PM UTC
Real Python
The Python Square Root Function
The Python square root function, sqrt(), is part of the math module and is used to calculate the square root of a given number. To use it, you import the math module and call math.sqrt() with a non-negative number as an argument. For example, math.sqrt(9) returns 3.0.
This function works with both integers and floats and is essential for mathematical operations like solving equations and calculating geometric properties. In this tutorial, you’ll learn how to effectively use the square root function in Python.
By the end of this tutorial, you’ll understand how:
- Python’s sqrt() function calculates square roots using math.sqrt() for quick and accurate results in your programs.
- math.sqrt() calculates the square root of positive numbers and zero but raises an error for negative inputs.
- Python’s square root function can be used to solve real-world problems like calculating distances using the Pythagorean theorem.
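Here’s a quick look at that behavior in the REPL (the example values are arbitrary):

>>> import math
>>> math.sqrt(9)
3.0
>>> math.sqrt(6.25)
2.5
>>> math.sqrt(-1)
Traceback (most recent call last):
  ...
ValueError: math domain error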
Time to dive in!
Python Pit Stop: This tutorial is a quick and practical way to find the info you need, so you’ll be back to your project in no time!
Free Bonus: Click here to get our free Python Cheat Sheet that shows you the basics of Python 3, like working with data types, dictionaries, lists, and Python functions.
Square Roots in Mathematics
In algebra, a square, x, is the result of a number, n, multiplied by itself: x = n²
You can calculate squares using Python:
>>> n = 5
>>> x = n**2
>>> x
25
The Python ** operator is used for calculating the power of a number. In this case, 5 squared, or 5 to the power of 2, is 25.
The square root, then, is the number n, which when multiplied by itself yields the square, x.
In this example, n, the square root of 25, is 5.
25 is an example of a perfect square. Perfect squares are the squares of integer values:
>>> 1**2
1
>>> 2**2
4
>>> 3**2
9
You might have memorized some of these perfect squares when you learned your multiplication tables in an elementary algebra class.
If you’re given a small perfect square, it may be straightforward enough to calculate or memorize its square root. But for most other squares, this calculation can get a bit more tedious. Often, an estimation is good enough when you don’t have a calculator.
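Or you can let Python do the tedious part for you:

>>> import math
>>> math.sqrt(25)
5.0
>>> math.sqrt(2)
1.4142135623730951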
Read the full article at https://realpython.com/python-square-root-function/ »
November 03, 2024 02:00 PM UTC
Michael Foord
Gigaclear One Touch Switch Service
For the last year I’ve been working as a team lead for backend API development with Gigaclear, a UK rural ISP that owns and runs fibre internet for rural communities across the UK. This is alongside my training work.
This image shows the main project I’ve been working on since joining Gigaclear: One Touch Switch, a regulatory requirement for all ISPs to support automated switching between providers. When you sign up with a new internet provider, your account with the old provider is automatically ceased and VOIP numbers can be automatically ported.
Our OTS project is just part of the Gigaclear One Touch Switch system, which interfaces with Salesforce, Netadmin, and the website order flow (the Online Buying Journey), and represents an impressive engineering effort. We were one of the first ISPs with a system ready to take part in industry trials a few months ago, both OTS and our underlying systems passed pen testing with flying colours, and the switch-on has been smooth.
It’s something I’m proud to have been part of. My current project is preparing security awareness training materials, based on OWASP, for our various engineering departments, whilst we also undertake a systematic review of all of our systems and processes.
In the diagram I’m team lead for the Sphinx engineering team.
November 03, 2024 12:00 AM UTC
Adventures with MicroPython
My first blog post in a few years! I have some articles I’d like to publish, and some adventures to share, so I thought it was time to fire up a blog engine again.
My nine-year-old son, Benjamin, is really into programming with Scratch, and he’s keen to play with electronics and learn MicroPython. Which is awesome, because there’s almost nothing I would love to do more with him.
MicroPython is an extremely impressive implementation of Python that will run on embedded devices and microcontrollers, as well as bigger tiny computers like the Raspberry Pi.
I’ve dug out an old MicroBit I had, and purchased a Raspberry Pi Pico board/kit and a ZumoBot 2040 robot (which uses the same microcontroller as the Pico) to play with.
I’m now starting to get to grips with the basics, using the Thonny IDE.
I have a bunch of NeoPixel LEDs, including a long light strip, that I’d like to wire up in my living room, controlled by a Pico board and an Android app built with Kivy. That’s goal number 1.
I’d like to program the ZumoBot to explore and map my flat. Goal 2.
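As a first step towards goal 1, here’s a minimal MicroPython sketch using the built-in neopixel driver; the data pin (GP0) and strip length are assumptions for illustration:

import time
from machine import Pin
import neopixel

NUM_LEDS = 8                                 # assumed strip length
strip = neopixel.NeoPixel(Pin(0), NUM_LEDS)  # data line on GP0 (assumed)

# Chase a single dim green pixel along the strip.
while True:
    for i in range(NUM_LEDS):
        strip.fill((0, 0, 0))   # clear all pixels
        strip[i] = (0, 32, 0)   # one dim green pixel (RGB tuple)
        strip.write()           # push the buffer out to the LEDs
        time.sleep_ms(50)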
Meanwhile, Benjamin is enjoying playing with electronics (switches, LEDs, a potentiometer, and now a motor) with the Pico, and on his own he’s programming the MicroBit with Scratch (or at least “blocks”, which is the Microsoft equivalent). I’ve also done my first soldering in over a decade.
I have a GitHub repository to track my tinkering, but I’d like to write up some recipes and post them on this blog as I go. (The biggest hurdle is that I can’t easily create circuit diagrams. Time to explore.)
The GitHub repository and ZumoBot links:
- https://github.com/voidspace/zumobot
- https://thepihut.com/products/zumo-2040-robot-assembled-with-75-1-hp-motors
- https://www.pololu.com/product/5012
- https://github.com/pololu/zumo-2040-robot
November 03, 2024 12:00 AM UTC
November 02, 2024
Brett Cannon
Don't return named tuples in new APIs
In my opinion, you should only introduce a named tuple to your code when you’re updating a preexisting API that was already returning a tuple or you are wrapping a tuple return value from another API.
Let’s start with when you should use named tuples. Usually an API that returns a tuple does so when you only have a couple of items in your tuple and the name of the function returning the tuple is enough to explain what each item in the tuple does. But sometimes your API expands and you find that your tuple is no longer self-documenting purely based on the name of the API (e.g., get_mouse_position() very likely has a two-item tuple of X and Y coordinates of the screen, while app_state() could be a tuple of anything). When you find yourself in the situation of needing your return type to describe itself and a tuple isn’t cutting it anymore, then that’s when you reach for a named tuple.
So why not start out that way? In a word: simplicity. Now, some of you might be saying to yourself, “but I use named tuples because they are so simple to define!” And that might be true for when you define your data structure (and I’ll touch on this “simplicity of definition” angle later), but it actually makes your API more complex for both you and your users to use. For you, it doubles the data access API surface for your return type as you have to now support index-based and attribute-based data access forever (or until you choose to break your users and change your return type so it doesn’t support both approaches). This leads to writing tests for both ways of accessing your data, not just one of them. And you shouldn’t skimp on this because you don’t know if your users will use indexes or attribute names to access the data structure, nor can you guarantee someone won’t break your code in the future by dropping the named tuple and switching to some custom type (thanks to Python’s support of structural typing (aka duck typing), you can’t assume people are using a type checker and thus the structure of your return type becomes your API contract). And so you need to test both ways of using your return type to exercise that contract you have with your users, which is more work than had you not used a named tuple and instead chose just a tuple or just a class.
Named tuples are also a bit more complex for users. If you’re reaching for a named tuple, you’re essentially signalling upfront that the data structure is too big/complex for a tuple alone to work. And yet using a named tuple means you are supporting the tuple approach even if you don’t think it’s a good idea from the start. On top of that, the tuple API allows for things that you probably don’t want people doing with your return type, like slicing, iterating over all the items as if they are homogeneous, etc. Basically my argument is that the “flexibility” of having index-based access to the data on top of the attribute-based access isn’t flexible in a good way.
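To make that doubled API surface concrete, here’s a small sketch (Point is just an illustrative name):

from collections import namedtuple

Point = namedtuple('Point', ['x', 'y'])
p = Point(1, 2)

# All of these are now part of your de facto API contract:
assert p.x == 1       # attribute-based access
assert p[0] == 1      # index-based access
x, y = p              # unpacking/iteration
assert p[:1] == (1,)  # even slicing works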
So why do people still reach for named tuples when defining return types for new APIs? I think it’s because people find it faster to define a new type that way than by writing out a new class. Compare this:
from collections import namedtuple
Point = namedtuple('Point', ['x', 'y', 'z'])
To this:
class Point:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z
So there is a clear difference in the amount of typing. But there are three more ways to do the same data structure that might not be so burdensome. One is dataclasses:
import dataclasses

@dataclasses.dataclass
class Point:
    x: int
    y: int
    z: int
Another is simply a dictionary, although I know some prefer attribute-based access to data so much that they won’t use this option. Toss in a TypedDict and you also get editor support as well:
import typing

class Point(typing.TypedDict):
    x: int
    y: int
    z: int

# Alternatively ...
Point = typing.TypedDict("Point", {"x": int, "y": int, "z": int})
A third option is types.SimpleNamespace if you really want attributes without defining a class:
import types
Point = lambda x, y, z: types.SimpleNamespace(x=x, y=y, z=z)
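Used right after the definition above, the SimpleNamespace version behaves like this:

p = Point(1, 2, 3)
print(p.x, p.y, p.z)  # 1 2 3 -- attribute access, no tuple semantics
# p[0] would raise TypeError: there is no index-based access to maintain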
If none of these options work for you, then you can always hope that somehow I convince enough people that my record/struct idea is a good one and get it into the language. 😁
My key point in all of this is to prefer readability and ergonomics over brevity in your code. That means avoiding named tuples except when you are expanding or tweaking an existing API where the named tuple improves over the plain tuple that’s already being used.
November 02, 2024 10:00 PM UTC
Ned Batchelder
Coverage.py originally
Something many people don’t realize is that I didn’t write the original coverage.py. It was written by Gareth Rees in 2001. I’ve been extending and maintaining it since 2004. This ancient history came up this week, so I grabbed the 2001 version from archive.org to keep it here for posterity.
I already had a copy of Gareth’s original page about coverage.py, which now links to my local copy of coverage.py from 2001. BTW: that page is itself a historical artifact now, with the header from this site as it looked when I first copied the page.
The original coverage.py was a single file, so the “coverage.py” name was literal: it was the name of the file. It only had about 350 lines of code, including a few to deal with pre-2.0 Python! Some of those lines remain nearly unchanged to this day, but most of it has been heavily refactored and extended.
Coverage.py now has about 20k lines of Python in about 100 files. The project now has twice the amount of C code as the original file had Python. I guess in almost 20 years a lot can happen!
It’s interesting to see this code again, and to reflect on how far it’s come.
November 02, 2024 08:27 PM UTC
Hugo van Kemenade
Speed up CI with uv ⚡
We can use uv to make linting and testing on GitHub Actions around 1.5 times as fast.
Linting
When using pre-commit for linting:
name: Lint
on: [push, pull_request, workflow_dispatch]

env:
  FORCE_COLOR: 1

permissions:
  contents: read

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
          cache: pip
      - uses: pre-commit/action@v3.0.1
We can replace pre-commit/action with tox-dev/action-pre-commit-uv:
  - uses: actions/setup-python@v5
    with:
      python-version: "3.x"
-     cache: pip
- - uses: pre-commit/action@v3.0.1
+ - uses: tox-dev/action-pre-commit-uv@v1
name: Lint
on: [push, pull_request, workflow_dispatch]

env:
  FORCE_COLOR: 1

permissions:
  contents: read

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
      - uses: tox-dev/action-pre-commit-uv@v1
This means uv will create virtual environments and install packages for pre-commit, which is faster for the initial seed operation when there's no cache.
Lint comparison
For example: python/blurb#32
Testing
When testing with tox:
name: Test
on: [push, pull_request, workflow_dispatch]

permissions:
  contents: read

env:
  FORCE_COLOR: 1

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13", "3.14"]
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          allow-prereleases: true
          cache: pip
      - name: Install dependencies
        run: |
          python --version
          python -m pip install -U pip
          python -m pip install -U tox
      - name: Tox tests
        run: |
          tox -e py
We can replace tox with tox-uv:
  - name: Set up Python ${{ matrix.python-version }}
    uses: actions/setup-python@v5
    with:
      python-version: ${{ matrix.python-version }}
      allow-prereleases: true
-     cache: pip
- - name: Install dependencies
-   run: |
-     python --version
-     python -m pip install -U pip
-     python -m pip install -U tox
+ - name: Install uv
+   uses: hynek/setup-cached-uv@v2
  - name: Tox tests
    run: |
-     tox -e py
+     uvx --with tox-uv tox -e py
name: Test
on: [push, pull_request, workflow_dispatch]

permissions:
  contents: read

env:
  FORCE_COLOR: 1

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          allow-prereleases: true
      - name: Install uv
        uses: hynek/setup-cached-uv@v2
      - name: Tox tests
        run: |
          uvx --with tox-uv tox -e py
tox-uv is a tox plugin that replaces virtualenv and pip with uv in your tox environments. We only need to install uv and use uvx (short for uv tool run) to both install tox-uv and run tox, giving faster installs of tox, the virtual environment, and the dependencies within it.
Test comparison
For example: python/blurb#32
No pip with tox-uv
One difference between uv environments and regular venv/virtualenv ones is that they do not include pip. This means calls to pip need replacing, for example in tox.ini:
[testenv]
-commands_pre =
-    {envpython} -m pip install -U -r requirements.txt
+deps =
+    -r requirements.txt
pass_env =
    FORCE_COLOR
commands =
    {envpython} -m pytest {posargs}
If you still need pip (or setuptools or wheel), add uv_seed = True to your [testenv] section to inject them.
Bonus tip
Run the new tool zizmor to find security issues in GitHub Actions.
Header photo: "Road cycling at the 1952 Helsinki Olympics" by Olympia-Kuva Oy & Helsinki City Museum, Public Domain.
November 02, 2024 01:11 PM UTC
November 01, 2024
Python Engineering at Microsoft
Python in Visual Studio Code – November 2024 Release
We’re excited to announce the November 2024 release of the Python, Pylance and Jupyter extensions for Visual Studio Code!
This release includes the following announcements:
- Generate docstrings with Pylance
- New fold and unfold all docstrings commands
- Import suggestions can now include aliases from user files
- Experimental AI Code Action for implementing abstract classes
- Native REPL variables view
If you’re interested, you can check the full list of improvements in our changelogs for the Python, Jupyter and Pylance extensions.
Generate docstrings with Pylance
You can now more conveniently generate documentation for your Python code with Pylance‘s docstring template generation feature! You can generate a docstring template for classes or methods by typing """ or ''', pressing Ctrl+Space, or selecting the lightbulb to invoke the Generate Docstring Code Action. The generated docstring includes fields for the function’s description, parameter descriptions, parameter types, return value description, and return type.

This feature is currently behind an experimental setting, but we look forward to making it the default experience soon. You can try it out today by enabling the python.analysis.supportDocstringTemplate setting.
New fold and unfold all docstrings commands
Docstrings are great for providing context and explanations for your code, but sometimes you might want to fold them to focus on the code itself. You can now more easily do so by folding docstrings with the new Pylance: Fold All Docstrings command, which can also be bound to a keybinding of your choice. To unfold them, use the Pylance: Unfold All Docstrings command.
Import suggestions with aliases from user files
One of Pylance’s most powerful features is its ability to provide auto-import suggestions. By default, Pylance offers the import suggestion from where the symbol is defined, but you might want it to import from a file where the symbol is imported (i.e., aliased). With the new python.analysis.includeAliasesFromUserFiles setting, you can now control whether Pylance includes alias symbols from user files in its auto-import suggestions and in the add import Quick Fix.
Note: Enabling this setting can negatively impact performance, especially in large codebases, as Pylance may need to index more symbols and monitor more files for changes, which can increase resource usage.
Experimental AI Code Action: Implement Abstract Classes
You can now get the best of both worlds with AI and static analysis with the new experimental Code Action to implement abstract classes! This feature requires both the Pylance and the GitHub Copilot extensions. To try it out, you can select the Implement all inherited abstract classes with Copilot Code Action when defining a class that inherits from an abstract one.
You can disable this feature by setting "python.analysis.aiCodeActions": {"implementAbstractClasses": false} in your User settings.
Native REPL Variables View
The Native Python REPL now provides up-to-date variables for the built-in Variables view. This lets you dig into the state of the interpreter as you execute code from files or through the REPL input box.
Upcoming deprecation of Python 3.8 support
Python 3.8 reached end-of-life (EOL) on 2024-10-07. As such, official support for Python 3.8 in the Python extension will stop in three months, in the February 2025 release of the Python extension. There are no plans to actively remove support for Python 3.8, and so we expect the extension will continue to work unofficially with Python 3.8 for the foreseeable future.
Other Changes and Enhancements
We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python and Jupyter Notebooks in Visual Studio Code. Some notable changes include:
- There’s a new documentation page on the various ways to run Python code in VS Code.
- Pixi functionality has been restored only when Pixi is available (vscode-python#24310).
- You can now change the type checking mode to strict or standard from the Language Status menu (pylance-release#6080)
We would also like to extend special thanks to this month’s contributors:
- @mnoah1 Add customizable interpreter discovery timeout in vscode-python#24227
- @brokoli777 Refactor code to remove unused JSDoc types in vscode-python#24300
- @T-256 Make python_server.py compatible with Python 3.7 in vscode-python#24252

Note: This doesn’t guarantee full compatibility nor support for Python 3.7 in other parts of the Python extension. The minimum Python version we support is still Python 3.8 until February 2025, when the minimum officially supported version will be Python 3.9.
Try out these new improvements by downloading the Python extension and the Jupyter extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.
The post Python in Visual Studio Code – November 2024 Release appeared first on Python.
November 01, 2024 05:02 PM UTC
Python Software Foundation
PyCon US 2025 Kicks Off: Website, CfP, and Sponsorship Now Open!
Exciting news: the PyCon US 2025 conference website, Call for Proposals, and sponsorship program are open!
To learn more about the location, deadlines, and other details, check out the links below:
- PyCon US 2025 Launch Blog Post
- PyCon US 2025 Website
- Submit Your PyCon US 2025 Proposal on Pretalx
- Sponsorship Application
- PyCon US 2025 Changes Blog Post
We’re very happy to answer any questions you have about PSF sponsorship or PyCon US 2025; please feel free to reach out to us at sponsors@python.org.
On behalf of the PSF and the PyCon US 2025 Team, we look forward to receiving your proposals and seeing you in Pittsburgh next year 🥳 🐍
November 01, 2024 03:03 PM UTC
PyCon
PyCon US 2025 Launches!
We’re super excited to announce that PyCon US 2025 is back in Pittsburgh! If you missed our first time here, please check out our PyCon US 2024 recap and video recordings.
We’re also excited to announce the launch of our conference website, along with Call for Proposals and our sponsorship program!

We’re Coming Back to Pittsburgh!
We will be hosting PyCon US 2025 again in Pittsburgh, Pennsylvania. We will continue to return in person, with Health and Safety Guidelines in place.

PyCon US 2025 will be held at the David L. Lawrence Convention Center in Pittsburgh, Pennsylvania, on the following dates:
- May 14-15, 2025 - Tutorials
- May 15, 2025 - Sponsor Presentations
- May 16-18, 2025 - Main Conference Days—Keynotes, Talks, Charlas, Expo Hall, and more
- May 19-22, 2025 - Sprints
Introducing PyCon US Co-Chair and Future Conference Chair: Jon Banafato!
PyCon US 2025 Website is Live
You can now head over to the PyCon US 2025 website for all the conference details and more information about our sponsorship program.
Our design for this year’s event draws inspiration from the local fare of Pittsburgh infused with pops of colors and dynamic elements that reflect the uniqueness and character of our community.
We’re excited to collaborate with designers Malek Jerbi and Hamza Haj Taieb from Tunisia, whose illustrations and designs brought the PyCon US 2025 website to life. The design is brought together with the coordination of Georgi K and implemented by YupGup.

PyCon US Call for Proposals is Now Open
PyCon US 2025’s Call for Proposals is officially open for Talks, Tutorials, Posters, and Charlas! The deadline to submit for all tracks is December 19, 2024, 11:59 PM ET. You can view what time that is for you locally on our CfP countdown.

We need beginner, intermediate, and advanced proposals on all sorts of topics— and beginner, intermediate, and advanced speakers to give said presentations. You don’t need to be a 20-year veteran who has spoken at dozens of conferences. On all fronts, we need all types of people. Our community is comprised of a diverse set of people with unique skill sets, and we want our conference program to be a true reflection of that diversity.
For the new and first-time speakers, be sure to take advantage of the speaker mentorship program where you can be matched with experienced speakers who can help you with crafting your proposal. Check out the info on our Proposal Mentorship Program page.
For more information on where and how to submit your proposal, visit the Proposal Guidelines page on the PyCon US 2025 website.
Hatchery Program - Coming Soon!
The PyCon US Hatchery is an effort to establish a path for the introduction of new tracks, summits, and demos at PyCon US. We are still finalizing the details of this program, so please stay tuned for more news about the PyCon US Hatchery 🐣 and start thinking of ideas you might want to contribute!

Sponsorship Has Tremendous Impact
Sponsors are what make PyCon US and the Python Software Foundation possible. PyCon US is the main source of revenue for the PSF, the non-profit behind the Python language and the Python Package Index (PyPI), and the hub for the Python community.

PyCon US is the largest and longest-running Python gathering globally, with a diverse group of highly engaged attendees, many of whom you won’t find at other conferences. We’re excited to be able to provide our sponsors with opportunities to connect with and support the Python community. You’ll be face-to-face with talented developers, qualified recruits, and potential customers, access a large and diverse audience, as well as elevate your visibility and corporate identity within the Python community.
Check out our full menu of benefits.
What you can expect when you sponsor PyCon US and the PSF:
- Reach - Access to 2500+ attendees interested in your products and services and generate qualified leads.
- Brand strength - Be part of the biggest and most prestigious Python conference in the world and support the nonprofit organization behind the Python language.
- Connections - Networking with attendees in person to create connections and provide detailed information about your products and services.
- Recruiting - Access to qualified job candidates. If you’re hiring, there’s no better place to find Python developers than PyCon US.
- 12 months of benefits - Reach the Python community during PyCon US and beyond, with options for recognition on Python.org, PyPI.org, and more.
If you have any questions about sponsoring PyCon US and the PSF, please contact us at sponsors@python.org.
Stay in the Loop
As we get closer to the event, the conference website is where you’ll find details for our call for proposals, registration launch, venue information, and everything PyCon US related! Be sure to subscribe here to the PyCon US Blog, follow @PyCon on Twitter, @pycon@fosstodon.org and @ThePSF@fosstodon.org on Mastodon, the PSF on LinkedIn, and subscribe to PyCon US 2025 News so you won’t miss a thing. Our official hashtag is #PyConUS.

November 01, 2024 02:42 PM UTC
ListenData
How to Automate WordPress using Python
This tutorial explains how to use Python to automate tasks in WordPress. It includes various functions to perform tasks such as creating, extracting, updating and deleting WordPress posts, pages, comments and media items (images) directly from Python.
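As a flavor of the sort of automation involved, here’s a minimal sketch using the WordPress REST API with the requests library; the site URL and credentials are placeholders, and the article itself may use different helpers:

import requests

SITE = "https://example.com"                 # placeholder WordPress site
AUTH = ("username", "application-password")  # placeholder credentials

# Create a draft post via the WordPress REST API.
response = requests.post(
    f"{SITE}/wp-json/wp/v2/posts",
    auth=AUTH,
    json={
        "title": "Hello from Python",
        "content": "This post was created automatically.",
        "status": "draft",
    },
)
response.raise_for_status()
print(response.json()["id"])  # ID of the newly created post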
To read this article in full, please click here
November 01, 2024 12:46 PM UTC
Real Python
The Real Python Podcast – Episode #226: PySheets: Spreadsheets in the Browser Using PyScript
What goes into building a spreadsheet application in Python that runs in the browser? How do you make it launch quickly, and where do you store the cells of data? This week on the show, we speak with Chris Laffra about his project, PySheets, and his book "Communication for Engineers."
November 01, 2024 12:00 PM UTC
PyCon
Important Changes Ahead: A Commitment to Financial Transparency
PyCon US is the largest and longest-running annual gathering for the community using and developing the open-source Python programming language. It is produced and underwritten by the Python Software Foundation (PSF), the 501(c)(3) nonprofit organization dedicated to advancing and promoting Python and its community.
The PSF is a grant-giving non-profit and the revenue generated by PyCon US is essential to our continued community support and operations. Check out our Mission and our latest Annual Impact Report to get an idea of the scope of work our foundation is responsible for. Many misunderstand or are unaware of this relationship, but the PSF relies on maintaining a strict budget for the operation of PyCon US in order to continue its grants programs and other financial responsibilities.
With this in mind, PyCon US has taken on many expenses in the years following the pandemic, including the losses associated with canceling PyCon US 2020 and the move to PyCon US 2021 online. Additionally, inflation in event costs and lower projected sponsorships have combined to create a significant loss for PyCon US and the PSF in 2024.
Because the PSF is a nonprofit, we keep PyCon US registration costs much lower than comparable technology conferences. Our goal is for PyCon US to remain accessible to the widest group possible. For us to continue to do so and ensure the sustainability of the event, we’ve had to make some changes to the structure of PyCon US. We know that an important piece of making changes to PyCon US is notice and awareness to our wonderful community– and we want to be transparent about the reasons behind them.
Ticket Prices
To continue providing the content, program, and opportunities of PyCon US this year and in the future and to offset these rising costs, we’ve decided to raise ticket prices this year by $50 USD for the Individual and Corporate rates and $25 USD for the Student rate.
New PyCon US 2025 registration prices:
- Corporate: $800 USD
- Individual: $450 USD
- Student: $125 USD
By adjusting ticket prices, we hope to continue delivering a valuable and enriching conference experience while still being financially accessible to our community.
PyCon US Online
PyCon US moved online for the first time in 2021 during the pandemic, and we’ve been proud to offer the online attendance option for the past three years since. While we recognize the value that PyCon US Online provides in terms of accessibility, the challenges of delivering a seamless and engaging experience for both in-person and virtual attendees have been significant. From a cost perspective alone, cutting the online event will free up resources that can be put back into the in-person and after-conference experience.
PyCon US will record all Talk tracks, Keynotes and Lightning Talks on the main days of the conference (Friday - Sunday) and we plan to publish these recordings very quickly after the sessions take place so those who can’t join us in person can access them while the topics are still current.
We appreciate the enthusiasm and participation of everyone who was part of the PyCon US Online experience and encourage everyone who can to join us in person in Pittsburgh this year. If you are unable to join, we welcome you to subscribe to our PyCon US YouTube channel to receive notifications of new content.
Tutorial Recordings
The pre-conference tutorials are a valuable part of the PyCon US conference and provide an opportunity for Pythonistas around the world to learn and grow their skills in hands-on sessions.
Tutorial ticket sales are a large source of revenue for PyCon US, but in the past few years, the margin on the tutorials has declined due to rising costs associated with recording these sessions.
Because tutorials are longer than talks and are in an interactive classroom-like setting, they require many resources such as power, better internet connectivity, and catering. Additionally, tutorial instructors are compensated due to the amount of work involved in preparing and providing these sessions.
Since tutorial registration is at an additional cost, we want to prioritize the experience for those attending in person and to do so, we will be removing tutorial recordings this year. The value of the face-to-face learning, discussions, and hands-on engagement that occurs during the tutorials outweighs the benefits of capturing the sessions for later viewing and can’t be replicated in a recorded format. We hope this change will encourage more attendees to participate in the tutorials onsite and engage with the content the instructors work very hard to provide.
Speaker Travel Grant Limit
The support of our community has allowed us to offer generous Travel Grants based on individual needs including travel, accommodation, and conference tickets. The Travel Grant program helps us shape a conference that is diverse on a number of axes from technical to geographic.
However, with decreased Travel Grant funds, PyCon US wants to ensure that we can still spread out our funding to support as many people as possible. Currently, a large part of the Travel Grant budget goes to speakers, and all speakers who are included on the proposal for an accepted talk are automatically eligible for a travel grant. For 2025, there will be a limit of two Travel Grant-funded speakers per accepted proposal across all tracks. We believe this will allow us to spread our funding capacity across our community and welcome as many Pythonistas to PyCon US as we can, while not impacting the quality and diversity of our programming.
Thank you!
PyCon US has been around for over 20 years, and we want to ensure it sticks around for many more! We recognize that these changes will impact many folks in a variety of ways, and we hope that our transparency about why these decisions were made will help bring understanding.
By making these changes and being willing to adapt, PyCon US can continue to be a welcoming, community conference, where everyone can meet new people and learn new things, connect with old friends and tell people about their projects. We appreciate our community’s support and understanding regarding these changes. If you have any feedback or questions, please feel free to reach out to the organizing team.
November 01, 2024 10:00 AM UTC
Zero to Mastery
Python Monthly Newsletter 💻🐍
59th issue of Andrei Neagoie's must-read monthly Python Newsletter: Python 3.13, Django Project Ideas, GPU Bubble is Bursting, and much more. Read the full newsletter to get up-to-date with everything you need to know from last month.
November 01, 2024 10:00 AM UTC
Nick Coghlan
The origin of venvstacks
There has been a longstanding gap in the Python packaging ecosystem that has somewhat annoyed me, but not enough to do anything about it: we haven't really had a good way to compose multiple layers of Python virtual environments together, allowing large dependencies (like AI and machine learning libraries) to be shared across multiple different application environments without having to install them directly into the base runtime environment.
Utilities for collecting up an entire Python runtime, an application, and all its dependencies into a single deployable artifact have existed since before the turn of the century.
We've had standardised virtual environments (allowing multiple applications to share a base Python runtime and its directly installed third party packages) for almost as long.
We've had zip applications for a long time as well (and other utilities which build on that feature).
We've had tools like wagon
which allow us to ship a bundle of prebuilt Python wheel archives and
install them on a destination system without needing to download
anything else from the internet at installation time.
We've had tools like conda
(and more recently uv
),
which make intelligent use of hard links on local systems to avoid
making duplicate copies of completely identical versions of packages.
We've technically had platform specific mechanisms like Linux container images, where the contents of an environment can be built up across multiple container image layers, with the lower layers being shared across multiple image definitions, but have lacked a convenient way to handle the dependency management complications involved in using these tools to share large Python libraries.
But we've never had something which specifically took full advantage of the way Python's import system works to enable robust structural decomposition of Python applications into independently updatable subcomponents (with a granularity larger than single packages).
All of this history meant that I was thoroughly intrigued when a mutual acquaintance introduced me to the creators of the LM Studio personal AI desktop application to discuss a Python packaging problem they had looming on their technical road map: it was clear from user demand and the rate of evolution in the Python AI/ML ecosystem that they needed a way to ship Python AI/ML components directly to their users without having to wait for those capabilities to be made available through native interfaces in other languages (such as Swift, C++, or JavaScript), but it didn't seem obvious to them how they could readily integrate that capability into LM Studio without making the application installation process substantially more complicated for their users.
What started as a consulting contract for a technical proof of concept, and has since turned into a permanent position with the organisation, proved fruitful, and the result is the recently published open source venvstacks utility, which is specifically designed to enable the kind of portable deterministic artifact publishing setup that LM Studio needed, including:
- Base runtime layers (based on python-build-standalone)
- Framework layers (for shipping large dependencies, such as Apple MLX or PyTorch)
- Application layers (including additional unpackaged "launch modules" for app execution)
There are certainly still some technical limitations to be addressed (the dynamic linking problem with layering virtual environments like this is notorious amongst Python packaging experts for a reason), but even in its current form, venvstacks is already capable enough to power the recent inclusion of Apple MLX support in LM Studio.
November 01, 2024 04:21 AM UTC
October 31, 2024
John Cook
How hard is constraint programming?
I’ve been writing code for the Z3 SMT solver for several months now. Here are my findings.
Python is used here as the base language. Python/Z3 feels like a two-layer programming model—declarative code for Z3, imperative code for Python. In this it seems reminiscent of C++/CUDA programming for NVIDIA GPUs—in that case, mixed CPU and GPU imperative code. Either case is a clever combination of methodologies that is surprisingly fluent and versatile, albeit not a perfect blend of seamless conceptual cohesion.
Other comparisons:
- Both have two separate memory spaces (CUDA CPU/GPU memories for one; pure Python variables and Z3 variables for the other).
- Both can be tricky to debug. In earlier days, CUDA had no debugger, so one had to fall back to the trusty “printf” statement (for a while it didn’t even have that!). If the code crashed, you might get no output at all. To my knowledge, Z3 has no dedicated debugger. If the problem being solved comes back as satisfiable, you can print out the discovered model variables, but if satisfiability fails, you get very little information. Like some other novel platforms, something of a “black box.”
- In both cases, programmer productivity can be well served by developing custom abstractions. I developed a Python class to manage multidimensional arrays of Z3 variables; this was a huge time saver.
There are differences too, of course.
- In Python, “=” is assignment, but in Z3, one only has “==”, logical or numeric equality, not assignment per se. Variables are set once and can’t be changed—sort of a “write-once variables” programming model—as is natural to logic programming (see the sketch after this list).
- Code speed optimization is challenging. Code modifications for Z3 constraints/variables can have extreme and unpredictable runtime effects, so it’s hard to optimize. Z3 is solving an NP-complete problem after all, so runtimes can theoretically increase massively. Speedups can be massive also; one round of changes I made gave 2000X speedup on a test problem. Runtime of CUDA code can be unpredictable to a lesser degree, depending on the PTX and SASS code generation phases and the aggressive code optimizations of the CUDA compiler. However, it seems easier to “see through” CUDA code, down to the metal, to understand expected performance, at least for smaller code fragments. The Z3 solver can output statistics of the solve, but these are hard to actionably interpret for a non-expert.
- Z3 provides many, many algorithmic tuning parameters (“tactics”), though it’s hard to reason about which ones to pick. Autotuners like FastSMT might help. Also, there have been some efforts to develop tools to visualize the solve process, which might be of help.
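To make the declarative flavor concrete, here is a minimal sketch using the z3-solver Python bindings (the constraints are illustrative, not from the project described above):

from z3 import Int, Solver, sat

x = Int("x")
y = Int("y")

solver = Solver()
solver.add(x + y == 10)  # "==" states a constraint; it is not assignment
solver.add(x > y)        # variables are never reassigned, only constrained

if solver.check() == sat:
    print(solver.model())  # a satisfying model, e.g. [y = 4, x = 6]
else:
    print("unsat")  # on failure, Z3 offers little diagnostic information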
It would be great to see more modern tooling support and development of community best practices to help support Z3 code developers.
The post How hard is constraint programming? first appeared on John D. Cook.

eGenix.com
PyDDF Python Herbst Sprint 2024
This is an announcement for a Python sprint in Düsseldorf, Germany; it was originally published in German.
Announcement
Python Meeting Autumn Sprint 2024 in Düsseldorf
Saturday, 09.11.2024, 10:00-18:00
Sunday, 10.11.2024, 10:00-18:00
Eviden / Atos Information Technology GmbH, Am Seestern 1, 40547 Düsseldorf
Information
The Python Meeting Düsseldorf (PyDDF) is organizing a Python sprint weekend with the kind support of Eviden Germany. The sprint will take place on the weekend of 9-10 November 2024 at the Eviden / Atos office, Am Seestern 1, Düsseldorf. The following topic areas have already been suggested:
- AI/ML: Image recognition with Azure Computer Vision
- AI/ML: Extracting text and metadata from press websites with the help of a local LLM
- AI/ML: Transcription of video/audio files with Whisper
- Kodi add-ons for ARD, ZDF and ARTE
Registration, Costs and Further Information
You can find all further details and the registration on the Meetup sprint page:
IMPORTANT: Without registration we cannot arrange building access, so spontaneous registration on the day of the sprint will most likely not work.
Participants should also register in the PyDDF Telegram group, since that is where we coordinate:
Über das Python Meeting Düsseldorf
Das Python Meeting Düsseldorf ist eine regelmäßige Veranstaltung in Düsseldorf, die sich an Python-Begeisterte aus der Region wendet.
Einen guten Überblick über die Vorträge bietet unser PyDDF YouTube-Kanal, auf dem wir Videos der Vorträge nach den Meetings veröffentlichen.Veranstaltet wird das Meeting von der eGenix.com GmbH, Langenfeld, in Zusammenarbeit mit Clark Consulting & Research, Düsseldorf.
Marc-André Lemburg, eGenix.com
October 30, 2024
The Python Show
49 - EdgeDB and Python with Yury Selivanov
In this episode of The Python Show Podcast, we welcome Yury Selivanov as our guest. Yury is a core CPython developer and co-founder of EdgeDB and MagicStack.
We chatted about many different topics, including the following:
- Core Python development
- EdgeDB and how it differs from relational databases
- Python without the GIL
- Python subinterpreters
- Memhive
- and more!
Links
Learn more about our guest and the topics we talked about with the following links:
Yury’s GitHub page
EdgeDB on GitHub
EdgeDB’s website
PyCon 2024 - Yury Selivanov: Overcoming GIL with subinterpreters and immutability
An article about Memhive
Real Python
Python Closures: Common Use Cases and Examples
In Python, a closure is typically a function defined inside another function. This inner function grabs the objects defined in its enclosing scope and associates them with the inner function object itself. The resulting combination is called a closure.
Closures are a common feature in functional programming languages. In Python, closures can be pretty useful because they allow you to create function-based decorators, which are powerful tools.
In this tutorial, you’ll:
- Learn what closures are and how they work in Python
- Get to know common use cases of closures
- Explore alternatives to closures
To get the most out of this tutorial, you should be familiar with several Python topics, including functions, inner functions, decorators, classes, and callable instances.
Get Your Code: Click here to download the free sample code that shows you how to use closures in Python.
Take the Quiz: Test your knowledge with our interactive “Python Closures: Common Use Cases and Examples” quiz. You’ll receive a score upon completion to help you track your learning progress.
Getting to Know Closures in Python
A closure is a function that retains access to its lexical scope, even when the function is executed outside that scope. When the enclosing function returns the inner function, you get a function object with an extended scope.
In other words, closures are functions that capture the objects defined in their enclosing scope, allowing you to use them in their body. This feature allows you to use closures when you need to retain state information between consecutive calls.
Closures are common in programming languages that are focused on functional programming, and Python supports closures as part of its wide variety of features.
In Python, a closure is a function that you define in and return from another function. This inner function can retain the objects defined in the non-local scope right before the inner function’s definition.
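Here’s a quick taste of that state-retaining behavior, using a closure-based counter (a preview, not part of the tutorial excerpt; it uses nonlocal to rebind the captured variable):

>>> def make_counter():
...     count = 0
...     def counter():
...         nonlocal count
...         count += 1
...         return count
...     return counter
...
>>> next_count = make_counter()
>>> next_count()
1
>>> next_count()
2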
To better understand closures in Python, you’ll first look at inner functions because closures are also inner functions.
Inner Functions
In Python, an inner function is a function that you define inside another function. This type of function can access and update names in its enclosing function, which is the non-local scope.
Here’s a quick example:
>>> def outer_func():
... name = "Pythonista"
... def inner_func():
... print(f"Hello, {name}!")
... inner_func()
...
>>> outer_func()
Hello, Pythonista!
>>> greeter = outer_func()
>>> print(greeter)
None
In this example, you define outer_func() at the module level or global scope. Inside this function, you define the name local variable. Then, you define another function called inner_func(). Because this second function lives in the body of outer_func(), it’s an inner or nested function. Finally, you call the inner function, which uses the name variable defined in the enclosing function.
When you call outer_func(), inner_func() interpolates name into the greeting string and prints the result to your screen.
Note: To learn more about inner functions, check out the Python Inner Functions: What Are They Good For? tutorial.
In the above example, you defined an inner function that can use the names in the enclosing scope. However, when you call the outer function, you don’t get a reference to the inner function. The inner function and the local names won’t be available outside the outer function.
In the following section, you’ll learn how to turn an inner function into a closure, which makes the inner function and the retained variables available to you.
Function Closures
All closures are inner functions, but not all inner functions are closures. To turn an inner function into a closure, you must return the inner function object from the outer function. This may sound like a tongue twister, but here’s how you can make outer_func() return a closure object:
>>> def outer_func():
... name = "Pythonista"
... def inner_func():
... print(f"Hello, {name}!")
... return inner_func
...
>>> outer_func()
<function outer_func.<locals>.inner_func at 0x1066d16c0>
>>> greeter = outer_func()
>>> greeter()
Hello, Pythonista!
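If you want to peek at the state a closure carries around (a quick aside, not from the tutorial itself), you can inspect the function’s __closure__ attribute:

>>> greeter.__closure__
(<cell at 0x...: str object at 0x...>,)
>>> greeter.__closure__[0].cell_contents
'Pythonista'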
Read the full article at https://realpython.com/python-closure/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Ned Batchelder
GitHub action security: zizmor
Zizmor is a new tool to check your GitHub action workflows for security concerns. I found it really helpful to lock down actions.
Action workflows can be esoteric, and continuous integration is not everyone’s top concern, so it’s easy for them to have subtle flaws. A tool like zizmor is great for drawing attention to them.
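Running it is simple: point it at a workflows directory. I installed it with cargo, which is why the script later in this post runs it from ~/.cargo/bin:

$ cargo install zizmor
$ zizmor .github/workflows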
When I ran it, I had a few issues to fix:
- Some data available to actions is manipulable by unknown people, so you have
  to avoid interpolating it directly into shell commands. For example, you might
  want to add the branch name to the action summary:

      - name: "Summarize"
        run: |
          echo "### From branch ${{ github.ref }}" >> $GITHUB_STEP_SUMMARY

  But github.ref is a branch name chosen by the author of the pull request. It
  could have a shell injection which could let an attacker exfiltrate secrets.
  Instead, put the value into an environment variable, then use it to
  interpolate:

      - name: "Summarize"
        env:
          REF: ${{ github.ref }}
        run: |
          echo "### From branch ${REF}" >> $GITHUB_STEP_SUMMARY

- The actions/checkout step should avoid persisting credentials:

      - name: "Check out the repo"
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          persist-credentials: false

- In steps where I was pushing to GitHub, this meant I needed to explicitly
  set a remote URL with credentials:

      - name: "Push digests to pages"
        env:
          GITHUB_TOKEN: ${{ secrets.token }}
        run: |
          git config user.name nedbat
          git config user.email ned@nedbatchelder.com
          git remote set-url origin https://x-access-token:${GITHUB_TOKEN}@github.com/${GITHUB_REPOSITORY}.git
There were some other things that were easy to fix, and of course, you might have other issues. One improvement to zizmor: it could link to explanations of how to fix the problems it finds, but it wasn’t hard to find resources, like GitHub’s Security hardening for GitHub Actions.
William Woodruff is zizmor’s author. He was incredibly responsive when I had problems or questions about using zizmor. If you hit a snag, write an issue. It will be a good experience.
If you are like me, you have repos lying around that you don’t think about much. These are a special concern, because their actions could be years old, and not well maintained. These dusty corners could be a good vector for an attack. So I wanted to check all of my repos.
With Claude’s help I wrote a shell script to find all git repos I own and run zizmor on them. It checks the owner of the repo because my drive is littered with git repos I have no control over:
#!/bin/bash
# zizmor-repos.sh
echo "Looking for workflows in repos owned by: $*"

# Find all git repositories in current directory and subdirectories
find . \
    -type d \( \
        -name "Library" \
        -o -name "node_modules" \
        -o -name "venv" \
        -o -name ".venv" \
        -o -name "__pycache__" \
    \) -prune \
    -o -type d -name ".git" -print 2>/dev/null \
    | while read -r gitdir; do
        # Get the repository directory (parent of .git)
        repo_dir="$(dirname "$gitdir")"
        # Check if .github/workflows exists
        if [ -d "${repo_dir}/.github/workflows" ]; then
            # Get the GitHub remote URL
            remote_url=$(git -C "$repo_dir" remote get-url origin)
            # Check if it's our repository.
            # Handle both HTTPS and SSH URL formats.
            for owner in $*; do
                if echo "$remote_url" | grep -q "github.com[/:]$owner/"; then
                    echo ""
                    echo "Found workflows in $owner repository: $repo_dir"
                    ~/.cargo/bin/zizmor "$repo_dir/.github/workflows"
                fi
            done
        fi
    done
After fixing issues, it’s very satisfying to see:
% zizmor-repos.sh nedbat BostonPython
Looking for workflows in repos owned by: nedbat BostonPython
Found workflows in nedbat repository: ./web/stellated
🌈 completed ping-nedbat.yml
No findings to report. Good job!
Found workflows in nedbat repository: ./web/nedbat_nedbat
🌈 completed build.yml
No findings to report. Good job!
Found workflows in nedbat repository: ./scriv
🌈 completed tests.yml
No findings to report. Good job!
Found workflows in nedbat repository: ./lab/gh-action-tests
🌈 completed matrix-play.yml
No findings to report. Good job!
Found workflows in nedbat repository: ./aptus/trunk
🌈 completed kit.yml
No findings to report. Good job!
Found workflows in nedbat repository: ./cog
🌈 completed ci.yml
No findings to report. Good job!
Found workflows in nedbat repository: ./dinghy/nedbat
🌈 completed test.yml
🌈 completed daily-digest.yml
🌈 completed docs.yml
No findings to report. Good job!
Found workflows in nedbat repository: ./dinghy/sample
🌈 completed daily-digest.yml
No findings to report. Good job!
Found workflows in nedbat repository: ./coverage/badge-samples
🌈 completed samples.yml
No findings to report. Good job!
Found workflows in nedbat repository: ./coverage/django_coverage_plugin
🌈 completed tests.yml
No findings to report. Good job!
Found workflows in nedbat repository: ./coverage/trunk
🌈 completed dependency-review.yml
🌈 completed publish.yml
🌈 completed codeql-analysis.yml
🌈 completed quality.yml
🌈 completed kit.yml
🌈 completed python-nightly.yml
🌈 completed coverage.yml
🌈 completed testsuite.yml
No findings to report. Good job!
Found workflows in BostonPython repository: ./bospy/about
🌈 completed past-events.yml
No findings to report. Good job!
Nice.
Seth Michael Larson
How to export OPML from Omnivore
Omnivore recently announced they were bought by ElevenLabs, which is an AI company funded by Trump-supporting VC firm Andreessen Horowitz. As a part of this deal, they are shuttering the service on an extremely tight deadline.
This is very disappointing to me; I've previously recommended Omnivore to others and have donated for the past year that I've used the service. It goes without saying that I want nothing to do with Omnivore.
At the recommendation of a few friends I am going to try out Feedbin, which costs $5/month (the same price that I was willing to donate to Omnivore) and has a generous 30-day trial. This service has been around for much longer and appears to not be adding AI garbage to their app (thank you, Feedbin!)
The Omnivore team has unhelpfully given an extremely tight deadline to migrate your content out before they shutter the service: November 15th. Exporting your content (links, tags, rendered HTML pages) worked fine for me but I had to restart the export process once. Please do this ASAP to avoid losing your data.
I will need to write a few scripts to import the content into Feedbin. But your data export doesn't give you your RSS or newsletter subscriptions (again, screw you Omnivore).
So I wrote a short Python script to do that. First install the dependencies (OmnivoreQL and PyOPML):
$ python -m pip install omnivoreql pyopml
You'll also need to create an Omnivore API token and place the value in the script:
import opml
from omnivoreql import OmnivoreQL

omnivore_api_token = "<YOUR API TOKEN GOES HERE>"
omnivore = OmnivoreQL(api_token=omnivore_api_token)
subscriptions = omnivore.get_subscriptions()

feeds_opml = opml.OpmlDocument(
    title="Omnivore Feeds Export"
)

with open("newsletters.txt", mode="w") as newsletters_fileobj:
    for subscription in subscriptions["subscriptions"]["subscriptions"]:
        if subscription["newsletterEmail"] is not None:
            # Newsletters can't be exported as feeds, so save their names.
            newsletters_fileobj.write(subscription["name"] + "\n")
        else:
            feeds_opml.add_rss(
                text=subscription["name"],
                title=subscription["name"],
                xml_url=subscription["url"],
                description=subscription["description"],
            )

with open("feeds.opml", mode="wb") as feeds_fileobj:
    feeds_opml.dump(feeds_fileobj, pretty=True)
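Then run the script (export_omnivore.py is a hypothetical name; call yours whatever you like):

$ python export_omnivore.py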
After running the script you'll have two files: feeds.opml and newsletters.txt.
The feeds.opml file can be imported into RSS readers that support the OPML format (Feedbin and many other services do).
The newsletters.txt file is mostly to remind you which newsletters you've subscribed to. Because these readers use custom email addresses to handle newsletters, you'll need to manually set these subscriptions up on the new reader service you choose to use.
If there are any issues with the script above, get in contact and I'll try to fix them.
Have thoughts or questions? Let's chat over email or social:
sethmichaellarson@gmail.com
@sethmlarson@fosstodon.org
Want more articles like this one? Get notified of new posts by subscribing to the RSS feed or the email newsletter. I won't share your email or send spam, only whatever this is!
Want more content now? This blog's archive has ready-to-read articles. I also curate a list of cool URLs I find on the internet.
Find a typo? This blog is open source, pull requests are appreciated.
Thanks for reading! ♡ This work is licensed under CC BY-SA 4.0
Armin Ronacher
Make It Ephemeral: Software Should Decay and Lose Data
Most software that exists today does not forget. Creating software that remembers is easy, but designing software that deliberately “forgets” is a bit more complex. By “forgetting,” I don't mean losing data because it wasn’t saved or losing it randomly due to bugs. I'm referring to a deliberate design decision to discard data at a later time. This ability to forget can be an incredibly beneficial property for many applications. Most importantly, software that forgets enables different user experiences.
I'm willing to bet that your cloud storage or SaaS applications likely serve as dumping grounds for outdated, forgotten files and artifacts. This doesn’t have to be the case.
Older computer software often aimed to replicate physical objects and experiences. This approach (skeuomorphism) was about making digital interfaces feel familiar by resembling the appearance and behavior of older physical objects, even though they didn't need to. Ironically, though, despite its focus on look and feel, skeuomorphism rarely considers some of the hidden affordances of the physical world. Critically, digital software rarely features degradation. Yes, the trash bin was created as an approximation of this, but the bin seemingly did not make it farther than file or email management software. It also does not go far enough.
In the physical world, much of what we create has a natural tendency to decay, and that decay carries really useful information. A sticky note on a monitor gathers dust and fades. A notebook fills with notes and random scribbles, becomes worn, and eventually ends up in a cabinet, finally ending its life discarded in a bin. We probably all clear out our desks every couple of months, tossing outdated items to keep the space manageable. When I do that, a key part of it is quickly judging how “old” a piece of paper looks. But even without regular cleaning, things on my desk are naturally lost or discarded over time. Yet software rarely behaves this way. I think that’s a problem.
When data is kept indefinitely by default, it changes our relationship with that software. People may hesitate to create anything in shared spaces for fear of cluttering them, while others might indiscriminately litter them. In file-based systems, this may be manageable, but in shared SaaS applications, everything created (dashboards, notebooks, diagrams) lingers indefinitely and remains searchable and discoverable. This persistence seems advantageous but can quickly lead to more and more clutter.
Adding new data to software is easy. Scheduling it for automatic deletion is a bit harder. Simulating any kind of “visual decay” to hint at age or relevance is rarely seen in today's software, though it wouldn't be all that hard to add. I'm not convinced that the work required to implement any of those things is why they don't exist; I think it's more likely that there is a belief that keeping stuff around forever is a benefit over the limitations of the real world.
The reality is that even though the entities we create stick around forever, the information contained within them ages badly. Of the 30-odd "test" dashboards in our Datadog installation, most no longer show data. The same is true for hundreds of notebooks. We have a few thousand notebooks, and quite a few of them at this point are anchored to data that is past the retention period or reference metrics that are gone.
In shared spaces with lots of users, few things are intended to last forever. I hope that it will become more popular for software to take age into account more intentionally. For instance, software could start fading out old documents that are rarely maintained or refreshed. I want software to hide old documents, dashboards, and the like, and that most critically includes not showing them in search. I don't want to accidentally navigate to old and unused dashboards in the midst of an incident.
Sorting by frequency of use is insufficient to me. Ideally, software would embrace an “ephemeral by default” approach. While there’s some risk of data loss, you can make the deletion purely virtual (at least for a while). Imagine dashboard software with built-in “garbage collection”: everything created starts with a short time-to-live (say, 30 days), after which it moves to a “to sort” folder. If it’s not actively sorted and saved within six months, it's moved to the trash and eventually deleted.
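To make the mechanics concrete, here is a minimal sketch of that lifecycle in Python (hypothetical names and thresholds, nothing from a real product):

from dataclasses import dataclass
from datetime import datetime, timedelta

TTL = timedelta(days=30)           # fresh documents live this long
TRASH_AFTER = timedelta(days=180)  # unsorted documents get trashed after this

@dataclass
class Document:
    name: str
    created: datetime
    sorted_at: datetime | None = None  # set when a user deliberately keeps it

def lifecycle_state(doc: Document, now: datetime) -> str:
    if doc.sorted_at is not None:
        return "kept"       # explicitly sorted: exempt from collection
    age = now - doc.created
    if age < TTL:
        return "active"
    if age < TRASH_AFTER:
        return "to-sort"    # surfaced in a "to sort" folder
    return "trash"          # virtually deleted, eventually for real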
This idea extends far beyond dashboards! Wiki and information management software like Notion could benefit from decaying notes, as the information they hold often becomes outdated quickly. I routinely encounter more outdated pages than current ones. While outright deletion may not be the solution, irrelevant notes and documents showing up in searches add to the clutter and make finding useful information harder. “But I need my data sometimes years later,” I hear you say. What about making it intentional? Archive them in yearbooks. Make me intentionally “dig into the archives” if I really have to. There are many very intentional ways of dealing with this problem.
And even if software does not want to go down that path, I would at least wish for scheduled deletion. I will forget to delete; I'm lazy, and given the tools available, I rarely clean up. Yet many of the things I create I already know I really only need for a week or two. So give me a button I can press to schedule deletion. Then I don't have to remember to clean up after myself a few months later; I can make that call today, when I create the thing.
October 29, 2024
PyCoder’s Weekly
Issue #653 (Oct. 29, 2024)
#653 – OCTOBER 29, 2024
View in Browser »
Sudoku in Python Packaging
Simon writes about a Sudoku solver written by Konstin that uses Python’s packaging mechanisms to solve Sudoku puzzles. The results are output as a requirements.txt file, where sudoku-0-3==5 represents the (0,3) cell’s answer of 5.
SIMON WILLISON
Python Thread Safety: Using a Lock and Other Techniques
In this tutorial, you’ll learn about the issues that can occur when your code is run in a multithreaded environment. Then you’ll explore the various synchronization primitives available in Python’s threading module, such as locks, which help you make your code safe.
REAL PYTHON
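As a taste of the technique the tutorial covers, a lock-protected counter might look like this (a sketch, not the tutorial's own code):

import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # only one thread mutates counter at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000, reliably, because of the lock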
OpenAPI Python SDK Creation: Speakeasy vs Open Source
Open-source OpenAPI generators are great for experimentation and hobby projects but lack the reliability, performance, and intuitive developer experience required for critical applications. Speakeasy creates idiomatic SDKs that meet the bar for enterprise use. Check out this comparison guide →
SPEAKEASY sponsor
Python and Sigstore
PEP 761 proposes removing PGP signatures from CPython artifacts and solely relying on Sigstore. But just what is Sigstore? This post explains how CPython gets signed and why Sigstore is a good choice.
SETH LARSON
Discussions
Articles & Tutorials
Introducing the New Starter Kit for Wagtail CMS
Currently, most people who want to play around with Wagtail try the Bakery Demo test site, but it is not meant to be a starter site. The Wagtail team has created a new starter template that provides you with a high-performance, production-grade Wagtail site, and this article introduces it.
JAKE HOWARD • Shared by Meagen Voss
Understanding Python’s Global Interpreter Lock (GIL)
Python’s Global Interpreter Lock or GIL, in simple words, is a mutex (or a lock) that allows only one thread to hold the control of the Python interpreter at any one time. In this video course you’ll learn how the GIL affects the performance of your Python programs.
REAL PYTHON course
Write Code as if Failure Doesn’t Exist
“With Temporal, we…regularly reduced lines of code from 300 to 30.” -DigitalOcean. Ever wish you could eliminate recovery logic, callbacks, and timers? Temporal’s open source programming model allows you to stop plumbing and start focusing on what matters: building awesome features. Check it out now →
TEMPORAL TECHNOLOGIES sponsor
Python and SysV Shared Memory
At work, Adriaan deals with code that interfaces with Unix SysV’s shared memory components. For convenience he wanted to get at this from Python, and “because work”, from Python 3.7. This post talks about how he solved the problem.
ADRIAAN DE GROOT
Python 3.13, What Didn’t Make the Headlines
Bite Code summarizes some of the lesser-covered changes in the Python 3.13 release, including how some of the REPL improvements made it into pdb, improvements to shutil, and small additions to the asyncio library.
BITE CODE!
Django and HTMX Tutorial: Easier Web Development
This tutorial explores how the htmx library can bring dynamic functionality, like lazy loading, search-as-you-type, and infinite scroll to your Django project with almost no JavaScript necessary.
PYCHARM CHANNEL video
Python 3.12 vs 3.13: Performance Testing
This post shows the results of a performance comparison between Python 3.12 and 3.13 on two different processors. TL;DR: Python 3.13 is faster, but there are a couple of hairy edge cases.
WŁODZIMIERZ LEWONIEWSKI
Software Engineer Titles Have (Almost) Lost All Their Meaning
This post examines the devaluation of software engineer titles and its impact on the integrity of the tech industry.
TREVOR LASN
PyData Amsterdam 2024 Talks
This is a listing of the recorded talks from PyData Amsterdam.
YOUTUBE video
Projects & Code
Scrapling: Lightning-Fast, Adaptive Web Scraping for Python
GITHUB.COM/D4VINCI • Shared by Karim Shoair
Events
Weekly Real Python Office Hours Q&A (Virtual)
October 30, 2024
REALPYTHON.COM
PyCon FR 2024
October 31 to November 3, 2024
PYCON.FR
PyCon Zimbabwe
October 31 to November 3, 2024
PYCON.ORG
SPb Python Drinkup
October 31, 2024
MEETUP.COM
PyDelhi User Group Meetup
November 2, 2024
MEETUP.COM
Melbourne Python Users Group, Australia
November 4, 2024
J.MP
Happy Pythoning!
This was PyCoder’s Weekly Issue #653.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]