Planet Python
Last update: December 05, 2023 04:42 PM UTC
December 05, 2023
Real Python
How to Get the Current Time in Python
Getting the current time in Python is a nice starting point for many time-related operations. One very important use case is creating timestamps. In this tutorial, you’ll learn how to get, display, and format the current time with the datetime module.
To effectively use the current time in your Python applications, you’ll add a few tools to your belt. For instance, you’ll learn how to read attributes of the current time, like the year, minutes, or seconds. To make the time more easily readable, you’ll explore options for printing it. You’ll also get to know different formats of time and understand how to deal with time zones.
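As a quick taste of what the tutorial covers, here is a minimal sketch using only the standard library's datetime module:

```python
from datetime import datetime, timezone

# Current local time and timezone-aware UTC time
now = datetime.now()
utc_now = datetime.now(timezone.utc)

# Read individual attributes of the current moment
print(now.year, now.minute, now.second)

# Format it for display
print(now.strftime("%Y-%m-%d %H:%M:%S"))
print(utc_now.isoformat())
```

The tutorial goes into each of these pieces, including formatting codes and time zone handling, in much more depth.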
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
December 05, 2023 02:00 PM UTC
Mike Driscoll
Viewing an Animated GIF with Python
Animated GIFs are a fun way to share silent videos on social media. This website has a tutorial to help you learn how to create your own animated GIFs with Python. But what if you wanted to view an animated GIF with Python?
If you’re wondering if Python can present an animated GIF to the viewer, then you’ve found the right tutorial!
You will learn how to display a GIF with Python using the following methods:
- Using tkinter to view an animated GIF
- Using PySimpleGUI to view an animated GIF
- Using Jupyter Notebook to view an animated GIF
Getting Started
The first step is to find an animated GIF you’d like to display. You can get GIFs from Google or Giphy. There are also many applications that you can use to make your own GIFs.
For the purposes of this tutorial, you can use this silly example GIF that shows a “hello world” example from Asciimatics, a Python TUI package:
Save this file as asciimatics_hello.gif to your computer so you can use it in the examples in this tutorial.
Now that you have an animated GIF, you are ready to learn how to display it using Python!
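Before wiring up a GUI, it can help to inspect the GIF itself. The helper below is a small sketch using Pillow, the imaging library that the examples in this tutorial rely on:

```python
from PIL import Image, ImageSequence

def gif_info(path):
    """Return (frame_count, per_frame_duration_ms) for an animated GIF."""
    im = Image.open(path)
    frames = sum(1 for _ in ImageSequence.Iterator(im))
    # Pillow exposes the frame delay (in milliseconds) via im.info;
    # 100 ms is a common fallback when the file doesn't record one
    duration = im.info.get("duration", 100)
    return frames, duration

# Example (assumes the GIF from this tutorial is in the current folder):
# print(gif_info("asciimatics_hello.gif"))
```

Knowing the frame count and delay makes it easier to reason about what the GUI code below has to do.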
Viewing the Animated GIF with tkinter
StackOverflow has many suggestions you can use to view animated GIFs with Python. One suggestion was to use tkinter, the Python GUI toolkit included with Python, to view an animated GIF.
The following code is a cleaned up variation on the suggestion from that post:
from tkinter import *
from PIL import Image, ImageTk

class MyLabel(Label):
    def __init__(self, master, filename):
        im = Image.open(filename)
        seq = []
        try:
            while 1:
                seq.append(im.copy())
                im.seek(len(seq))  # skip to next frame
        except EOFError:
            pass  # we're done

        try:
            self.delay = im.info['duration']
        except KeyError:
            self.delay = 100

        first = seq[0].convert('RGBA')
        self.frames = [ImageTk.PhotoImage(first)]

        Label.__init__(self, master, image=self.frames[0])

        temp = seq[0]
        for image in seq[1:]:
            temp.paste(image)
            frame = temp.convert('RGBA')
            self.frames.append(ImageTk.PhotoImage(frame))

        self.idx = 0
        self.cancel = self.after(self.delay, self.play)

    def play(self):
        self.config(image=self.frames[self.idx])
        self.idx += 1
        if self.idx == len(self.frames):
            self.idx = 0
        self.cancel = self.after(self.delay, self.play)

class Main():
    def __init__(self):
        root = Tk()
        self.anim = MyLabel(root, 'asciimatics_hello.gif')
        self.anim.pack()
        Button(root, text='stop', command=self.stop_it).pack()
        root.mainloop()

    def stop_it(self):
        self.anim.after_cancel(self.anim.cancel)

main = Main()
This code will import tkinter and create a Label() widget. The label can contain all kinds of different things, including images. For this example, you will create a list of frames from the GIF and store them in self.frames as instances of ImageTk.PhotoImage.
Then you will call tkinter’s handy after() method to make it play the frames back after a short delay. You use this command to call play() recursively.
When you run this code, you should see the following:
You’ll note that Tkinter seems to default to playing the GIF in black and white. You can do some more searching yourself and see if you can find a way to make it display the GIF in color.
Now, let’s try viewing it with PySimpleGUI!
Viewing the Animated GIF with PySimpleGUI
Tkinter isn’t the only GUI toolkit in town. You can also use PySimpleGUI, a wrapper around Tkinter, wxPython, and PyQt. If you’d like to know more, you can check out this brief intro to PySimpleGUI.
You will need to install PySimpleGUI since it doesn’t come with Python. You can use pip to do that:
python -m pip install pysimplegui
Now you’re ready to write some code. The following code is based on an example from the PySimpleGUI GitHub page:
import PySimpleGUI as sg
from PIL import Image, ImageTk, ImageSequence

gif_filename = 'asciimatics_hello.gif'

layout = [[sg.Image(key='-IMAGE-')]]

window = sg.Window('Window Title', layout,
                   element_justification='c', margins=(0, 0),
                   element_padding=(0, 0), finalize=True)

interframe_duration = Image.open(gif_filename).info['duration']

while True:
    for frame in ImageSequence.Iterator(Image.open(gif_filename)):
        event, values = window.read(timeout=interframe_duration)
        if event == sg.WIN_CLOSED:
            exit(0)
        window['-IMAGE-'].update(data=ImageTk.PhotoImage(frame))
This code is a little shorter than the Tkinter example, so that’s pretty neat. Give it a try and see what happens!
Now you’re ready to try out viewing a GIF in Jupyter Notebook!
Viewing the Animated GIF in Jupyter Notebook
Jupyter Notebooks are great ways to share code along with some markup. You can create presentations, code examples, and their output and even widgets using Jupyter.
You can also insert images into Jupyter Notebook cells. That includes animated GIFs too! If you’d like to learn more about Jupyter Notebook, you should check out this introductory tutorial.
You’ll need to install Jupyter to be able to follow along with this part of the tutorial. Fortunately, that’s only a pip-install away:
python -m pip install jupyter
Now that you have Jupyter Notebook installed, open up your terminal or Powershell and run the following command:
jupyter notebook
The next step is to create a new Notebook. Click the New button on the top right in the new web page that should have loaded in your default browser. Look for a URL like this: http://localhost:8888/tree
You will be presented with several choices when you click the New button. You should pick the top choice, which is Notebook. A new browser tab will open, and you may see a pop-up window asking which Kernel to choose. The default will probably be Python 3, which is what you want. Pick that, and you should be good to go!
In the middle of the toolbar along the top of the Jupyter Notebook window, you will see a drop-down labeled Code. Click on that and change it to Markdown. Now the current cell is in Markdown mode instead of Code mode. That’s great!
Enter the following into your cell:

![animated gif](asciimatics_hello.gif)
Make sure that asciimatics_hello.gif is in the same folder as your new Jupyter Notebook. Or you can put the absolute path to the file in there instead.
Now, you need to run the cell to see the animated GIF. You can click the play button in the toolbar or press SHIFT+ENTER. Either way, that should run the cell.
At this point, you should see the following:
You did it! That looks great!
Wrapping Up
Python has many different ways to view animated GIFs. You can use the built-in Tkinter GUI package, the PySimpleGUI package, or Jupyter Notebook.
There are probably others you can use as well. Feel free to drop a comment and let us all know what version you like best or a different package that you’ve tried.
The post Viewing an Animated GIF with Python appeared first on Mouse Vs Python.
December 05, 2023 01:35 PM UTC
December 04, 2023
James Bennett
Easy HTTP status codes in Python
This is part of a series of posts I’m doing as a sort of Python/Django Advent calendar for Advent 2023, offering a small tip or piece of information each day from the first Sunday of Advent through Christmas Eve. See the first post in the series for an introduction.
The most useful test
I could be misremembering, but I think Frank Wiles was the first person I ever heard explain that, for a web application, …
December 04, 2023 08:56 PM UTC
Python Software Foundation
It's time for our annual year-end PSF fundraiser and membership drive
Support Python in 2023!
There are two ways to join in the drive this year:
- Donate directly to the PSF! Every dollar makes a difference. (Does every dollar also make a puppy’s tail wag? We make no promises, but maybe you should try, just in case? 🐶)
- Become a member! Sign up as a Supporting member of the PSF. Be a part of the PSF, and help us sustain what we do with your annual support.
Or, heck, why not do both? 🥳
Your Donations:
- Keep Python thriving
- Invest directly in CPython and PyPI progress
- Bring the global Python community together
- Make our community more diverse and robust every year
Let’s take a look back on 2023:
PyCon US - We held our 20th PyCon US, in Salt Lake City and online, which was an exhilarating success! For the online component, PyCon US OX, we added two moderated online hallway tracks (in Spanish and English) and saw a 33% increase in virtual engagement. It was great to see everyone again in 2023, and we’re grateful to all the speakers, volunteers, attendees, and sponsors who made it such a special event.
Security Developer in Residence - Seth Larson joined the PSF earlier this year as our first ever Security Developer-in-Residence. Seth is already well-known to the Python community – he was named a PSF Fellow in 2022 and has already written a lot about Python and security on his blog. This critical role would not be possible without funding from the OpenSSF Alpha-Omega Project.
PyPI Safety & Security Engineer - Mike Fiedler joined the PSF earlier this year as our first ever PyPI Safety & Security Engineer. Mike is already a dedicated member of the Python packaging community – he has been a Python user for some 15 years, maintains and contributes to open source, and became a PyPI Maintainer in 2022. You can see some of what he's achieved for PyPI already on the PyPI blog. This critical role would not be possible without funding from AWS.
Welcome, Marisa and Marie! - In 2023 we were able to add two new full time staff members to the PSF. Marisa Comacho joined as Community Events Manager and Marie Nordin joined as Community Communications Manager. We are excited to add two full time dedicated staff members to the PSF to support PyCon US, our communications, and the community as a whole.
CPython Developer in Residence - Our CPython Developer in Residence, Łukasz Langa, continued to provide trusted support and advancement of the Python language, including oversight for the releases of Python 3.8 and 3.9, adoption of Sigstore, and stewardship of PEP 703 (to name a few of many!). Łukasz also engaged with the community by orchestrating the Python Language Summit and participating in events such as PyCon US 2023, EuroPython, and PyCon Colombia. This critical role would not be possible without funding from Meta.
Authorized as CVE Numbering Authority (CNA) - Being authorized as a CNA is one milestone in the Python Software Foundation's strategy to improve the vulnerability response processes of critical projects in the Python ecosystem. The Python Software Foundation CNA scope covers Python and pip, two projects which are fundamental to the rest of Python ecosystem.
Five new Fiscal Sponsorees - Welcome to Bandit, BaPya, Twisted, PyOhio, and North Bay Python as new Fiscal Sponsorees of the PSF! The PSF provides 501(c)(3) tax-exempt status to fiscal sponsorees and provides back office support so they can focus on their missions.
Our Thanks:
Thank you for being a part of this drive and of the Python community! Keep an eye on this space and on our social media in the coming weeks for updates on the drive and the PSF 🐍
Your support means the world to us. We’re incredibly grateful to be in community with you!
December 04, 2023 04:50 PM UTC
Zato Blog
Smart IoT integrations with Akenza and Python
Smart IoT integrations with Akenza and Python
Overview
The Akenza IoT platform, on its own, excels in collecting and managing data from a myriad of IoT devices. However, it is integrations with other systems, such as enterprise resource planning (ERP), customer relationship management (CRM) platforms, workflow management or environmental monitoring tools that enable a complete view of the entire organizational landscape.
Complementing Akenza's capabilities, and enabling the smooth integrations, is the versatility of Python programming. Given how flexible Python is, the language is a natural choice when looking for a bridge between Akenza and the unique requirements of an organization looking to connect its intelligent infrastructure.
This article is about combining the two, Akenza and Python. At the end of it, you will have:
- A bi-directional connection to Akenza using Python and WebSockets
- A Python service subscribed to and receiving events from IoT devices through Akenza
- A Python service that will be sending data to IoT devices through Akenza
Since WebSocket connections are persistent, they enhance the responsiveness of IoT applications: data exchange occurs in real time, fostering a dynamic and agile integrated ecosystem.
Python and Akenza WebSocket connections
First, let's have a look at full Python code - to be discussed later.
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import WSXAdapter

# ################################################################################################

if 0:
    from zato.server.generic.api.outconn.wsx.common import OnClosed, \
        OnConnected, OnMessageReceived

# ################################################################################################

class DemoAkenza(WSXAdapter):

    # Our name
    name = 'demo.akenza'

    def on_connected(self, ctx:'OnConnected') -> 'None':
        self.logger.info('Akenza OnConnected -> %s', ctx)

# ################################################################################################

    def on_message_received(self, ctx:'OnMessageReceived') -> 'None':

        # Confirm what we received
        self.logger.info('Akenza OnMessageReceived -> %s', ctx.data)

        # This is an indication that we are connected ..
        if ctx.data['type'] == 'connected':

            # .. for testing purposes, use a fixed asset ID ..
            asset_id:'str' = 'abc123'

            # .. build our subscription message ..
            data = {'type': 'subscribe', 'subscriptions': [{'assetId': asset_id, 'topic': '*'}]}

            ctx.conn.send(data)

        else:
            # .. if we are here, it means that we received a message other than type "connected".
            self.logger.info('Akenza message (other than "connected") -> %s', ctx.data)

# ################################################################################################

    def on_closed(self, ctx:'OnClosed') -> 'None':
        self.logger.info('Akenza OnClosed -> %s', ctx)

# ################################################################################################
Now, deploy the code to Zato and create a new outgoing WebSocket connection. Replace the API key with your own and make sure to set the data format to JSON.
Receiving messages from WebSockets
The WebSocket Python services that you author have three methods of interest, each reacting to specific events:
- on_connected - Invoked as soon as a WebSocket connection has been opened. Note that this is a low-level event and, in the case of Akenza, it does not mean yet that you are able to send or receive messages from it.
- on_message_received - The main method that you will be spending most time with. Invoked each time a remote WebSocket sends, or pushes, an event to your service. With Akenza, this method will be invoked each time Akenza has something to inform you about, e.g. that you subscribed to messages or that a new event arrived from a device.
- on_closed - Invoked when a WebSocket has been closed. It is no longer possible to use a WebSocket once it has been closed.
Let's focus on on_message_received, which is where the majority of the action takes place. It receives a single parameter of type OnMessageReceived, which describes the context of the received message. That is, it is in the "ctx" that you will find both the current request and a handle to the WebSocket connection through which you can reply to the message.
The two important attributes of the context object are:
- ctx.data - A dictionary of data that Akenza sent to you
- ctx.conn - The underlying WebSocket connection through which the data was sent and through which you can send a response
Now, the subscription logic in on_message_received is clear:
- First, we check if Akenza confirmed that we are connected (type=='connected'). You need to check the type of a message each time Akenza sends something to you and react to it accordingly.
- Next, because we know that we are already connected (e.g. our API key was valid), we can subscribe to events from a given IoT asset. For testing purposes, the asset ID is given directly in the source code but, in practice, this information would be read from a configuration file or database.
- Finally, for messages of any other type we simply log their details. Naturally, a full integration would handle them per what is required in given circumstances, e.g. by transforming and pushing them to other applications or management systems.
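Outside of Zato, the same check-the-type-and-react pattern is often written as a plain dispatch table. The sketch below is independent of any framework, and the handler names are hypothetical:

```python
def on_connected_msg(data):
    # React to the "connected" confirmation, e.g. by subscribing
    return "connected"

def on_subscribed(data):
    # React to a successful subscription
    return f"subscribed: {data['subscriptions']}"

# Map Akenza message types to handlers; anything unknown falls
# through to a logging-style default
handlers = {
    "connected": on_connected_msg,
    "subscribed": on_subscribed,
}

def handle_message(data):
    handler = handlers.get(data["type"], lambda d: f"unhandled: {d['type']}")
    return handler(data)

print(handle_message({"type": "connected"}))
print(handle_message({"type": "sample", "payload": {}}))
```

Adding support for a new message type then means adding one entry to the dictionary rather than growing an if/elif chain.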
A sample message from Akenza will look like this:
INFO - WebSocketClient - Akenza message (other than "connected") -> {'type': 'subscribed',
'replyTo': None, 'timeStamp': '2023-11-20T13:32:50.028Z',
'subscriptions': [{'assetId': 'abc123', 'topic': '*', 'tagId': None, 'valid': True}],
'message': None}
How to send messages to WebSockets
An aspect not to be overlooked is communication in the other direction, that is, sending of messages to WebSockets. For instance, you may have services invoked through REST APIs, or perhaps from a scheduler, and their job will be to transform such calls into configuration commands for IoT devices.
Here is the core part of such a service, reusing the same Akenza WebSocket connection:
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

# ################################################################################################

class DemoAkenzaSend(Service):

    # Our name
    name = 'demo.akenza.send'

    def handle(self) -> 'None':

        # The connection to use
        conn_name = 'Akenza'

        # Get a connection ..
        with self.out.wsx[conn_name].conn.client() as client:

            # .. and send data through it.
            client.send('Hello')

# ################################################################################################
Note that responses to the messages sent to Akenza will be received using your first service's on_message_received method - WebSockets-based messaging is inherently asynchronous and the channels are independent.
Now, we have a complete picture of real-time, IoT connectivity with Akenza and WebSockets. We are able to establish persistent, responsive connections to assets, we can subscribe to and send messages to devices, and that lets us build intelligent automation and integration architectures that make use of powerful, emerging technologies.
December 04, 2023 04:00 PM UTC
Daniel Roy Greenfeld
TIL: Forcing pip to use virtualenv
Necessary because installing things into your base python causes false positives, true negatives, and other head bangers.
Set this environment variable, preferably in your rc file:
# ~/.zshrc
export PIP_REQUIRE_VIRTUALENV=true
Now if I try to use pip outside a virtualenv:
dj-notebook on main [$] is 📦 v0.6.1 via 🐍 v3.10.6
❯ pip install ruff
ERROR: Could not find an activated virtualenv (required).
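If you ever do need to bypass the guard for a single command, the variable can be overridden inline (shown here with a harmless pip --version rather than a real install):

```shell
# Disable the virtualenv requirement for one command only
PIP_REQUIRE_VIRTUALENV=false python -m pip --version

# Or lift it for the rest of the current shell session
unset PIP_REQUIRE_VIRTUALENV
```

Either way, the setting in your rc file keeps protecting every other shell you open.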
This TIL is thanks to David Winterbottom.
December 04, 2023 03:30 PM UTC
Real Python
Serialize Your Data With Python
Whether you’re a data scientist crunching big data in a distributed cluster, a back-end engineer building scalable microservices, or a front-end developer consuming web APIs, you should understand data serialization. In this comprehensive guide, you’ll move beyond XML and JSON to explore several data formats that you can use to serialize data in Python. You’ll explore them based on their use cases, learning about their distinct categories.
By the end of this tutorial, you’ll have a deep understanding of the many data interchange formats available. You’ll master the ability to persist and transfer stateful objects, effectively making them immortal and transportable through time and space. Finally, you’ll learn to send executable code over the network, unlocking the potential of remote computation and distributed processing.
In this tutorial, you’ll learn how to:
- Choose a suitable data serialization format
- Take snapshots of stateful Python objects
- Send executable code over the wire for distributed processing
- Adopt popular data formats for HTTP message payloads
- Serialize hierarchical, tabular, and other shapes of data
- Employ schemas for validating and evolving the structure of data
To get the most out of this tutorial, you should have a good understanding of object-oriented programming principles, including classes and data classes, as well as type hinting in Python. Additionally, familiarity with the HTTP protocol and Python web frameworks would be a plus. This knowledge will make it easier for you to follow along with the tutorial.
You can download all the code samples accompanying this tutorial by clicking the link below:
Get Your Code: Click here to download the free sample code that shows you how to serialize your data with Python.
Feel free to skip ahead and focus on the part that interests you the most, or buckle up and get ready to catapult your data management skills to a whole new level!
Get an Overview of Data Serialization
Serialization, also known as marshaling, is the process of translating a piece of data into an interim representation that’s suitable for transmission through a network or persistent storage on a medium like an optical disk. Because the serialized form isn’t useful on its own, you’ll eventually want to restore the original data. The inverse operation, which can occur on a remote machine, is called deserialization or unmarshaling.
Note: Although the terms serialization and marshaling are often used interchangeably, they can have slightly different meanings for different people. In some circles, serialization is only concerned with the translation part, while marshaling is also about moving data from one place to another.
The precise meaning of each term depends on whom you ask. For example, Java programmers tend to use the word marshaling in the context of remote method invocation (RMI). In Python, marshaling refers almost exclusively to the format used for storing the compiled bytecode instructions.
Check out the comparison of serialization and marshaling on Wikipedia for more details.
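To make that last point concrete, CPython's marshal module can round-trip a compiled code object. This is a minimal sketch, not a recommendation to use marshal for general data interchange:

```python
import marshal

# Compile a tiny snippet of source to a code object ...
code = compile("x = 21 * 2", "<demo>", "exec")

# ... then serialize it with marshal, the same format CPython
# uses internally for .pyc bytecode caches
payload = marshal.dumps(code)
restored = marshal.loads(payload)

namespace = {}
exec(restored, namespace)
print(namespace["x"])  # -> 42
```

The payload format is version-specific to the interpreter, which is exactly why marshal stays an internal detail rather than an interchange format.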
The name serialization implies that your data, which may be structured as a dense graph of objects in the computer’s memory, becomes a linear sequence, or a series, of bytes. Such a linear representation is perfect to transmit or store. Raw bytes are universally understood by various programming languages, operating systems, and hardware architectures, making it possible to exchange data between otherwise incompatible systems.
When you visit an online store using your web browser, chances are it runs a piece of JavaScript code in the background to communicate with a back-end system. That back end might be implemented in Flask, Django, or FastAPI, which are Python web frameworks. Because JavaScript and Python are two different languages with distinct syntax and data types, they must share information using an interchange format that both sides can understand.
In other words, parties on opposite ends of a digital conversation may deserialize the same piece of information into wildly different internal representations due to their technical constraints and specifications. However, it would still be the same information from a semantic point of view.
Tools like Node.js make it possible to run JavaScript on the back end, including isomorphic JavaScript that can run on both the client and the server in an unmodified form. This eliminates language discrepancies altogether but doesn’t address more subtle nuances, such as big-endian vs little-endian differences in hardware.
Other than that, transporting data from one machine to another still requires converting it into a network-friendly format. Specifically, the format should allow the sender to partition and put the data into network packets, which the receiving machine can later correctly reassemble. Network protocols are fairly low-level, so they deal with streams of bytes rather than high-level data types.
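The standard library's struct module makes the endianness point concrete; the ">" format prefix requests network (big-endian) byte order:

```python
import struct

value = 0x01020304

big = struct.pack(">I", value)     # network (big-endian) byte order
little = struct.pack("<I", value)  # little-endian byte order

print(big.hex())     # -> 01020304
print(little.hex())  # -> 04030201

# Round-trip from the wire format back to a Python int
assert struct.unpack(">I", big)[0] == value
```

Agreeing on a byte order up front is what lets two machines with different native layouts reassemble the same integer from the same bytes.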
Depending on your use case, you’ll want to pick a data serialization format that offers the best trade-off between its pros and cons. In the next section, you’ll learn about various categories of data formats used in serialization. If you already have prior knowledge about these formats and would like to explore their respective scenarios, then feel free to skip the basic introduction coming up next.
Compare Data Serialization Formats
There are many ways to classify data serialization formats. Some of these categories aren’t mutually exclusive, making certain formats fall under a few of them simultaneously. In this section, you’ll find an overview of the different categories, their trade-offs, and use cases, as well as examples of popular data serialization formats.
Later, you’ll get your hands on some practical applications of these data serialization formats under different programming scenarios. To follow along, download the sample code mentioned in the introduction and install the required dependencies from the included requirements.txt file into an active virtual environment by issuing the following command:
(venv) $ python -m pip install -r requirements.txt
This will install several third-party libraries, frameworks, and tools that will allow you to navigate through the remaining part of this tutorial smoothly.
Textual vs Binary
At the end of the day, all serialized data becomes a stream of bytes regardless of its original shape or form. But some byte values, or their specific arrangement, may correspond to Unicode code points with a meaningful and human-readable representation. Data serialization formats whose syntax consists purely of characters visible to the naked eye are called textual data formats, as opposed to binary data formats meant for machines to read.
The main benefit of a textual data format is that people like you can read serialized messages, make sense of them, and even edit them by hand when needed. In many cases, these data formats are self-explanatory, with descriptive element or attribute names. For example, take a look at this excerpt from the Real Python web feed with information about the latest tutorials and courses published:
Read the full article at https://realpython.com/python-serialize-data/ »
December 04, 2023 02:00 PM UTC
Django Weblog
Django 5.0 released
The Django team is happy to announce the release of Django 5.0.
The release notes cover a deluge of exciting new features in detail, but a few highlights are:
- Database-computed default values: the new db_default parameter lets model fields define defaults that the database itself computes.
- Continuing the trend of expanding the Django ORM, the new GeneratedField allows the creation of database-generated columns.
- The concept of a field group was added to the templates system to simplify form field rendering.
You can get Django 5.0 from our downloads page or from the Python Package Index. The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E.
With the release of Django 5.0, Django 4.2 has reached the end of mainstream support. The final minor bug fix release, 4.2.8, was issued today. Django 4.2 is an LTS release and will receive security and data loss fixes until April 2026. All users are encouraged to upgrade before then to continue receiving fixes for security issues.
Django 4.1 has reached the end of extended support. The final security release (4.1.13) was issued on November 1st. All Django 4.1 users are encouraged to upgrade to Django 4.2 or later.
See the downloads page for a table of supported versions and the future release schedule.
December 04, 2023 11:17 AM UTC
PyCharm
Django 5.0 Delight: Unraveling the Newest Features
Hello everyone! As 2023 draws to a close, our reasons for celebration extend well beyond the upcoming holidays and vacation. Exciting developments await in the technology realm, including the unveiling of Python 3.12 and the much-anticipated Django 5.0. This latest release of our Python-based web framework (still the most popular option on the market) signifies […]
December 04, 2023 10:20 AM UTC
Django Weblog
Django bugfix release: 4.2.8
Today we've issued the 4.2.8 bugfix release.
The release package and checksums are available from our downloads page, as well as from the Python Package Index. The PGP key ID used for this release is Mariusz Felisiak: 2EF56372BA48CD1B.
December 04, 2023 08:33 AM UTC
James Bennett
A Python/Django Advent calendar
Advent is the liturgical season preceding Christmas in many Christian traditions, and generally begins on a Sunday — often the fourth Sunday before Christmas Day, but it varies depending on the church and the rite, which can put the first Sunday in Advent in either late November or early December.
The concept of an Advent “calendar” which counts down to Christmas, and in which each day has a door or panel which opens to reveal a …
December 04, 2023 01:03 AM UTC
December 03, 2023
TechBeamers Python
44 Python Data Analyst Interview Questions
Here are 44 Python data analyst interview questions focused on Python programming, along with their answers. You may like to check these out. Python Data Analyst Interview Questions and Answers. Question: How do you read data from a CSV file in Python? Answer: To read data from a CSV file, you can use the pandas [...]
The post 44 Python Data Analyst Interview Questions appeared first on TechBeamers.
December 03, 2023 08:39 PM UTC
December 01, 2023
Marcos Dione
ikiwiki to nikola: the script
People asked for it:
#! /usr/bin/python3

import argparse
from datetime import datetime
from glob import glob
from os import stat
from os.path import basename, splitext
import re
import sys
import time

footnote_re = re.compile(r'\[(?P<foot_number>\d+)\]')
taglink_re = re.compile(r'\[\[!taglink (?P<tag_name>[^\]]*)\]\]')
image_re = re.compile(r'\[\[!img (?P<path>.*)\]\]')
format_start_re = re.compile(r'^\[\[!format (?P<language>.*) """$')
format_end_re = re.compile(r'^"""\]\]$')

def rewrite_footnotes_line(line, text_block, footnote_block, taglink_block, foot_number):
    new_line = line
    changed = False

    while footnote := footnote_re.search(new_line):
        # remove the []s
        start = footnote.start('foot_number') - 1
        end = footnote.end('foot_number') + 1

        prefix = new_line[:start]
        postfix = new_line[end:]

        foot_number = footnote.group('foot_number')

        if text_block:
            new_line = f"{prefix}[^{foot_number}]{postfix}"
        elif footnote_block:
            new_line = f"{prefix}[^{foot_number}]:{postfix}"
        else:
            raise ValueError('found a footnote in the taglink_block!')

        changed = True
    else:
        if not changed and footnote_block and len(line) > 0:
            # '[^]: ' <-- 5 extra chars
            new_line = f"{' ' * (len(foot_number) + 5)}{line.strip()}"

    return new_line, foot_number

def rewrite_footnotes(src):
    lines = src.splitlines()
    hr_count = len([ line for line in lines if line.startswith('---') ])

    new_lines = []
    text_block = True
    footnote_block = False
    taglink_block = False
    hr_seen = 0
    foot_number = ''

    for line in lines:
        line_length = len(line)

        if line_length > 4 and line[:4] == '    ':
            # it's an inline code block, leave alone
            new_lines.append(line)
            continue

        if line.startswith('---'):
            hr_seen += 1

            # if there is only one hr, then we have text + taglink blocks
            # if there are two or more, it's text + footnote + taglink blocks
            if text_block and hr_count >= 2 and hr_seen == hr_count - 1:
                text_block = False
                footnote_block = True
                # don't keep it
                continue
            elif hr_seen == hr_count:
                text_block = False
                footnote_block = False
                taglink_block = True
                # we'll need it later
                new_lines.append(line)
                continue

        try:
            new_line, foot_number = rewrite_footnotes_line(line, text_block, footnote_block,
                                                           taglink_block, foot_number)
        except Exception as e:
            print(f"got `{e}´ for `{line}´.")
            raise

        new_lines.append(new_line)

    return '\n'.join(new_lines) + '\n'

def rewrite_taglinks(src):
    new_lines = []
    new_tags = []

    for line in src.splitlines():
        if len(line) > 0 and line == '-' * len(line):
            # don't keep it
            continue

        tags = taglink_re.findall(line)
        if len(tags) > 0:
            new_tags.extend(tags)
        else:
            new_lines.append(line)

    return '\n'.join(new_lines) + '\n', new_tags

def rewrite_images(src):
    new_lines = []

    for line in src.splitlines():
        image = image_re.search(line)

        if image is not None:
            # get the text before and after the whole directive
            start = image.start(0)
            end = image.end(0)

            prefix = line[:start]
            postfix = line[end:]

            path = image.group('path')
            # the root to which this 'absolute' path points is the website's root
            new_line = f"{prefix}{postfix}"
            new_lines.append(new_line)
        else:
            new_lines.append(line)

    return '\n'.join(new_lines) + '\n'

lang_map = dict(
    py='python',
    sh='bash',
)

def rewrite_format(src):
    new_lines = []

    for line in src.splitlines():
        start = format_start_re.match(line)
        if start is not None:
            lang = start.group('language')
            # if there's no mapping return the same lang
            new_line = f"```{lang_map.get(lang, lang)}"
            new_lines.append(new_line)
            continue

        if format_end_re.match(line):
            new_lines.append('```')
            continue

        new_lines.append(line)

    return '\n'.join(new_lines) + '\n'

def titlify(src):
    words = src.split('-')
    words[0] = words[0].title()

    return ' '.join(words)

def test_offesetify():
    src = -3600
    dst = '+0100'
    assert offsetify(src) == dst

def offsetify(src):
    hours, seconds = divmod(src, 3600)
    # "offsets are always in minutes" sounds like one item in
    # 'things developers believe about timezones'
    minutes, _ = divmod(seconds, 60)

    # NOTE: time.timezone returns seconds west of UTC, which is the opposite
    # of how usual offsets go
    if src > 0:
        sign = '-'
    else:
        sign = '+'

    return f"{sign}{-hours:02d}{minutes:02d}"

def datify(src):
    '''1701288755.377908 -> 2023-11-29 21:12:35 +0100'''
    # BUG: I'm gonna assume current timezone.
    # thanks SirDonNick#python@libera.chat
    # dto=DT(2023,11,29, 12,13,59, tzinfo=UTC_TZ); DT.astimezone( dto , getTZ('Europe/Brussels') )  #==> 2023-11-29 13:13:59+01:00
    offset = time.timezone
    dt = datetime.fromtimestamp(src)

    return f"{dt.strftime('%Y-%m-%d %H:%M:%S')} {offsetify(offset)}"

# zoneinfo for some reason doesn't know about CEST, so I'll just hack a mapping here
tzname_to_utc_offset = dict(
    CEST='+0200',
    CET='+0100',
)

month_name_to_number = dict(
    jan= 1, ene= 1,
    feb= 2,
    mar= 3,
    apr= 4, abr= 4,
    may= 5,
    jun= 6,
    jul= 7,
    aug= 8, ago= 8,
    sep= 9,
    oct=10,
    nov=11,
    dec=12, dic=12,
)

def dedatify(src):
    #       0         1      2     3      4       5           6     7
    # src=['Posted', 'Sun', '26', 'Aug', '2012', '11:27:16', 'PM', 'CEST']
    month = month_name_to_number[src[3].lower()]
    utc_offset = tzname_to_utc_offset[src[7]]

    h, m, s = [ int(x) for x in src[5].split(':') ]
    if src[6].upper() == 'PM':
        h += 12  # TODO: support 12PM

    return f"{src[4]}-{month:02d}-{int(src[2]):02d} {h:02d}:{m:02d}:{s:02d} {utc_offset}"

def build_meta(filepath, tags, date=None):
    filename = splitext(basename(filepath))[0]
    if date is None:
        mtime = stat(filepath).st_mtime
        date_string = datify(mtime)
    else:
        date_string = dedatify(date)

    meta = f""".. title: {titlify(filename)}
.. slug: {filename}
.. date: {date_string}
.. tags: {', '.join(tags)}
.. type: text
"""

    return filename, meta

def import_post(opts):
    src = open(opts.filepath).read()
    mid, tags = rewrite_taglinks(rewrite_footnotes(src))
    dst = rewrite_format(rewrite_images(mid))

    if opts.date is None:
        filename, meta = build_meta(opts.filepath, tags)
    else:
        filename, meta = build_meta(opts.filepath, tags, date=opts.date)

    open(f"posts/{filename}.md", 'w+').write(dst)
    open(f"posts/{filename}.meta", 'w+').write(meta)

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('filepath', metavar='FILE')
    parser.add_argument('-d', '--date', nargs=8,
                        help='Just pass something like "Posted Wed 12 Sep 2012 08:19:23 PM CEST".')

    return parser.parse_args()

if __name__ == '__main__':
    opts = parse_args()
    import_post(opts)
I removed all the tests, but they all looked like this:
def test_dedatify():
    src = 'Posted Wed 12 Sep 2012 08:19:23 PM CEST'.split()
    dst = '2012-09-12 20:19:23 +0200'
    assert dedatify(src) == dst
Enjoy.
December 01, 2023 11:31 PM UTC
Real Python
The Real Python Podcast – Episode #182: Building a Python JSON Parser & Discussing Ideas for PEPs
Have you thought of a way to improve the Python language? How do you share your idea with core developers and start a discussion in the Python community? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
December 01, 2023 12:00 PM UTC
Tryton News
Newsletter December 2023
In the last month we focused on fixing bugs, improving the behaviour of things, speeding-up performance issues and adding new features for you.
Changes for the User
Accounting, Invoicing and Payments
We eased the former unique constraint on IBAN account numbers: multiple identical deactivated IBAN account numbers are now allowed.
When changing the company or party of an invoice, the tax identifiers are now cleared when they are no longer valid.
On payment terms we now display the fields to define the payment term delta in the correct order of application.
Now it is possible to shorten or extend the fiscal year, as long as all periods still fall within its date range.
On refunds we now show the external payment ID from the payment provider.
Now we order payable/receivable lines by maturity date or move date.
Parties and CRM
The height of the street widget is now reduced to three lines.
New Releases
We released bug fixes for the currently maintained long-term support series 7.0 and 6.0, and the penultimate series 6.8.
Changes for the System Administrator
Tryton now fully supports updating database records via CSV data. The missing piece was the handling of removing links in xxx2Many fields on update, which is now done. To unlink or remove existing xxx2Many target records, simply exclude them from the CSV data to import. This way the imported data mirrors the records stored in the database.
Now the Tryton client cleans-up all temporary files and directories on exit.
Changes for Implementers and Developers
Now it is possible to specify a database statement timeout on RPC calls. The new timeout parameter helps to avoid costly database queries. The default value is 60 seconds and can be modified in the configuration.
We introduced a new policy requiring a documentation update when contributing a new feature to an existing module. We've been applying this rule for one month, and it has already improved the documentation of some modules.
A new contrib group has been included on the Heptapod repository. It contains some tools related to Tryton which provide web integration, filestore integration and even a module to send SMS. We are happy to include more similar projects in the group, so feel free to contribute yours!
1 post - 1 participant
December 01, 2023 08:00 AM UTC
Armin Ronacher
Untyped Python: The Python That Was
A lot has been said about Python typing. If you have been following me on Twitter (or you have the dubious pleasure of working with me), you probably know my skepticism towards Python typing. This stems from the syntax's complexity, the sluggishness of mypy, the overall cumbersome nature of its implementation and awkwardness of interactions with it. I won't dwell on these details today, instead I want to take you on a little journey back to my early experiences with Python. Why? Because I believe the conflict between the intrinsic philosophy of Python and the concept of typing is fundamental and profound, but also not new.
The concept of typed programming languages predates 2015 by a long stretch; they were not invented just now. Debates over the necessity of typing are not a recent phenomenon at all. When you wanted to start a new software project, particularly something that resembled a web service, you always had a choice of programming language. Back in 2004 when I started diving into programming, there were plenty of languages to choose from. The conventional choice was not Python; the obvious choice was not even PHP, but Java. Java was the go-to for serious web application projects, given its typing system and enterprise-grade features. PHP was for toys, and Python was nowhere to be found. PHP was popular, but in my circles it was always seen as an entirely ridiculous concept, and the idea that someone would build a business on it even more so. I remember in my first year of university the prevalent opinion was that the real world runs on .NET, Java and C++. PHP was ridiculed, Python and Ruby did not appear in conversations, and JavaScript on the server was nonexistent.
Yet here I was, building stuff in PHP and Python. My choice wasn't driven by an aversion to static typing out of laziness but by the exceptional developer experience these languages offered, in large part because of the lack of types. There was a stellar developer experience. Yes, it did not have intellisense, but all the changes I made appeared on the web instantly. I recall directly modifying live websites via FTP in real time; later, editing websites straight from vim on the production server. Was it terrible and terrifying? Absolutely. But damn, it was productive. I learned a lot from that. These languages taught me valuable lessons about trade-offs. It was not just me who learned that; an entire generation of developers in those languages learned that our biggest weakness (the code not being typed, and not being compiled) was also our biggest strength. It required a bit of restraint and a slightly different way of programming, but it was incredibly productive.
There was the world of XPath, there was the world of DTDs, there was the world of SOAP and WSDL. There was the world where the inherent complexity of the system was so great, that you absolutely required an IDE, code generation and compile time tooling. In contrast there was my world. My world had me sitting with Vim, CVS and SVN and a basic Linux box and I was able to build things that I was incredibly proud of. I eventually swapped PHP for Python because it had better trade offs for me. But I will never not recognize what PHP gave me: I learned from it that not everything has to be pretty, it has to solve problems. And it did.
But in the same way with PHP, the total surface area between me and the Python language runtime was tiny. The code I wrote, was transformed by the interpreter into bytecode instructions (which you could even look at!) and evaluated by a tiny loop in the interpreter. The interpreter was Open Source, it was easy to read, and most importantly I was able to poke around in it. Not only was I able to learn more about computers this way, it also made it incredibly easy for me to understand what exactly was going on. Without doubt I was able to understand everything between the code that I wrote, and the code that ran end to end.
Yes, there was no static type checking, and intellisense was basically nonexistent. Companies like Microsoft did not even think of Python as a real language yet. But screw it, we were productive! Not only that, we built large software projects. We knew where the tradeoffs were. We had runtime errors flying left and right in production because bad types were passed, but we also had the tools to work with that! I distinctly remember how blown away a colleague from the .NET world was when I showed him some of the tools I had: after I deployed bad code and it blew up in someone's face, I got an email that not only showed a perfectly readable stack trace, but also a line of source code for each frame. He was even more blown away when I showed him that I had a module that allowed me to attach remotely to the running interpreter and execute Python code on the fly to debug it. The developer experience was built around there being very few layers in the onion.
But hear me out: all the arguments against dynamic languages and dynamic typing systems were already there! Nothing new has been invented, nothing really has changed. We all knew that there was value in typing, and we also all collectively said: screw it. We don't need this, we do duck typing. Let's play this to our advantage.
Here is what has changed: we no longer trust developers as much and we are re-introducing the complexity that we were fighting. Modern Python can at times be impossible to comprehend for a developer. In a way in some areas we are creating the new Java. We became the people we originally displaced. Just that when we are not careful we are on a path to the world's worst Java. We put typing on a language that does not support it, our interpreter is slow, it has a GIL. We need to be careful not to forget that our roots are somewhere else. We should not collectively throw away the benefits we had.
The winds changed, that's undeniable. Other languages have shown that types add value in new and exciting ways. When I had the arguments with folks about Python vs Java typing originally, Java did not even have generics. JavaScript was fighting against its reputation of being an insufferable toy. TypeScript was years away from being created. While nothing new has been invented, some things were popularized. Abstract data types are no longer a toy for researchers. .NET started mixing static and dynamic typing, TypeScript later popularized adding types to languages originally created without them. There are also many more developers in our community who are less likely to understand what made those languages appealing in the first place.
So, where does this leave us? Is this a grumpy me complaining about times gone and how types are ruining everything? Hardly. There's undeniable utility in typing, and there is an element that could lead to greater overall productivity. Yet, the inherent trade-offs remain unchanged, and opting for or against typing should be a choice free from stigma. The core principles of this decision have not altered: types add value and they add cost.
Post script: Python is in a spot now where the time spent for me typing it, does not pay dividends. TypeScript on the other hand tilts more towards productivity for me. Python could very well reach that point. I will revisit this.
December 01, 2023 12:00 AM UTC
Matt Layman
Switch an Existing Python Project To Ruff
On a recent Building SaaS stream, we switched from using flake8, Black, isort, and bandit completely over to a single tool, Ruff. Watch an experienced Pythonista work through many of the options and do a full conversion to this powerful tool.
December 01, 2023 12:00 AM UTC
November 30, 2023
Marcos Dione
Migrating from ikiwiki to nikola
As I mentioned several times already, my ikiwiki setup for this glob is falling apart on my machine. As it is written in perl, a language I haven't touched in many years, and its community seems to have dwindled and almost disappeared, I've been thinking of migrating to something else. As a pythonista, one obvious option is nikola. Also because I know the original developer :)
But what would it take to do this? Well, my ikiwiki posts are written in Markdown, and nikola also reads that format.

At the beginning I thought of converting to reStructuredText because I have an issue: because of a bad command (probably a cp instead of rsync or tar), I lost the original file times. With reStructuredText, I can provide the date as a directive, and I can recover the original dates from archive.org's snapshots of my glob. But then I read that the same data can be put in a sidecar .meta file, so I can keep my original file format. Also, many things I wanted work best with Markdown, most notably footnotes, which, I don't know if you noticed, never worked on this glob :) Thanks +ChrisWarrick#nikola@libera.chat for all the help!
Still, ikiwiki handles a few things not very Markdown'ly, including images, code snippets and tags. To be honest, the last two are not really part of Markdown, but it still means I have to convert one markup into another.
I had used pytest in the past, but not much really. I usually write a test() function where I test everything with assert, and once all tests pass, I call main() at script start instead. This was another quick hack, but I wanted to give pytest a spin. I started with some pure TDD, writing inputs and outputs in test functions and just assert f(input) == output, and pytest did everything else for me, including showing me a diff that points out the small errors I was making. The iteration pace was feverish.
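As a sketch of that workflow, here is what one of those input/output tests could look like for the slug-to-title helper mentioned below (the implementation shown is a minimal stand-in, not the post's exact code):

```python
# test_import.py -- pytest collects test_* functions automatically and,
# on failure, shows both sides of the assert with a readable diff.

def titlify(src):
    # minimal stand-in for the slug-to-title helper described in the post
    words = src.split('-')
    words[0] = words[0].title()
    return ' '.join(words)

def test_titlify():
    assert titlify('migrating-from-ikiwiki-to-nikola') == 'Migrating from ikiwiki to nikola'
```

Running pytest test_import.py discovers and runs the test; no main() wiring is needed.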
All in all, it took me 3 21-23h hackathons to mostly finish it. I wrote one function for each step (footnotes, tags, images and code snippets), all of them looking at all the input lines all over again, but it doesn't really matter, as I have to import many files by hand to specify the original publishing date. I also tested each regexp[1] individually, like I was discussing the other day[2]. They were short enough not to follow my first tip, but by $GOD I used the other two a lot. There are another four helper functions (convert slugs to titles; convert time.timezone format to UTC offset (for instance, +0100); convert timestamps to a certain date format; and convert another date format to the same one), all also well tested. Then one short function to write the sidecar file, one that glues everything together, and one for parsing command line parameters. All that, tests and their data and all, in 538 lines of very hacky Python :) I'll try to post the code some other day, but frankly I ran out of steam and I still have lots of posts to import by hand.
And that's it! Hopefully this will be the first post in the new glob version. I imported a few old posts already and it's working just fine. I expect a few tweaks in the future, as we're talking about ~300 posts and I can't promise the very old ones follow the same format. I set the feed size to one and I'll grow it over the next nine posts so I don't break planets and feed readers. I hope I got that right :)
November 30, 2023 08:38 PM UTC
Zero to Mastery
Python Monthly Newsletter
48th issue of Andrei Neagoie's must-read monthly Python Newsletter: Python Errors, Tools in Python Land, Architecting Monorepos, and much more. Read the full newsletter to get up-to-date with everything you need to know from last month.
November 30, 2023 10:00 AM UTC
Talk Python to Me
#440: Talking to Notebooks with Jupyter AI
We all know that LLMs and generative AI have been working their way into many products. It's Jupyter's turn to get a really awesome integration. We have David Qiu here to tell us about Jupyter AI. Jupyter AI provides a user-friendly and powerful way to apply generative AI to your notebooks. It lets you choose from many different LLM providers and models to get just the help you're looking for. And it does way more than just a chat pane in the UI. Listen to find out.

Links from the show:
- David Qiu: linkedin.com
- Jupyter AI: jupyter-ai.readthedocs.io
- Asking about something in your notebook: jupyter-ai.readthedocs.io
- Generating a new notebook: jupyter-ai.readthedocs.io
- Learning about local data: jupyter-ai.readthedocs.io
- Formatting the output: jupyter-ai.readthedocs.io
- Interpolating in prompts: jupyter-ai.readthedocs.io
- JupyterCon 2023 Talk: youtube.com
- PyData Seattle 2023 Talk: youtube.com
- Watch this episode on YouTube: youtube.com
- Episode transcripts: talkpython.fm

Stay in touch with us:
- Subscribe to us on YouTube: youtube.com
- Follow Talk Python on Mastodon: talkpython
- Follow Michael on Mastodon: mkennedy

Episode sponsors:
- Posit
- Talk Python Training
November 30, 2023 08:00 AM UTC
Test and Code
210: TDD - Refactor while green
Test Driven Development. Red, Green, Refactor.
- Do we have to do the refactor part?
- Does the refactor at the end include tests?
- Or can I refactor the tests at any time?
- Why is refactor at the end?
This episode talks through these questions with an example.
Sponsored by PyCharm Pro
- Use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm
- First 10 to sign up this month get a free month of AI Assistant
- See how easy it is to run pytest from PyCharm at pythontest.com/pycharm
The Complete pytest Course
- For the fastest way to learn pytest, go to courses.pythontest.com
- Whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.
November 30, 2023 12:29 AM UTC
Armin Ronacher
Bundleless: Not Doing Things Makes You Fast
I recently came across a tweet and one statement in it really triggered me: the claim that a bundleless dev server does not work. The idea here being that you cannot avoid bundling during development for performance reasons. This challenges the core concept and premise of Vite's design: its dev server primarily operates by serving individual files after initial transpiling.
There's some belief that bundleless development isn't feasible, especially for projects with thousands of modules, due to potential performance issues. However, I contend that this thinking overlooks the benefits of a bundleless approach.
There is obviously some truth to it having issues. If you have thousands of modules, they can take a while to load; in contrast, if most of those are bundled up into a single file, that will take less time to load.
I believe this to be the wrong way to think of this issue. Consider Python as an illustrative example: Python loads each module as needed from the file system, without bundling numerous modules into larger files. This approach has a downside: in large applications, the startup time can become impractically long due to excessive code execution during import.
The solution isn't to increase bundling but to reduce overall code execution, particularly at startup. By optimizing module structure, minimizing cross-dependencies, and adopting lazy loading, you can significantly decrease load times and enable hot reloading of components. Don't forget that in addition to all the bytes you're not loading, you're also not parsing or executing code. You become faster by not doing all of this.
The objective for developers, both end-users and framework creators, should be to make bundleless development viable and at least in principle preferred. This means structuring applications to minimize initial load requirements, thereby enhancing iteration speeds. With a focus on doing less, the elimination of the bundling step becomes an attainable and beneficial goal. This is also one of the larger lessons I took from creating Flask: the many side effects of decorators and imports are a major frustration for large scale apps.
Once that has been accomplished, going bundleless does away with the last, now unimportant piece: the bundling step itself, and eliminating it brings plenty of benefits of its own.
Of course, there are nuances. For instance, rarely changing third-party libraries with hundreds of internal modules will still benefit from bundling. Tools like Vite do address this need by optimizing this case.
Therefore, when embarking on a new project or framework, prioritize lazy loading and effective import management from the outset. Avoid circular dependencies and carefully manage code isolation. This initial effort in organizing your code will pay dividends as your project expands, making future development faster and more efficient.
Future you will be happy. And bundleless, as evidenced by Vite, works with the right project setup.
November 30, 2023 12:00 AM UTC
Matt Layman
Message Parsing and Ruff - Building SaaS with Python and Django #176
In this episode, we finished off the core portion of the application by parsing entries out of the messages sent back by SendGrid. We set up the Heroku Scheduler to start the daily flow of emails to get the system started. After completing that, I set up the project to use Ruff instead of the collection of tools used previously.
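For reference, a minimal pyproject.toml consolidating those tools into Ruff might look like this (the rule selection and Python version are illustrative, not the stream's exact configuration):

```toml
[tool.ruff]
line-length = 88          # Black's default, keeps formatting compatible
target-version = "py311"
# E/F cover flake8's core checks, I replaces isort,
# S pulls in flake8-bandit's security rules
select = ["E", "F", "I", "S"]
```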
November 30, 2023 12:00 AM UTC
November 29, 2023
Ned Batchelder
Say it again: values not expressions
Sometimes you can explain a simple thing for the thousandth time, and come away with a deeper understanding yourself. It happened to me the other day with Python mutable argument default values.
This is a classic Python “gotcha”: you can provide a default value for a function argument, but it will only be evaluated once:
>>> def doubled(item, the_list=[]):
... the_list.append(item)
... the_list.append(item)
... return the_list
...
>>> print(doubled(10))
[10, 10]
>>> print(doubled(99))
[10, 10, 99, 99] # WHAT!?
I’ve seen people be surprised by this and ask about it countless times. And countless times I’ve said, “Yup, the value is only calculated once, and stored on the function.”
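That stored value is visible on the function object itself, which makes the point concrete:

```python
def doubled(item, the_list=[]):
    the_list.append(item)
    the_list.append(item)
    return the_list

# The default is a value created once, at def time, and stored on the function:
print(doubled.__defaults__)   # ([],)

doubled(10)
# Same list object, now mutated, which is why the next call sees [10, 10]:
print(doubled.__defaults__)   # ([10, 10],)
```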
But recently I heard someone answer with, “it’s a value, not an expression,” which is a good succinct way to say it. And when a co-worker brought it up again the other day, I realized, it’s right in the name: people ask about “default values” not “default expressions.” Of course it’s calculated only once, it’s a default value, not a default expression. Somehow answering the question for the thousandth time made those words click into place and make a connection I hadn’t realized before.
Maybe this seems obvious to others who have been fielding this question, but to me it was a satisfying alignment of the terminology and the semantics. I’d been using the words for years, but hadn’t seen them as so right before.
This is one of the reasons I’m always interested to help new learners: even well-trodden paths can reveal new insights.
November 29, 2023 11:30 PM UTC
Paolo Melchiorre
Pelican 4.9: classless Simple theme with semantic HTML
Introducing the updated version of the “Simple” theme in the new Pelican 4.9 version, with semantic and classless HTML and customizable out-of-the-box.