
Planet Python

Last update: September 16, 2025 07:43 AM UTC

September 16, 2025


Tryton News

Security Release for issue #14220

Luis Falcon has found that trytond may log sensitive data like passwords when the logging level is set to INFO.

Impact

CVSS v3.0 Base Score: 4.2

Workaround

Increasing the logging level above INFO prevents logging of the sensitive data.
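A minimal Python-side sketch of that workaround (illustrative only: real trytond deployments normally set the level in their logging configuration file, not in code):

```python
import logging

# Raise the threshold above INFO so INFO-level messages, which may
# contain sensitive data such as passwords, are suppressed.
logging.basicConfig(level=logging.WARNING)

print(logging.getLogger().isEnabledFor(logging.INFO))  # False: INFO is filtered out
```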

Resolution

All affected users should upgrade trytond to the latest version.

Affected versions per series:

Non affected versions per series:

Reference

Concerns?

Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/ with the confidential checkbox checked.

1 post - 1 participant

Read full topic

September 16, 2025 06:00 AM UTC

September 15, 2025


Jacob Perkins

Python Async Gather in Batches

Python’s asyncio.gather function is great for I/O bound parallel processing. There’s a simple utility function I like to use that I call gather_in_batches:

import asyncio


async def gather_in_batches(tasks, batch_size=100, return_exceptions=False):
    # Gather the awaitables in slices of batch_size, yielding each result
    # as its batch completes.
    for i in range(0, len(tasks), batch_size):
        batch = tasks[i:i + batch_size]
        for result in await asyncio.gather(*batch, return_exceptions=return_exceptions):
            yield result

The way you use it is

  1. Generate a list of tasks
  2. Gather your results

Here’s some simple sample code to demonstrate:

tasks = [process_async(obj) for obj in objects]
return [result async for result in gather_in_batches(tasks)]

objects could be all sorts of things:

And process_async is an async function that does whatever processing you need on that object. Assuming the work is mostly I/O bound, this is a very simple and effective method for processing data in parallel, without getting into threads, multi-processing, greenlets, or any other approach.

You’ll need to experiment to find the optimal batch_size for your use case. And unless you can afford to ignore errors, set return_exceptions=True, then check isinstance(result, Exception) on each result to handle failures properly.
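Here is a self-contained sketch of that error-handling pattern (process_async and the sample inputs are made up for illustration):

```python
import asyncio


async def gather_in_batches(tasks, batch_size=100, return_exceptions=False):
    for i in range(0, len(tasks), batch_size):
        batch = tasks[i:i + batch_size]
        for result in await asyncio.gather(*batch, return_exceptions=return_exceptions):
            yield result


async def process_async(obj):
    # Stand-in for real I/O-bound work; fails on negative input.
    await asyncio.sleep(0)
    if obj < 0:
        raise ValueError(f"bad object: {obj}")
    return obj * 2


async def main():
    tasks = [process_async(obj) for obj in [1, 2, -3, 4]]
    results = [r async for r in gather_in_batches(tasks, batch_size=2,
                                                  return_exceptions=True)]
    # Separate successes from failures instead of letting one error
    # abort the whole run.
    ok = [r for r in results if not isinstance(r, Exception)]
    errors = [r for r in results if isinstance(r, Exception)]
    return ok, errors


ok, errors = asyncio.run(main())
print(ok)      # [2, 4, 8]
print(errors)  # one ValueError, from obj == -3
```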

September 15, 2025 08:30 PM UTC


PyCoder’s Weekly

Issue #699: Feature Flags, Type Checker Showdown, Null in pandas, and More (Sept. 15, 2025)

#699 – SEPTEMBER 15, 2025
View in Browser »

The PyCoder’s Weekly Logo


Feature Flags in Depth

Feature flags are a way to enable or disable blocks of code without needing to re-deploy your software. This post shows you several different approaches to feature flags.
RAPHAEL GASCHIGNARD

A Deep Dive Into Ty, Pyrefly, and Zuban

A comparison of three new Rust-based Python type checkers through the lens of typing spec conformance: Astral’s ty, Meta’s pyrefly, and David Halter’s zuban.
ROB HAND

Building and Monitoring AI Agents and MCP Servers [Free Workshop]

alt

Get hands-on with agent monitoring, deep dive into MCP debugging, and use Sentry’s Seer to resolve elusive AI crashes and failures →
SENTRY sponsor

How to Drop Null Values in pandas

Learn how to use .dropna() to drop null values from pandas DataFrames so you can clean missing data and keep your Python analysis accurate.
REAL PYTHON

Quiz: How to Drop Null Values in pandas

Quiz yourself on pandas .dropna(): remove nulls, clean missing data, and prepare DataFrames for accurate analysis.
REAL PYTHON

PEP 803: Stable ABI for Free-Threaded Builds (Added)

PYTHON.ORG

DjangoCon US: Call for Venue Proposals 2027-28

DEFNA.ORG

Articles & Tutorials

Production-Grade Python Logging Made Easier With Loguru

While Python’s standard logging module is powerful, navigating its system of handlers, formatters, and filters can often feel like more work than it should be. This article describes how to achieve the same (and better) results with a fraction of the complexity using Loguru.
AYOOLUWA ISAIAH • Shared by Ayooluwa Isaiah

Simplify IPs, Networks, and Subnets With the ipaddress Module

Python’s built-in ipaddress module makes handling IP addresses and networks clean and reliable. This article shows how to validate, iterate, and manage addresses and subnets while avoiding common pitfalls of string-based handling.
MOHAMED HAZIANE • Shared by Mohamed Haziane

Boost Agent Resilience with the OpenAI Agents SDK + Temporal Integration

alt

Join our live webinar with OpenAI to see how the Agents SDK and Temporal’s Python integration make AI agents resilient, scalable, and easy to debug. Watch a live multi-agent demo and learn how Durable Execution powers production-ready systems →
TEMPORAL sponsor

Django: Overriding Translations From Dependencies

When building a multi-lingual website in Django, you may encounter translated strings from a third-party dependency that don’t match your site’s languages. This post tells you how to deal with that.
GONÇALO VALÉRIO

Creating a Website With Sphinx and Markdown

Sphinx is a Python-based documentation builder; in fact, the Python documentation itself is written using Sphinx. Learn how to build a static site with RST or Markdown and Sphinx.
MIKE DRISCOLL

Python REPL Shortcuts & Features

Discover Python REPL features, from keyboard shortcuts to block navigation. This reference guide will help you better utilize Python’s interactive shell.
TREY HUNNER

uv Cheatsheet

A cheatsheet with the most common and useful uv commands to manage projects and dependencies, publish projects, manage tools, and more.
RODRIGO GIRÃO SERRÃO

Python String Splitting

Master Python string splitting with .split() and re.split() to handle whitespace, delimiters & multiline text.
REAL PYTHON course

CodeRabbit: Free AI Code Reviews in CLI

CodeRabbit CLI gives instant code reviews in your terminal. It plugs into any AI coding CLI and catches bugs, security issues, and AI hallucinations before they reach your codebase.
CODERABBIT sponsor

The Most Popular Python Frameworks and Libraries in 2025

Discover the top Python frameworks and libraries based on insights from over 30,000 Python developers.
EVGENIA VERBINA

Benchmarking MicroPython

This post compares the performance of running Python on several microcontroller boards.
MIGUEL GRINBERG

Projects & Code

markdown-it-py: Markdown Parser, Done Right

GITHUB.COM/EXECUTABLEBOOKS

awesome-public-datasets: List of Open Datasets

GITHUB.COM/AWESOMEDATA

djhtmx: UI Components for Django and HTMX

GITHUB.COM/EDELVALLE

Hexora: Static Analysis Tool for Malicious Python Scripts

GITHUB.COM/RUSHTER • Shared by Artem Golubin

Async-Native WebTransport Implementation

GITHUB.COM/LEMONSTERFY • Shared by lemonsterfy

Events

Weekly Real Python Office Hours Q&A (Virtual)

September 17, 2025
REALPYTHON.COM

PyData Bristol Meetup

September 18, 2025
MEETUP.COM

PyLadies Dublin

September 18, 2025
PYLADIES.COM

PyCon UK 2025

September 19 to September 23, 2025
PYCONUK.ORG

Chattanooga Python User Group

September 19 to September 20, 2025
MEETUP.COM

PyDelhi User Group Meetup

September 20, 2025
MEETUP.COM

PyCon Ghana 2025

September 25 to September 28, 2025
PYCON.ORG

PyCon JP 2025

September 26 to September 28, 2025
PYCON.JP


Happy Pythoning!
This was PyCoder’s Weekly Issue #699.
View in Browser »

alt

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

September 15, 2025 07:30 PM UTC


Go Deh

From all truths to (ir)relevancies

 


Following up on my previous post about truth tables, I now ask a subtler question: which inputs actually matter? Some variables, though present, leave no trace on the output. In this post, I uncover those quiet bits — the irrelevant inputs — and learn how to spot them with precision.

Building on the previous

The previous post showed that for a given number of inputs, there is a finite, but rapidly growing, number of possible truth tables. For two inputs, there are 16. For three inputs, there are 256. This leads to a powerful idea: we can create a standardized format for all truth tables and uniquely identify the parts in each one.

The Standardized Truth Table (STT) Format

 

i[1] i[0] : r
=============
  0    0  : 0
  0    1  : 0
  1    0  : 0
  1    1  : 1

Key

  • Inputs: the i[...] columns
  • Input count: i = 2 here
  • Result, r: the output column
  • Result vector: 0001

We have a standardized truth table with i inputs. Each row in the inputs section of the truth table is an increasing binary count from 0 to 2**i - 1. The result column of the truth table has 2**i rows, leading to 2**(2**i) different possible truth table result vectors for i inputs.

Any one possible result, r, of the standardized truth table is read as the binary bits of the output going from the topmost row (where the inputs are all zero) down to the bottom-most row, in order. This single binary number, the "result number" or vector, uniquely identifies the truth table and the boolean function it represents.
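For instance, a small helper of my own (using the same row-k-is-bit-k convention as the irrelevance code later in the post, with row 0 as the least-significant bit) can compute the result number of any boolean function:

```python
def result_vector(func, input_count):
    # Row k of the STT supplies bit k of the result number
    # (row 0, where all inputs are zero, is the least-significant bit).
    r = 0
    for k in range(2 ** input_count):
        bits = [(k >> j) & 1 for j in range(input_count)]  # i[0], i[1], ...
        r |= func(*bits) << k
    return r


print(result_vector(lambda i0, i1: i0 & i1, 2))  # 2-input AND -> 8 (binary 1000)
```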

Irrelevant Variables in Boolean Expressions

A variable in a boolean expression is considered irrelevant if its value has no effect on the final output. In a truth table, this means that changing the value of that input variable while all other inputs remain the same does not change the output.

While this is easy to understand, it can be difficult to spot for complex functions. This is where the standardized truth table format, STT, comes in handy.

Irrelevancy Calculation

For an i-input truth table with a 2**i-bit result, r, we can efficiently check for irrelevant variables. The key insight is that, within the input count region of the STT, the other input bits cycle through exactly the same sequence when i[n]=0 as they do when i[n]=1.

Therefore, if the result bits for the rows where i[n]=0 are the same as the result bits for the rows where i[n]=1, then the input variable i[n] has no effect on the output and is therefore irrelevant.

Let's illustrate with an example for a 3-input truth table.

Example


i[2] i[1] i[0] : Output
  0    0    0  : r0
  0    0    1  : r1
  0    1    0  : r2
  0    1    1  : r3
  1    0    0  : r4
  1    0    1  : r5
  1    1    0  : r6
  1    1    1  : r7

 

Consider checking if i[1] is irrelevant. The values of the other inputs (i[2] and i[0]) follow the sequence 00, 01, 10, 11 both when i[1] is 0 and when i[1] is 1.

  • When i[1]=0, the corresponding output bits are r0, r1, r4, r5.

  • When i[1]=1, the corresponding output bits are r2, r3, r6, r7.

If the sequence of bits (r0, r1, r4, r5) is identical to the sequence of bits (r2, r3, r6, r7), then the input variable i[1] has no effect on the output and is therefore irrelevant.

This method allows for a very efficient, algorithmic approach to simplifying boolean expressions.

 

from collections import defaultdict
import pprint


def is_irrelevant(considered: int,
                  input_count:int,
                  result_vector: int) -> bool:
    """
    Determine whether a specific input variable is irrelevant to the output of the truth table.
   
    Args:
        considered: Index of the input variable to test.
        input_count: Total number of input variables.
        result_vector: Integer representing the output bits of the truth table.
   
    Returns:
        True if the input is irrelevant, False otherwise.
    """
    considered_to_resultbits = {0: [], 1: []}

    for in_count in range(2**input_count):
        considered_bit = (in_count >> considered) & 1
        resultbit = (result_vector >> in_count) & 1
        considered_to_resultbits[considered_bit].append(resultbit)
   
    return considered_to_resultbits[0] == considered_to_resultbits[1]


def find_irrelevant_ttable_inputs(input_count:int,
                                  result_vector: int) -> list[int]:
    """
    Identify which input variables are irrelevant in a standardized truth table.
   
    Args:
        input_count: Number of input variables.
        result_vector: Integer representing the output bits of the truth table.
   
    Returns:
        List of input indices that are irrelevant.
    """
    irrelevant = [i for i in range(input_count)
                  if is_irrelevant(i, input_count, result_vector)]
   
    return irrelevant
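As a quick check of the functions above (my own example, not from the post): take the 3-input function r = i[0] AND i[2], whose result vector has bits set only at rows 5 (0b101) and 7 (0b111), i.e. 2**5 + 2**7 == 160. Input i[1] should come out irrelevant:

```python
def is_irrelevant(considered, input_count, result_vector):
    # Same test as in the post: compare the result bits seen with the
    # considered input at 0 against those seen with it at 1.
    buckets = {0: [], 1: []}
    for row in range(2 ** input_count):
        bit = (row >> considered) & 1
        buckets[bit].append((result_vector >> row) & 1)
    return buckets[0] == buckets[1]


def find_irrelevant_ttable_inputs(input_count, result_vector):
    return [i for i in range(input_count)
            if is_irrelevant(i, input_count, result_vector)]


# r = i[0] AND i[2]: only rows 5 and 7 output 1, so the vector is 160.
print(find_irrelevant_ttable_inputs(3, 160))  # [1]
```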


 

Relevant and irrelevant result vectors for STT's

I am interested in the truth tables/boolean expressions of an increasing number of inputs. Previously I was taking all the zero-input STTs, then all the 1-input, all the 2-input, ...
That had repetitions and irrelevancies. I can now take just the relevant result vectors for each case, or take the maximum number of inputs I can handle and sort the result vectors so that those with the most irrelevancies for that maximum input count come first.

Here's the code I used to investigate these properties of irrelevances:

 

def print_STT_results_sensitive_to_inputs(max_input_count: int) -> None:
    """
    Print a summary of irrelevance patterns across all standardized truth tables (STTs)
    for input counts from 0 up to `max_input_count - 1`.

    For each input count `i`, the function:
      - Computes all possible result vectors (2**(2**i))
      - Identifies which inputs are irrelevant for each result
      - Summarizes how many result vectors have at least one irrelevant input
      - Prints spacing patterns (diffs) between result indices with irrelevance
      - Checks whether irrelevance indices for input count `i` begin with those from `i-1`


    Args:
        max_input_count: The exclusive upper bound on input counts to analyze.
    """

    imax2irrelevances = {}
    for imax in range(max_input_count):
        print(f"\nINPUTS = {imax}\n==========")
        result_count = 2**2**imax
        print(f"  total_possible_tt_results =", result_count)

        result2irrelevances = {r:[i for i in range(imax)
                                  if is_irrelevant(i, imax, r)]
                               for r in range(result_count)}
        txt = pprint.pformat(result2irrelevances, compact=True, indent=2).replace('\n', '\n  ')
        txt = txt.replace('{', '{\n   ')
        #print("  result2irrelevances = ", txt)

        irrelevances = [k for k, v in result2irrelevances.items()
                        if v]
       
        imax2irrelevances[imax] = irrelevances

        relevances = result_count - len(irrelevances)
        print(f"  {irrelevances = }")
        print(f"    {len(irrelevances) = }, {relevances = }")

        table = chop_line(format_irrelevance_table_full(result2irrelevances, imax)[0], 220)
        print("\nSTT Result Irrelevance vs Input Table"
              f"\n{table}")

        # First-order differences between irrelevance indices
        diff0 = [irrelevances[j] - irrelevances[j-1]
                 for j in range(1, len(irrelevances))]
        #print(f"    {diff0 = }")

        # Second-order differences between irrelevance indices
        diff1 = [diff0[j] - diff0[j-1]
                 for j in range(1, len(diff0))]
        #print(f"    {diff1 = }")
   
    print("\n\nIrrelevance indices reflected about the center of the r count?"
          f"\n  {is_irrelevance_reflected(imax2irrelevances)}")  # True so far

    print('Irrelevances for `i` inputs begin with those from `i-1`?')
    previous_prefixed = all((i0 := imax2irrelevances[i-1]) == imax2irrelevances[i][:len(i0)]
                            for i in range(1, max_input_count))
    print(f"  {previous_prefixed}")  # True so far


def is_irrelevance_reflected(ins2irrel: dict[int, list[int]]) -> bool:
    """
    Check whether irrelevance indices are symmetric about the center of the result vector space.

    For each input count `i`, the result vector space has size 2**(2**i).
    This function checks whether for every irrelevance index `r < half_range`,
    its mirror index `full_range - 1 - r` is also marked as irrelevant.

    Args:
        ins2irrel: Mapping from input count to list of result indices with irrelevance.

    Returns:
        True if all irrelevance sets are symmetric about the center, False otherwise.
    """
    for imax, irrelevances in ins2irrel.items():
        irrel_set = set(irrelevances)
        full_range = 2**2**imax
        half_range = full_range / 2

        if not all((full_range - 1 - i) in irrel_set for i in irrel_set if i < half_range):
            return False
    return True


def format_irrelevance_table(result2irrelevances: dict[int, list[int]], input_count: int) -> tuple[str, str]:
    """
    Create a compact table showing which inputs are irrelevant for each result vector `r`.

    Each column is sized to match the width of its corresponding `r` value.
    Only includes columns for `r` values that have at least one irrelevant input.
    Each row corresponds to an input index, with '@' if that input is irrelevant for that `r`, else blank.

    Returns:
        A tuple of (plain text table string, markdown table string)
    """
    if input_count == 0:
        msg = "No inputs to analyze (input_count = 0)."
        md = "| Input | (none) |\n|-------|--------|\n| (none) |        |"
        return msg, md

    filtered_r = [r for r, irrels in result2irrelevances.items() if irrels]
    if not filtered_r:
        msg = "No irrelevant inputs found for any result vector."
        md = "| Input | (none) |\n|-------|--------|\n" + "\n".join(
            f"| i[{i}] |        |" for i in range(input_count)
        )
        return msg, md

    # Determine individual column widths based on r string length
    r_labels = [str(r) for r in filtered_r]
    col_widths = [len(label) for label in r_labels]

    # Header row
    header_plain = " " * 8 + " ".join(label.rjust(w) for label, w in zip(r_labels, col_widths))
    header_md = "| Input | " + " | ".join(label.rjust(w) for label, w in zip(r_labels, col_widths)) + " |"

    # Markdown separator row
    separator_md = "|-------|" + "|".join("-" * (w + 2) for w in col_widths) + "|"

    # Rows
    rows_plain = []
    rows_md = []
    for i in range(input_count):
        label = f"i[{i}]".ljust(8)
        row_plain = label + " ".join("@" .rjust(w) if i in result2irrelevances[r] else " " * w
                                     for r, w in zip(filtered_r, col_widths))
        row_md = "| " + f"i[{i}]".ljust(5) + " | " + " | ".join("@" .rjust(w) if i in result2irrelevances[r] else " " * w
                                                               for r, w in zip(filtered_r, col_widths)) + " |"
        rows_plain.append(row_plain)
        rows_md.append(row_md)

    plain_table = "\n".join([header_plain] + rows_plain)
    markdown_table = "\n".join([header_md, separator_md] + rows_md)

    return plain_table, markdown_table

def format_irrelevance_table_full(result2irrelevances: dict[int, list[int]], input_count: int) -> tuple[str, str]:
    """
    Create a full table showing which inputs are irrelevant for each result vector `r`.

    Includes columns for all result vector indices (0 to max(r)).
    Each row corresponds to an input index, with '@' if that input is irrelevant for that `r`, else blank.

    Returns:
        A tuple of (plain text table string, markdown table string)
    """
    if input_count == 0:
        msg = "No inputs to analyze (input_count = 0)."
        md = "| Input | (none) |\n|-------|--------|\n| (none) |        |"
        return msg, md

    all_r = sorted(result2irrelevances.keys())
    r_labels = [str(r) for r in all_r]
    col_widths = [len(label) for label in r_labels]

    # Header row
    header_plain = " " * 8 + " ".join(label.rjust(w) for label, w in zip(r_labels, col_widths))
    header_md = "| Input | " + " | ".join(label.rjust(w) for label, w in zip(r_labels, col_widths)) + " |"

    # Markdown separator row
    separator_md = "|-------|" + "|".join("-" * (w + 2) for w in col_widths) + "|"

    # Rows
    rows_plain = []
    rows_md = []
    for i in range(input_count):
        label = f"i[{i}]".ljust(8)
        row_plain = label + " ".join("@" .rjust(w) if i in result2irrelevances.get(r, []) else " " * w
                                     for r, w in zip(all_r, col_widths))
        row_md = "| " + f"i[{i}]".ljust(5) + " | " + " | ".join("@" .rjust(w) if i in result2irrelevances.get(r, []) else " " * w
                                                               for r, w in zip(all_r, col_widths)) + " |"
        rows_plain.append(row_plain)
        rows_md.append(row_md)

    plain_table = "\n".join([header_plain] + rows_plain)
    markdown_table = "\n".join([header_md, separator_md] + rows_md)

    return plain_table, markdown_table

def chop_line(table_text: str, max_length: int) -> str:
    """
    Chop each line in a multi-line string to a maximum length.
    If a line exceeds `max_length`, it is truncated to `max_length - 3` and '...' is appended.

    Args:
        table_text: The full multi-line string to process.
        max_length: The maximum allowed line length.

    Returns:
        A new multi-line string with long lines chopped and marked with '...'.
    """
    chopped_lines = []
    for line in table_text.splitlines():
        if len(line) > max_length:
            chopped_lines.append(line[:max_length - 3] + '...')
        else:
            chopped_lines.append(line)
    return '\n'.join(chopped_lines)


print_STT_results_sensitive_to_inputs(5)

 

Irrelevances: Output

 

INPUTS = 0
==========
  total_possible_tt_results = 2
  irrelevances = []
    len(irrelevances) = 0, relevances = 2

STT Result Irrelevance vs Input Table
No inputs to analyze (input_count = 0).

INPUTS = 1
==========
  total_possible_tt_results = 4
  irrelevances = [0, 3]
    len(irrelevances) = 2, relevances = 2

STT Result Irrelevance vs Input Table
        0 1 2 3
i[0]    @     @

INPUTS = 2
==========
  total_possible_tt_results = 16
  irrelevances = [0, 3, 5, 10, 12, 15]
    len(irrelevances) = 6, relevances = 10

STT Result Irrelevance vs Input Table
        0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
i[0]    @     @                    @        @
i[1]    @         @          @              @

INPUTS = 3
==========
  total_possible_tt_results = 256
  irrelevances = [0, 3, 5, 10, 12, 15, 17, 34, 48, 51, 60, 63, 68, 80, 85, 90, 95, 102, 119, 136, 153, 160, 165, 170, 175, 187, 192, 195, 204, 207, 221, 238, 240, 243, 245, 250, 252, 255]
    len(irrelevances) = 38, relevances = 218

STT Result Irrelevance vs Input Table
        0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 ...
i[0]    @     @                    @        @                                                                                                  @        @                          @        @                            ...
i[1]    @         @          @              @                                                                                                                                                                            ...
i[2]    @                                         @                                                  @                                                  @                                                  @             ...

INPUTS = 4
==========
  total_possible_tt_results = 65536
  irrelevances = [0, 3, 5, 10, 12, 15, 17, 34, 48, 51, 60, 63, 68, 80, 85, 90, 95, 102, 119, 136, 153, 160, 165, 170, 175, 187, 192, 195, 204, 207, 221, 238, 240, 243, 245, 250, 252, 255, 257, 514, 768, 771, 780, 783, 816, 819, 828, 831, 960, 963, 972, 975, 1008, 1011, 1020, 1023, 1028, 1280, 1285, 1290, 1295, 1360, 1365, 1370, 1375, 1440, 1445, 1450, 1455, 1520, 1525, 1530, 1535, 1542, 1799, 2056, 2313, 2560, 2565, 2570, 2575, 2640, 2645, 2650, 2655, 2720, 2725, 2730, 2735, 2800, 2805, 2810, 2815, 2827, 3072, 3075, 3084, 3087, 3120, 3123, 3132, 3135, 3264, 3267, 3276, 3279, 3312, 3315, 3324, 3327, 3341, 3598, 3840, 3843, 3845, 3850, 3852, 3855, 3888, 3891, 3900, 3903, 3920, 3925, 3930, 3935, 4000, 4005, 4010, 4015, 4032, 4035, 4044, 4047, 4080, 4083, 4085, 4090, 4092, 4095, 4112, 4352, 4369, 4386, 4403, 4420, 4437, 4454, 4471, 4488, 4505, 4522, 4539, 4556, 4573, 4590, 4607, 4626, 4883, 5140, 5397, 5654, 5911, 6168, 6425, 6682, 6939, 7196, 7453, 7710, 7967, 8224, 8481, 8704, 8721, 8738, 8755, 8772, 8789, 8806, 8823, 8840, 8857, 8874, 8891, 8908, 8925, 8942, 8959, 8995, 9252, 9509, 9766, 10023, 10280, 10537, 10794, 11051, 11308, 11565, 11822, 12079, 12288, 12291, 12300, 12303, 12336, 12339, 12348, 12351, 12480, 12483, 12492, 12495, 12528, 12531, 12540, 12543, 12593, 12850, 13056, 13059, 13068, 13071, 13073, 13090, 13104, 13107, 13116, 13119, 13124, 13141, 13158, 13175, 13192, 13209, 13226, 13243, 13248, 13251, 13260, 13263, 13277, 13294, 13296, 13299, 13308, 13311, 13364, 13621, 13878, 14135, 14392, 14649, 14906, 15163, 15360, 15363, 15372, 15375, 15408, 15411, 15420, 15423, 15552, 15555, 15564, 15567, 15600, 15603, 15612, 15615, 15677, 15934, 16128, 16131, 16140, 16143, 16176, 16179, 16188, 16191, 16320, 16323, 16332, 16335, 16368, 16371, 16380, 16383, 16448, 16705, 16962, 17219, 17408, 17425, 17442, 17459, 17476, 17493, 17510, 17527, 17544, 17561, 17578, 17595, 17612, 17629, 17646, 17663, 17733, 17990, 18247, 18504, 18761, 19018, 19275, 19532, 19789, 20046, 20303, 
20480, 20485, 20490, 20495, 20560, 20565, 20570, 20575, 20640, 20645, 20650, 20655, 20720, 20725, 20730, 20735, 20817, 21074, 21331, 21588, 21760, 21765, 21770, 21775, 21777, 21794, 21811, 21828, 21840, 21845, 21850, 21855, 21862, 21879, 21896, 21913, 21920, 21925, 21930, 21935, 21947, 21964, 21981, 21998, 22000, 22005, 22010, 22015, 22102, 22359, 22616, 22873, 23040, 23045, 23050, 23055, 23120, 23125, 23130, 23135, 23200, 23205, 23210, 23215, 23280, 23285, 23290, 23295, 23387, 23644, 23901, 24158, 24320, 24325, 24330, 24335, 24400, 24405, 24410, 24415, 24480, 24485, 24490, 24495, 24560, 24565, 24570, 24575, 24672, 24929, 25186, 25443, 25700, 25957, 26112, 26129, 26146, 26163, 26180, 26197, 26214, 26231, 26248, 26265, 26282, 26299, 26316, 26333, 26350, 26367, 26471, 26728, 26985, 27242, 27499, 27756, 28013, 28270, 28527, 28784, 29041, 29298, 29555, 29812, 30069, 30326, 30464, 30481, 30498, 30515, 30532, 30549, 30566, 30583, 30600, 30617, 30634, 30651, 30668, 30685, 30702, 30719, 30840, 31097, 31354, 31611, 31868, 32125, 32382, 32639, 32896, 33153, 33410, 33667, 33924, 34181, 34438, 34695, 34816, 34833, 34850, 34867, 34884, 34901, 34918, 34935, 34952, 34969, 34986, 35003, 35020, 35037, 35054, 35071, 35209, 35466, 35723, 35980, 36237, 36494, 36751, 37008, 37265, 37522, 37779, 38036, 38293, 38550, 38807, 39064, 39168, 39185, 39202, 39219, 39236, 39253, 39270, 39287, 39304, 39321, 39338, 39355, 39372, 39389, 39406, 39423, 39578, 39835, 40092, 40349, 40606, 40863, 40960, 40965, 40970, 40975, 41040, 41045, 41050, 41055, 41120, 41125, 41130, 41135, 41200, 41205, 41210, 41215, 41377, 41634, 41891, 42148, 42240, 42245, 42250, 42255, 42320, 42325, 42330, 42335, 42400, 42405, 42410, 42415, 42480, 42485, 42490, 42495, 42662, 42919, 43176, 43433, 43520, 43525, 43530, 43535, 43537, 43554, 43571, 43588, 43600, 43605, 43610, 43615, 43622, 43639, 43656, 43673, 43680, 43685, 43690, 43695, 43707, 43724, 43741, 43758, 43760, 43765, 43770, 43775, 43947, 44204, 44461, 44718, 44800, 
44805, 44810, 44815, 44880, 44885, 44890, 44895, 44960, 44965, 44970, 44975, 45040, 45045, 45050, 45055, 45232, 45489, 45746, 46003, 46260, 46517, 46774, 47031, 47288, 47545, 47802, 47872, 47889, 47906, 47923, 47940, 47957, 47974, 47991, 48008, 48025, 48042, 48059, 48076, 48093, 48110, 48127, 48316, 48573, 48830, 49087, 49152, 49155, 49164, 49167, 49200, 49203, 49212, 49215, 49344, 49347, 49356, 49359, 49392, 49395, 49404, 49407, 49601, 49858, 49920, 49923, 49932, 49935, 49968, 49971, 49980, 49983, 50112, 50115, 50124, 50127, 50160, 50163, 50172, 50175, 50372, 50629, 50886, 51143, 51400, 51657, 51914, 52171, 52224, 52227, 52236, 52239, 52241, 52258, 52272, 52275, 52284, 52287, 52292, 52309, 52326, 52343, 52360, 52377, 52394, 52411, 52416, 52419, 52428, 52431, 52445, 52462, 52464, 52467, 52476, 52479, 52685, 52942, 52992, 52995, 53004, 53007, 53040, 53043, 53052, 53055, 53184, 53187, 53196, 53199, 53232, 53235, 53244, 53247, 53456, 53713, 53970, 54227, 54484, 54741, 54998, 55255, 55512, 55769, 56026, 56283, 56540, 56576, 56593, 56610, 56627, 56644, 56661, 56678, 56695, 56712, 56729, 56746, 56763, 56780, 56797, 56814, 56831, 57054, 57311, 57568, 57825, 58082, 58339, 58596, 58853, 59110, 59367, 59624, 59881, 60138, 60395, 60652, 60909, 60928, 60945, 60962, 60979, 60996, 61013, 61030, 61047, 61064, 61081, 61098, 61115, 61132, 61149, 61166, 61183, 61423, 61440, 61443, 61445, 61450, 61452, 61455, 61488, 61491, 61500, 61503, 61520, 61525, 61530, 61535, 61600, 61605, 61610, 61615, 61632, 61635, 61644, 61647, 61680, 61683, 61685, 61690, 61692, 61695, 61937, 62194, 62208, 62211, 62220, 62223, 62256, 62259, 62268, 62271, 62400, 62403, 62412, 62415, 62448, 62451, 62460, 62463, 62708, 62720, 62725, 62730, 62735, 62800, 62805, 62810, 62815, 62880, 62885, 62890, 62895, 62960, 62965, 62970, 62975, 63222, 63479, 63736, 63993, 64000, 64005, 64010, 64015, 64080, 64085, 64090, 64095, 64160, 64165, 64170, 64175, 64240, 64245, 64250, 64255, 64507, 64512, 64515, 64524, 64527, 64560, 
64563, 64572, 64575, 64704, 64707, 64716, 64719, 64752, 64755, 64764, 64767, 65021, 65278, 65280, 65283, 65285, 65290, 65292, 65295, 65297, 65314, 65328, 65331, 65340, 65343, 65348, 65360, 65365, 65370, 65375, 65382, 65399, 65416, 65433, 65440, 65445, 65450, 65455, 65467, 65472, 65475, 65484, 65487, 65501, 65518, 65520, 65523, 65525, 65530, 65532, 65535]
    len(irrelevances) = 942, relevances = 64594

STT Result Irrelevance vs Input Table
        0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 ...
i[0]    @     @                    @        @                                                                                                  @        @                          @        @                            ...
i[1]    @         @          @              @                                                                                                                                                                            ...
i[2]    @                                         @                                                  @                                                  @                                                  @             ...
i[3]    @                                                                                                                                                                                                                ...


Irrelevance indices reflected about the center of the r count?
  True
Irrelevances for `i` inputs begin with those from `i-1`?
  True

OEIS

The sequence of relevances: 2, 2, 10, 218, 64594 is already present as A000371 on OEIS.

The sequence of irrelevances: 0, 2, 6, 38, 942 had no exact match, although A005530 came close.


END.

 

September 15, 2025 06:45 PM UTC


Python Engineering at Microsoft

Python in Visual Studio Code – September 2025 Release

We’re excited to announce the September 2025 release of the Python, Pylance and Jupyter extensions for Visual Studio Code!

This release includes the following announcements:

If you’re interested, you can check the full list of improvements in our changelogs for the Python, Jupyter and Pylance extensions.

This month you can also help shape the future of Python typing by filling out the 2025 Python Type System and Tooling Survey: https://jb.gg/d7dqty

Experimental AI-powered hover summaries with Pylance

A new experimental AI Hover Summaries feature is now available for Python files when using the pre-release version of Pylance with GitHub Copilot. When you enable the python.analysis.aiHoverSummaries setting, you get helpful on-the-fly summaries for symbols that do not already have documentation. This makes it easier to understand unfamiliar code and boosts productivity as you explore Python projects. At the moment, AI Hover Summaries are available to GitHub Copilot Pro, Pro+, and Enterprise users.

We look forward to bringing this experimental experience to the stable version soon!

AI-powered hover summaries with Pylance

Run Code Snippet AI tool

The Pylance Run Code Snippets tool is a powerful feature designed to streamline your Python experience with GitHub Copilot. Instead of relying on terminal commands like python -c "code" or creating temporary files to be executed, this tool allows GitHub Copilot to execute Python snippets entirely in memory. It automatically uses the correct Python interpreter configured for your workspace, and it eliminates common issues with shell escaping and quoting that sometimes arise during terminal execution.

One of the standout benefits is the clean, well-formatted output it provides, with both stdout and stderr interleaved for clarity. This makes it ideal when using Agent mode with GitHub Copilot to test small blocks of code, run quick scripts, validate Python expressions, or check imports, all within the context of your workspace.

To try it out, make sure you’re using the latest pre-release version of Pylance. Then, you can select the pylancerunCodeSnippet tool via the Add context… menu in the VS Code Chat panel.

Note: As with all AI-generated code, please make sure to inspect the generated code before allowing this tool to be executed. Reviewing the logic and intent of the code ensures it aligns with your project’s goals and maintains safety and correctness.

pylance-run-code-snippet

Python Environments extension improvements

We appreciate your feedback and are excited to share several enhancements to the Python Environments extension. Thank you to everyone who submitted bug reports and suggestions to help us improve!

Improvements to Conda experience

We focused on removing friction and unexpected behavior when working with Conda environments:

Pipenv support

Pipenv environments are now discovered and listed in the Environments Manager view.

Better diagnostics and control

We’ve made it easier to identify and resolve environment-related issues. When there are issues with the default environment manager, such as missing executables, clear warnings are now surfaced to guide you through resolution.

Additionally, there’s a new Run Python Environment Tool (PET) in Terminal command which gives you direct access to running the back-end environment tooling by hand. This tool simplifies troubleshooting by allowing you to manually trigger detection operations, making it easier to diagnose and fix edge cases in environment setup.

Quality of life improvements

We also reduced paper cuts to make your experience with the extension smoother. These include:

We are continuing to roll out the extension. To use it, make sure the extension is installed and add the following to your VS Code settings.json file: "python.useEnvironmentsExtension": true. If you are experiencing issues with the extension, please report issues in our repo, and you can disable the extension by setting "python.useEnvironmentsExtension": false in your settings.json file.

Call for Community Feedback

This month, the Python community is coming together to gather insights on how type annotations are used in Python. Whether you’re a seasoned user or have never used types at all, your input is valuable! Take a few minutes to help shape the future of Python typing by participating in the 2025 Python Type System and Tooling Survey: https://jb.gg/d7dqty.

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python and Jupyter Notebooks in Visual Studio Code. Some notable changes include:

We would also like to extend special thanks to this month’s contributors:

Try out the new improvements by downloading the Python extension and the Jupyter extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code – September 2025 Release appeared first on Microsoft for Python Developers Blog.

September 15, 2025 06:22 PM UTC


Real Python

What Does -> Mean in Python Function Definitions?

In Python, the arrow symbol (->) appears in function definitions as a notation to indicate the expected return type. This notation is optional, but when you include it, you clarify what data type a function should return:

Python
>>> def get_number_of_titles(titles: list[str]) -> int:
...     return len(titles)
...

You may have observed that not all Python code includes this particular syntax. What does the arrow notation mean? In this tutorial, you’ll learn what it is, how to use it, and why its usage sometimes seems so inconsistent.

Get Your Code: Click here to download the free sample code that you’ll use to learn what -> means in Python function definitions.

In Short: The -> Notation Indicates a Function’s Return Type in Python

In Python, every value stored in a variable has a type. Because Python is dynamically typed, variables themselves don’t have fixed types—they can hold values of any type at different times. This means the same variable might store an integer at one moment and a string the next. In contrast, statically typed languages like C++ or Java require explicit type declarations, and variables are bound to a specific type throughout their lifetime.

You can see an example of a dynamically typed Python variable in the following code example:

Python
>>> my_number = 32
>>> my_number = "32"

You start by declaring a variable called my_number and setting it to the integer value 32. You then change the variable’s value to the string value "32". When you run this code in a Python environment, you don’t encounter any problems with the value change.

Dynamic typing also means you might not always know what data type a Python function will return, if it returns anything at all. Still, it’s often useful to know the return type. To address this, Python 3.5 introduced optional type hints, which allow developers to specify return types. To add a type hint, you place a -> after a function’s parameter list and write the expected return type before the colon.

You can also add type hints to function parameters. To do this, place a colon after the parameter’s name, followed by the expected type—for example, int or str.

Basic Type Hint Examples

To further explore type hints, suppose that you’re creating a Python application to manage inventory for a video game store. The program stores a list of game titles, tracks how many copies are in stock, and suggests new games for customers to try.

You’ve already seen the type hint syntax in the code example introduced earlier, which returns the number of titles in a list of games that the game store carries:

Python
>>> def get_number_of_titles(titles: list[str]) -> int:
...     return len(titles)
...
>>> games = ["Dragon Quest", "Final Fantasy", "Age of Empires"]
>>> print("Number of titles:", get_number_of_titles(games))
Number of titles: 3

Here you see a type hint. You define a function called get_number_of_titles(), which takes a list of game titles as input. Next, you add a type hint for the titles parameter, indicating that the function takes a list of strings. Finally, you also add another type hint for the return type, specifying that the function returns an int value.

The function returns the length of the list, which is an integer. You test this out in the next line, where you create a variable that stores a list of three game titles. When you invoke the function on the list, you verify that the output is 3.

Note that in a real-world application, creating a separate function just to return a list’s length might be redundant. However, for instructional purposes, the function shown in the example is a straightforward way to demonstrate the type hint concept.

You can use type hints with any Python type. You’ve already seen an example with an int return type, but consider another example with a str return type. Suppose you want to recommend a random game for a customer to try. You could do so with the following short example function:
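The article’s own example continues in the full text linked below; as a hedged sketch of what such a function might look like (the function name is hypothetical, and it uses Python’s random.choice to pick a title):

```python
import random


def recommend_game(titles: list[str]) -> str:
    """Return a randomly chosen game title from the list."""
    return random.choice(titles)


games = ["Dragon Quest", "Final Fantasy", "Age of Empires"]
print("Try this game:", recommend_game(games))
```

Here the -> str annotation documents that the function hands back a single string, even though the input is a list of strings.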

Read the full article at https://realpython.com/what-does-arrow-mean-in-python/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

September 15, 2025 02:00 PM UTC


Mike Driscoll

Erys – A TUI for Jupyter Notebooks

Have you ever thought to yourself: “Wouldn’t it be nice to run Jupyter Notebooks in my terminal?” Well, you’re in luck. The new Erys project not only makes running Jupyter Notebooks in your terminal a reality, but Erys also lets you create and edit the notebooks in your terminal!

Erys is written using the fantastic Textual package. While Textual handles the front-end in much the same way as your browser would normally do, the jupyter-client handles the backend, which executes your code and manages your kernel.

Let’s spend a few moments learning more about Erys and taking it for a test drive.

Installation

The recommended method of installing Erys is to use the uv package manager.  If you have uv installed, you can run the following command in your terminal to install the Erys application:

$ uv tool install erys

Erys also supports using pipx to install it, if you prefer.

Once you have Erys installed, you can run it in your terminal by executing the erys command.

Using Notebooks in Your Terminal

When you run Erys, you will see something like the following in your terminal:

Erys - New Notebook

This is an empty Jupyter Notebook. If you would prefer to open an existing notebook, you would run the following command:

erys PATH_TO_NOTEBOOK

If you pass in a valid path to a notebook, you will see it loaded. Here is an example using my Python Logging talk notebook:

Erys - Load Notebook

You can now run the cells, edit the Notebook and more!

Wrapping Up

Erys is a really neat TUI application that gives you the ability to view, create, and edit Jupyter Notebooks and other text files in your terminal. It’s written in Python using the Textual package.

The full source code is on GitHub, so you can check it out and learn how it does all of this or contribute to the application and make it even better.

Check it out and give it a try!

The post Erys – A TUI for Jupyter Notebooks appeared first on Mouse Vs Python.

September 15, 2025 12:35 PM UTC


Real Python

Quiz: Python Project Management With uv

In this quiz, you will review how to use uv, the high-speed Python package and project manager. You will practice key commands, explore the files uv creates for you, and work with project setup tasks.

This is a great way to reinforce project management basics with uv and get comfortable with its streamlined workflows.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

September 15, 2025 12:00 PM UTC

Quiz: What Does -> Mean in Python Function Definitions?

In this quiz, you will revisit how Python uses the arrow notation (->) in function signatures to provide return type hints. Practice identifying correct syntax, annotating containers, and understanding the role of tools like mypy.

Brush up on key concepts, clarify where and how to use return type hints, and see practical examples in What Does -> Mean in Python Function Definitions?.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

September 15, 2025 12:00 PM UTC


Python Bytes

#449 Suggestive Trove Classifiers

Topics covered in this episode:

* Mozilla’s Lifeline is Safe After Judge’s Google Antitrust Ruling
* troml - suggests or fills in trove classifiers for your projects
* pqrs: Command line tool for inspecting Parquet files
* Testing for Python 3.14
* Extras
* Joke

Watch on YouTube: https://www.youtube.com/watch?v=ZpGE9jkRCvk

About the show

Sponsored by us! Support our work through our courses at Talk Python Training (https://training.talkpython.fm), The Complete pytest Course (https://courses.pythontest.com/p/the-complete-pytest-course), and our Patreon supporters (https://www.patreon.com/pythonbytes).

Connect with the hosts:

* Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
* Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
* Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we’ll never share it.

Michael #1: Mozilla’s Lifeline is Safe After Judge’s Google Antitrust Ruling (https://news.itsfoss.com/mozilla-lifeline-is-safe/)

* A judge lets Google keep paying Mozilla to make Google the default search engine, but only if those deals aren’t exclusive.
* More than 85% of Mozilla’s revenue comes from Google search payments.
* The ruling forbids Google from making exclusive contracts for Search, Chrome, Google Assistant, or Gemini, and forces data sharing and search syndication so rivals get a fighting chance.

Brian #2: troml - suggests or fills in trove classifiers for your projects (https://github.com/adamghill/troml)

* By Adam Hill. This is super cool and so welcome.
* Trove classifiers are things like Programming Language :: Python :: 3.14 that allow for some fun stuff to show up on PyPI, like the versions you support, etc.
* Note that just saying you require 3.9+ doesn’t tell the user that you’ve actually tested stuff on 3.14. I like to keep trove classifiers around for this reason.
* Also, the License classifier is deprecated (PEP 639), and if you include it, it shows up in two places: in Meta and in the Classifiers section. Probably good to only have one place, so I’m going to be removing it from classifiers for my projects.
* One problem: classifier text has to be an exact match to something in the classifier list (https://pypi.org/classifiers/), so we usually recommend copy/pasting from that list.
* But no longer! Just use troml! It fills them in for you (if you run troml suggest --fix). How totally awesome is that!
* I tried it on pytest-check, and it was mostly right. It suggested adding 3.15, which I haven’t tested yet, so I’m not ready to add that just yet. :)
* BTW, I talked with Brett Cannon about classifiers back in ’23 if you want some more in-depth info on trove classifiers.

Michael #3: pqrs: Command line tool for inspecting Parquet files (https://github.com/manojkarthick/pqrs)

* pqrs is a command line tool for inspecting Parquet files, a replacement for the parquet-tools utility, written in Rust.
* Built using the Rust implementations of Parquet and Arrow; pqrs roughly means “parquet-tools in rust”.
* Why Parquet?
  * Size: a 200 MB CSV will usually shrink to somewhere between about 20-100 MB as Parquet, depending on the data and compression.
  * Speed: a full-file load into pandas with pyarrow/fastparquet is usually 2x-10x faster than reading CSV, because CSV parsing is CPU intensive (text tokenizing, dtype inference). For example, if read_csv takes 10 seconds, read_parquet might take ~1-5 seconds depending on CPU and codec.
  * Column subset: Parquet is much faster if you only need some columns, often 5x-50x faster, because it reads only those column chunks.
  * Predicate pushdown and row groups: when using dataset APIs (pyarrow.dataset) you can push filters down to skip row groups, reducing I/O dramatically for selective queries.
  * Memory usage: Parquet avoids temporary string buffers and repeated parsing, so peak memory and temporary allocations are often lower.

Brian #4: Testing for Python 3.14

* Python 3.14 is just around the corner, with a final release scheduled for October.
* See What’s new in Python 3.14 (https://docs.python.org/3.14/whatsnew/3.14.html) and the Python 3.14 release schedule (PEP 745).
* Adding 3.14 to your CI tests in GitHub Actions: add “3.14” (and optionally “3.14t” for free-threaded), and add the line allow-prereleases: true.
* I got stuck on this, and asked folks on Mastodon and Bluesky. A couple of folks suggested the allow-prereleases: true step. Thank you!
* Ed Rogers also suggested Hugo’s article Free-threaded Python on GitHub Actions (https://hugovk.dev/blog/2025/free-threaded-python-on-github-actions/), which I had read and forgot about. Thanks Ed! And thanks Hugo!

Extras

Brian:

* dj-toml-settings (https://github.com/adamghill/dj-toml-settings): load Django settings from a TOML file. Another cool project from Adam Hill.
* LidAngleSensor for Mac (https://github.com/samhenrigold/LidAngleSensor) from Sam Henri Gold, with examples of a creaky door and a theremin.
* Listener Bryan Weber found a Python version via Changelog: pybooklid (https://github.com/tcsenpai/pybooklid), from tcsenpai.
* Grab PyBay

Michael:

* Ready prek go! by Hugo van Kemenade

Joke: Console Devs Can’t Find a Date

September 15, 2025 08:00 AM UTC

September 14, 2025


Armin Ronacher

What’s a Foreigner?

Across many countries, resistance to immigration is rising — even places with little immigration, like Japan, now see rallies against it. I’m not going to take a side here. I want to examine a simpler question: who do we mean when we say “foreigner”?

I would argue there isn’t a universal answer. Laws differ, but so do social definitions. In Vienna, where I live, immigration is visible: roughly half of primary school children don’t speak German at home. Austria makes citizenship hard to obtain. Many people born here aren’t citizens; at the same time, EU citizens living here have broad rights and labor-market access similar to native Austrians. Over my lifetime, the fear of foreigners has shifted: once aimed at nearby Eastern Europeans, it now falls more on people from outside the EU, often framed through religion or culture. Practically, “foreigner” increasingly ends up meaning “non-EU.” Keep in mind that over the last 30 years the EU went from 12 countries to 27. That’s a significant increase in social mobility.

I believe this is quite different from what is happening in the United States. The present-day US debate is more tightly tied to citizenship and allegiance, which is partly why current fights there include attempts to narrow who gets citizenship at birth. The worry is less about which foreigners come and more about the terms of becoming American and whether newcomers will embrace what some define as American values.

Inside the EU, the concept of EU citizenship changes social reality. Free movement, aligned standards, interoperable social systems, and easier labor mobility make EU citizens feel less “foreign” to each other — despite real frictions. The UK before Brexit was a notable exception: less integrated in visible ways and more hostile to Central and Eastern European workers. Perhaps another sign that the level of integration matters. In practical terms, allegiances are also much less clearly defined in the EU. There are people who live their entire lives in other EU countries and whose allegiance is no longer clearly aligned to any one country.

Legal immigration itself is widely misunderstood. Most systems are far more restrictive in some areas and far more permissive in others than people assume. On the one hand, what’s called “illegal” is often entirely lawful. Many who are considered “illegal” are legally awaiting pending asylum decisions or are accepted refugees. These are processes many think shouldn’t exist, but they are, in fact, legal. On the other hand, the requirements for non-asylum immigration are very high, and most citizens of a country themselves would not qualify for skilled immigration visas. Meanwhile, the notion that a country could simply “remove all foreigners” runs into practical and ethical dead ends. Mobility pressures aren’t going away; they’re reinforced by universities, corporations, individual employers, demographics, and geopolitics.

Citizenship is just a small wrinkle. In Austria, you generally need to pass a modest German exam and renounce your prior citizenship. That creates odd outcomes: native-born non-citizens who speak perfect German but lack a passport, and naturalized citizens who never fully learned the language. Legally clear, socially messy — and not unique to Austria. The high hurdle to obtaining a passport also leads many educated people to intentionally opt out of becoming citizens. The cost that comes with renouncing a passport is not to be underestimated.

Where does this leave us? The realities of international mobility leave our current categories of immigration straining and misaligned with what the population at large thinks immigration should look like. Economic anxiety, war, and political polarization are making some groups of foreigners targets, while the deeper drivers behind immigration will only keep intensifying.

Perhaps we need to admit that we’re all struggling with these questions. The person worried about their community or country changing too quickly and the immigrant seeking a better life are both responding to forces larger than themselves. In a world where capital moves freely but most people cannot, where climate change might soon displace millions, and where birth rates are collapsing in wealthy nations, our immigration systems will be tested and stressed, and our current laws and regulations are likely inadequate.

September 14, 2025 12:00 AM UTC

September 13, 2025


Django Weblog

Nominate a Djangonaut for the 2025 Malcolm Tredinnick Memorial Prize

Hello Everyone 👋 It is that time of year again when we recognize someone from our community in memory of our friend Malcolm.

Malcolm was an early core contributor to Django and had a huge influence on Django as we know it today. Besides being knowledgeable he was also especially friendly to new users and contributors. He exemplified what it means to be an amazing Open Source contributor. We still miss him to this day.

The prize

Our prizes page summarizes it nicely:

The Malcolm Tredinnick Memorial Prize is a monetary prize, awarded annually, to the person who best exemplifies the spirit of Malcolm’s work - someone who welcomes, supports, and nurtures newcomers; freely gives feedback and assistance to others, and helps to grow the community. The hope is that the recipient of the award will use the award stipend as a contribution to travel to a community event -- a DjangoCon, a PyCon, a sprint -- and continue in Malcolm’s footsteps.

Please make your nominations using our form: 2025 Malcolm Tredinnick Memorial Prize nominations. Nominations are welcome from everyone.

Submit a nomination

We will take nominations until Saturday, September 27th, 2025, 23:59 Anywhere on Earth, and will announce the results in early October. If you have any questions please use our dedicated forum thread or contact the DSF Board.

September 13, 2025 08:18 PM UTC


Seth Michael Larson

SCREAM CIPHER (“ǠĂȦẶAẦ ĂǍÄẴẶȦ”)

You've probably heard of stream ciphers, but what about a scream cipher 😱? Today I learned there are more “Latin capital letter A” Unicode characters than there are letters in the English alphabet. You know what that means, it's time to scream:

CIPHER = {
    "A":"A",  # Round-trip!
    "B":"Á","G":"Ẳ","L":"Ậ","Q":"Ǟ","V":"À",
    "C":"Ă","H":"Ẵ","M":"Ầ","R":"Ȧ","W":"Ả",
    "D":"Ắ","I":"Ǎ","N":"Ẩ","S":"Ǡ","X":"Ȃ",
    "E":"Ặ","J":"Â","O":"Ẫ","T":"Ạ","Y":"Ā",
    "F":"Ằ","K":"Ấ","P":"Ä","U":"Ȁ","Z":"Ą",
}
# Add the lowercase mappings too.
CIPHER.update({k.lower(): v.lower() for k, v in CIPHER.items()})
UNCIPHER = {v: k for k, v in CIPHER.items()}

def SCREAM(text: str) -> str:
    return "".join(CIPHER.get(ch, ch) for ch in text)

def unscream(scream: str) -> str:
    return "".join(UNCIPHER.get(ch, ch) for ch in scream)


print(s := SCREAM("SCREAM CIPHER"))
# ǠĂȦẶAẦ ĂǍÄẴẶȦ

print(unscream(s))
# SCREAM CIPHER


Thanks for keeping RSS alive! ♥

September 13, 2025 12:00 AM UTC


Brian Okken

Timeline of Selected Software Events

There are a lot of events in the history of software development. This is a list of dates that have some significance in either the stuff I work with or methodologies. I’ve compiled this list for my own benefit in thinking about my history and how these things have led to my current software philosophies.

I’m publishing the list as a “what the heck, why not?” kinda thing. If I’ve gotten something wrong, feel free to contact me.

September 13, 2025 12:00 AM UTC

September 12, 2025


Real Python

The Real Python Podcast – Episode #265: Python App Hosting Choices & Documenting Python's History

What are your options for hosting your Python application or scripts? What are the advantages of a platform as a service, container-based hosts, or setting up a virtual machine? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

September 12, 2025 12:00 PM UTC


Seth Michael Larson

Infinite Precision CVSS Calculator

CVSS is a scoring system for the severity of a software vulnerability. The scores range from 0 to 10, but that doesn't mean it's a “10-point system”. A single value after a decimal (“8.7”) is allowed too, meaning there are 101 potential CVSS scores. But what if we need more precision?

Look no further than the NEW “Infinite Precision CVSS Calculator”.


















NOTE: This page is a joke, do not use for actual software vulnerability CVSS calculations.



Thanks for keeping RSS alive! ♥

September 12, 2025 12:00 AM UTC


Graham Dumpleton

Status of wrapt (September 2025)

The Python wrapt package recently turned 12 years old. Originally intended to be a module for monkey patching Python code, its wrapper object turned out to be a useful basis for creating Python decorators.

Back then, constructing decorators using function closures had various shortcomings, and the resulting wrappers didn't preserve introspection and various other attributes associated with the wrapped function. Many of these issues have been resolved by updates to Python and the functools.wraps helper function, but wrapt based decorators are still useful for certain use cases, such as creating a decorator that can work out whether it was applied to a function, instance method, class method, or even a class.

Downloads of wrapt

It is hard to tell how many people use wrapt directly, but being used internally by some widely used Python packages has actually landed wrapt in the list of PyPI top 100 packages by number of downloads. As I write this post it sits at position 65, but at one point it was as high as about position 45. Either way, I am still surprised it makes that list at all, especially since the primary purpose of wrapt is to do something most view as dangerous, i.e., monkey patching code.

Maintenance mode

As to the core problem that wrapt originally tried to solve, that was achieved many years ago, and as such it has more or less been kept in maintenance mode ever since. Most changes over recent years have focused on ensuring it works with each new Python version and expanding the set of Python wheels released for different architectures. There are occasionally bug fixes or tweaks, but generally they relate to corner cases which have arisen from people trying to do strange things when monkey patching code.

Version update

Even though few changes are being made, the plan is to soon release a new version. The version number for this release will jump from 1.17.X to 2.0.X.

The jump in major version isn't due to any known incompatibilities but more just to be safe since in this version all the old legacy code from Python 2 and early Python 3 versions has been removed.

PyCharm issues

The only obscure change I have any concern over relates to a specific corner case that should never occur in actual use, but does occur when using Python debugger IDEs such as PyCharm, which offer live views of Python objects.

The problem in the case of PyCharm is that it can attempt to display the state of a Python object before the __init__() method has been called. Because the wrapper object in wrapt is implemented in C code, the accessor for the __wrapped__ attribute is always visible, but until __init__() is called it has no value. All the same, PyCharm tries to access __wrapped__ during that window, resulting in an exception.

When __wrapped__ was accessed in this uninitialised state, a ValueError exception was being raised on the basis that the object was in an unknown state. The problem is that PyCharm doesn't gracefully handle this exception, and that somehow causes problems for PyCharm users.

What PyCharm prefers in this case is to see an AttributeError exception which results in it simply ignoring the attribute. Because it isn't known whether raising AttributeError instead of ValueError would cause issues for other existing code, I have been reticent to change the type of exception being raised.

At one point someone suggested raising a custom exception which inherits from both AttributeError and ValueError as a middle ground, allowing PyCharm to work but not cause issues for existing code.

Because one can't use multiple inheritance for custom exceptions implemented in C code, a fiddle has been required whereby the C extension code of wrapt reaches back to get a reference to an exception type implemented in Python code. Not elegant, but it appears to work, so I am going to go with that and see how it fares. Remember that the situation where this exception is raised should not even occur normally, so it may only be PyCharm that triggers it. Fingers crossed.
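The middle-ground exception can be sketched in pure Python; the class and function names here are hypothetical, not the ones wrapt actually uses:

```python
class UninitializedWrapperError(AttributeError, ValueError):
    """Hypothetical: raised when __wrapped__ is read before __init__().

    Inheriting from both AttributeError and ValueError means tools
    like PyCharm (which expect AttributeError and then skip the
    attribute) and existing code (which may catch ValueError) both
    keep working.
    """


def access_uninitialized_wrapped():
    # Stand-in for the C accessor rejecting an uninitialised wrapper.
    raise UninitializedWrapperError("wrapper has not been initialised")
```

Because both base exceptions share a compatible instance layout, Python allows this multiple inheritance, and an `except AttributeError:` or `except ValueError:` clause will each catch it.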

Typing hints

The other notable change for the next wrapt release will be the addition of support for type hints.

Adding type hints has been slow coming because wrapt supported Python 2 and older Python 3 versions well past when those versions became obsolete. Since such old versions were still supported, I didn't see it as practical to add support for type hints.

Even now, although the next wrapt version will still support Python 3.8 and 3.9, support for type hints will only be available when using Python 3.10 or later. This is because I finally realised that when using .pyi files to inject type hints, you can include version checks and only add them for selected versions. 🤦‍♂️
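Type checkers evaluate `sys.version_info` guards in stub files statically, so hints can be gated to newer Python versions. A hypothetical fragment of such a .pyi file (not the actual wrapt stubs) might look like:

```python
# Hypothetical .pyi stub fragment, illustrative only.
import sys
from typing import Any, Callable

if sys.version_info >= (3, 10):
    from typing import ParamSpec, TypeVar

    P = ParamSpec("P")
    R = TypeVar("R")

    # Richer signature preserving the wrapped callable's parameters.
    def decorator(
        wrapper: Callable[..., Any],
    ) -> Callable[[Callable[P, R]], Callable[P, R]]: ...

else:
    # Fallback signature for older Python versions.
    def decorator(wrapper: Callable[..., Any]) -> Callable[..., Any]: ...
```

On Python 3.9 and earlier, a type checker reading this stub simply never sees the ParamSpec branch.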

Another reason type hint support took so long to be added was simply because I wasn't that familiar with using them. Some users did propose changes to add type hints, but they seemed to me to be very minimal and didn't try to add support across the full API that wrapt provides. Because I didn't understand type hints well enough myself, I didn't want to risk adding them.

This is not to say I am now an expert on type hints, I am definitely still a newbie and know just enough to be dangerous.

The resulting type hint support I added looks plenty scary to me and unfortunately still doesn't do everything I would like it to. My current belief is that, due to the complicated overloading wrapt does in certain APIs, it just isn't possible within the limits of the type hinting system to do better than what I have managed. I haven't given up, but I also don't want to delay the next wrapt release any further, so I will release it as is and revisit it later to see if it can be improved.

Release candidates

With all that said, a release candidate for the next version of wrapt has been available for about a month. It can be installed by explicitly pinning wrapt==2.0.0rc2 or, if your package installer supports it, by opting into unstable package versions such as release candidates.

After being out for a month, I have not had any reports of the release candidate causing issues. That said, no one has said it works fine either. There have been something like 190 thousand downloads of the release candidate in that time, though, so that is a good sign at least.

At this point it seems safe to release a 2.0.0 final version, so I will be aiming to double check everything over the coming week and get the new version released in time for Python 3.14. That said, Python 3.14 isn't a hard deadline, as the 1.17.3 patch release of wrapt a month ago already included Python wheels for Python 3.14, so everything should be good to go for the new Python version.

List of changes

If you are interested in exactly what changes have made it into the next release, you can check out the develop branch for the wrapt docs on ReadTheDocs.

September 12, 2025 12:00 AM UTC

September 11, 2025


Graham Dumpleton

Back from the dead

I'm back.

Yes, it's been quite a long time. It's actually more than six years since my last blog post.

There are a few reasons for this. The main one is that after IBM acquired Red Hat, and later when I moved to VMware/Broadcom, I didn't always feel comfortable posting on my personal blog or speaking as freely as I would have liked. I was also less involved with my open source projects and busy working on something new that I couldn't actively promote at the time. So, I decided to step away from blogging for a while; I just didn't know how long that would end up being.

After being made redundant at Broadcom about a year ago, I've been on sabbatical, or what some call "micro retirement." Given the current state of the IT industry, this micro retirement may well turn into full retirement.

In my last three jobs, going back to 2010, I've worked remotely from home and only left the house for work when traveling to conferences or, on rare occasions, visiting the office. While I had the opportunity to travel for conferences during my time at New Relic and Red Hat, that changed when I moved to VMware. COVID disrupted everything, and travel was no longer possible. What I had expected to do at VMware also didn't really materialise, and I ended up working on other projects that didn't involve conference travel.

Although I never got to work on what I expected to at VMware, especially with COVID and all the uncertainty surrounding the Broadcom acquisition, by keeping my head down and staying out of sight I was able to spend quite a lot of time working on a project of my own, where I could set the direction and do what I thought was required. It was an ideal situation for remote work and suited me well, even if the overall work environment wasn't the best at the time.

Looking for a job now though, the prospect of being able to find a company who will take on a fully remote worker is very slim. All the companies I feel might have been a good place for me to work at are in the US, and these days even if they are hiring remote workers, you must still be in the US. Gone are the days where it was easier for someone located in Australia to get a job working for a US based company or team of people.

In the Australian job market, anything interesting is on site, or hybrid, requiring 2 or 3 days in the office, which isn't convenient for me. So right now I am a bit stuck on the job front and there is no clear path. The motivation for in office work just isn't there since I am so used to working from home.

Ideally I would find a company who actively supports open source and pays people to work on open source projects. Although I know of a few companies like this, as already noted the problem is they are in the US and will only hire remote workers who are also in the US.

So what next?

For now at least, the aim is to try and build links again with the Python community, since I have lost touch with what has been going on, not attending conferences like I used to, and not actively blogging.

I've only been doing the minimum required on my mod_wsgi and wrapt open source projects, and although I managed a flurry of activity on both a few weeks back, I held back from actually making a new release for each.

First priority, therefore, is to put out some blog posts on the current state of mod_wsgi and wrapt and get some new releases out, hopefully in time for Python 3.14.

I want to then start posting about my other new project called Educates. Although it will be new to others, the project is actually over 5 years old and has been available for quite a while, it just wasn't being promoted. The project may satisfy a small niche of users, but the amount of work I have put into it is significant, more than mod_wsgi or wrapt, so I don't want to cast it aside just yet.

So now that I have resurrected my blog site, hopefully I can keep the momentum up and meaningfully start applying myself to my open source projects and contribute to the Python community in some way, rather than spending an obsessive amount of time watching anime.

September 11, 2025 12:00 AM UTC

September 10, 2025


Real Python

How to Drop Null Values in pandas

Missing values can derail your analysis. In pandas, you can use the .dropna() method to remove rows or columns containing null values—in other words, missing data—so you can work with clean DataFrames. In this tutorial, you’ll learn how this method’s parameters let you control exactly which data gets removed. As you’ll see, these parameters give you fine-grained control over how much of your data to clean.

Dealing with null values is essential for keeping datasets clean and avoiding the issues they can cause. Missing entries can lead to misinterpreted column data types, inaccurate conclusions, and errors in calculations. Simply put, nulls can cause havoc if they find their way into your calculations.

By the end of this tutorial, you’ll understand that:

  • You can use .dropna() to remove rows and columns from a pandas DataFrame.
  • You can remove rows and columns based on the content of a subset of your DataFrame.
  • You can remove rows and columns based on the volume of null values within your DataFrame.

To get the most out of this tutorial, it’s recommended that you already have a basic understanding of how to create pandas DataFrames from files.

You’ll use the Python REPL along with a file named sales_data_with_missing_values.csv, which contains several null values you’ll deal with during the exercises. Before you start, extract this file from the downloadable materials by clicking the link at the end of this section.

The sales_data_with_missing_values.csv file is based on the publicly available and complete sales data file from Kaggle. Understanding the file’s content isn’t essential for this tutorial, but you can explore the Kaggle link above for more details if you’d like.

You’ll also need to install both the pandas and PyArrow libraries to make sure all code examples work in your environment:

Windows PowerShell
PS> python -m pip install pandas pyarrow
Shell
$ python -m pip install pandas pyarrow

It’s time to refine your pandas skills by learning how to handle missing data in a variety of ways.

You’ll find all code examples and the sales_data_with_missing_values.csv file in the materials for this tutorial, which you can download by clicking the link below:

Get Your Code: Click here to download the free sample code that you’ll use to learn how to drop null values in pandas.

Take the Quiz: Test your knowledge with our interactive “How to Drop Null Values in pandas” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

How to Drop Null Values in pandas

Quiz yourself on pandas .dropna(): remove nulls, clean missing data, and prepare DataFrames for accurate analysis.

How to Drop Rows Containing Null Values in pandas

Before you start dropping rows, it’s helpful to know what options .dropna() gives you. This method supports six parameters that let you control exactly what’s removed:

  • axis: Specifies whether to remove rows or columns containing null values.
  • thresh and how: Define how many missing values to remove or retain.
  • subset: Limits the removal of null values to specific parts of your DataFrame.
  • inplace: Determines whether the operation modifies the original DataFrame or returns a new copy.
  • ignore_index: Resets the DataFrame index after removing rows.

Don’t worry if any of these parameters don’t make sense to you just yet—you’ll learn why each is used during this tutorial. You’ll also get the chance to practice your skills.
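As a quick sketch of how a few of these parameters interact, here's a toy DataFrame rather than the tutorial's sales data:

```python
import pandas as pd

# Toy data: row 0 is complete, row 1 has one null, row 2 is all nulls.
df = pd.DataFrame({
    "order": [1, None, None],
    "name": ["Ann", "Bob", None],
})

df.dropna()                  # Default: drop rows with any null, keeps 1 row
df.dropna(how="all")         # Drop rows where every value is null, keeps 2 rows
df.dropna(thresh=2)          # Keep rows with at least 2 non-null values, keeps 1 row
df.dropna(subset=["name"])   # Only consider the "name" column, keeps 2 rows
```

Each call returns a new DataFrame and leaves `df` untouched, which is the behavior you'll rely on until the `inplace` parameter is covered later.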

Note: Although this tutorial teaches you how pandas DataFrames use .dropna(), DataFrames aren’t the only pandas objects that use it.

Series objects also have their own .dropna() method. However, the Series version contains only four parameters—axis, inplace, how, and ignore_index—instead of the six supported by the DataFrame version. Of these, only inplace and ignore_index are used, and they work the same way as in the DataFrame method. The rest are kept for compatibility with DataFrame, but have no effect.

Indexes also have a .dropna() method for removing missing index values, and it contains just one parameter: how.

Before using .dropna() to drop rows, you should first find out whether your data contains any null values:

Python
>>> import pandas as pd

>>> pd.set_option("display.max_columns", None)

>>> sales_data = pd.read_csv(
...     "sales_data_with_missing_values.csv",
...     parse_dates=["order_date"],
...     date_format="%d/%m/%Y",
... ).convert_dtypes(dtype_backend="pyarrow")

>>> sales_data
    order_number           order_date       customer_name  \
0           <NA>  2025-02-09 00:00:00      Skipton Fealty
1          70041                 <NA>  Carmine Priestnall
2          70042  2025-02-09 00:00:00                <NA>
3          70043  2025-02-10 00:00:00     Lanni D'Ambrogi
4          70044  2025-02-10 00:00:00         Tann Angear
5          70045  2025-02-10 00:00:00      Skipton Fealty
6          70046  2025-02-11 00:00:00             Far Pow
7          70047  2025-02-11 00:00:00          Hill Group
8          70048  2025-02-11 00:00:00         Devlin Nock
9           <NA>                 <NA>                <NA>
10         70049  2025-02-12 00:00:00           Swift Inc

                product_purchased discount  sale_price
0    Chili Extra Virgin Olive Oil     True       135.0
1                            <NA>     <NA>       150.0
2       Rosemary Olive Oil Candle    False        78.0
3                            <NA>     True        19.5
4    Vanilla and Olive Oil Candle     <NA>       13.98
5    Basil Extra Virgin Olive Oil     True        <NA>
6    Chili Extra Virgin Olive Oil    False       150.0
7    Chili Extra Virgin Olive Oil     True       135.0
8   Lavender and Olive Oil Lotion    False       39.96
9                            <NA>     <NA>        <NA>
10  Garlic Extra Virgin Olive Oil     True       936.0

To make sure all columns appear on your screen, you call pd.set_option("display.max_columns", None). Passing None as the second argument removes the column display limit.

You read the sales_data_with_missing_values.csv file into a DataFrame using the pandas read_csv() function, then view the data. The order dates are in the "%d/%m/%Y" format in the file, so to make sure the order_date data is read correctly, you use both the parse_dates and date_format parameters. The output reveals there are ten rows and six columns of data in your file.

Read the full article at https://realpython.com/how-to-drop-null-values-in-pandas/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

September 10, 2025 02:00 PM UTC

Quiz: How to Drop Null Values in pandas

Challenge yourself with this quiz and see how much you understand about dropping null values in pandas.

Working through this quiz is a great way to revisit what you learned in the How to Drop Null Values in pandas tutorial. You’ll find most of the answers in the tutorial content, but for some of the questions, you might need to do some extra digging.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

September 10, 2025 12:00 PM UTC


Python Software Foundation

Sprints are the best part of a conference

When I first started attending Python conferences, my focus was entirely on the talks on the schedule. That's not surprising, there's no conference without talks! Over the years, though, I came to appreciate the so-called hallway track and the usual post-conference sprints that many events include. These days, I mostly come for those. Let's talk about why.

Raw numbers

Before we get into subjective and soft reasons why sprints are great, just consider how productive they are for Python. To give you an idea, let's focus on three Python conferences of different sizes on three continents.

At PyCon US 2025, 370 new PRs were opened in the Python GitHub organization during the sprints, 286 to the cpython repository alone. Close to 300 PRs were merged during that time. That's for four days of sprints, and over 2X the number of PRs handled during the same period when there's no sprint happening.

There were two days of sprints at EuroPython in Prague this year, but they didn't disappoint either: 122 new PRs opened in the Python organization, including 99 to the cpython repository, and 79 PRs merged during that time. This is 1.75X the number of PRs handled during a typical weekend.

Even single-day sprint days at conferences are pretty productive. At PyCon Korea earlier this August the attendees managed to open 59 new PRs to the Python organization, including 35 PRs to the cpython repository. Over 40 PRs were merged into the Python organization that day. Still 1.7X the typical velocity.

Hopefully, you're seeing what I'm seeing: sprints can provide a measurable boost to an open-source project. The longer the sprints are, the bigger this boost is. This is because many contributions need more than a day to bake, some bugs can be pretty stubborn, and many features uncover surprising depth once you start implementing them.

Momentum

There's something magical about a large group of people banding together to attack problems. While this is what open source is in general, being in the same physical space at the same time is the secret sauce. Real-time coordination really is more efficient. We can guess at reasons for this, but we can safely assume a big part is simply that humans are social animals. It's easier to empathize with a person when they're in the same room with you. In my experience, pointing at a screen still beats Internet communication.

Part of what makes sprints so productive is that they are a time-boxed period of uninterrupted time away from your usual work environment. And that's true for everyone, so people have the ability to focus on a specific project or problem for an extended period of time. But since there's a time limit on how long the sprints run, there's also some productive pressure to ship something concrete by the end of your stay. So, it's rare to see people playing games or doomscrolling during sprints. Instead, they want to ship something, even if it's a humble small first contribution.

Better yet, after you spend some time with a person in real life, even online interactions with them afterwards change. My brain does this thing where it reads GitHub comments of people I know in their voice. This little thing additionally humanizes the pixels on screen and makes the interaction smoother. When you come to sprints, you build more lasting connections, because you don't only talk about stuff in the hallway, you're solving problems together.

You're getting for free what you wouldn't be able to buy if you tried

You're solving problems together alongside developers from different companies, backgrounds and specialties. Some of them are maintainers of the projects you're contributing to, with a wealth of expertise they're sharing freely. You get immediate feedback, you can learn at a rate that is impossible to match online. You learn not only by doing and asking questions, but even just by watching others work. You discover better tools or ways to use them you didn't know existed.

To put it bluntly, the experts you work with during sprints would be impossible to hire as tutors, and here you get to work with them free of charge. Think about it, that alone makes it worth staying for sprints. And don't get cold feet, either, because...

You belong

I've heard some newcomers are worried that maybe the expected experience level is too high. I say you will definitely find something productive to do. I even blogged about this specifically for PyCon US this year, so you can read "What to Expect at PyCon US Sprints" to get an idea about how to make your experience great. The PyCon Korea sprint organizer and Steering Council member Donghee Na says: "I notice that the participants who had a good experience at last year's sprint tend to rejoin the sprint this year. I hope that many of them come back next year too." I'm seeing the same thing, and want to see even more of it. We do care about your experience.

Specifically at PyCon US, this year we tried something new. We split the CPython sprint room into two rooms: one dedicated to first-time contributors, and one to seasoned developers that needed to focus on some feature or bugfix they really wanted to ship before leaving Pittsburgh. It turned out great. Talking to attendees on both ends, I think both rooms enjoyed this setup and we will be repeating that for next year. While I was coordinating the first-time contributor room, I was heartened to see that quite a few veteran core developers joined me in the room. It was fun all four days!

At EuroPython, the setup this year was such that Petr Viktorin and I were coordinating the CPython sprint... or so we thought! In parallel, Adam Turner was leading the CPython documentation sprint, but attendees responded so well to him that he quickly organically became the de facto leader of the entire CPython sprint. Kudos, Adam, you did great!

Dedicated sprint events

It's not all roses with sprints attached to conferences. After an intense few days of the larger event, people tend to get tired. Introverts run out of steam. Key people you'd most benefit from talking to don't stay, or are only available on the first day. If only there were an event where core developers gather for a week just to sprint, with no distracting talks and hallway tracks!

CPython has actually done this annually since 2016, with the obvious online-only hiccup of 2020 and 2021. We do love those sprints as they are both productive and fun. Last year we returned to Meta, while this year we will be sprinting at Arm Ltd in Cambridge, UK. Unlike the conference sprints, this is an invite-only event for core developers where we can focus on making the next version of Python shinier than it would otherwise be.

But maybe organizing sprint-first events makes sense in general? It seems to me like that could be pretty helpful. Or maybe this is already a thing? Let us know if you know of sprint-first events in your area.

And in the meantime, consider staying for sprints at the next conference you're attending. It's well worth it!


 

September 10, 2025 09:30 AM UTC


Quansight Labs Blog

Scaling asyncio on Free-Threaded Python

A recap on the work done in Python 3.14 to enable asyncio to scale on the free-threaded build of CPython.

September 10, 2025 12:00 AM UTC

September 09, 2025


PyCoder’s Weekly

Issue #698: Capturing Stdout, REPL Color, Feature History, and More (Sept. 9, 2025)

#698 – SEPTEMBER 9, 2025
View in Browser »

The PyCoder’s Weekly Logo


Python: Capture Stdout and Stderr in Unittest

When testing code that outputs to the terminal through either standard out (stdout) or standard error (stderr), you might want to capture that output and make assertions on it.
ADAM JOHNSON

Customizing Your Python 3.14 REPL’s Color Scheme

The upcoming release of Python 3.14 includes syntax highlighting in the REPL and you can control its color scheme and make it your own.
TREY HUNNER

On Demand: Design Long-Running, Human-Aware MCP

alt

Move beyond chatbots to build durable MCP servers that run for days, survive failures, and orchestrate elicitations and LLM sampling. Learn remote MCP trade-offs and patterns for sophisticated, or ambient, agents in this on-demand webinar →
TEMPORAL sponsor

A History of Python Versions and Features

Explore Python’s evolution from the 1990s to today with a brief history and demos of key features added throughout its lifetime.
REAL PYTHON course

PEP 794: Import Name Metadata (Accepted)

PYTHON.ORG

Python Type System and Tooling Survey 2025

GOOGLE.COM

Django Security Releases Issued: 5.2.6, 5.1.12, and 4.2.24

DJANGO SOFTWARE FOUNDATION

Articles & Tutorials

Large Language Models on the Edge of the Scaling Laws

What’s happening with the latest releases of large language models? Is the industry hitting the edge of the scaling laws, and do the current benchmarks provide reliable performance assessments? This week on the show, Jodie Burchell returns to discuss the current state of LLM releases.
REAL PYTHON podcast

Python Has Had Async for 10 Years

Anthony Shaw poses the question: Python has had async for 10 years, so why isn’t it more popular? He dives deep on where async is useful and where it is limited. Associated HN discussion.
ANTHONY SHAW

Engineer-led Demos, Live AMA, and More – all at Glean:LIVE

alt

Don’t miss Glean’s product launch on Sept 25th. Get a first look at Glean’s new Assistant, see how to vibe code an agent, and catch engineer-led demos. This launch is all about empowering you with a more personalized experience and upleveling the skills only you can bring to your work. Register now! here →
GLEAN sponsor

Top 6 Python Libraries for Visualization: Which One to Use?

The vast number of Python visualization libraries can be overwhelming. This article shows you the pros and cons of some of the popular libraries, including Matplotlib, seaborn, Plotly, Bokeh, Altair, and Pygal.
CODECUT.AI • Shared by Khuyen Tran

Open Source Is a Gift

This opinion piece talks about how open source isn’t just a gift of free libraries, but the gift of learning from others who are developing in public.
JOSH THOMAS

Managing Multiple Python Versions With pyenv

Learn how to use pyenv to manage multiple Python versions, prevent conflicts, and keep your projects compatible and development smooth.
REAL PYTHON

Quiz: Managing Multiple Python Versions With pyenv

REAL PYTHON

uv vs pip: Managing Python Packages and Dependencies

Compare uv and pip with benchmarks, speed tests, and dependency management tips. Learn which tool is best for your Python projects.
REAL PYTHON

Quiz: uv vs pip: Managing Python Packages and Dependencies

REAL PYTHON

Profiling Performance in Python

Learn to profile Python programs with built-in and popular third-party tools, and turn performance insights into faster code.
REAL PYTHON course

Python 3.14: 3 Smaller Features

With a jam-packed 3.14 release around the corner, it's also important to look at the smaller features coming to Python.
JAMIE CHANG • Shared by Jamie Chang

Looking Forward to Django 6.0

This post is an update on the progress of Django 6 and describes many of the features that will be in it.
CARLTON GIBSON

You’re a Developer, not a Jenkins Admin

Switch to Bitbucket Pipelines: build, test, deploy, and get back to being a dev.
ATLASSIAN sponsor

When You No Longer Need That Object

Explore reference counting and cyclic garbage collection in Python.
STEPHEN GRUPPETTA

The Flowing River: List Comprehensions

Understanding how Python’s list comprehensions work under the hood.
SUBSTACK.COM • Shared by Vivis Dev

Projects & Code

tilf: Tiny Elf Pixel Art Editor Built With PySide6

GITHUB.COM/DANTEROLLE

A GitHub Search Inspired Interface to DataFrames

GITHUB.COM/WILLIAMBDEAN • Shared by William Dean

vectorwrap: Swap Vector Databases by Changing the URL

GITHUB.COM/MIHIRAHUJA1 • Shared by Mihir Ahuja

furl: URL Parsing and Manipulation Made Easy

GITHUB.COM/GRUNS

pymc: Bayesian Modeling and Probabilistic Programming

GITHUB.COM/PYMC-DEVS

Events

Weekly Real Python Office Hours Q&A (Virtual)

September 10, 2025
REALPYTHON.COM

Python Atlanta

September 11 to September 12, 2025
MEETUP.COM

PyCon India 2025

September 12 to September 16, 2025
PYCON.ORG

PyCon AU 2025

September 12 to September 17, 2025
PYCON.ORG.AU

PyCamp CZ 25 Beta

September 12 to September 15, 2025
PYCAMP.CZ

Django Girls Abraka Workshop

September 12 to September 13, 2025
DJANGOGIRLS.ORG

PyCon Niger 2025

September 13 to September 16, 2025
PYCON.ORG

PyCon UK 2025

September 19 to September 23, 2025
PYCONUK.ORG


Happy Pythoning!
This was PyCoder’s Weekly Issue #698.
View in Browser »

alt

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

September 09, 2025 07:30 PM UTC


Real Python

Python String Splitting

Python’s .split() method lets you divide a string into a list of substrings based on a specified delimiter. By default, .split() separates at whitespace, including spaces, tabs, and newlines. You can customize .split() to work with specific delimiters using the sep parameter, and control the amount of splits with maxsplit.
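A minimal illustration of the default whitespace behavior plus the sep and maxsplit parameters:

```python
text = "  apples   bananas cherries "

# Default: split on any run of whitespace, discarding empty strings.
text.split()  # ['apples', 'bananas', 'cherries']

csv_row = "name,city,notes,extra"

# Custom delimiter via the sep parameter.
csv_row.split(",")  # ['name', 'city', 'notes', 'extra']

# Limit the number of splits with maxsplit; the remainder stays joined.
csv_row.split(",", maxsplit=2)  # ['name', 'city', 'notes,extra']
```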

By the end of this video course, you’ll understand that:


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

September 09, 2025 02:00 PM UTC


Python⇒Speed

Testing the compiler optimizations your code relies on

In a recent article by David Lattimore, he demonstrates a number of Rust performance tricks, including one that involve writing code that looks like a loop, but which in practice is optimized down to a fixed number of instructions. Having what looks like an O(n) loop turned into a constant operation is great for speed!

But there’s a problem with this sort of trick: how do you know the compiler will keep doing it? What happens when the compiler’s next release comes out? How can you catch performance regressions?

One solution is benchmarking: you measure your code’s speed, and if it gets a lot slower, something has gone wrong. This is useful and important if you care about speed. But it’s also less localized, so it won’t necessarily immediately pinpoint where the regression happened.

In this article I’m going to cover another approach: a test that will only pass if the compiler really did optimize the loop away.

Read more...

September 09, 2025 12:00 AM UTC