
Planet Python

Last update: December 19, 2025 10:44 AM UTC

December 18, 2025


Django Weblog

Hitting the Home Stretch: Help Us Reach the Django Software Foundation's Year-End Goal!

As we wrap up another strong year for the Django community, we wanted to share an update and a thank you. This year, we raised our fundraising goal from $200,000 to $300,000, and we are excited to say we are now over 88% of the way there. That puts us firmly in the home stretch, and a little more support will help us close the gap and reach 100%.

So why the higher goal this year? We expanded the Django Fellows program to include a third Fellow. In August, we welcomed Jacob Tyler Walls as our newest Django Fellow. That extra capacity gives the team more flexibility and resilience, whether someone is taking parental leave, time off around holidays, or stepping away briefly for other reasons. It also makes it easier for Fellows to attend more Django events and stay connected with the community, all while keeping the project running smoothly without putting too much pressure on any one person.

We are also preparing to raise funds for an executive director role early next year. That work is coming soon, but right now, the priority is finishing this year strong.

We want to say a sincere thank you to our existing sponsors and to everyone who has donated so far. Your support directly funds stable Django releases, security work, community programs, and the long-term health of the framework. If you or your organization have end-of-year matching funds or a giving program, this is a great moment to put them to use and help push us past the finish line.

If you would like to help us reach that final stretch, you can find all the details on our fundraising page.

There are other ways to support Django as well.

Thank you for helping support Django and the people who make it possible. We are incredibly grateful for this community and everything you do to keep Django strong.

December 18, 2025 10:04 PM UTC


Sumana Harihareswara - Cogito, Ergo Sumana

Python Software Foundation, National Science Foundation, And Integrity

December 18, 2025 07:43 PM UTC


Django Weblog

Introducing the 2026 DSF Board

Thank You to Our Outgoing Directors

We extend our gratitude to Thibaud Colas and Sarah Abderemane, who are completing their terms on the board. Their contributions shaped the foundation in meaningful ways, and the following highlights only scratch the surface of their work.

Thibaud served as President in 2025 and Secretary in 2024. He was instrumental in governance improvements, the Django CNA initiative, election administration, and creating our first annual report. He also led our birthday campaign and helped with the creation of several new working groups this year. His thoughtful leadership helped the board navigate complex decisions.

Sarah served as Vice President in 2025 and contributed significantly to our outreach efforts, working group coordination, and membership management. She also served as a point of contact for the Django CNA initiative alongside Thibaud.

Both Thibaud and Sarah did too many things to list here. They were amazing ambassadors for the DSF, representing the board at many conferences and events. They will be deeply missed, and we are happy to have their continued membership and guidance in our many working groups.

On behalf of the board, thank you both for your commitment to Django and the DSF. The community is better for your service.

Thank You to Our 2025 Officers

Thank you to Tom Carrick and Jacob Kaplan-Moss for their service as officers in 2025.

Tom served as Secretary, keeping our meetings organized and our records in order. Jacob served as Treasurer, providing careful stewardship of the foundation's finances. Their dedication helped guide the DSF through another successful year.

Welcome to Our Newly Elected Directors

We welcome Priya Pahwa and Ryan Cheley to the board, and congratulate Jacob Kaplan-Moss on his re-election.

2026 DSF Board Officers

The board unanimously elected our officers for 2026.

I'm honored to serve as President for 2026. The DSF has important work ahead, and I'm looking forward to building on the foundation that previous boards have established.

Our monthly board meeting minutes may be found at dsf-minutes, and December's minutes are available.

If you have a great idea for the upcoming year or feel something needs our attention, please reach out to us via our Contact the DSF page. We're always open to hearing from you.

December 18, 2025 06:50 PM UTC


Ned Batchelder

A testing conundrum

In coverage.py, I have a class for computing the fingerprint of a data structure. It’s used to avoid doing duplicate work when re-processing the same data won’t add to the outcome. It’s designed to work for nested data, and to canonicalize things like set ordering. The slightly simplified code looks like this:

class Hasher:
    """Hashes Python data for fingerprinting."""

    def __init__(self) -> None:
        self.hash = hashlib.new("sha3_256")

    def update(self, v: Any) -> None:
        """Add `v` to the hash, recursively if needed."""
        self.hash.update(str(type(v)).encode("utf-8"))
        match v:
            case None:
                pass
            case str():
                self.hash.update(v.encode("utf-8"))
            case bytes():
                self.hash.update(v)
            case int() | float():
                self.hash.update(str(v).encode("utf-8"))
            case tuple() | list():
                for e in v:
                    self.update(e)
            case dict():
                for k, kv in sorted(v.items()):
                    self.update(k)
                    self.update(kv)
            case set():
                self.update(sorted(v))
            case _:
                raise ValueError(f"Can't hash {v = }")
        self.hash.update(b".")

    def digest(self) -> bytes:
        """Get the full binary digest of the hash."""
        return self.hash.digest()

To test this, I had some basic tests like:

def test_string_hashing():
    # Same strings hash the same.
    # Different strings hash differently.
    h1 = Hasher()
    h1.update("Hello, world!")
    h2 = Hasher()
    h2.update("Goodbye!")
    h3 = Hasher()
    h3.update("Hello, world!")
    assert h1.digest() != h2.digest()
    assert h1.digest() == h3.digest()

def test_dict_hashing():
    # The order of keys doesn't affect the hash.
    h1 = Hasher()
    h1.update({"a": 17, "b": 23})
    h2 = Hasher()
    h2.update({"b": 23, "a": 17})
    assert h1.digest() == h2.digest()

The last line in the update() method adds a dot to the running hash. That was to solve a problem covered by this test:

def test_dict_collision():
    # Nesting matters.
    h1 = Hasher()
    h1.update({"a": 17, "b": {"c": 1, "d": 2}})
    h2 = Hasher()
    h2.update({"a": 17, "b": {"c": 1}, "d": 2})
    assert h1.digest() != h2.digest()

The most recent change to Hasher was to add the set() clause. There (and in dict()), we are sorting the elements to canonicalize them. The idea is that equal values should hash equally and unequal values should not. Sets and dicts are equal regardless of their iteration order, so we sort them to get the same hash.

I added a test of the set behavior:

def test_set_hashing():
    h1 = Hasher()
    h1.update({(1, 2), (3, 4), (5, 6)})
    h2 = Hasher()
    h2.update({(5, 6), (1, 2), (3, 4)})
    assert h1.digest() == h2.digest()
    h3 = Hasher()
    h3.update({(1, 2)})
    assert h1.digest() != h3.digest()

But I wondered if there was a better way to test this class. My small one-off tests weren’t addressing the full range of possibilities. I could read the code and feel confident, but wouldn’t a more comprehensive test be better? This is a pure function: inputs map to outputs with no side-effects or other interactions. It should be very testable.

This seemed like a good candidate for property-based testing. The Hypothesis library would let me generate data, and I could check that the desired properties of the hash held true.

It took me a while to get the Hypothesis strategies wired up correctly. I ended up with this, but there might be a simpler way:

from hypothesis import strategies as st

scalar_types = [
    st.none(),
    st.booleans(),
    st.integers(),
    st.floats(allow_infinity=False, allow_nan=False),
    st.text(),
    st.binary(),
]

scalars = st.one_of(*scalar_types)

def tuples_of(strat):
    return st.lists(strat, max_size=3).map(tuple)

hashable_types = scalar_types + [tuples_of(s) for s in scalar_types]

# Homogeneous sets: all elements same type.
homogeneous_sets = (
    st.sampled_from(hashable_types)
    .flatmap(lambda s: st.sets(s, max_size=5))
)

# Full nested Python data.
python_data = st.recursive(
    scalars,
    lambda children: (
        st.lists(children, max_size=5)
        | tuples_of(children)
        | homogeneous_sets
        | st.dictionaries(st.text(), children, max_size=5)
    ),
    max_leaves=10,
)

This doesn’t make completely arbitrary nested Python data: sets are forced to have elements all of the same type or I wouldn’t be able to sort them. Dictionaries only have strings for keys. But this works to generate data similar to the real data we hash. I wrote this simple test:

from hypothesis import given

@given(python_data)
def test_one(data):
    # Hashing the same thing twice.
    h1 = Hasher()
    h1.update(data)
    h2 = Hasher()
    h2.update(data)
    assert h1.digest() == h2.digest()

This didn’t find any failures, but this is the easy test: hashing the same thing twice produces equal hashes. The trickier test is to get two different data structures, and check that their equality matches their hash equality:

@given(python_data, python_data)
def test_two(data1, data2):
    h1 = Hasher()
    h1.update(data1)
    h2 = Hasher()
    h2.update(data2)

    if data1 == data2:
        assert h1.digest() == h2.digest()
    else:
        assert h1.digest() != h2.digest()

This immediately found problems, but not in my code:

> assert h1.digest() == h2.digest()
E AssertionError: assert b'\x80\x15\xc9\x05...' == b'\x9ap\xebD...'
E
E   At index 0 diff: b'\x80' != b'\x9a'
E
E   Full diff:
E   - (b'\x9ap\xebD...)'
E   + (b'\x80\x15\xc9\x05...)'
E Falsifying example: test_two(
E     data1=(False, False, False),
E     data2=(False, False, 0),
E )

Hypothesis found that (False, False, False) is equal to (False, False, 0), but they hash differently. This is correct. The Hasher class takes the types of the values into account in the hash. False and 0 are equal, but they are different types, so they hash differently. The same problem shows up for 0 == 0.0 and 0.0 == -0.0. The theory of my test was incorrect: some values that are equal should hash differently.
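To see the distinction concretely, here's a quick check using the Hasher class from above (a bool matches the int() | float() case, but the str(type(v)) mixed into the hash differs):

h1 = Hasher()
h1.update(False)
h2 = Hasher()
h2.update(0)
assert False == 0                  # equal values...
assert h1.digest() != h2.digest()  # ...but different types, so different hashes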

In my real code, this isn’t an issue. I won’t ever be comparing values like this to each other. If I had a schema for the data I would be comparing, I could use it to steer Hypothesis to generate realistic data. But I don’t have that schema, and I’m not sure I want to maintain that schema. This Hasher is useful as it is, and I’ve been able to reuse it in new ways without having to update a schema.

I could write a smarter equality check for use in the tests, but that would roughly approximate the code in Hasher itself. Duplicating product code in the tests is a good way to write tests that pass but don’t tell you anything useful.

I could exclude bools and floats from the test data, but those are actual values I need to handle correctly.

Hypothesis was useful in that it didn’t find any failures other than the ones I described. I can’t leave those tests in the automated test suite because I don’t want to manually examine the failures, but at least this gave me more confidence that the code is good as it is now.

Testing is a challenge unto itself. This brought it home to me again. It’s not easy to know precisely what you want code to do, and it’s not easy to capture that intent in tests. For now, I’m leaving just the simple tests. If anyone has ideas about how to test Hasher more thoroughly, I’m all ears.

December 18, 2025 10:30 AM UTC


Eli Bendersky

Plugins case study: mdBook preprocessors

mdBook is a tool for easily creating books out of Markdown files. It's very popular in the Rust ecosystem, where it's used (among other things) to publish the official Rust book.

mdBook has a simple yet effective plugin mechanism that can be used to modify the book output in arbitrary …

December 18, 2025 10:10 AM UTC


Peter Bengtsson

Autocomplete using PostgreSQL instead of Elasticsearch

Here on my blog I have a site search. Before you search, there's autocomplete. The autocomplete is implemented with downshift in React, and on the backend there's an API: /api/v1/typeahead?q=bla. Up until today, that backend was powered by Elasticsearch. Now it's powered by PostgreSQL. Here's how I implemented it.

Indexing

A cron job loops over all blog post titles and extracts the words in each title as singles, doubles, and triples. For each one, the blog post's popularity is accumulated onto the extracted keywords and combos.
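The extraction code itself isn't shown in the post; a minimal, hypothetical sketch of the idea could look like this (the function name and details are made up):

def extract_terms(title: str) -> list[str]:
    # Singles, doubles, and triples of consecutive words in the title.
    words = title.lower().split()
    return [
        " ".join(words[i : i + n])
        for n in (1, 2, 3)
        for i in range(len(words) - n + 1)
    ]

# extract_terms("Frank Zappa biography") gives:
# ['frank', 'zappa', 'biography', 'frank zappa', 'zappa biography',
#  'frank zappa biography']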

These are then inserted into a Django ORM model that looks like this:


from django.contrib.postgres.indexes import GinIndex
from django.db import models


class SearchTerm(models.Model):
    term = models.CharField(max_length=100, db_index=True)
    popularity = models.FloatField(default=0.0)
    add_date = models.DateTimeField(auto_now=True)
    index_version = models.IntegerField(default=0)

    class Meta:
        unique_together = ("term", "index_version")
        indexes = [
            GinIndex(
                name="plog_searchterm_term_gin_idx",
                fields=["term"],
                opclasses=["gin_trgm_ops"],
            ),
        ]

The index_version is used like this, in the indexing code:


from django.db.models import Max

current_index_version = (
    SearchTerm.objects.aggregate(Max("index_version"))["index_version__max"]
    or 0
)
index_version = current_index_version + 1

...

SearchTerm.objects.bulk_create(bulk)

SearchTerm.objects.filter(index_version__lt=index_version).delete()

That means that I don't have to delete previous entries until new ones have been created. So if something goes wrong during the indexing, it doesn't break the API.

Essentially, there are about 13k entries in that model. For a very brief moment there are 2x13k entries, and then back to 13k entries when the whole task is done.

The search is done with the LIKE operator.


peterbecom=# select term from plog_searchterm where term like 'za%';
            term
-----------------------------
 zahid
 zappa
 zappa biography
 zappa biography barry
 zappa biography barry miles
 zappa blog
(6 rows)

In Python, it's as simple as:


base_qs = SearchTerm.objects.all()
qs = base_qs.filter(term__startswith=term.lower())

But suppose someone searches for bio: we want it to match things like frank zappa biography, so what it actually does is:


from django.db.models import Q 

qs = base_qs.filter(
    Q(term__startswith=term.lower()) | Q(term__contains=f" {term.lower()}")
)

Typo tolerance

This is done with the % trigram-similarity operator from PostgreSQL's pg_trgm extension.


peterbecom=# select term from plog_searchterm where term % 'frenk';
  term
--------
 free
 frank
 freeze
 french
(4 rows)

In the Django ORM it looks like this:


base_qs = SearchTerm.objects.all()
qs = base_qs.filter(term__trigram_similar=term.lower())

And if that doesn't work, it gets even more desperate and falls back to the similarity() function. It looks like this in SQL:


peterbecom=# select term from plog_searchterm where similarity(term, 'zuppa') > 0.14;
       term
-------------------
 frank zappa
 zappa
 zappa biography
 radio frank zappa
 frank zappa blog
 zappa blog
 zurich
(7 rows)
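A rough Django ORM equivalent of that last query, sketched with TrigramSimilarity from django.contrib.postgres (the 0.14 threshold mirrors the SQL above):

from django.contrib.postgres.search import TrigramSimilarity

qs = (
    base_qs.annotate(similarity=TrigramSimilarity("term", term.lower()))
    .filter(similarity__gt=0.14)
    .order_by("-similarity")
)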

Note on typo tolerance

Most of the time, the most basic query, i.e. .filter(term__startswith=term.lower()), works and yields results.
The typo-tolerant queries only run if it yields fewer results than the pagination size. That's why the fault-tolerant query is only-if-needed. This means it might send two SQL SELECT queries from Python to PostgreSQL. In Elasticsearch, you usually don't do this: you send one request with multiple queries and boost them differently.

It can be done with PostgreSQL too, using a UNION operator so that you send one, more complex, query.
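For example, a sketch using the Django ORM's QuerySet.union() (not the code this site actually runs):

from django.db.models import Q

prefix_qs = base_qs.filter(
    Q(term__startswith=term.lower()) | Q(term__contains=f" {term.lower()}")
)
fuzzy_qs = base_qs.filter(term__trigram_similar=term.lower())

# One round trip: exact prefix/word matches unioned with trigram matches.
qs = prefix_qs.union(fuzzy_qs)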

Speed

It's hard to measure the true performance of these things because they're so fast that it's more about the network speed.

On my fast MacBook Pro M4, I ran about 50 realistic queries and measured the time each took with this new PostgreSQL-based solution versus the previous Elasticsearch solution. They both take about 4ms per query. I suspect that 90% of that 4ms is serialization and transmission, and not time inside the database itself.

The number of rows it searches is only about 13,000 at the time of writing, so it's hard to get a feel for how much faster Elasticsearch would be than PostgreSQL. But with a GIN index in PostgreSQL, the data would have to scale much, much larger before it felt too slow.

About Elasticsearch

Elasticsearch is better than PostgreSQL at full-text search, including n-grams. Elasticsearch is highly optimized for these kinds of things and has powerful ways to score a query as a product of how well it matched and each entry's popularity. With PostgreSQL, that gets difficult.

But PostgreSQL is simple. It's solid and it doesn't take up nearly as much memory as Elasticsearch.

December 18, 2025 09:46 AM UTC


Talk Python to Me

#531: Talk Python in Production

Have you ever thought about getting your small product into production, but are worried about the cost of the big cloud providers? Or maybe you think your current cloud service is over-architected and costing you too much? Well, in this episode, we interview Michael Kennedy, author of "Talk Python in Production," a new book that guides you through deploying web apps at scale with right-sized engineering.

December 18, 2025 08:00 AM UTC


Seth Michael Larson

Delta emulator adds support for SEGA Genesis games

December 18, 2025 12:00 AM UTC

December 17, 2025


Sebastian Pölsterl

scikit-survival 0.26.0 released

I am pleased to announce that scikit-survival 0.26.0 has been released.

This is a maintenance release that adds support for Python 3.14 and includes updates to make scikit-survival compatible with new versions of pandas and osqp. It adds support for the pandas string dtype and for copy-on-write, which is going to become the default with pandas 3. In addition, sksurv.preprocessing.OneHotEncoder now supports converting columns with the object dtype.
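As a small, hypothetical sketch of the new object dtype support (the DataFrame below is made up for illustration):

import pandas as pd
from sksurv.preprocessing import OneHotEncoder

# A plain object-dtype column, which OneHotEncoder can now convert directly.
df = pd.DataFrame({"grade": pd.Series(["low", "high", "low"], dtype=object)})
print(OneHotEncoder().fit_transform(df))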

With this release, the minimum supported versions are:

Package    Minimum Version
Python     3.11
pandas     2.0.0
osqp       1.0.2

Install

scikit-survival is available for Linux, macOS, and Windows and can be installed either

via pip:

pip install scikit-survival

or via conda:

 conda install -c conda-forge scikit-survival

December 17, 2025 08:26 PM UTC


PyCharm

The Islands theme is now the default look across JetBrains IDEs starting with version 2025.3. This update is more than a visual refresh. It’s our commitment to creating a soft, balanced environment designed to support focus and comfort throughout your workflow. We began introducing the new theme earlier this year, gathering feedback, conducting research, and testing it hands-on with developers […]

December 17, 2025 07:41 PM UTC


Real Python

How to Build the Python Skills That Get You Hired

Build a focused learning plan that helps you identify essential Python skills, assess your strengths, and practice effectively to progress.

December 17, 2025 02:00 PM UTC


Python Morsels

Embrace whitespace

Well-placed spaces and line breaks can greatly improve the readability of your Python code.

Table of contents

  1. Whitespace around operators
  2. Auto-formatters: both heroes and villains
  3. Using line breaks for implicit line continuation
  4. Separating sections with blank lines
  5. The whitespace is for us, not for Python
  6. Consider ruff, black, and other auto-formatters
  7. Whitespace is all about visual grouping

Whitespace around operators

Compare this:

result = a**2+b**2+c**2

To this:

result = a**2 + b**2 + c**2

I find that second one more readable because the operations we're performing are more obvious (as is the order of operations).

Too much whitespace can hurt readability though:

result = a ** 2 + b ** 2 + c ** 2

This seems like a step backward because we've lost those three groups we had before.

With both typography and visual design, more whitespace isn't always better.

Auto-formatters: both heroes and villains

If you use an auto-formatter …

Read the full article: https://www.pythonmorsels.com/embrace-whitespace/

December 17, 2025 12:00 AM UTC


Armin Ronacher

What Actually Is Claude Code’s Plan Mode?

December 17, 2025 12:00 AM UTC

December 16, 2025


PyCoder’s Weekly

Issue #713: Deprecations, Compression, Functional Programming, and More (Dec. 16, 2025)

December 16, 2025 07:30 PM UTC


Real Python

Exploring Asynchronous Iterators and Iterables

Learn to build async iterators and iterables in Python to handle async operations efficiently and write cleaner, faster code.

December 16, 2025 02:00 PM UTC


Caktus Consulting Group

PydanticAI Agents Intro

In previous posts, we explored function calling and how it enables models to interact with external tools. However, manually defining schemas and managing the request/response loop can get tedious as an application grows. Agent frameworks can help here.

December 16, 2025 01:00 PM UTC


Tryton News

Tryton Release 7.8

We are proud to announce the 7.8 release of Tryton.
This release provides many bug fixes, performance improvements and some fine-tuning.
You can give it a try on the demo server, use the Docker image or download it here.
As usual, upgrading from previous series is fully supported.

Here is a list of the most noticeable changes:

Changes for the User

Client

We added a drop-down menu to the client containing the user's notifications. When a user clicks on a notification, it is marked as read for that user.
We also implemented an unread counter in the client, and a pop-up is raised when the server sends a new notification.

Users can now subscribe to a document's chat by toggling the notification bell icon.
The chat feature has been activated on many documents, such as sales, purchases and invoices.

Buttons that are executed on a selection of records are now displayed at the bottom of lists.

We implemented an easier way to search for empty relation fields:
the query Warehouse: = now returns records without a warehouse, instead of the former result of records whose warehouse has an empty name. The former result can still be obtained with the query "Warehouse.Record Name": =.

When exporting Many2One and Reference fields to CSV, we now use the record name instead of the internal ID. The export of One2Many and Many2Many fields uses a list of record names.

We also made it possible to import One2Many field content using a list of names (as for Many2Many fields).

Web

Keyboard shortcuts now also work on modals.

Server

Scheduled tasks can now generate user notifications.
Each user can subscribe to be notified by a scheduled task, and the notifications appear in the client's drop-down menu.

Accounting

On supplier invoices it is now possible to set a payment reference and to validate it; by default the Creditor Reference format is supported. On customer invoices, Tryton generates a payment reference automatically, using the Creditor Reference format by default and the structured communication for Belgian customers. The payment reference can be validated for defined formats like the Creditor Reference, and it can be used in payment rules.

We now support the Belgian structured communication on invoices, payments and statement rules. With this, the reconciliation process can be automated.

When succeeding a group of payments, Tryton now asks for the clearing date instead of just using today's date.

We now store the address of the party in the SEPA mandate instead of just using the first party address.

We added a button on the accounting category to add or remove multiple products easily.

Customs

We now support customs agents. A customs agent is a party to whom the company delegates customs handling between two countries.

Incoterm

We also added the older Incoterms 2000 because some companies and services are still using them.

We now allow the Incoterms on a customer shipment to be modified as long as it has not yet been shipped.

Product

The list of variants of a product is now sortable. This is useful for e-commerce if you want to put a specific variant in front.

It is now possible to set a different list price and gross price per variant without the need for a custom module.

Volume and weight can now be used in price list formulas. This is useful to include taxes based on such criteria.

Production

It is now possible to define phantom bills of materials (BOMs) to group common inputs or outputs shared by different BOMs. When used in a production, the phantom BOM is replaced by its corresponding materials.

A production can now be defined as a disassembly, in which case the calculation from the BOM is inverted.

Purchasing

The create-purchase wizard can no longer be run on purchase requests that have already been purchased.

Likewise, the create-quotation wizard can no longer be run on purchase requests for which creating a quotation is no longer possible.

It is now possible to create a new quotation for a purchase request that has already received one.

The client now opens the quotations created by the wizard.

We fine-tuned the supply system: when no supplier can supply on time, the system now chooses the fastest supplier.

Sales

It is now possible to encode refund payments on the sale order.

Invoices created for a sale rental can now be grouped with the invoices created for sale orders.

Sale subscription lines now have a summary column similar to the one on sales.

Stock

We added two new stock reports that calculate the inventory and turnover of the stock. We find this useful to optimize and fine-tune the order points.

We added support for international shipping to the shipping services DPD, Sendcloud and UPS.

Tryton now generates a default shipping description based on the customs categories of the shipped goods (with a fallback to “General Merchandise” for UPS). This is useful for international shipping.

We implemented an un-split functionality to correct erroneous split moves.

Drop shipments in the done state can now be cancelled, as with the other shipment types.

Web Shop

We now define a default Incoterm per web shop, which is set on its sale orders.

Sales coming from a web shop now carry a status URL.

We added the URL of each product that is published in a web shop.

Sales coming from a web shop now have a button to force an update from the web shop.

We also made many improvements to extend our Shopify support.

New Modules

EDocument Peppol

The EDocument Peppol Module provides the foundation for sending and receiving
electronic documents on the Peppol network.

EDocument Peppol Peppyrus

The EDocument Peppol Peppyrus Module allows sending and receiving electronic
documents on the Peppol network thanks to the free Peppyrus service.

EDocument UBL

The EDocument UBL Module adds electronic documents from UBL.

Sale Rental

The Sale Rental Module manages rental orders.

Sale Rental Progress Invoice

The Sale Rental Progress Invoice Module allows creating progress invoices for
rental orders.

Stock Shipment Customs

The Stock Shipment Customs Module enables the generation of commercial
invoices for both customer and supplier return shipments.

Stock Shipping Point

The Stock Shipping Point Module adds a shipping point to shipments.

Changes for the System Administrator

Server

The server now streams JSON and gzip responses to reduce memory consumption.

trytond-console gains an option to execute a script from a file.

We replaced the [cron] clean_days configuration option with [cron] log_size. The storage of scheduled task logs now depends only on their size, no longer on their frequency.

The login process now sends the URL for the host of the bus, so clients do not need to rely on the browser to manage the redirection, which wasn't working on recent browsers anyway.

Login sessions are now only valid for the IP address of the client that created them. This hardens security against session leaks.

The server now sets a Message-Id header on all sent emails.

Product

We added a timestamp parameter to the URLs of product images. This allows forcing a refresh of old cached images.

Web Shop

We added routes to open products, variants, customers and orders using their Shopify ID. This can be used to customize the admin UI to add a direct link to Tryton.

Changes for the Developer

Server

In this release we introduce notifications. Their messages are sent to the user via the bus as soon as they are created. They can be linked to a set of records or to an action that is opened when the user clicks on the notification.

It is now possible to configure a ModelSQL based on a table_query to be materialized. The configuration defines the interval at which the data must be refreshed, and a wizard lets the user force a refresh.
This is useful to optimize queries for which the data does not need to be perfectly fresh but that could benefit from some indexes.

Models, wizards and reports are now registered in the tryton.cfg module file. This reduces the memory consumption of the server, which no longer needs to import all installed modules, only the activated ones.
This is also a first step toward supporting typing with the Tryton modular design.

We added the attribute multiple to the <button> tag in tree views. When set, the button is shown at the bottom of the view.

Wizards can now be declared read-only. Such wizards execute in a read-only transaction, so write access on the records is not needed.

We now store only immutable structures in the MemoryCache. This prevents the alteration of cached data.

We added a new method to the Database to clear its cached properties. This is useful when writing tests that alter those properties.

We now use the SQL FILTER syntax for aggregate functions.

We now use the SQL EXISTS operator when searching Many2One fields with the where domain operator.

We introduced the trytond.model.sequence_reorder method to update the sequence field according to the current order of a record list.

We refactored trytond.config to add a cache, so it is no longer necessary to store the configuration in a global variable to avoid performance degradation.

We removed the has_window_functions function from the Database, because window functions are supported by all supported databases.

We added pair and unpair functions to trytond.tools, which are Python equivalents of sql_pairing.

Proteus

Proteus Model now supports total ordering.

Marketing

We now set the one-click unsubscribe header on marketing emails to let recipients unsubscribe easily.

Sales

We renamed the advance payment conditions to lines for more coherence.

Web Shop

We updated the Shopify module to use the GraphQL API, because the Shopify REST API is now deprecated.


December 16, 2025 07:00 AM UTC

December 15, 2025


Peter Bengtsson

Comparison of speed between gpt-5, gpt-5-mini, and gpt-5-nano

gpt-5-mini is 3 times faster than gpt-5 and gpt-5-nano.

December 15, 2025 11:37 PM UTC


The Python Coding Stack

If You Love Queuing, Will You Also Love Priority Queuing? • [Club]

Exploring Python’s heapq

December 15, 2025 04:53 PM UTC


Real Python

Writing DataFrame-Agnostic Python Code With Narwhals

If you're a Python library developer looking to write DataFrame-agnostic code, this tutorial will show how the Narwhals library could give you a solution.

December 15, 2025 02:00 PM UTC

Quiz: Writing DataFrame-Agnostic Python Code With Narwhals

If you're a Python library developer wondering how to write DataFrame-agnostic code, the Narwhals library is the solution you're looking for.

December 15, 2025 12:00 PM UTC


Python Bytes

#462 LinkedIn Cringe

Topics include docs, PyAtlas (an interactive map of the top 10,000 Python packages on PyPI), and Buckaroo.

December 15, 2025 08:00 AM UTC


Python GUIs

Getting Started With Flet for GUI Development — Your First Steps With the Flet Library for Desktop and Web Python GUIs

Getting started with a new GUI framework can feel daunting. This guide walks you through the essentials of Flet, from installation and a first app to widgets, layouts, and event handling.

December 15, 2025 06:00 AM UTC


Zato Blog

Microsoft Dataverse with Python and Zato Services

Overview

Microsoft Dataverse is a cloud-based data storage and management platform, often used with PowerApps and Dynamics 365.

Integrating Dataverse with Python via Zato enables automation, API orchestration, and seamless CRUD (Create, Read, Update, Delete) operations on any Dataverse object.

Below, you'll find practical code examples for working with Dataverse from Python, including detailed comments and explanations. The focus is on the "accounts" entity, but the same approach applies to any object in Dataverse.

Connecting to Dataverse and retrieving accounts

The main service class configures the Dataverse client and retrieves all accounts. Both the handle and get_accounts methods are shown together for clarity.

# -*- coding: utf-8 -*-

# Zato
from zato.common.typing_ import any_
from zato.server.service import DataverseClient, Service

class MyService(Service):

    def handle(self):

        # Set up Dataverse credentials - in a real service,
        # this would go to your configuration file.

        tenant_id = '221de69a-602d-4a0b-a0a4-1ff2a3943e9f'
        client_id = '17aaa657-557c-4b18-95c3-71d742fbc6a3'
        client_secret = 'MjsrO1zc0.WEV5unJCS5vLa1'
        org_url = 'https://org123456.api.crm4.dynamics.com'

        # Build the Dataverse client using the credentials
        client = DataverseClient(
            tenant_id=tenant_id,
            client_id=client_id,
            client_secret=client_secret,
            org_url=org_url
        )

        # Retrieve all accounts using a helper method
        accounts = self.get_accounts(client)

        # Process the accounts as needed (custom logic goes here)
        pass

    def get_accounts(self, client:'DataverseClient') -> 'any_':

        # Specify the API path for the accounts entity
        path = 'accounts'

        # Call the Dataverse API to retrieve all accounts
        response = client.get(path)

        # Log the response for debugging/auditing

        self.logger.info(f'Dataverse response (get accounts): {response}')

        # Return the API response to the caller
        return response

The logged response looks like this:

{'@odata.context': 'https://org1234567.crm4.dynamics.com/api/data/v9.0/$metadata#accounts',
'value': [{'@odata.etag': 'W/"11122233"', 'territorycode': 1,
'accountid': 'd92e6f18-36fb-4fa8-b7c2-ecc7cc28f50c', 'name': 'Zato Test Account 1',
'_owninguser_value': 'ea4dd84c-dee6-405d-b638-c37b57f00938'}]}

Let's look at more examples; you'll notice they all follow the same pattern as the first one.

Retrieving an account by ID

def get_account_by_id(self, client:'DataverseClient', account_id:'str') -> 'any_':

    # Construct the API path using the account's GUID
    path = f'accounts({account_id})'

    # Call the Dataverse API to fetch the account
    response = client.get(path)

    # Log the response for traceability
    self.logger.info(f'Dataverse response (get account by ID): {response}')

    # Return the fetched account
    return response

Retrieving an account by name

def get_account_by_name(self, client:'DataverseClient', account_name:'str') -> 'any_':

    # Construct the API path with a filter for the account name
    path = f"accounts?$filter=name eq '{account_name}'"

    # Call the Dataverse API with the filter
    response = client.get(path)

    # Log the response for auditing
    self.logger.info(f'Dataverse response (get account by name): {response}')

    # Return the filtered account(s)
    return response

Creating a new account

def create_account(self, client:'DataverseClient') -> 'any_':

    # Specify the API path for account creation
    path = 'accounts'

    # Prepare the data for the new account
    account_data = {
        'name': 'New Test Account',
        'telephone1': '+1-555-123-4567',
        'emailaddress1': 'hello@example.com',
        'address1_city': 'Prague',
        'address1_country': 'Czech Republic',
    }

    # Call the Dataverse API to create the account
    response = client.post(path, account_data)

    # Log the response for traceability
    self.logger.info(f'Dataverse response (create account): {response}')

    # Return the API response
    return response

Updating an existing account

def update_account(self, client:'DataverseClient', account_id:'str') -> 'any_':

    # Prepare the data to update
    update_data = {
        'name': 'Updated Account Name',
        'telephone1': '+1-555-987-6543',
        'emailaddress1': 'hello2@example.com',
    }

    # Call the Dataverse API to update the account by ID
    response = client.patch(f'accounts({account_id})', update_data)

    # Log the response for auditing
    self.logger.info(f'Dataverse response (update account): {response}')

    # Return the updated account response
    return response

Deleting an account

def delete_account(self, client:'DataverseClient', account_id:'str') -> 'any_':

    # Call the Dataverse API to delete the account
    response = client.delete(f'accounts({account_id})')

    # Log the response for traceability
    self.logger.info(f'Dataverse response (delete account): {response}')

    # Return the API response
    return response

API path vs. PowerApps UI table names

A detail to note when working with Dataverse APIs is that the names you see in the PowerApps or Dynamics UI are not always the same as the paths expected by the API. For example, the table that appears as "Account" in the UI is addressed as accounts in the API.

This pattern applies to all Dataverse objects: always check the API documentation or inspect the metadata to determine the correct entity path.
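As an illustration of inspecting the metadata, a hypothetical helper in the same style as the methods above could query Dataverse's EntityDefinitions endpoint to find the correct path:

def get_entity_set_name(self, client:'DataverseClient', logical_name:'str') -> 'any_':

    # Ask the Dataverse metadata which API path (EntitySetName)
    # corresponds to a table's logical name
    path = f"EntityDefinitions(LogicalName='{logical_name}')?$select=EntitySetName"

    # Call the Dataverse metadata API
    response = client.get(path)

    # E.g. for 'account', the response's EntitySetName is 'accounts'
    return response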

Working with other Dataverse objects

While the examples above focus on the "accounts" entity, the same approach applies to any object in Dataverse: contacts, leads, opportunities, custom tables, and more. Simply adjust the API path and payload as needed.

Full CRUD Support

With Zato and Python, you get full CRUD (Create, Read, Update, Delete) capability for any Dataverse entity. The methods shown above can be adapted for any object, allowing you to automate, integrate, and orchestrate data flows across your organization.

Summary

This article has shown how to connect to Microsoft Dataverse from Python using Zato, perform CRUD operations, and understand the mapping between UI and API paths. These techniques enable robust integration and automation scenarios with any Dataverse data.

More resources

Microsoft 365 APIs and Python Tutorial
➤ Python API integration tutorials
What is an integration platform?
Python Integration platform as a Service (iPaaS)
What is an Enterprise Service Bus (ESB)? What is SOA?
Open-source iPaaS in Python

December 15, 2025 03:00 AM UTC


Python Anywhere

Changes on PythonAnywhere Free Accounts

tl;dr

Starting in January 2026, all free accounts will shift to community-powered support instead of direct support and will have some reduced features. If you want to upgrade, you can lock in the current $5/month (€5/month in the EU system) Hacker plan rate before January 8 (EU) or January 15 (US). After that, the base paid tier will be $10/month (€10/month in the EU system).

If you’re currently a paying customer, you can learn more about the new pricing tiers and guidance for current customers here.

December 15, 2025 12:00 AM UTC