Planet Python
Last update: May 23, 2025 01:44 AM UTC
May 22, 2025
First Institute of Reliable Software
How to separate test data from code: testing with CSV in pytest
Want to test a function with dozens of variants? I'll show you how to organize test cases in a CSV file and connect them to pytest using the pytest-csv-params plugin.
Mirek Długosz
Understanding Python web deployment
Trying to deploy a server-side Python web application may be overwhelming. Many tutorials will gladly tell you how to run it inside gunicorn, and how to put nginx in front of it, but they usually skim over the details of why all this is needed. Why do you need both nginx and gunicorn? Why gunicorn and not Waitress or daphne? What is WhiteNoise, and do you still need nginx if you have it? What even is WSGI?
In this article I will share my mental model of distinct pieces and how they fit together. If I badly missed a mark somewhere, please reach out and tell me how wrong I am.
This is one of those topics that really benefits from a non-linear reading. If you are here to learn, you might want to skim the headings and read the sections in a different order, or re-read selected sections out of order after reading the whole thing.
The main model
Let’s start with a high-level model of handling an HTTP request.

That model follows the basic HTTP request/response cycle. The server passively waits for connections, which are initiated by clients. The client composes and sends a request. The server generates a response and sends it back, at which point the connection is closed. If the client decides it needs anything more, it sends a new HTTP request.
The real world is complicated by HTTP/2, websockets and probably other things. HTTP/2 allows the server to group multiple responses when addressing a single request. Websockets only start with an HTTP request/response before switching to a completely different protocol. Some of these complexities will become relevant later.
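The basic cycle above can be observed end to end with nothing but the Python standard library. This is a deliberately minimal sketch, not a production setup; the handler and the `hello` body are illustrative:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # the server generates a response for each request it receives
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# the server passively waits for incoming connections
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# the client initiates a connection, sends one request, reads one response
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
response = conn.getresponse()
body = response.read()
print(response.status, body)  # 200 b'hello'
conn.close()
server.shutdown()
```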
Accepting HTTP request
HTTP may seem simple. Client prepares a text message defining a request and sends it to a server. Server prepares a text message constituting a response and sends it back. This works well for text-based content, such as HTML, XML and JSON.
But there is so much more than that. The current iteration of the HTTP standard has a separate document on caching alone, which is about 30 pages long. There are three different compression algorithms, each described in detail in a separate document. There’s an optional section about partial access to large resources. Then the whole thing may be wrapped in encrypted frames. HTTPS also involves a special procedure for the initial handshake and a fair deal of cryptography. And on top of all that, you need to decide what to do with clients that do not conform to the specification.
Then, it’s somewhat rare for a single physical machine to serve a single domain (website). On one hand, one machine may serve multiple domains, and there must be something that can decide how exactly a specific request directed at one of the managed domains should be handled. On the other hand, modern web-based applications are usually distributed across multiple physical machines, and there must be something that can choose a specific machine to handle a particular incoming request. These days the decision is often made based on the request content, analysis of recent traffic and knowledge of the internal service infrastructure.
Unfortunately, the world out there is a dangerous place. Some invalid requests are actually malicious. What if a client sends hundreds of thousands of requests? What if a client opens a connection and never sends any data? What if a client sends less data than it claims it will? These sorts of questions do not come naturally from reading the specification alone, and the answers have been hard won over the years.
As you can imagine, there’s a lot of complexity, depth and lessons learned in the last 30 years of HTTP usage. Most application developers don’t want to deal with any of that - they would rather focus on the simple request/response cycle and only dive deep into other areas as they are relevant to the problems they face. They would happily outsource everything else to some other software.
That other software is usually called HTTP server, load balancer or router. Some popular programs in that problem space are nginx, traefik and Kubernetes ingress.
It’s worth noting that many web app hosting platforms that focus on ease of use have their own setup and do not give you any choice in the matter. Part of the value proposition of Heroku or Fly.io is that the first stage is taken care of by dedicated professionals, and you can jump straight into step two or even three.
Translating between HTTP and programming language
Thanks to HTTP server, you have a sequence of bytes that constitute a valid HTTP request directed at your application. Your job is to produce a sequence of bytes that will constitute a valid HTTP response.
You can write an application that works on these primitives. Python’s built-in HTTP server is basically that. But for most application developers, this is too low-level - they don’t want to work with sequences of bytes; they want native objects and a library of helpers that make it easy to parse a request and construct a response. Reading and setting a cookie should take one line of code each.
Translating between a sequence of bytes and some kind of internal programming language objects (and back from objects to a sequence of bytes when processing a response) is the main goal of the second step in the main model. The name and specifics of the approach vary by programming language. Usually that component is called an “application server”, “middleware” or “translation framework”.
“Middleware” is an overloaded term. The concept is also used in traefik, an HTTP server (step number 1), and Django, a Python web application development framework (step number 3). The world is a tangled web, and depending on where you draw the lines and how much you are willing to squint, a lot of things are “in the middle” between some other things.
You can think of an application server as a tool written in some programming language that helps developers working in that language to work with HTTP, by translating between byte sequences and programming language constructs. There are some exceptions, which I’ll cover briefly near the end of the article.
Python - WSGI
In the Python community, application servers are usually designed to run WSGI-compliant applications.
WSGI is an abstract specification, published as PEP-333 and PEP-3333. It’s like a contract - for the application server, it defines what it must provide to the application, and what it should expect back. For the application, it defines what it must provide back to the application server, and what it should expect from it. The idea is that you should be able to run your application on any application server - and application server authors may write code once and support all applications. It’s an example of the adapter design pattern.
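To make the contract concrete: a complete WSGI application is just a callable taking an environ dict and a start_response function. A minimal sketch, where the driver code at the bottom stands in for what a real application server does (the environ keys are the ones the spec defines; the rest is illustrative):

```python
def application(environ, start_response):
    """A minimal WSGI app: takes an environ dict, returns an iterable of bytes."""
    body = f"Hello from {environ['PATH_INFO']}".encode()
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# A real application server (gunicorn, Waitress, ...) drives the app;
# here we fake the server side of the contract:
captured = {}

def fake_start_response(status, headers):
    captured["status"], captured["headers"] = status, headers

chunks = application(
    {"REQUEST_METHOD": "GET", "PATH_INFO": "/demo"}, fake_start_response
)
print(captured["status"], b"".join(chunks))  # 200 OK b'Hello from /demo'
```

The same `application` object could be served for real with the standard library’s `wsgiref.simple_server`, or handed to gunicorn unchanged.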
Popular application servers are gunicorn, Waitress and bjoern.
Ruby - Rack
Rack is the main application server for Ruby web applications. These days it is both a concrete implementation of application server software, and de facto specification of how Ruby applications should work with application servers. unicorn and Puma are independent projects capable of hosting “Rack applications”.
Java - Jakarta EE
Jakarta EE is a bit of an overloaded term that may refer to either Jakarta EE software or the Jakarta EE specification. Jakarta EE software is a concrete implementation of the Jakarta EE specification. As far as I understand, the software came first and the community later formalized whatever it was doing into a specification.
So Jakarta EE software is a bit similar to Rack, as it’s a specific piece of software you can use which became a de facto standard. The Jakarta EE specification is a bit similar to WSGI, because it’s just a document - the idea is that you should be able to use any application server with any application, as long as both are compliant with that document.
The most notable application servers are Tomcat and WildFly.
Node.js - built-in
Node.js has an HTTP server and helpers built in. That’s because Node.js is built on top of a browser JavaScript engine, and JavaScript in a browser needs to deal with HTTP requests all the time. Node.js, as the de facto JavaScript backend platform, already does the same thing that an application server does for other languages.
Rust - compiled-in
Rust does not provide HTTP helpers in its standard library, and there is no generally accepted application server specification. Instead, there are multiple HTTP server libraries that solve many of the problems your application is likely to deal with across the entire model. When you run an application that uses one of these libraries, it binds to a socket and reads and writes byte sequences directly. At runtime, a single Rust application effectively merges steps number 2 and 3 of the main model.
Popular libraries are Actix Web, axum and Rocket.
Adding value
At this point the HTTP request is in a form suitable for a programming language. This is the step where the majority of web application developers spend most of their time. Everything up to this point was groundwork - things that had to be done, but it makes little difference how exactly they are done. Now we are getting close to the place that differentiates your application from all the other applications in the world.
In practice, many problems at this point are still shared, and there is room for outsourcing them. The objects and structures exposed to a programming language may still be relatively low-level and a bit awkward to work with. The application likely needs to connect to one of the popular database engines. A large part of the application might be available only to authenticated users, so there is a need for an authentication framework. There are probably various levels of permissions and roles that users might have.
These problems are solved by frameworks or libraries. They differ mainly in how many of these problems they solve, and how opinionated they are in solutions they provide. Some popular frameworks are Django (Python), Flask (Python), FastAPI (Python), Quart (Python), Ruby on Rails (Ruby), Grape (Ruby), Spring (Java), Grails (Java), Apache Wicket (Java), Vaadin (Java), Apache Struts (Java), Meteor (JavaScript) or Next.js (JavaScript). In general, communities of most popular languages came up with at least one web application framework.
Noteworthy exceptions and complications
WSGI and ASGI
ASGI is another specification for Python. It serves the same role as WSGI.
Why two specifications? There are two main reasons. First, WSGI is written around the HTTP request/response loop, and as such is unable to support newer protocols like websockets. Second, WSGI supports only synchronous functions, while asynchronous code has been increasingly popular since it was introduced in Python 3.5. Some application developers concluded that the performance gains they receive from asynchronous code are worth more to them than WSGI compatibility. ASGI was created in response to these needs.
From the main model perspective, WSGI and ASGI are functionally the same thing.
Practically speaking, gunicorn is the go-to WSGI application server and uvicorn is the go-to ASGI application server. Django is compatible with both WSGI and ASGI, while Flask users can easily move to Quart, developed by the same team. Some newer frameworks, like FastAPI, are exclusive to ASGI. There is also at least one application server capable of serving either WSGI or ASGI applications.
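The difference is visible in the shape of the application itself. Here is a minimal ASGI application, driven by hand with fake receive/send callables standing in for what uvicorn would do (a sketch; the event shapes follow the ASGI HTTP spec, the driver is illustrative):

```python
import asyncio

async def app(scope, receive, send):
    """A minimal ASGI app: one HTTP response emitted as two events."""
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello, ASGI!"})

# A real ASGI server (uvicorn, ...) drives the app; here we fake it:
async def drive():
    sent = []
    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}
    async def send(event):
        sent.append(event)
    await app({"type": "http", "path": "/"}, receive, send)
    return sent

events = asyncio.run(drive())
print(events[0]["status"], events[1]["body"])  # 200 b'Hello, ASGI!'
```

Note that, unlike WSGI’s single call-and-return, the app is an async callable exchanging events, which is what lets ASGI carry websockets and other non-request/response protocols.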
WhiteNoise
WhiteNoise is a Python package for serving static files efficiently. Static files are all files not generated dynamically by a Python program, such as stylesheets, frontend scripts, images and fonts.
It can act as a generic WSGI application wrapping around a target WSGI application. You can think of it as sitting somewhere between steps number 2 and 3 of the main model. However, it also provides facilities to easily integrate with Django, and in that scenario it belongs firmly to step number 3.
The common knowledge in the Python community is that static files should not be served by a Python application, but should instead be handled by an HTTP server at an earlier stage of the request/response cycle. WhiteNoise rejects that notion. It promises high performance when serving static files while making sure they are correctly cached by CDNs, proxies and clients. It takes over one of the responsibilities of an HTTP server.
Given all the things that an HTTP server does, personally I would not be bold enough to skip it and expose a WhiteNoise-enabled application server directly to the world. But it might be worth considering in some special cases, like inside a container image or for an internal company service.
Phusion Passenger
Phusion Passenger is an application server notable for two main things. First, it’s an example of an application server written in a different language than the target application - Phusion Passenger is written in C++ and initially hosted only applications written in Ruby. Second, it’s an example of an application server compatible with multiple programming languages - newer versions can also host Python applications (compatible with WSGI) and Node.js applications.
Granian
Granian is an application server written in Rust, capable of hosting both WSGI and ASGI Python applications. It’s notable because it’s an example of an application server written in a different language than the target application, but also because it’s an application server supporting both Python specifications.
Spring Boot
Spring Boot supports multiple deployment models. It can be built into a package compatible with the Jakarta EE specification, intended to run on an application server. But there is also an option to build it into a standalone HTTP server application that directly responds to incoming HTTP requests. That option is similar to how HTTP servers are built in Rust.
Django Weblog
Why, in 2025, do we still need a 3rd party app to write a REST API with Django?
The question was put to the president of the DSF this year at FOSDEM, after his talk. And it is clearly a legitimate one!
But… is it True? Do we actually need a 3rd party app to write an API with Django?
In a lot of cases, when you require a complex and full-featured API, I would recommend you do use one. Django REST Framework and Django Ninja are very sound choices, with a bunch of nifty things you might need in a bigger project.
But… what if what you need is a simple REST API that does CRUD? Do you really need a 3rd party app to do that?
Let's try not to!
Let's first ask what a REST API is in the context of this article. Let's limit ourselves to building this:
- a URL that answers to GET requests with a list of records of a single model type
- POST-ing to that same URL should create a new record
- a second URL with the primary key of a record tagged to the end. When GET-ing that URL, one should receive only that single record, in a similar format as in the list
- PUT-ing data to that URL should update the record and return that record with updated values
- DELETE-ing to that same URL should delete the record
- everything should be achieved using JSON
Listing records from a model
Chances are you have heard of generic class-based views (CBVs) in Django; the one that comes to mind when it comes to listing records is the built-in django.views.generic.ListView.
ListView extends 2 other classes, BaseListView and MultipleObjectTemplateResponseMixin. Since we want to build an API, we clearly don't need to extend anything template-related. Looking at what BaseListView provides, one can notice the only thing really missing there is a render_to_response method. And this is going to be the case for most of the other base classes.
This sets our starting point!
The type of response we want to render is a JSON response, and Django already provides one. So let's build a JsonViewMixin that looks like this for now:
class JsonViewMixin(View):
    response_class = JsonResponse
    content_type = 'application/json'

    def render_to_response(self, context, **response_kwargs):
        response_kwargs.setdefault("content_type", self.content_type)
        return self.response_class(context, **response_kwargs)
The next thing we have to tackle is that the context returned by BaseListView's get_context_data contains much more data than what we strictly need for this article. Also, the list of records returned is not serialized to JSON.
Serializers for multiple formats already exist in Django core (see django.core.serializers) but I will go a different route here. There is another way to serialize data in Django that you are likely familiar with but is not labelled as clearly: forms.
Forms are used in regular views to serialize models to simple types, understandable by HTML forms (mostly text), and vice versa. This is very close to what we need, since JSON is also mostly text-based.
To start with, using forms as serializers requires creating a new form instance for each record in the list we want to return.
Let's add that to the mixin!
    def serialize_many(self, obj_list):
        return [self.serialize_one(obj) for obj in obj_list]

    def serialize_one(self, obj):
        form = self.get_form_for_object(obj)
        serialized = form.initial
        serialized['pk'] = obj.pk  # forms strip pk's from their data
        return serialized

    def get_form_for_object(self, obj):
        form_class = self.get_form_class()
        kwargs = self.get_form_kwargs()
        kwargs['instance'] = obj
        return form_class(**kwargs)
Why use forms?
ModelForms are a built-in and robust Django tool built around the idea of handling the transition between Model fields and simple (and also JSON-serializable) types (mostly text and numbers). Which is exactly what we want from (de-)serializers in a lot of cases.
If you need to (de-)serialize a custom field type, Django documents creating a custom form field, and this is covered in various places like StackOverflow.
Moving on to our first View
Now that we have a tool to serialize the records list returned by BaseListView, let's write the first version of JsonListView. As I alluded to earlier, we need to strip down what is returned from get_context_data.
class JsonListView(JsonViewMixin, BaseListView):
    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        # wrapping this in a dict for security reasons
        return {
            'results': self.serialize_many(context['object_list'])
        }
This won't work yet, because get_form_class, which I used in the JsonViewMixin, is only provided by classes that descend from FormMixin. Since we want this view to handle both listing and creating records, let's go and fix that in the next section!
1 down, 3 to go: Adding records
First things first, let's rebrand JsonListView and make it inherit from BaseCreateView.
class JsonListCreateView(JsonViewMixin, BaseCreateView, BaseListView):
Form creation and validation will be handled automatically by Django!
Almost…
The first concern will be with populating the form with POST data. While Django does this for you when dealing with URL-encoded or multipart form data, it does not (yet) handle JSON-encoded POST content.
But this can be handled by taking advantage of the modularity of Django's generic class-based views and overriding get_form_kwargs.
Let's address this (in a naïve way) within the mixin as it will be applicable to any JSON view:
    def get_form_kwargs(self):
        kwargs = super().get_form_kwargs()
        if (
            len(kwargs.get("data", {})) == 0
            and self.request.content_type == "application/json"
            and len(self.request.body)
        ):
            # The request has a JSON body that we did not decode
            kwargs["data"] = json.loads(self.request.body)
        return kwargs
An issue that could arise here is that a JSONDecodeError could be triggered. get_form_kwargs does not return a response, so I don't think it is the right place to handle the exception.
The post method does return a response; let's wrap the original one with a try/except (still in the mixin):
    def post(self, request, *args, **kwargs):
        try:
            return super().post(request, *args, **kwargs)
        except json.decoder.JSONDecodeError as e:
            return self.response_class(
                {"error": f"json decode error: {e.msg}"},
                status=HTTPStatus.UNSUPPORTED_MEDIA_TYPE,
            )
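For context, the e.msg used above comes from the standard library's json.JSONDecodeError. Here is the same decode-and-map-to-error pattern outside of any Django view (parse_body is an illustrative helper, not part of the article's code):

```python
import json
from http import HTTPStatus

def parse_body(raw):
    """Decode a request body, mapping decode failures to an error payload."""
    try:
        return json.loads(raw), HTTPStatus.OK
    except json.JSONDecodeError as e:
        # e.msg holds just the short message, without line/column info
        return (
            {"error": f"json decode error: {e.msg}"},
            HTTPStatus.UNSUPPORTED_MEDIA_TYPE,
        )

ok_data, ok_status = parse_body(b'{"title": "Dune"}')   # valid JSON
bad_data, bad_status = parse_body(b"not json")          # 415 + "Expecting value"
```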
Speaking of returning responses, the BaseCreateView class is built around HTML principles, and its form_valid and get methods are both designed to render a form (via get_context_data).
In the case of our REST API, the "create" part of things should not be involved with GET requests.
Furthermore, the reply to an invalid form submission should comprise only an error (status + message) and should not require anything provided by get_context_data.
Still, in regards to form validation, a valid form should not result in a redirect (the behaviour of BaseCreateView) but rather in a 201 response, optionally containing the representation of the created record.
The form handling part is generic enough to put in the mixin itself. The behaviour of GET is specific to the list/create view though.
Let's write the code accordingly:
class JsonViewMixin(View):
    ...

    def form_invalid(self, form):
        return self.response_class(
            {'errors': form.errors},
            # UNPROCESSABLE_CONTENT needs Python 3.13+;
            # on older versions use the equivalent UNPROCESSABLE_ENTITY (422)
            status=HTTPStatus.UNPROCESSABLE_CONTENT
        )

    def form_valid(self, form):
        self.object = form.save()
        context = self.serialize_one(self.object)
        return self.response_class(
            context,
            status=HTTPStatus.CREATED
        )

class JsonListCreateView(JsonViewMixin, BaseCreateView, BaseListView):
    ...

    def get_context_data(self, **kwargs):
        # Explicitly point to BaseListView instead of `super()`
        # to prevent actions taken by
        # BaseCreateView's implementation of `get_context_data`
        context = BaseListView.get_context_data(self, **kwargs)
        return {
            'results': self.serialize_many(context['object_list'])
        }
Halfway there!
That was everything needed to handle the create and list portions of our CRUD REST application. Now we can move on to the read, update, delete part. We'll do that in a second View class, as it requires a slightly different URL, one that contains the pk of the resource.
Both read and update functionalities are provided by Django's BaseUpdateView but, as with the create/list view, the major difference in this case will be that we need a much simpler context.
class JsonReadUpdateView(JsonViewMixin, BaseUpdateView):
    def get_context_data(self, **kwargs):
        return self.serialize_one(self.object)
That's it!!!
Well, almost…
BaseUpdateView is wired to answer POST requests for updating a record, while REST good practices want us to use PUT instead. The fix for this is to raise an error in reply to POST calls while directing PUTs to the parent's post implementation.
    def post(self, request, *args, **kwargs):
        # an empty dict because JsonResponse requires a data argument
        return self.response_class({}, status=HTTPStatus.METHOD_NOT_ALLOWED)

    def put(self, request, *args, **kwargs):
        # note: no explicit `self` here - passing it alongside `super()`
        # would shift every argument by one
        return super().post(request, *args, **kwargs)
One more fix…
Our mixin implementation returns a 201 on form_valid. In the case of any view which is not creating a record, this should be a 200. Here are the necessary changes:
class JsonViewMixin(View):
    form_valid_status = HTTPStatus.OK  # new property

    def form_valid(self, form):
        self.object = form.save()
        context = self.serialize_one(self.object)
        return self.response_class(
            context, status=self.form_valid_status  # use the new property
        )

    ...

class JsonListCreateView(JsonViewMixin, BaseCreateView, BaseListView):
    form_valid_status = HTTPStatus.CREATED  # override in case of creation
Why PUT and not PATCH?
BaseUpdateView builds a form that expects all fields to be filled. On a partial update, non-present fields would be reset to empty on the existing record.
I'll leave it as an exercise to the reader to override that behaviour in the case of a PATCH request, in order to "pre-fill the form" with existing values, maybe by using the form's initial property… 😉
Finally…
The last bit of logic we have to implement is for deleting objects. Most of the code from Django's BaseDeleteView is related to creating and validating a form for confirming the user's intent to delete the resource. This is usually not the expected behaviour for a REST API, where that part is handled by whatever is calling the API.
Furthermore, it doesn't implement a delete method. In the HTML world of Django's BaseDeleteView, everything is done using GET and POST. So we are (mostly) on our own for this last part.
We can still leverage the get_object implementation provided by BaseUpdateView, though.
Here is what implementing the delete operation for our read/update/delete view looks like:
class JsonReadUpdateDeleteView(JsonViewMixin, BaseUpdateView):  # name changed
    ...

    def delete(self, request, *args, **kwargs):
        self.get_object().delete()
        # data is required by JsonResponse
        return self.response_class(data={}, status=HTTPStatus.NO_CONTENT)
Conclusion
This implementation is basic and clearly naïve. But it gets the job done!
And this can all be done by leveraging Django-provided tools and mechanisms, mainly using Django's generic CBVs.
Generic class-based views have been built in such a modular fashion that implementing one's own mini REST framework can be done in less than 100 lines of code.
A non-negligible advantage of such an approach is that most libraries written to work with Django's generic CBVs are also likely to work with this implementation.
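As a usage sketch, wiring the finished views into a URLconf might look like this. The Book model, its fields, and the module paths are hypothetical placeholders; the views pick up `model` and `fields` through the ModelFormMixin attributes they inherit:

```python
# urls.py - hypothetical wiring; Book and its fields are placeholders
from django.urls import path

from .models import Book
from .views import JsonListCreateView, JsonReadUpdateDeleteView

urlpatterns = [
    # GET lists records, POST creates one
    path("api/books/",
         JsonListCreateView.as_view(model=Book, fields=["title", "author"])),
    # GET reads, PUT updates, DELETE deletes a single record
    path("api/books/<int:pk>/",
         JsonReadUpdateDeleteView.as_view(model=Book, fields=["title", "author"])),
]
```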
This rather simple approach can certainly be improved (handling exceptions in delete… anyone?) and is clearly not going to cover everybody's use cases. It most likely misses a bunch of edge cases, too!
And if you are building a large REST API, I would say you are probably still better off using a 3rd party library, but… to me, the answer to the question “Why do you need a 3rd party application to write a simple REST application with Django?” is: "You don’t".
If you enjoyed this article, read more from Emma on her blog, Emma has a blog, where this piece was originally published. Or watch the FOSDEM talk that Emma reacts to:
Thibaud Colas - Shifting DX expectations: keeping Django relevant 😬 | FOSDEM 2025
EuroPython
AI Keynote Reveal, Python Documentary Premiere & Last Chance for Combined Tickets
Hello, Pythonistas! 🐍
Tickets are still available but combined and tutorial tickets are almost sold out! You can get your tickets at europython.eu/tickets
📣 Meet our Keynoters
We are pleased to announce Nerea Luis will be doing a Keynote at Europython about the tangible impact that developers, managers, and other tech sector professionals can expect from the rise of generative AI.
Nerea Luis is an independent consultant and advisor in Artificial Intelligence and cofounder of Lumi Labs. She has a PhD in Computer Science and 10 years of experience in the field, and she has led the direction of projects related to Natural Language Processing and Computer Vision. She has received recognition from Google, the Spanish Royal Family, and Forbes, among others.
💰 Sponsorship
If you're passionate about supporting EuroPython and helping us make the event accessible to all, consider becoming a sponsor or asking your employer to join us in this effort.
By sponsoring EuroPython, you’re not just backing an event – you're gaining highly targeted visibility and the chance to present your company or personal brand to one of the largest and most diverse Python communities in Europe and beyond!
We have a few last sponsorship slots available. Along with our main packages, there are optional add-ons and extras.
👉 More information at: https://ep2025.europython.eu/sponsorship/sponsor/
👉 Contact us at sponsoring@europython.eu
Speaker Mentorship
First Time Speakers’ Workshop
New to speaking? A seasoned speaker? Somewhere in between?
No worries! The EuroPython First Time Speakers’ Workshop is for everyone! 🐍 You will gain key presentation skills, practical advice, and valuable insights from seasoned speakers.
When: June 4, 2025, at 18:00 CEST on Zoom. Attendance is free, but kindly register: https://forms.gle/T8rc73sbyu3KbLNKA
More details: https://europython.eu/programme/mentorship/
🎥 Documentary on the History of Python
The filmmakers from Cult Repo, formerly known as Honeypot, are working on a documentary about the history of Python and its vibrant community. It features over 20 core developers and takes us on a journey from the first days of Python to the latest developments. We’re thrilled to announce that the documentary will premiere at EuroPython! Stay tuned for more details.
👥Call for Volunteers
Want to experience EuroPython from the inside? We’re looking for on-site volunteers to help us from July 14–20!
Volunteering is a great way to gain hands-on experience, meet new people, and attend the conference in a unique way. Whether you're great at helping people, love the buzz of registration, want to chair a session, or just enjoy being part of a team — we have a role for you.
👉Apply by: June 1st
👉More info: europython.eu/volunteers
❓What does Guido van Rossum like about EuroPython?
Want to find out? Follow us on YouTube and social media! Over the next few days, we’ll be sharing short conversations with community members about what they're looking forward to at the next EuroPython.
EuroPython Society
🏰 Call for Venues - EuroPython 2026
Are you a community builder who dreams of bringing EuroPython to your city? The Call for Venues for EuroPython 2026 is now open!
If you want to propose a location on behalf of your community, please fill in the following form: https://forms.gle/ZGQA7WhTW4gc53MD6
📊 Board Report
The EuroPython Society is the volunteer-run non-profit that organizes the EuroPython conference and supports the wider Python community in Europe. To stay transparent and keep everyone informed, the Board shares regular updates on what we’re working on.
The April board report is ready: https://europython-society.org/board-report-for-april-2025/
💞 Community Outreach
PyCon US
This year, the EuroPython Society had a booth at PyCon US for the very first time! 🎉 Curious how it went?
👉 Check out our blog post: https://europython-society.org/europython-society-at-pycon-us-2025/
💞Upcoming Events in the Python Community
- PyCon Italia, Bologna, 28-31 May https://2025.pycon.it/en
- EuroPython, Prague, 14-20 July https://ep2025.europython.eu
- EuroSciPy, Kraków, 18-22 August https://euroscipy.org/2025/
- PyCon Poland, Gliwice, 28-31 August https://pl.pycon.org/2025/en/
- PyCon Greece, Athens, Greece, 29-30 August https://2025.pycon.gr/en/
- PyData Berlin, 01-03 September https://pydata.org/berlin2025
- PyCamp CZ 25 beta, Třeštice, 12-14 September https://pycamp.cz/
- PyCon UK, Manchester, 19-22 September https://2025.pyconuk.org/
- PyCon Estonia, Tallinn, 2-3 October https://pycon.ee/
- PyCon Finland, Jyväskylä, 17 October https://2025.ploneconf.org/pyconfi-2025
- PyCon Sweden, Stockholm, 30-31 October https://pycon.se/
- PyCon France, Lyon, 30 October-2 November https://www.pycon.fr/2025/en/index.html
👋 See You All Next Month
And in the meantime, follow us on social media:
- LinkedIn: https://www.linkedin.com/company/europython/
- X: https://x.com/europython
- Mastodon: https://fosstodon.org/@europython
- BlueSky: https://bsky.app/profile/europython.eu
- YouTube: https://www.youtube.com/@EuroPythonConference
- Instagram: https://www.instagram.com/europython/
EuroPython Society
Board Report for April 2025
In April, our top priority was to kick off discussions about the 2026 venue. We successfully launched the Call for Venues and are now actively working on choosing a new location for next year.
We also helped release the schedule and coordinated all the related activities.
On top of that, we worked with our event manager to confirm a few more details, sign contracts, handle billing, and take care of similar administrative tasks.
Individual reports:
Artur
- Sponsor management
- Various small infrastructure updates, mainly in our static files hosting and the internal discord bot.
- Adding data sync and parsing from pretalx and pretix to the internal discord bot.
- Call for venues 2026 and related calls with communities
- Regular sync with the event manager
- Program: coordination of the documentary premiere with producers
Mia
- Communications: migration to Canva; copywriting for social media & articles
- Call for Venues 2026: kick-off and calls with communities
- Payments
- Design: coordination of design items with the designer; reviews
- Website: PR reviews; page updates
- Grants program: communication with recipients and administrative tasks
- Community outreach: DjangoCon Europe
- Program: coordination of the documentary premiere with producers
- Regular sync with the event manager
- PSF Grant Work
Aris
- Ops team activities, meetings and planning
- Tickets/vouchers documentation and workflow
- Accounting updates
- Payments
Ege
- Promotional material orders for PyCon US: Stickers, Posters, Retractable roll up banner, laminated info sheets
- Website and Programme API updates
- PR reviews
Shekhar
- FinAid: Completed Round 1 of our Financial Aid Programme
- Visa Application Letter: Distribution and Management
Anders
- VAT Report for March
Cyril
- …
Python⇒Speed
Loading Pydantic models from JSON without running out of memory
You have a large JSON file, and you want to load the data into Pydantic. Unfortunately, this uses a lot of memory, to the point where large JSON files are very difficult to read. What to do?
Assuming you’re stuck with JSON, in this article we’ll cover:
- The high memory usage you get with Pydantic’s default JSON loading.
- How to reduce memory usage by switching to another JSON library.
- Going further by switching to dataclasses with slots.
May 21, 2025
Real Python
Nested Loops in Python
Nested loops in Python allow you to place one loop inside another, enabling you to perform repeated actions over multiple sequences. Understanding nested loops helps you write more efficient code, manage complex data structures, and avoid common pitfalls such as poor readability and performance issues.
By the end of this tutorial, you’ll understand that:
- Nested loops in Python involve placing one loop inside another, enabling iteration over multiple sequences or repeated actions.
- Situations where nested loops are a good idea include handling multidimensional data, generating patterns, and performing repetitive tasks with multiple layers of iteration.
- You can break out of nested loops by using the break statement, which exits the innermost loop when a condition is met.
- Disadvantages of nested loops include potential performance bottlenecks, poor readability, and variable scoping issues.
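Since break only exits the innermost loop, a common idiom (a standard Python pattern, not code from the tutorial itself) is to wrap the nested loops in a function and return as soon as the target is found:

```python
def find_pair(grid, target):
    """Return the (row, column) of target in a 2D list, or None."""
    for i, row in enumerate(grid):
        for j, value in enumerate(row):
            if value == target:
                # return leaves both loops at once; a bare break
                # here would only exit the inner loop.
                return (i, j)
    return None


grid = [[1, 2], [3, 4]]
print(find_pair(grid, 3))  # → (1, 0)
```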
This tutorial provides practical examples and optimization techniques for using nested loops effectively in your Python programs.
Get Your Code: Click here to download the free sample code that you’ll use to learn about nested loops in Python.
Take the Quiz: Test your knowledge with our interactive “Nested Loops in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Nested Loops in Python
Nested loops allow you to perform repeated actions over multiple sequences, but is there more? Test your understanding of nested loops in Python!
Getting Started With Nested Loops in Python
Loops are fundamental building blocks in programming, allowing you to iterate through actions efficiently. In Python, there are two primary types of loops: the for loop and the while loop. Both serve the same purpose, executing a block of code multiple times, but they differ in how they operate and in their use cases:
- A for loop iterates over a sequence, such as a list or a range, and executes a block of code for each item. For loops are useful when the number of iterations is known beforehand.
- A while loop runs as long as a specified condition remains true, making it useful when the number of iterations isn’t known in advance.
You create a nested loop by placing one loop inside another. This structure is especially helpful when working with multidimensional data, generating patterns, or handling tasks that involve several layers of repetition.
In a nested loop, the first loop is called the outer loop, and the loop inside is the inner loop. So, for every iteration of the outer loop, the inner loop runs completely before the outer loop moves to the next iteration.
Here’s the basic syntax of a nested loop:
for outer_variable in outer_iterable:
    for inner_variable in inner_iterable:
        <body>
The outer_iterable must be a list, a dictionary, or some other sequence of items that you can iterate over. The same applies to the inner_iterable. The <body> inside the inner loop contains the code that runs once for each step through the inner_iterable. Since the inner loop is nested inside the outer loop, it runs in full for each iteration of the outer loop.
A good analogy for a nested loop is the hour and minute hands of a clock. The hour hand moves slowly around the clock, completing one full revolution every twelve hours. Meanwhile, the minute hand moves at a much faster rate, completing a revolution every hour. While both hands rotate at different speeds, they work together, each completing their own cycle within the same clock.
Here’s how the clock logic looks in Python code:
>>> for hour in range(0, 24):
...     for minute in range(0, 60):
...         print(f"{hour:02d}:{minute:02d}")
...
00:00
00:01
00:02
⋮
23:57
23:58
23:59
As you can see, every time the minute hand completes a cycle, the hour hand moves to the next hour. The :02d format specifier ensures that each number is printed as a two-digit-wide integer value.
Now that you’ve been introduced to nested loops, it’s time to explore some practical examples. You’ll become familiar with writing programs using nested loops in the following section.
Exploring Practical Examples of Nested Loops
As you just learned, nested loops have a number of use cases. Here, you’ll have a look at a few examples. These examples are interesting and practical, allowing you to have fun as you explore their syntax and semantics.
Printing Patterns With Nested Loops
Being able to print any pattern of your choosing is a fascinating feat in programming. One way you can achieve this is by understanding how nested loops work. The code snippet below builds a sail pattern using a few simple symbols. While this may not seem particularly exciting, consider it a first step toward creating something spectacular—like a spaceship:
 1 >>> height = 6
 2 >>> sail_patterns = "*#-x+o"
 3 >>> for row in range(height):
 4 ...     pattern = ""
 5 ...     spacing = " " * (height - row)
 6 ...     for symbol in sail_patterns:
 7 ...         pattern += symbol * row + spacing
 8 ...
 9 ...     print(pattern)
10 ...
11
12 * # - x + o
13 ** ## -- xx ++ oo
14 *** ### --- xxx +++ ooo
15 **** #### ---- xxxx ++++ oooo
16 ***** ##### ----- xxxxx +++++ ooooo
Here’s what the code does line by line:
Read the full article at https://realpython.com/nested-loops-python/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Nested Loops in Python
In this quiz, you’ll test your understanding of Nested Loops in Python.
Nested loops in Python involve placing one loop inside another, enabling iteration over multiple sequences or repeated actions. They’re particularly useful for handling multidimensional data, generating patterns, and performing repetitive tasks with multiple layers of iteration.
EuroPython Society
EuroPython Society at PyCon US 2025
This year, for the first time, the EuroPython Society had a booth at PyCon US. 🎉
It was a great chance to meet people from other Python communities, catch up with old friends, and make some new ones. At our booth, we gave out stickers (including a sneak peek of our 2025 ones), talked about our grants programme, and shared what the Society has been up to.
PyCon US was also where something special was announced. The filmmakers from Cult Repo, formerly known as Honeypot, are working on a documentary about the history of Python and its vibrant community. It features over 20 core developers, from the early days of Python to the newest contributors. They showed a trailer during the opening session on the third day and announced that the premiere will happen at EuroPython!
We managed to take a group photo with some of us during the PyLadies auction, which many of us attended. With such a full schedule, it can be tricky to get everyone together, so we were happy to catch that moment.
Outside of the talks and sessions, we also spent some evenings together just hanging out and catching up. These moments are always a highlight of any conference.
We are especially happy to share that our chair, Artur Czepiel, received a community award. So did Laís Carvalho, a former board member, and Raquel Dou, a former chair.
Karolina Surma, a member of Czech PyLadies and the 2023 programme team, also received an Outstanding PyLady Award for her contributions to the PyLadies community in the Czech Republic.
Congratulations to all of you and thank you for all the time and energy you give to the community!
Thank you to the PyCon US organisers for running such a fantastic event and for giving us space to be part of it.
Thank you to our board member Ege Akman for sorting out materials and helping get everything ready for the booth.
And a big thank you to everyone who stopped by to chat. It was a pleasure to meet you. See you next year!
May 20, 2025
PyCoder’s Weekly
Issue #682: pylock.toml, NumPy, t-strings, and More (May 20, 2025)
#682 – MAY 20, 2025
View in Browser »
Going Beyond requirements.txt With pylock.toml
What is the best way to record the Python dependencies for the reproducibility of your projects? What advantages will lock files provide for those projects? This week on the show, we welcome back Python Core Developer Brett Cannon to discuss his journey to bring PEP 751 and the pylock.toml file format to the community.
REAL PYTHON podcast
I Don’t Like NumPy
This opinion piece talks about why NumPy gets difficult fast. Two dimensions to your array? No problem, the calculation is mostly self-evident. Add a couple more dimensions and it gets messy fast. See also the associated HN discussion, which also includes possible solutions.
DYNOMIGHT
Webinar: Inside Dust’s AI-Powered Vision for the Future of Work
Join us on June 5 for a behind-the-scenes look at how Dust is building the future of work with AI and why Temporal is at the heart of their platform. We’ll explore how Dust is transforming those everyday tasks with intelligent agents that are deeply contextual, resilient, and production-ready →
TEMPORAL sponsor
Unravelling t-strings
PEP 750 introduced t-strings for Python 3.14. These are a template string mechanism similar to f-strings. Although they are in 3.14.0b1, there isn’t any documentation yet, so this post explains what they are and how they can be used.
BRETT CANNON
Python Jobs
Senior Software Engineer – Quant Investment Platform (LA or Dallas) (Los Angeles, CA, USA)
Causeway Capital Management LLC
Articles & Tutorials
Working With Missing Data in Polars
In this video course, you’ll learn how to deal with missing data in Polars to ensure it doesn’t interfere with your data analysis. You’ll discover how to check for missing values, update them, and remove them.
REAL PYTHON course
Django Security Best Practices
This guide contains a long list of best practices for securing your Django site. It includes topics such as ensuring HTTPS everywhere, hardening your database, using key built-in security features, and more.
AHMAD SADEDDIN
Read My Lips: Automate Lip Syncing with Wav2Lip
Don’t let your message be misunderstood! Use Wav2Lip for automated and AI-powered lip syncing. Get the code now.
INTEL CORPORATION sponsor
t-strings in Python (PEP 750)
Python 3.14 will include the new t-string concept, an f-string-like mechanism for string templating. Talk Python interviews Paul Everitt, David Peck, and Jim Baker about the upcoming feature.
KENNEDY ET AL podcast
The Database Row That Did and Didn’t Exist
Misty ran into a confounding bug, where loading and saving certain Django ORM objects was failing, but not all of them. Read on to understand what went wrong.
MISTY DE MÉO
The Appearing Built-In
Rodrigo discovered that examining the builtins module in the REPL changes the size of the builtins module. Read on to find out what causes this oddity.
RODRIGO GIRÃO SERRÃO
What Does @Slot() Do in Qt?
When working with Qt slots and signals in PySide6, you might discover the @Slot decorator. This post tells you what it does and why you might use it.
MARTIN FITZPATRICK
5 Steps to N-Body Simulation
This article shows you how to simulate gravity in an N-body problem. It starts out with simpler simulation then adds optimizations as it goes along.
ALVIN NG
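The core of such a simulation is a pair of nested loops: for each body, sum the gravitational pull from every other body, then advance velocities and positions. Here is a minimal Euler-step sketch (not the article's code; the masses, positions, and G=1 units are invented for illustration, and real simulations need smaller steps and better integrators):

```python
import math


def step(bodies, dt):
    """Advance a list of bodies one Euler step under mutual gravity (G=1)."""
    # First accumulate accelerations and update velocities...
    for i, a in enumerate(bodies):
        ax = ay = 0.0
        for j, b in enumerate(bodies):
            if i == j:
                continue
            dx, dy = b["x"] - a["x"], b["y"] - a["y"]
            r = math.hypot(dx, dy)
            ax += b["m"] * dx / r**3  # m * r_vec / |r|^3 is the inverse-square law
            ay += b["m"] * dy / r**3
        a["vx"] += ax * dt
        a["vy"] += ay * dt
    # ...then move every body with its updated velocity.
    for a in bodies:
        a["x"] += a["vx"] * dt
        a["y"] += a["vy"] * dt


bodies = [
    {"m": 1.0, "x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0},
    {"m": 1.0, "x": 1.0, "y": 0.0, "vx": 0.0, "vy": 1.0},
]
step(bodies, 0.01)
print(round(bodies[1]["x"], 4))  # → 0.9999
```

The article's optimizations target exactly this O(n²) inner loop.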
PEPs & Co.
A little bit of Python history: where the term “PEP” came from, along with a list of similar types of documents for other languages.
HUGO VAN KEMENADE
The First Year of Free-Threaded Python
A recap of the first year of work on enabling support for the free-threaded build of CPython in community packages.
NATHAN GOLDBAUM
Pandas: Drop Columns by Name in DataFrames
A quick overview of the variety of ways to drop columns from a pandas DataFrame, referencing the columns by name.
ERIK MARSJA
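The basic pattern the article covers is pandas' DataFrame.drop with the columns argument. A minimal sketch (the DataFrame and column names here are invented for illustration; errors="ignore" makes drop skip names that aren't present):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]})

# drop() returns a new DataFrame; errors="ignore" skips absent names.
trimmed = df.drop(columns=["b", "missing"], errors="ignore")
print(list(trimmed.columns))  # → ['a', 'c']
```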
Working on Complex Systems: What I Learned at Google
Explore the key characteristics of complex systems and learn practical patterns for tackling complex problems.
TEIVA HARSANYI
Building a Real-Time Dashboard With Flask and Svelte
Learn how to build a real-time analytics dashboard using Flask and Svelte with server-sent events.
AMIR TADRISI
Projects & Code
Events
Python Leiden User Group
May 22, 2025
PYTHONLEIDEN.NL
Python Sheffield
May 27, 2025
GOOGLE.COM
PyCon Italia 2025
May 28 to June 1, 2025
PYCON.IT
PyKla Monthly Meetup
May 28, 2025
MEETUP.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #682.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Ari Lamstein
Course Review: Build AI Agents with Python
I was slow to jump on the LLM bandwagon.
In some ways this is surprising. Although it was a while ago, I did take several AI courses in grad school. So I am probably in a good position to dive into new developments in the field.
But when the LLM wave started I was busy trying to develop software engineering expertise in Python. I wanted to finish that “project” before moving on to something new. It was only a few months ago, when the PR of my multiyear module was merged into the censusdis package (link), that I felt like I could turn my attention elsewhere.
Coincidentally, around that time Kevin Markham created his first course on LLMs. I wrote a review of that course here. In this post I review his followup course: Build AI Agents with Python.
The Problem AI Agents Solve
Something I learned in this course is that LLMs like GPT-4o mini are called “foundational models”. Foundational models have very real limitations. Consider this conversation I just had with GPT-4o mini:
User: What is your cutoff date?
AI: My knowledge cutoff date is October 2021. This means I do not have information on events or developments that occurred after that date. If you have questions based on information available up to then, feel free to ask!
User: Do you have access to the web?
AI: No, I do not have access to the web or the ability to browse the internet. My responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available information up to my knowledge cutoff date in October 2021. If you have any questions within that scope, feel free to ask!
User: What is the latest version of Python?
AI: As of my last knowledge update in October 2021, the latest stable release of Python was Python 3.10, which was released on October 4, 2021. For any updates or newer versions released after that date, I recommend checking the official Python website at https://www.python.org/.
In short, the answers that foundational models give are limited to the data they were trained on. For reference, the latest version of Python is 3.13.3, which was released in April 2025.
Augmenting Foundational Models
You can think of AI Agents as augmenting foundational models with tools, data and prompts that you provide them. This allows them to provide better answers to certain questions.
As an example, Build AI Agents with Python walks you through the process of giving GPT-4o mini the “tool” of web search using Tavily. This resulted in the following conversation:
User: What is your cutoff date?
AI: My knowledge cutoff date is in October 2021. I do not have information on events or developments that occurred after that date. However, I can access real-time information through specific tools. If you have a particular query, feel free to ask!
User: Do you have access to the web?
AI: Yes, I can access real-time information through a specific tool to search the web for current events and updates. If you have a particular topic or question in mind, let me know, and I can look it up for you!
User: What is the latest version of Python?
AI: The latest stable version of Python is 3.13.3. You can download it from the official Python website https://www.python.org/downloads/.
I had not heard of Tavily prior to taking this course. It turns out that there is an entire ecosystem of tools around LLMs!
In addition to adding in a tool for web search, the course guides you through integrating custom data sources using Retrieval Augmented Generation (RAG) and building a Q&A system that can interact with a SQL database. For a course that only contains two hours of videos, I felt that it covered a lot.
Like with all online courses, there is an argument that I could have learned all this on my own. But “get your hands dirty with LLMs” had been on my todo list for a while. It was only taking this course that made it happen. If you are looking for a similar push, then you might enjoy this course too.
Wish List
Kevin is gifted at coming up with good examples and clearly explaining his solutions. If there was one change I could make to the course it would be for him to add exercises to it.
Longtime readers might remember that I have created and taught several courses myself. My initial courses did not have exercises. It was only when I read Telling Ain’t Training that I learned how valuable they are for students. That book also presents a framework for creating exercises that I still use today.
Next Steps
After taking an online course, I often want to apply what I learned to a project of my own. In this case, I can now imagine adding an LLM feature to my Covid Demographics Explorer.
The dataset that powers that app is in a single CSV file. I think it would be fun to create a chatbot that has access to that file and uses it to answer questions. That would allow users to ask questions about the dataset using natural language and get answers in natural language.
Real Python
First Steps With LangChain
You’ve likely interacted with large language models (LLMs), like the ones behind OpenAI’s ChatGPT, and experienced their remarkable ability to answer questions, summarize documents, write code, and much more.
While LLMs are remarkable by themselves, with a little programming knowledge, you can leverage libraries like LangChain to create your own LLM-powered applications that can do just about anything.
In this video course, you’ll learn how to:
- Use LangChain to build LLM-powered applications
- Create reusable instructions with prompt templates
- Create and extend LangChain chains
- Debug what happens when a chain executes
Stéphane Wirtel
From script to version 1.0.0 — 12 years later
🧵 From script to version 1.0.0 — 12 years later. In 2013, I wrote a tiny Python script to help with local SMTP debugging during development. It used smtpd, docopt, and had no ambition other than making my day a little easier. This tiny helper evolved into a CLI tool: dsmtpd. Then, I forgot about it. 🧹 2021: Minor cleanup, no big plans. Eight years later, I dusted it off:
May 19, 2025
Real Python
How to Group Data Using Polars .group_by()
One of the most common tasks you’ll encounter when analyzing Polars data is the need to summarize it. You can use the Polars .group_by() method to create groupings based on column values. A related summarization technique is aggregation, where you take a sequence of related values and condense them into a single value.
By the end of this tutorial, you’ll understand that:
- You can summarize data using aggregation.
- You can use .filter() to view specific data.
- Using .group_by() allows you to summarize one or more columns of your data.
- Your time series data can be grouped using .group_by_dynamic().
- You can match summarized data with the original data using window functions.
- Pivot tables allow you to group and aggregate rows and columns of data.
In this tutorial, you’ll learn how to group data in several ways using the .group_by() method in Polars and apply aggregation to each group.
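The Polars specifics follow in the tutorial, but the group-then-aggregate idea itself can be sketched with just the standard library. Unlike Polars’ .group_by(), itertools.groupby only merges consecutive items, so the data must be sorted by the key first (the toy rows below are invented to stand in for the student data):

```python
from itertools import groupby
from operator import itemgetter

# Toy rows standing in for the tutorial's student dataset.
rows = [
    {"subject": "M", "grade": 10},
    {"subject": "P", "grade": 14},
    {"subject": "M", "grade": 12},
]

# groupby only groups adjacent items, so sort by the grouping key first.
rows.sort(key=itemgetter("subject"))

means = {}
for subject, group in groupby(rows, key=itemgetter("subject")):
    grades = [r["grade"] for r in group]
    means[subject] = sum(grades) / len(grades)  # the aggregation step

print(means)  # → {'M': 11.0, 'P': 14.0}
```

Polars performs both steps in one expression and without the pre-sort, which is what the rest of the tutorial demonstrates.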
Before you begin, you should be familiar with the basics of working with Polars DataFrames or another DataFrame library like pandas.
Note: If you’re familiar with Polars, you’ll know that in addition to DataFrames, Polars also supports LazyFrames. While this is an efficient tool when working with large datasets, and you’re certainly encouraged to learn how to use them, DataFrames are sufficient for the examples in this tutorial.
In addition, you may find it helpful to use Jupyter Notebook as you work through many of the examples in this tutorial. Alternatively, JupyterLab will enhance your notebook experience, but any Python environment you’re comfortable with will work just fine.
To get started, you’ll need some data. For the central part of this tutorial, you’ll use the student performance dataset freely available from the UC Irvine Machine Learning Repository. The data in its original form comes in two files: student-mat.csv and student-por.csv. These contain data on students taking math and Portuguese language courses. Both files have been altered to include student identification numbers and the subject indicators M and P, formatted as math.parquet and portuguese.parquet. All data has also been merged into a single course.parquet file. These three files are available in your download bundle, which you can access by clicking the link below:
Get Your Code: Click here to download the free sample code that you’ll use to learn about grouping data with Polars .group_by().
The table below shows the fields used in this tutorial:
Field Name | Description
---|---
absences | Number of absences
age | Student’s age
failures | Number of failures
G1, G2, G3 | First, second, and final grade
internet | Student has home internet access
reason | Reason for taking course
school | School attended
sex | Student’s gender (M, F)
student_id | Student’s enrollment number
subject | Subject studied (M, P)
The dataset is quite extensive, containing over thirty fields. If you want to experiment, the student.txt file defines every field. This file is also available as part of the tutorial downloads.
To use Polars, you first need to install the Polars library into your Python environment. To install Polars from the command prompt, use the following command:
$ python -m pip install polars
In a Jupyter Notebook, the command is !python -m pip install polars.
With everything set up, it’s time to start using Polars to aggregate and group data.
Take the Quiz: Test your knowledge with our interactive “How to Group Data Using Polars .group_by()” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Group Data Using Polars .group_by()
Put your Polars .group_by() and aggregation skills to the test. This quiz goes beyond the tutorial, so you'll need to dig deeper to find all the answers.
Aggregating Data
Whenever you want to aggregate data without grouping it, you apply an aggregation function directly to the column or columns you want to analyze. More often than not, you’ll also want to group your aggregated data. You’ll learn how to do that later—for now, you’ll focus on aggregation.
Learning the Basics
Suppose you wanted to find the highest number of absences among students in the math class. You can do this as follows:
>>> import polars as pl
>>> math_students = pl.read_parquet("math.parquet")
>>> (
... math_students
... .select(pl.col("absences").max())
... )
shape: (1, 1)
┌──────────┐
│ absences │
│ --- │
│ i64 │
╞══════════╡
│ 75 │
└──────────┘
After installing the Polars library earlier, you import it into your code with the conventional alias pl to access its functionality. This allows you to use the read_parquet() function to read the contents of math.parquet into a Polars DataFrame.
Read the full article at https://realpython.com/polars-groupby/ »
Quiz: How to Group Data Using Polars .group_by()
In this quiz, you’ll test your understanding of the techniques covered in How to Group Data Using Polars .group_by().
By working through the questions, you’ll review your understanding of how to use the data aggregation and grouping techniques provided by Polars.
You’ll need to do some research outside of the tutorial to answer all the questions, so let this challenge take you on a learning journey.
Talk Python to Me
#506: ty: Astral's New Type Checker (Formerly Red-Knot)
The folks over at Astral have made some big-time impacts in the Python space with uv and ruff. They are back with another amazing project named ty. You may have known it as Red-Knot, but it's coming up on release time for the first version, and with the release comes a new official name: ty. We have Charlie Marsh and Carl Meyer on the show to tell us all about this new project.
Links from the show:
- ty on GitHub: https://github.com/astral-sh/ty
- A Very Early Play with Astral’s Red Knot Static Type Checker: https://app.daily.dev/posts/a-very-early-play-with-astral-s-red-knot-static-type-checker-wnhcuf0lx
- Will Red Knot be a drop-in replacement for mypy or pyright?: https://github.com/astral-sh/ruff/discussions/15149
- Early Explorations of Astral’s Red Knot Type Checker: https://pydevtools.com/blog/early-explorations-of-astrals-red-knot-type-checker/
- Astral's blog: https://astral.sh/blog
- Ruff Types: https://types.ruff.rs/
- Ruff docs: https://docs.astral.sh/ruff/
- uv repository: https://github.com/astral-sh/uv
- Watch this episode on YouTube: https://www.youtube.com/watch?v=XVwpL_cAvrw
- Episode transcripts: https://talkpython.fm/episodes/transcript/506/ty-astrals-new-type-checker-formerly-red-knot
Python Bytes
#432 How To Fix Your Computer
<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://adamj.eu/tech/2025/05/07/pre-commit-install-uv/?featured_on=pythonbytes">pre-commit: install with uv</a></strong></li> <li><strong><a href="https://peps.python.org/pep-0773/?featured_on=pythonbytes">PEP 773</a>: A Python Installation Manager for Windows (Accepted)</strong></li> <li><a href="https://textual.textualize.io/blog/2025/05/07/the-future-of-textualize/?featured_on=pythonbytes"><strong>Changes for Textual</strong></a></li> <li><strong><a href="https://endler.dev/2025/best-programmers/?featured_on=pythonbytes">The Best Programmers I Know</a></strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=eNH8L1voK9s' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="432">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by <strong>NordLayer</strong>: <a href="https://pythonbytes.fm/nordlayer"><strong>pythonbytes.fm/nordlayer</strong></a></p> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy"><strong>@mkennedy@fosstodon.org</strong></a> <strong>/</strong> <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes"><strong>@mkennedy.codes</strong></a> <strong>(bsky)</strong></li> <li>Brian: <a href="https://fosstodon.org/@brianokken"><strong>@brianokken@fosstodon.org</strong></a> <strong>/</strong> <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes"><strong>@brianokken.bsky.social</strong></a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes"><strong>@pythonbytes@fosstodon.org</strong></a> <strong>/</strong> <a href="https://bsky.app/profile/pythonbytes.fm"><strong>@pythonbytes.fm</strong></a> <strong>(bsky)</strong></li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be 
part of the audience. Usually <strong>Monday</strong> at 10am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it. </p> <p><strong>Brian #1:</strong> <a href="https://adamj.eu/tech/2025/05/07/pre-commit-install-uv/?featured_on=pythonbytes">pre-commit: install with uv</a></p> <ul> <li>Adam Johnson</li> <li>uv tool works great at keeping tools you use on lots of projects up to date quickly, why not use it for pre-commit.</li> <li>The extension of <a href="https://pypi.org/project/pre-commit-uv/?featured_on=pythonbytes">pre-commit-uv</a> will use uv to create virtual environments and install packages for pre-commit. This speeds up initial pre-commit cache creation. </li> <li>However, Adam is recommending this flavor of using pre-commit because it’s just plain easier to install pre-commit and dependencies than the official pre-commit install guide.</li> <li>Win-win.</li> <li>Side note: No Adam, I’m not going to pronounce uv “uhv”, I’ll stick with “you vee”, even Astral tells me I’m wrong </li> </ul> <p><strong>Michael #2:</strong> <a href="https://peps.python.org/pep-0773/?featured_on=pythonbytes">PEP 773</a>: A Python Installation Manager for Windows (Accepted)</p> <ul> <li>via pycoders newsletter</li> <li>One manager to rule them all – PyManager.</li> <li>PEP 773 replaces all existing Windows installers (.exe “traditional” bundle, per-version Windows Store apps, and the separate py.exe launcher) with a single MSIX app called Python Install Manager (nick-named PyManager).</li> <li>PyManager should be mainstream by CPython 3.15, and the traditional installer disappears no earlier than 3.16 (≈ mid-2027).</li> <li><strong>Simple, predictable commands.</strong> <ul> <li>python → launches “the best” runtime already present or
auto-installs the latest CPython if none is found.</li> <li>py → same launcher as today <strong>plus</strong> management sub-commands:</li> <li>py install, py uninstall, py list, py exec, py help.</li> <li>Optional python3 and python3.x aliases can be enabled by adding one extra PATH entry.</li> </ul></li> </ul> <p><strong>Michael #3:</strong> <a href="https://textual.textualize.io/blog/2025/05/07/the-future-of-textualize/?featured_on=pythonbytes"><strong>Changes for Textual</strong></a></p> <ul> <li><strong>Bittersweet news: the business experiment ends, but the code lives on.</strong></li> <li>Textual began as a hobby project layered on top of Rich, but it has grown into a mature, “makes-the-terminal-do-the-impossible” TUI framework with an active community and standout documentation.</li> <li>Despite Textual’s technical success, the team couldn’t pinpoint a single pain-point big enough to sustain a business model, so <strong>the company will wind down in the coming weeks</strong>.</li> <li><strong>The projects themselves aren’t going anywhere</strong>: they’re stable, battle-tested, and will continue under the stewardship of the original author and the broader community.</li> </ul> <p><strong>Brian #4:</strong> <a href="https://endler.dev/2025/best-programmers/?featured_on=pythonbytes">The Best Programmers I Know</a></p> <ul> <li>Matthias Endler</li> <li>“I have met a lot of developers in my life. Lately, I asked myself: “What does it take to be one of the best? 
What do they all have in common?””</li> <li>The list <ul> <li>Read the reference</li> <li>Know your tools really well</li> <li>Read the error message</li> <li>Break down problems</li> <li>Don’t be afraid to get your hands dirty</li> <li>Always help others</li> <li>Write</li> <li>Never stop learning</li> <li>Status doesn’t matter</li> <li>Build a reputation</li> <li>Have patience</li> <li>Never blame the computer</li> <li>Don’t be afraid to say “I don’t know”</li> <li>Don’t guess</li> <li>Keep it simple</li> </ul></li> <li>Each topic has a short discussion. So don’t just read the bullet points, check out the article.</li> </ul> <p><strong>Extras</strong> </p> <p>Brian:</p> <ul> <li>I had a great time in Munich last week. I gave a talk at a company event, met with tons of people, and had a great time. <ul> <li>The best part was connecting with people from different divisions working on similar problems.</li> <li>I love the idea of internal conferences to get people to self organize by topic and meet people they wouldn’t otherwise, to share ideas.</li> <li>Also got started working on a second book on the plane trip back. </li> </ul></li> </ul> <p>Michael:</p> <ul> <li>Talk Python Clips (e.g.
<a href="https://www.youtube.com/shorts/cARHSv-CwaE">mullet</a>)</li> <li>Embrace your cloud firewall (<a href="https://blobs.pythonbytes.fm/cloud-firewall-example.jpg">example</a>).</li> <li><a href="https://pythoninsider.blogspot.com/2025/05/python-3140-beta-1-is-here.html?featured_on=pythonbytes">Python 3.14.0 beta 1</a> is here</li> <li>Congrats to the <a href="https://pyfound.blogspot.com/2025/05/announcing-python-software-foundation.html?featured_on=pythonbytes">new PSF Fellows</a>.</li> <li><a href="https://www.linkedin.com/posts/mdboom_its-been-a-tough-couple-of-days-microsofts-activity-7328583333536268289-p4Lp/?rcm=ACoAAABOjqABPkOWTTbZXV9tmnQohvpkplQOibU&featured_on=pythonbytes">Cancelled faster CPython</a></li> <li><a href="https://bsky.app/profile/snarky.ca/post/3lp5w5j5tws2i?featured_on=pythonbytes">bsky.app/profile/snarky.ca post</a></li> </ul> <p><strong>Joke:</strong> <a href="https://x.com/PR0GRAMMERHUM0R/status/1559253623397404673?featured_on=pythonbytes">How To Fix Your Computer</a></p>
Zato Blog
Microsoft Dataverse with Python and Zato Services
Overview
Microsoft Dataverse is a cloud-based data storage and management platform, often used with PowerApps and Dynamics 365.
Integrating Dataverse with Python via Zato enables automation, API orchestration, and seamless CRUD (Create, Read, Update, Delete) operations on any Dataverse object.
Below, you'll find practical code examples for working with Dataverse from Python, including detailed comments and explanations. The focus is on the "accounts" entity, but the same approach applies to any object in Dataverse.
Connecting to Dataverse and Retrieving Accounts
The main service class configures the Dataverse client and retrieves all accounts. Both the handle
and get_accounts
methods are shown together for clarity.
# -*- coding: utf-8 -*-

# Zato
from zato.common.typing_ import any_
from zato.server.service import DataverseClient, Service

class MyService(Service):

    def handle(self):

        # Set up Dataverse credentials - in a real service,
        # this would go to your configuration file.
        tenant_id = '221de69a-602d-4a0b-a0a4-1ff2a3943e9f'
        client_id = '17aaa657-557c-4b18-95c3-71d742fbc6a3'
        client_secret = 'MjsrO1zc0.WEV5unJCS5vLa1'
        org_url = 'https://org123456.api.crm4.dynamics.com'

        # Build the Dataverse client using the credentials
        client = DataverseClient(
            tenant_id=tenant_id,
            client_id=client_id,
            client_secret=client_secret,
            org_url=org_url
        )

        # Retrieve all accounts using a helper method
        accounts = self.get_accounts(client)

        # Process the accounts as needed (custom logic goes here)
        pass

    def get_accounts(self, client:'DataverseClient') -> 'any_':

        # Specify the API path for the accounts entity
        path = 'accounts'

        # Call the Dataverse API to retrieve all accounts
        response = client.get(path)

        # Log the response for debugging/auditing
        self.logger.info(f'Dataverse response (get accounts): {response}')

        # Return the API response to the caller
        return response
Here is a sample response:

{'@odata.context': 'https://org1234567.crm4.dynamics.com/api/data/v9.0/$metadata#accounts',
 'value': [{'@odata.etag': 'W/"11122233"', 'territorycode': 1,
            'accountid': 'd92e6f18-36fb-4fa8-b7c2-ecc7cc28f50c', 'name': 'Zato Test Account 1',
            '_owninguser_value': 'ea4dd84c-dee6-405d-b638-c37b57f00938'}]}
Let's check more examples - you'll note they all follow the same pattern as the first one.
Retrieving an Account by ID
def get_account_by_id(self, client:'DataverseClient', account_id:'str') -> 'any_':

    # Construct the API path using the account's GUID
    path = f'accounts({account_id})'

    # Call the Dataverse API to fetch the account
    response = client.get(path)

    # Log the response for traceability
    self.logger.info(f'Dataverse response (get account by ID): {response}')

    # Return the fetched account
    return response
Retrieving an Account by Name
def get_account_by_name(self, client:'DataverseClient', account_name:'str') -> 'any_':

    # Construct the API path with a filter for the account name
    path = f"accounts?$filter=name eq '{account_name}'"

    # Call the Dataverse API with the filter
    response = client.get(path)

    # Log the response for auditing
    self.logger.info(f'Dataverse response (get account by name): {response}')

    # Return the filtered account(s)
    return response
Creating a New Account
def create_account(self, client:'DataverseClient') -> 'any_':

    # Specify the API path for account creation
    path = 'accounts'

    # Prepare the data for the new account
    account_data = {
        'name': 'New Test Account',
        'telephone1': '+1-555-123-4567',
        'emailaddress1': 'hello@example.com',
        'address1_city': 'Prague',
        'address1_country': 'Czech Republic',
    }

    # Call the Dataverse API to create the account
    response = client.post(path, account_data)

    # Log the response for traceability
    self.logger.info(f'Dataverse response (create account): {response}')

    # Return the API response
    return response
Updating an Existing Account
def update_account(self, client:'DataverseClient', account_id:'str') -> 'any_':

    # Prepare the data to update
    update_data = {
        'name': 'Updated Account Name',
        'telephone1': '+1-555-987-6543',
        'emailaddress1': 'hello2@example.com',
    }

    # Call the Dataverse API to update the account by ID
    response = client.patch(f'accounts({account_id})', update_data)

    # Log the response for auditing
    self.logger.info(f'Dataverse response (update account): {response}')

    # Return the updated account response
    return response
Deleting an Account
def delete_account(self, client:'DataverseClient', account_id:'str') -> 'any_':

    # Call the Dataverse API to delete the account
    response = client.delete(f'accounts({account_id})')

    # Log the response for traceability
    self.logger.info(f'Dataverse response (delete account): {response}')

    # Return the API response
    return response
API Path vs. PowerApps UI Table Names
A detail to note when working with Dataverse APIs is that the names you see in the PowerApps or Dynamics UI are not always the same as the paths expected by the API. For example:
- In PowerApps, you may see a table called Account.
- In the API, you must use the path accounts (lowercase, plural) when making requests.
This pattern applies to all Dataverse objects: always check the API documentation or inspect the metadata to determine the correct entity path.
Working with Other Dataverse Objects
While the examples above focus on the "accounts" entity, the same approach applies to any object in Dataverse: contacts, leads, opportunities, custom tables, and more. Simply adjust the API path and payload as needed.
Full CRUD Support
With Zato and Python, you get full CRUD (Create, Read, Update, Delete) capability for any Dataverse entity. The methods shown above can be adapted for any object, allowing you to automate, integrate, and orchestrate data flows across your organization.
Summary
This article has shown how to connect to Microsoft Dataverse from Python using Zato, perform CRUD operations, and understand the mapping between UI and API paths. These techniques enable robust integration and automation scenarios with any Dataverse data.
More resources
➤ Microsoft 365 APIs and Python Tutorial
➤ Python API integration tutorials
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
➤ Open-source iPaaS in Python
Python GUIs
Tkinter Widgets — A walkthrough of Tkinter's basic widgets
In Tkinter (and most GUI libraries), widget is the name given to a component of the GUI that the user can interact with. User interfaces are made up of multiple widgets arranged within the window to make it functional and intuitive to use.
Tkinter comes with a decent set of widgets and even allows you to create your own custom widgets or customize existing ones.
A Quick Demo
First, let's have a look at some of the most common Tkinter widgets. The following code creates a range of Tkinter widgets and adds them to a window layout so you can see them together:
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
root.title("Widgets Demo")

widgets = [
    tk.Label,
    tk.Checkbutton,
    ttk.Combobox,
    tk.Entry,
    tk.Button,
    tk.Radiobutton,
    tk.Scale,
    tk.Spinbox,
]

for widget in widgets:
    try:
        widget = widget(root, text=widget.__name__)
    except tk.TclError:
        widget = widget(root)
    widget.pack(padx=5, pady=5, fill="x")

root.mainloop()
Tkinter's Widgets Demo on macOS
You can learn how the pack()
geometry manager works in our Using the Pack Geometry Manager in Tkinter tutorial.
Let's have a look at all the example widgets, from top to bottom:
| Widget | Description |
|---|---|
| Label | Just a label, not interactive |
| Checkbutton | A checkbox |
| Combobox | A dropdown list box |
| Entry | Enter a line of text |
| Button | A button |
| Radiobutton | A toggle set, with only one active item |
| Scale | A slider |
| Spinbox | An integer spinner |
There are a few more widgets in Tkinter, but they don't fit so well for a quick demo example! You can see them all by checking the TkDocs.
Now, we'll step through each of those widgets in turn, adding them to our application and seeing how they behave.
Label
We'll start the tour with Label
, arguably one of the simplest widgets available in the Tkinter toolbox. This is a simple one-line piece of text that you can position in your application. You can set the text by passing in a str
as you create it:
label = tk.Label(root, text="Hello")
Or, by using the .config()
function:
label = tk.Label(root, text="1")  # The label is created with the text 1.
label.config(text="2")  # The label now shows 2.
You can also adjust font properties, such as the family and size. Here's an app that showcases these features:
import tkinter as tk
root = tk.Tk()
root.title("Tkinter Label")
root.geometry("200x80")
label = tk.Label(root, text="Hello!", font=("Helvetica", 30))
label.pack(expand=True)
root.mainloop()
Tkinter's Label Widget on macOS
Note that if you want to change the properties of a widget font, it is usually better to set the font when creating the widget to ensure consistency.
The alignment is specified by using the anchor
configuration option. The possible horizontal text alignments are:
| Option | Behavior |
|---|---|
| "w", tk.W (for West) | Aligns with the left edge |
| "e", tk.E (for East) | Aligns with the right edge |
| "center", tk.CENTER | Centers horizontally in the available space |
The options available for vertical alignment are:
| Option | Behavior |
|---|---|
| "n", tk.N (for North) | Aligns with the top |
| "s", tk.S (for South) | Aligns with the bottom |
| "center", tk.CENTER | Centers vertically in the available space |
You can combine these alignment settings through the anchor option:

label = tk.Label(root, text="Hello", anchor="center")
Finally, you can also use Label
to display an image using PhotoImage
. This function accepts an image file, and you can create it as follows:
import tkinter as tk
root = tk.Tk()
root.title("Tkinter Label Image")
photo = tk.PhotoImage(file="otje.png").subsample(2)
label = tk.Label(root, image=photo)
label.pack(expand=True)
root.mainloop()
Otje, the cat, displayed in a window
What a lovely face!
Checkbutton
The next widget to explore is Checkbutton
. As its name suggests, it presents a checkable box to the user. As with all Tkinter widgets, it has a number of configuration options to change the widget behaviors:
import tkinter as tk
root = tk.Tk()
root.title("Tkinter Checkbutton")
root.geometry("210x80")
def show_state():
    checked = "Checked" if var.get() else "Unchecked"
    checkbox.config(text=f"Check me! ({checked})")
var = tk.IntVar()
checkbox = tk.Checkbutton(root, text="Check me! (Checked)", variable=var)
checkbox.select()
checkbox.config(command=show_state)
checkbox.pack(padx=5, pady=10)
root.mainloop()
Tkinter's Checkbutton Widget on macOS
You can programmatically set a checkbox state using select()
or deselect()
. You can access the state using a tk.IntVar()
variable, which holds the checkbox's state: 1
for checked and 0
for unchecked.
Combobox
The Combobox
widget is a drop-down list, closed by default with an arrow to open it. You can select a single item from the list, and the currently selected item is shown as a label on the widget. The combo box is suited to selecting a choice from a long list of options.
You can add items to a Combobox
by passing a list of strings to its values
argument. The items will be added in the order we provide them:
import tkinter as tk
from tkinter import ttk
root = tk.Tk()
root.title("Tkinter Combobox")
root.geometry("200x80")
def selection_changed(event):
    label.config(text=f"{event.widget.get()} selected!")
combobox = ttk.Combobox(root, values=["One", "Two", "Three"])
combobox.set("One")
combobox.bind("<<ComboboxSelected>>", selection_changed)
combobox.pack(padx=5, pady=5, fill="x")
# A helper label to show the selected value
label = tk.Label(root, text="One selected!")
label.pack(padx=5, pady=5, fill="x")
root.mainloop()
Tkinter's Combobox Widget on macOS
In this example, we create a combo box whose values come from a Python list. Then, we set the current value to "One"
with the set()
function.
The bind()
function connects the <<ComboboxSelected>>
event to the selection_changed()
function. This event is triggered when the currently selected item changes. The function updates the text of the helper label to reflect the selected item.
Combobox
widgets can also be editable, allowing users to enter values not currently in the list. To achieve this, you need to set the state
argument to "normal"
:
widget.config(state="normal")
You can also set a limit to the number of items allowed by configuring the list or using custom validation.
Listbox
The Listbox
widget is similar to Combobox
, except that its options are presented as a scrollable list of items. It also supports the selection of multiple items at once. The Listbox
class offers a <<ListboxSelect>>
event that sends the selected item's index:
import tkinter as tk
root = tk.Tk()
root.title("Tkinter Listbox")
def selection_changed(event):
    selection = event.widget.curselection()
    if selection:
        index = selection[0]
        label.config(text=f"{event.widget.get(index)} selected!")
listbox = tk.Listbox(root)
for item in ["One", "Two", "Three"]:
    listbox.insert(tk.END, item)
listbox.bind("<<ListboxSelect>>", selection_changed)
listbox.pack(padx=5, pady=5, fill="both", expand=True)
# A helper label to show the selected value
label = tk.Label(root, text="One selected!")
label.pack(padx=5, pady=5, fill="x")
root.mainloop()
Tkinter's Listbox Widget on macOS
In this example, you create a list box and populate it through a for
loop and the insert()
function. Then, you connect the "<<ListboxSelect>>"
event with the selection_changed()
function. The helper label at the bottom of the window shows the selected item.
Entry
The Entry
widget is a simple single-line text editing box, into which users can type input. These are used for form fields, or settings where there is no restricted list of valid inputs. For example, when entering an email address or computer name:
import tkinter as tk
root = tk.Tk()
root.title("Tkinter Entry")
def return_pressed(event):
    label.config(text=event.widget.get())
entry = tk.Entry(root)
entry.insert(0, "Enter your text")
entry.bind("<Return>", return_pressed)
entry.pack(padx=5, pady=5, fill="x")
# A helper label to show the selected value
label = tk.Label(root, text="Entry demo!")
label.pack(padx=5, pady=5, fill="x")
root.mainloop()
Tkinter's Entry Widget on macOS
In this example, you create a text entry using the Entry
widget. Then, you insert a placeholder text using the insert()
function. The Entry
widget allows you to handle various text events, including when the Return key is pressed. Binding this event to return_pressed()
ensures that when you press Enter, the helper label displays the text that you type into the entry.
The Entry
class has several useful features. They even allow you to perform input validation using custom validation functions and regular expressions.
Spinbox
The Spinbox
widget provides an input box for numerical values. It has arrows to increase and decrease the value. It supports integers natively:
import tkinter as tk
root = tk.Tk()
root.title("Tkinter Spinbox")
root.geometry("200x80")
spinbox_var = tk.StringVar(value="0")
spinbox = tk.Spinbox(
    root,
    from_=-10,
    to=10,
    textvariable=spinbox_var,
)
spinbox.pack(padx=5, pady=5, fill="x")
# A helper label to show the selected value
label = tk.Label(root, textvariable=spinbox_var)
label.pack(padx=5, pady=5, fill="x")
root.mainloop()
Tkinter's Spinbox Widget on macOS
The code above shows the various features available for the Spinbox
widget. You can set the interval extremes with the from_
and to
arguments. In practice, you'll often use the textvariable
option to control the spin box value.
Scale
The Scale
widget provides a slide-bar widget that works much like a Spinbox
. Rather than displaying the current value numerically, it displays the position of the slider handle along the length of the widget.
This widget is often useful when we need to adjust between two extremes but where absolute accuracy is not required. The most common use of this type of widget is for volume controls in multimedia apps:
import tkinter as tk
root = tk.Tk()
root.title("Tkinter Scale")
root.geometry("200x80")
def value_changed(event):
    label.config(text=event.widget.get())
scale = tk.Scale(root, from_=0, to=10, orient="horizontal")
scale.bind("<Motion>", value_changed)
scale.pack(padx=5, pady=5, fill="x")
# A helper label to show the selected value
label = tk.Label(root, text="0")
label.pack(padx=5, pady=5, fill="x")
root.mainloop()
Run this and you'll see a slider widget. Drag the slider to change the value.
Tkinter's Scale Widget on macOS
You can also construct a slider with a vertical or horizontal orientation by setting the orient
option.
Conclusion
This concludes our brief tour of the common widgets used in Tkinter applications. To see the full list of available widgets, including all their options and attributes, take a look at the Tkinter Documentation or TkDocs site.
May 17, 2025
The Python Coding Stack
The Chores Rota (#3 in The `itertools` Series • `cycle()` and Combining Tools)
"It's your turn to take the bins out."
"No way, I washed up the dishes today, and vacuumed the rugs yesterday."
"But…"
And on and on it went. Yteria and her flatmate, Silvia, had these arguments every day. Yteria was hoping she'd be able to move soon—move to a new area and to a flat she didn't have to share with Silvia…or anyone else.
"Right. Let me set up a rota and then we'll follow it strictly", and Yteria got straight to work.
It had been several weeks since Yteria had lost the word "for". For this world is a bit different to yours and mine. People can lose words through some mishap or nefarious means. And if you lose a word, you can't speak it, you can't write it, you can't type it. You still know it's there somewhere, that it exists in the language, but you can't use it.
You can follow Yteria's origin story, how she lost the word "for" and the challenges she faced when programming here: The ‘itertools’ Series.
It’s unlikely you care, but I’ll tell you anyway. I launched a new publication last week. But it’s completely unrelated to Python and it’s unlikely there will be much overlap between the two audiences. Still, if you want to follow my ‘back to the future’ journey, here’s the first post that introduces the publication: Back on the Track • The 25-Year Gap • #1
Creating Infinite Iterables from Finite Ones
Yteria set up two lists, one with the chores and another with Silvia's name and her own:

Next, she wanted to write code to convert these lists into infinite sequences by repeating the contents of the lists forever:
But then she stopped.
Yteria had been programming without the ability to use the word "for" for several weeks by now. And she had discovered the itertools
module in Python's standard library. This module came to her rescue on several occasions.
And there it was: itertools.cycle()
. It was the perfect tool for what she needed:
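The code image is missing from this extract, but a minimal demonstration of what itertools.cycle() does with the tasks list looks like this:

```python
import itertools

tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
infinite_tasks = itertools.cycle(tasks)

# The iterator keeps yielding items, wrapping around forever
print(next(infinite_tasks))  # Take the bins out
print(next(infinite_tasks))  # Clean floor and carpets
print(next(infinite_tasks))  # Wash up
print(next(infinite_tasks))  # Take the bins out
```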
The function itertools.cycle()
accepts any iterable and returns an iterator that will keep yielding items from the original iterable, restarting from the beginning each time it reaches the end.
If you want to brush up on the difference between iterable and iterator, you can read the following articles:
Iterable: Python's Stepping Stones • (Data Structure Categories #1)
A One-Way Stream of Data • Iterators in Python (Data Structure Categories #6)
But before we move on, let's still write the create_infinite_sequence()
function Yteria was about to write. A version of this function could be as follows:
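(The code image is missing from this extract. A sketch consistent with the step-by-step walkthrough that follows, though not necessarily the article's exact code, would be:)

```python
def create_infinite_sequence(sequence):
    # Start at the first element of the sequence
    index = 0
    while True:
        yield sequence[index]
        # Conditional expression: reset to 0 once we reach the last
        # element, otherwise move one step forward
        index = 0 if index == len(sequence) - 1 else index + 1
```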
This function includes a yield
rather than a return
. Therefore, this is a generator function. Calling this function creates a generator. You can read more about generators in this article: Pay As You Go • Generate Data Using Generators (Data Structure Categories #7)
A generator created from this generator function starts with index
equal to 0
and, therefore, starts by yielding the first element in the sequence. Next time, it yields the second, and so on. However, the final line in the function definition uses a conditional expression to reset the index to zero whenever it reaches the end of the sequence.
So, for a list with three elements, such as tasks
, here are the first few steps:
The generator starts with
index
equal to0
, yields the first element, then incrementsindex
to1
. The increment happens in the conditional expression. Note how the third operand in the conditional expression—the one after theelse
—isindex + 1
.Since
index
is now1
, the generator yields the second element and incrementsindex
to2
.When the generator yields
sequence[2]
, the conditional expression resetsindex
to0
sinceindex
, which is2
, is equal tolen(sequence) - 1
.The generator then yields the first element of the sequence and the whole process repeats itself.
Let's confirm that this gives the same output as itertools.cycle()
:
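(Another missing code image. A self-contained check along these lines, redefining the generator function described above so the snippet runs on its own, confirms the two match:)

```python
import itertools

def create_infinite_sequence(sequence):
    index = 0
    while True:
        yield sequence[index]
        index = 0 if index == len(sequence) - 1 else index + 1

tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]

homemade = create_infinite_sequence(tasks)
cycled = itertools.cycle(tasks)

# The first ten items from each infinite iterator are identical
print(
    list(itertools.islice(homemade, 10)) == list(itertools.islice(cycled, 10))
)  # True
```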
So, does it matter which option you choose?
Yes, it does.
First of all, once you know about itertools.cycle()
, it's much easier and quicker to use it than to write your own function. It also makes your code more readable for anyone who's aware of itertools.cycle()
—and even if they're not, the function name gives a good clue to what it does.
A second advantage of using itertools.cycle()
is that it works with any iterable. The create_infinite_sequence()
generator function only works with sequences. A sequence is an ordered collection in which you can use integers as indices to fetch data based on the order of the elements in the sequence. You can read more about sequences here: Sequences in Python (Data Structure Categories #2)
In Python, all sequences are iterable, but not all iterables are sequences. For example, dictionaries are iterable but they're not sequences. Therefore, itertools.cycle()
can be used on a larger group of data types than create_infinite_sequence()
.
And finally, there's another really good reason to use itertools.cycle()
instead of a homemade function:
You create two iterators. The first one, infinite_tasks
, is the generator you get from the generator function create_infinite_sequence()
. Note that all generators are iterators.
The second iterator is infinite_tasks_cyc
, which is the iterator that itertools.cycle()
returns. All the tools in itertools
return iterators.
Finally, you time how long it takes to get the first 10 million elements from each of these infinite iterators. Here's the output I got on my computer—your timings may vary:
Using 'create_infinite_sequence()':
0.753838583000288
Using 'itertools.cycle()':
0.19026683299944125
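(The timing code itself is a missing image. A self-contained harness consistent with the names and output above, using timeit from the standard library, might look like this; your exact numbers will differ:)

```python
import itertools
import timeit

def create_infinite_sequence(sequence):
    index = 0
    while True:
        yield sequence[index]
        index = 0 if index == len(sequence) - 1 else index + 1

tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]

infinite_tasks = create_infinite_sequence(tasks)
infinite_tasks_cyc = itertools.cycle(tasks)

# Fetch 10 million items from each infinite iterator and time it
n = 10_000_000
print("Using 'create_infinite_sequence()':")
print(timeit.timeit(lambda: next(infinite_tasks), number=n))
print("Using 'itertools.cycle()':")
print(timeit.timeit(lambda: next(infinite_tasks_cyc), number=n))
```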
It's much quicker to use itertools.cycle()
. Sure, you may have ideas on writing a more efficient algorithm than the one I used in create_infinite_sequence()
. Go ahead, I'm sure you'll be able to do better than create_infinite_sequence()
. But can you do better than itertools.cycle()
?
Do you want to join a forum to discuss Python further with other Pythonistas? Upgrade to a paid subscription here on The Python Coding Stack to get exclusive access to The Python Coding Place's members' forum. More Python. More discussions. More fun.
And you'll also be supporting this publication. I put plenty of time and effort into crafting each article. Your support will help me keep this content coming regularly and, importantly, will help keep it free for everyone.
Creating the Rota • Combining Tools Using 'Iterator Algebra'
So, Yteria used itertools.cycle()
to create two infinite iterators: one for tasks
and the other for people
. Note that the original lists, tasks
and people
, don't have the same number of elements.
Next, Yteria needed to find a way to connect these two infinite iterators so that corresponding elements are matched. She needed a way to progress through the two infinite iterators at the same time. She needed something to "glue" them together…
…or better still, to "zip" them together.
This is where zip()
comes in. The zip()
built-in tool takes a number of iterators and zips them together, grouping the first elements of each iterator together, then grouping the second elements of each iterator together, and so on:
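In code (the image is missing from this extract), zipping the two cycled iterators looks like this:

```python
import itertools

tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
people = ["Yteria", "Silvia"]

# Zip the two infinite iterators: each next() call fetches one
# (task, person) pairing
rota = zip(itertools.cycle(tasks), itertools.cycle(people))

print(next(rota))  # ('Take the bins out', 'Yteria')
print(next(rota))  # ('Clean floor and carpets', 'Silvia')
print(next(rota))  # ('Wash up', 'Yteria')
print(next(rota))  # ('Take the bins out', 'Silvia')
```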
And there it is. Remember that rota
is an iterator since zip()
returns an iterator. So, each time you fetch the next value from the rota
iterator, you'll get a pairing between a person and the chore they need to do.
Yteria finished this off with some quick code to display each day's rota. It would have been easier to use a for
loop, but she couldn't. So she opted for an alternative, which is less tidy but still works:
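(This code is also a missing image. A rough, non-interactive reconstruction based on the description and the output shown below, avoiding the for keyword just as Yteria must; her actual version wraps the display in a while loop that waits on input("Press enter for the next day's rota..."):)

```python
import itertools

tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
people = ["Yteria", "Silvia"]

rota = zip(itertools.cycle(tasks), itertools.cycle(people))
day_numbers = itertools.count(start=1)  # itertools.count() as the day counter

def describe(pair):
    task, person = pair
    # Lower-case the first letter of the task so the sentence reads naturally
    return f"It's {person}'s turn to {task[0].lower() + task[1:]}"

def show_next_day():
    print(f"Day {next(day_numbers)}:")
    # One day's rota is one pairing per task
    todays_chores = itertools.islice(rota, len(tasks))
    print("\n".join(map(describe, todays_chores)))

show_next_day()
show_next_day()
```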
You can write the easier for
loop version if you prefer. Note how Yteria, who's now proficient with the itertools
module, also used itertools.count()
to create a counter! She could have just created an integer and increment it each time, of course.
Side note: The while
loop above feels like something that could be implemented with the help of some itertools
tools. Yteria felt this way, too. She wrote a note to try to refactor this while
loop later, even if just as an exercise in playing with more of the tools in itertools
. Do you want to have a go, too? If Yteria gets round to replacing this code, I'll let you know in a future post in The 'itertools' Series.
Here's the output from this code for the first few days:
Press enter for the next day's rota...
Day 1:
It's Yteria's turn to take the bins out
It's Silvia's turn to clean floor and carpets
It's Yteria's turn to wash up
Press enter for the next day's rota...
Day 2:
It's Silvia's turn to take the bins out
It's Yteria's turn to clean floor and carpets
It's Silvia's turn to wash up
Press enter for the next day's rota...
Day 3:
It's Yteria's turn to take the bins out
It's Silvia's turn to clean floor and carpets
It's Yteria's turn to wash up
Press enter for the next day's rota...
Day 4:
It's Silvia's turn to take the bins out
It's Yteria's turn to clean floor and carpets
It's Silvia's turn to wash up
Press enter for the next day's rota...
And of course, this code works with any number of tasks and any number of people.
The itertools documentation page has a great line about combining various iteration tools using 'iterator algebra'. Yteria's solution is an example of this. It combines two iteration tools, zip() and cycle(), to solve the problem neatly. The tools in itertools are often useful as standalone tools. But they're even more powerful when you combine them with each other.
Note that zip() and enumerate() aren't part of itertools since they're both built-in callables. However, they fall in the same category as the other tools in itertools—they're tools to help in particular iteration tasks.
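To see this kinship concretely, here's a quick demonstration that enumerate() behaves just like zip() combined with itertools.count():

```python
import itertools

letters = ["a", "b", "c"]

# enumerate() is, in effect, zip() with an itertools.count() attached:
with_enumerate = list(enumerate(letters))
with_zip_count = list(zip(itertools.count(), letters))

print(with_enumerate)  # [(0, 'a'), (1, 'b'), (2, 'c')]
print(with_enumerate == with_zip_count)  # True
```

Both start counting from 0 by default, and both stop when the shorter iterable—here, the list of letters—is exhausted.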
Final Words
Problem solved. Yteria and Silvia could now share the daily chores and make sure that everyone contributes equally. Yteria felt that her forced abstention from using the for keyword in Python led her to understand Pythonic iteration a lot better. She felt like an iteration pro now! Iterators are at the heart of iteration in Python. And itertools provides lots of useful iterators.
Code in this article uses Python 3.13
The code images used in this article are created using Snappify. [Affiliate link]
You can also support this publication by making a one-off contribution of any amount you wish.
For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!
Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.
And you can find out more about me at stephengruppetta.com
Further reading related to this article’s topic:
Iterable: Python's Stepping Stones • (Data Structure Categories #1)
A One-Way Stream of Data • Iterators in Python (Data Structure Categories #6)
Pay As You Go • Generate Data Using Generators (Data Structure Categories #7)
If You Find if..else in List Comprehensions Confusing, Read This, Else…
Read the other articles in The itertools Series:
Appendix: Code Blocks
Code Block #1
tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
people = ["Yteria", "Silvia"]
Code Block #2
def create_infinite_sequence(sequence):
    ...
Code Block #3
tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
import itertools
tasks_cyc = itertools.cycle(tasks)
next(tasks_cyc)
# 'Take the bins out'
next(tasks_cyc)
# 'Clean floor and carpets'
next(tasks_cyc)
# 'Wash up'
next(tasks_cyc)
# 'Take the bins out'
next(tasks_cyc)
# 'Clean floor and carpets'
next(tasks_cyc)
# 'Wash up'
Code Block #4
def create_infinite_sequence(sequence):
    index = 0
    while True:
        yield sequence[index]
        index = 0 if index == len(sequence) - 1 else index + 1
Code Block #5
tasks_inf_seq = create_infinite_sequence(tasks)
next(tasks_inf_seq)
# 'Take the bins out'
next(tasks_inf_seq)
# 'Clean floor and carpets'
next(tasks_inf_seq)
# 'Wash up'
next(tasks_inf_seq)
# 'Take the bins out'
next(tasks_inf_seq)
# 'Clean floor and carpets'
next(tasks_inf_seq)
# 'Wash up'
Code Block #6
import itertools
import timeit
tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
people = ["Yteria", "Silvia"]
def create_infinite_sequence(sequence):
    index = 0
    while True:
        yield sequence[index]
        index = 0 if index == len(sequence) - 1 else index + 1

infinite_tasks = create_infinite_sequence(tasks)
infinite_tasks_cyc = itertools.cycle(tasks)

print(
    "Using 'create_infinite_sequence()':\n",
    timeit.timeit(
        "next(infinite_tasks)",
        number=10_000_000,
        globals=globals(),
    )
)
print(
    "Using 'itertools.cycle()':\n",
    timeit.timeit(
        "next(infinite_tasks_cyc)",
        number=10_000_000,
        globals=globals(),
    )
)
Code Block #7
import itertools
tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
people = ["Yteria", "Silvia"]
rota = zip(
    itertools.cycle(people),
    itertools.cycle(tasks),
)
Code Block #8
import itertools
tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
people = ["Yteria", "Silvia"]
rota = zip(
    itertools.cycle(people),
    itertools.cycle(tasks),
)
day_counter = itertools.count(start=1)

while True:
    input("\nPress enter for the next day's rota...")
    day = next(day_counter)
    print(f"Day {day}:")
    # The next bit would be easier using a 'for' loop,
    # but Yteria couldn't do this!
    while True:
        person, task = next(rota)
        print(f"It's {person}'s turn to {task.lower()}")
        if task == tasks[-1]:
            break
Nikola
Nikola v8.3.3 is out!
On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v8.3.3. This is a bugfix release.
We’ve had to release Nikola v8.3.3 immediately after releasing Nikola v8.3.2, as it is broken on Python 3.8. We would like to thank the Python packaging ecosystem for being an incomprehensible and incompatible trainwreck.
What is Nikola?
Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and input in many popular markup formats, such as reStructuredText and Markdown — and can even turn Jupyter Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which is rebuilding only what has been changed).
Find out more at the website: https://getnikola.com/
Downloads
Install using pip install Nikola.
Changes from v8.3.1
Features
- Trace template usage when an environment variable NIKOLA_TEMPLATES_TRACE is set to any non-empty value.
- Give the user control over the raw underlying template engine (either mako.lookup.TemplateLookup or jinja2.Environment) via an optional conf.py method TEMPLATE_ENGINE_FACTORY.
- Switch to pyproject.toml.
- Add path handler slug_source linking to the source of a post.
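The release notes don't include an example of the new TEMPLATE_ENGINE_FACTORY hook, so here is a speculative conf.py sketch. The exact arguments Nikola passes to the factory are an assumption, not taken from the documentation—check the Nikola docs for the actual contract before relying on this:

```python
# conf.py (sketch) -- ASSUMPTION: Nikola forwards to the factory the same
# arguments it would otherwise pass to jinja2.Environment.
import jinja2

def TEMPLATE_ENGINE_FACTORY(*args, **kwargs):
    # Build the jinja2.Environment Nikola would otherwise create,
    # then tweak it before handing it back.
    env = jinja2.Environment(*args, **kwargs)
    env.keep_trailing_newline = True  # example customization
    return env
```

A Mako-based site would return a mako.lookup.TemplateLookup instead.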
Bugfixes
- Ignore errors in parsing SVG files for shrinking them; copy the original file to the output instead (Issue #3785)
- Restore annotation_helper.tmpl with dummy content to fix themes still mentioning it (Issue #3764, #3773)
- Fix compatibility with watchdog 4 (Issue #3766)
- nikola serve now works with a non-root SITE_URL.
- Stack traces meaningless for end users are now more reliably suppressed (Issue #3838).
Other
Changed filter for tidy from tidy5 to tidy.
PyCon
Welcoming 8 Companies to Startup Row at PyCon US 2025
PyCon US gives the community a chance to come together and learn about what’s new and interesting about the Python language and the seemingly infinite variety of problems that can be solved with a few (or a few thousand) lines of Python code. For entrepreneurial Pythonistas, Startup Row at PyCon US presents a unique opportunity for startup companies to connect directly with the developer community they’re building for.
Kicked off in 2011, Startup Row at PyCon US gives early-stage startups access to the best of what PyCon US has to offer, including conference passes and booth space, at no cost to their teams. Since its inception, including this year’s batch, well over 150 companies have been featured on Startup Row, and there’s a good chance you are familiar with some of their products and projects. Pandas, Modin, Codon, Ludwig, Horovod, SLSA, and dozens of other open-source tools were built or commercialized by companies featured on Startup Row at PyCon US.
Think of Startup Row at PyCon US as a peek into the future of the Python software ecosystem. And with that, we’re pleased to introduce the 2025 batch!
The Startup Row 2025 Lineup
AgentOps
Building an AI agent that works is only half the battle; seeing why it fails, how much it costs, and whether it’s about to go rogue is the other half. AgentOps gives developers that missing x-ray vision. Drop a two-line SDK into your code and every run is captured as a “session” complete with step-by-step waterfalls, prompt/response pairs, cost and token metrics, and security flags—instantly viewable in a web dashboard.
The idea was born at a San Francisco hackathon, where co-founders Alex Reibman and Adam Silverman discovered that their agent-debugging tools were more popular than the agents themselves. They turned those internal tools into AgentOps, raised a $2.6 million pre-seed led by 645 Ventures and Afore Capital in August 2024, and now give thousands of AI engineers a live dashboard that replays every agent step, surfaces exact cost and latency metrics, and enforces benchmark-driven safety checks—all from a two-line SDK.
Open-sourced under an MIT license, the project has already racked up 4.4k GitHub stars and integrates out-of-the-box with OpenAI Agents SDK, CrewAI, LangChain, AutoGen and dozens of other frameworks. With observability handled, AgentOps wants to be to autonomous agents what Datadog is to micro-services: the layer that makes ambitious agent stacks safe enough for production—and cheap enough to keep running.
AllHands AI
Agentic coding went from a theoretical possibility to an everyday reality seemingly overnight, and All Hands AI’s open-source platform OpenHands is one of the reasons why. Written in Python (with a JavaScript front-end), OpenHands lets an AI developer do everything a human can: edit repositories, run shell commands, browse the web, call APIs—even lift snippets straight from Stack Overflow—and then roll it all back into a commit you can review and merge.
Since its first README-only commit just 14 months ago, the project has snowballed into 54k-plus GitHub stars and 6k forks, backed by a community of roughly 310 contributors and counting. The momentum helped the team close a $5 million seed round led by Menlo Ventures last September, giving the ten-person startup runway to layer commercial tooling on top of its permissively licensed core.
“About six months ago it finally clicked—I now write about 95% of my own code with agents,” says co-founder and chief scientist Graham Neubig, an associate professor at Carnegie Mellon who shipped the project’s first lines before Robert Brennan—now CEO—joined the project and built a globally-distributed team to scale it up. Neubig credits the early decision to ship a “non-functional prototype” and build in public for catalyzing the contributor base; today, community members maintain everything from Windows support to protocol bridges while swapping LLM benchmarks daily in the project’s Slack.
OpenHands has evolved from a weekend proof-of-concept into a community-driven framework that now aims for production-grade reliability as an open alternative to proprietary code agents. Weekly releases focus on reproducible debugging, cost control, and enterprise safeguards, and contributors are already using the system to generate and review real pull requests across a growing set of Python projects.
DiffStudio
Product photos tell a story, but DiffStudio wants to let shoppers walk around that story. The North Jersey startup is building a camera-agnostic “inverse graphics” pipeline that ingests a handful of ordinary 2-D shots or video and returns a fully textured, web-ready 3-D model that drops into any product page. The goal is simple: turn scrolling into spinning, pinching, and zooming—and watch engagement and conversions rise.
Founder Naga Karumuri formed the company just this past December, after months of hacking on the latest developments in Gaussian splatting and differentiable rendering. “You upload a batch of images, and our model hands you a compressed asset—think megabytes, not gigabytes—that Shopify can serve instantly,” Karumuri explained. A companion mobile app will let merchants scan products on the fly, while a web dashboard handles cloud processing and one-click embeds.
DiffStudio’s beachhead market is small- and mid-sized Shopify sellers, and blue-chip retailers are already circling. “In casual chats we’ve had interest from brands like Adidas and Michael Kors,” Karumuri noted, hinting at an eventual move up-market once the self-service MVP launches. Compression and quality are the differentiators: where existing tools like Polycam focus on hobbyist scans or LiDAR-assisted captures, DiffStudio is chasing photo-real fidelity with file sizes that won’t tank page speed. The project’s GitHub repositories showcase early demos and the startup’s open-source commitment.
The team is still lean—Karumuri plus a collaborator—but the vision is outsized: make 3-D product “digital twins” as easy to generate as a product photo set. Or, as their LinkedIn banner puts it, “Splat your products into 3D glory.”
Fabi.ai
Business users shouldn’t have to ping the data team for every ad-hoc question—and data scientists shouldn’t spend half their day writing the same queries on repeat. Fabi.ai positions itself as the AI “side-kick” that lets both camps meet in the middle: a web notebook where natural-language prompts, SQL, Python, and no-code building blocks live side-by-side, with generative agents filling in (and explaining) 90% of the boilerplate.
Founded in 2023 and headquartered in the San Francisco Bay Area, the six-person team works face-to-face in San Mateo to iterate quickly on the product. CEO Marc Dupuis ran embedded analytics at revenue-ops unicorn Clari before teaming up again with former colleague Lei Tang (now CTO) to “let vibe-coders do 95% of their own analysis” while still giving experts an easy way to supervise the last mile.
Eniac Ventures and Outlander VC co-led a $3 million seed round in July 2023 to bring Fabi.ai’s collaborative notebook to market. Early customers already range from fast growing startups to established e-commerce brands.
With BI dashboards stuck on the what and legacy notebooks siloed on individual laptops, Fabi.ai is betting that a cloud-native, agent-augmented workspace is the missing link—and it’s inviting the Python community to kick the tires (and write fewer queries) at PyCon US.
Gooey.ai
Most no-code AI builders stop at slick demos; Gooey.ai is obsessed with what happens after the hype, when a multilingual chatbot has to work for a Kenyan farmer with a 2G signal or a frontline nurse switching between English and Kannada. The open-source, low-code platform stitches together the “best of private and public AI” into reusable workflows—text, speech, vision and RAG pipelines you can fork, remix and ship to WhatsApp, SMS, Slack or the web from a single dashboard. One billing account, one-click deploy.
Founders Sean Blagsvedt (ex-Microsoft Research, founder of Indian job-matching startup Babajob), Archana Prasad (artist-turned-social-tech entrepreneur), and CTO Dev Aggarwal split their time between Seattle and Bangalore and run the company under the umbrella of Dara Network. Their thesis: impactful AI needs to be both affordable and local—so Gooey bakes in speech recognition for low-resource languages, translation APIs like India’s Bhashini, and zero-data-retention options for NGOs handling sensitive chats.
Real-world traction is already visible. An agronomy WhatsApp bot built on Gooey reached “tens of thousands of farmers in Kenya, India, Ethiopia and Rwanda,” delivering accurate, objective answers with page-level citations. The platform’s copilot builder now supports the latest GPT-4o, Llama 3, Claude, Gemini and Mistral models; integrates OCR, vision and text-to-speech; and ships bulk evaluation harnesses so teams can test new prompts before they hit production.
To seed more grassroots projects, Gooey recently launched a Workflow Accelerator with funding from The Rockefeller Foundation, covering model and SMS costs for six NGOs and open-sourcing every workflow that emerges. If you’re looking to take an AI pilot from “cool demo” to “24/7 field tool,” Gooey.ai wants to be the glue—and the infra—you won’t outgrow.
GripTape AI
Enterprise AI teams love the idea of autonomous agents, but hate the roulette wheel of prompt-only code. Griptape steps in with a Python framework that keeps creativity where it belongs—inside LLM calls—while wrapping every outside step in predictable, testable software patterns. Agents, sequential pipelines, and parallel workflows are first-class “Structures”; memory, rulesets, and observability are plug-in Drivers; and an “Off-Prompt” mechanism pushes sensitive or bulky data out of the prompt for lower cost and higher security.
The project launched in early 2023 and has already gathered ≈2.3k GitHub stars and an active Discord community. Adoption accelerated after co-founders Kyle Roche and Vasily Vasinov—both former AWS leaders—closed a $12.5 million seed round in September 2023 led by Seattle’s FUSE and Acequia Capital. The fresh capital funds Griptape Cloud, a fully managed runtime that hosts ETL pipelines, hybrid vector knowledge bases, and structure executions while piping metrics to whatever monitoring stack a Fortune 500 already uses.
Under the Apache-2.0 license, developers can start locally, swap between OpenAI, Bedrock or Anthropic drivers, and graduate to the cloud only when they need auto-scaling or policy enforcement. In short, Griptape aims to be the Django of agentic AI: batteries-included, prod-ready, and with enough guardrails that even the compliance team can sleep at night.
Griptape also recently launched Griptape Nodes, an intuitive, drag-and-drop interface where designers, artists and other creative professionals can create advanced creative pipelines using graphs, nodes, and flowcharts to exploit state-of-the-art image generation and image processing models, together with more “traditional” large language models.
MLJAR
Most AutoML platforms lock you into a browser tab and someone else’s GPU cluster. MLJAR takes the opposite approach: everything runs locally, yet you still get the “train, explain, and deploy” cycle in a single click.
The Polish-based project began in 2016, when founder Piotr Płoński—fresh from a PhD spent building models for physicists, bioinformaticians, and telecom giants—decided he was tired of rewriting the same pipelines over and over. Impatience, not laziness, pushed him to automate the entire workflow.
Today the three-person team (Piotr, his co-founder wife, and a close friend) maintains a fully open-source stack. The flagship MLJAR-AutoML package handles feature engineering, hyper-parameter search, and rich Markdown reports; Mercury turns any Jupyter notebook into a shareable web app or dashboard with a sprinkle of widgets; and the brand-new MLJAR Studio Desktop app bundles its own Python environment, point-and-click “code recipes,” an integrated GPT-4 assistant, and a one-button Share that converts a notebook into a live web application.
Open source is more than a distribution strategy—it’s a trust signal. One recognisable enterprise adopted the package under an MIT license and then contracted the team for advanced features such as fairness-aware training. Revenue is a side effect; the primary goal is software that makes data science faster, friendlier, and fully under the user’s control.
If you’ve ever wished Streamlit met AutoML—and ran natively on your laptop—swing by the MLJAR booth on Startup Row at PyCon US and take Studio for a spin.
Ragas
Seemingly everyone is building RAG pipelines, but almost no one is measuring them. Ragas sets out to be “pytest for Retrieval-Augmented Generation,” bundling ready-made metrics—context recall, faithfulness, answer relevancy—and auto-generated test sets so teams can turn vibe checks into repeatable CI tests. Drop the library into LangChain, LlamaIndex, or plain-Python code and Ragas spits out a single “Ragas Score” (plus per-metric drill-downs) that tracks whether your latest prompt tweak fixed accuracy or broke it.
The project landed a shout-out during OpenAI’s Dev Day and has since snowballed to 9.1k GitHub stars and 900+ forks, with more than 80 external contributors. In production it now processes ~5 million evaluations a month for engineers at AWS, Microsoft, Databricks, and Moody’s—a number growing 70% month-over-month.
Co-founders Jithin James (early engineer at BentoML) and Shahul ES (Kaggle Grandmaster, core contributor to Open-Assistant) met at college, hacked on open-source together for years, and entered Y Combinator’s W24 batch to turn their weekend project into a commercial platform. Their plan: keep the core evaluator MIT-licensed while DG Labs, the commercial arm, layers team dashboards, experiment tracking, and dataset management on top—so every product squad can ship RAG updates with CI-style confidence.
Thank You’s and Acknowledgements
There are far too many stakeholders in the ongoing success of Startup Row at PyCon US to name individually, but this program would not be possible without the following folks:
- The Python Software Foundation, for its continued support of this little corner of PyCon US.
- The PSF Sponsorship team, for managing the logistics of getting everyone registered and set up for success.
- Startup Row co-organizers, Jason D. Rowley (p.s. hey, that's me!) and collaborator Shea Tate-Di Donna, whose first experience with the Python community was presenting her company, Zana, on Startup Row at PyCon US 2015.
- Startup Row alumni companies that come back as paid sponsors at PyCon US. Shoutouts to Anvil (SR’17), Chainguard (SR’22), and Dagster (SR’21), whose support helps make Startup Row at PyCon US possible.
- All the startup founders who filled out the (mercifully brief) application. To those that did not get a spot this year, we appreciate your time and attention. To those that did: a hearty congratulations.
- The selection committee, for accomplishing the difficult task of evaluating and scoring applications.
May 16, 2025
Real Python
The Real Python Podcast – Episode #249: Going Beyond requirements.txt With pylock.toml and PEP 751
What is the best way to record the Python dependencies for the reproducibility of your projects? What advantages will lock files provide for those projects? This week on the show, we welcome back Python Core Developer Brett Cannon to discuss his journey to bring PEP 751 and the pylock.toml file format to the community.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Django Weblog
Our Google Summer of Code 2025 contributors
We’re excited to introduce our Google Summer of Code 2025 contributors!
These amazing folks will be working on impactful projects that will shape Django’s future. Meet the contributors 👇
A. Rafey Khan
Project: Django Admin – Add Keyboard Shortcuts & Command Palette. Mentors: Tom Carrick, Apoorv Garg
Rafey will work on making Django Admin faster and more accessible through keyboard-driven workflows. Excited to see this land!
Farhan Ali Raza
Project: Bring django-template-partials into core. Mentor: Carlton Gibson
Farhan will be enhancing Django’s template system by adding first-class support for partials—making componentized templates easier than ever.
Saurabh K
Project: Automate processes within Django’s contribution workflow. Mentor: Lily Foote
Saurabh will work on streamlining how contributors interact with the Django repo—automating repetitive tasks and improving the dev experience for all.
A huge shoutout to our mentors and the broader Django community for supporting these contributors! 💚
Let’s make this a summer of learning, building, and collaboration.
Daniel Roy Greenfeld
Farewell to Michael Ryabushkin
Michael Ryabushkin and I met around 2011-2012 through Python community work. I don't remember how we met; instead, I remember his presence suddenly there, helping and aiding others.
Michael could be pushy. He was trying to help people reach their full potential. His energy and humor were relentless; I admired his tenacity and giving nature.
While our coding preferences usually clashed, sometimes they matched. Then we would rant together about some tiny detail. Those talks, plus the silly Tai Chi dance we did, are lovely memories I have of Michael.
In 2016 my wife Audrey had emergency surgery. For me that meant sleepless days taking care of her. Suddenly Michael's presence was there. He took shifts, ran errands (including buying a wheelchair), and forced me to sleep. I am forever grateful to Michael for what he did for us.
In early 2020 Audrey and I got last minute approval to use a large conference space to organize an event called PyBeach. Michael heard about it and as always, suddenly his presence was there. He was not just a volunteer at large, but leading the conference with us. Michael and I had our shared code rants, did our silly Tai Chi dance, and he met our baby daughter.
Between the pandemic and us moving from the Los Angeles area I didn't get the chance to see Michael again. I'll miss our rants, our silly Tai Chi dance, and his sudden appearances.
SoCal Python has created a memorial page in Michael's honor.