
Planet Python

Last update: May 04, 2022 09:41 PM UTC

May 04, 2022


Real Python

Top Python Game Engines

Like many people, maybe you wanted to write video games when you first learned to code. But were those games like the games you played? Maybe there was no Python when you started, no Python games available for you to study, and no game engines to speak of. With no real guidance or framework to assist you, the advanced graphics and sound that you experienced in other games may have remained out of reach.

Now, there’s Python, and a host of great Python game engines available. This powerful combination makes crafting great computer games much easier than in the past. In this tutorial, you’ll explore several of these game engines, learning what you need to start crafting your own Python video games!

By the end of this article, you’ll:

  • Understand the pros and cons of several popular Python game engines
  • See these game engines in action
  • Understand how they compare to stand-alone game engines
  • Learn about other Python game engines available

To get the most out of this tutorial, you should be well-versed in Python programming, including object-oriented programming. An understanding of basic game concepts is helpful, but not necessary.

Ready to dive in? Click the link below to download the source code for all the games that you’ll be creating:

Get Source Code: Click here to get the source code you’ll use to try out Python game engines.

Python Game Engines Overview

Game engines for Python most often take the form of Python libraries, which can be installed in a variety of ways. Most are available on PyPI and can be installed with pip. However, a few are available only on GitHub, GitLab, or other code sharing locations, and they may require other installation steps. This article will cover installation methods for all the engines discussed.

Python is a general-purpose programming language, and it’s used for a variety of tasks other than writing computer games. In contrast, there are many stand-alone game engines, such as Unity and Unreal, that are tailored specifically to writing games.

These stand-alone game engines differ from Python game engines in several key aspects:

  • Language support: Languages like C++, C#, and JavaScript are popular for games written in stand-alone game engines, as the engines themselves are often written in these languages. Very few stand-alone engines support Python.
  • Proprietary scripting support: In addition, many stand-alone game engines maintain and support their own scripting languages, which may not resemble Python. For example, Unity uses C# natively, while Unreal works best with C++.
  • Platform support: Many modern stand-alone game engines can produce games for a variety of platforms, including mobile and dedicated game systems, with very little effort. In contrast, porting a Python game across various platforms, especially mobile platforms, can be a major undertaking.
  • Licensing options: Games written using a stand-alone game engine may have different licensing options and restrictions, based on the engine used.

So why use Python to write games at all? In a word, Python. Using a stand-alone game engine often requires you to learn a new programming or scripting language. Python game engines leverage your existing knowledge of Python, reducing the learning curve and getting you moving forward quickly.

There are many game engines available for the Python environment. The engines that you’ll learn about here all share the following criteria:

  • They’re relatively popular engines, or they cover aspects of gaming that aren’t usually covered.
  • They’re currently maintained.
  • They have good documentation available.

For each engine, you’ll learn about:

  • Installation methods
  • Basic concepts, as well as assumptions that the engine makes
  • Major features and capabilities
  • Two game implementations, to allow for comparison

Where appropriate, you should install these game engines in a virtual environment. Full source code for the games in this tutorial is available for download at the link below and will be referenced throughout the article:

Get Source Code: Click here to get the source code you’ll use to try out Python game engines.

With the source code downloaded, you’re ready to begin.

Pygame

When people think of Python game engines, the first thought many have is Pygame. In fact, there’s already a great primer on Pygame available at Real Python.

Written as a replacement for the stalled PySDL library, Pygame wraps and extends the SDL library, which stands for Simple DirectMedia Layer. SDL provides cross-platform access to your system’s underlying multimedia hardware components, such as sound, video, mouse, keyboard, and joystick. The cross-platform nature of both SDL and Pygame means that you can write games and rich multimedia Python programs for every platform that supports them!

Pygame Installation

Read the full article at https://realpython.com/top-python-game-engines/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

May 04, 2022 02:00 PM UTC


Luke Plant

REPL Python programming and debugging with IPython

When programming in Python, I spend a large amount of time using IPython and its powerful interactive prompt, not just for some one-off calculations, but for significant chunks of actual programming and debugging. I use it especially for exploratory programming where I’m unsure of the APIs available to me, or what the state of the system will be at a particular point in the code.

I’m not sure how widespread this method of working is, but I rarely hear other people talk about it, so I thought it would be worth sharing.

Setup

You normally need IPython installed into your current virtualenv for it to work properly:

pip install ipython

Methods

There are basically two ways I open an IPython prompt. The first is by running it directly from a terminal:

$ ipython
Python 3.9.5 (default, Jul  1 2021, 11:45:58)
Type 'copyright', 'credits' or 'license' for more information
IPython 8.3.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]:

In a Django project, ./manage.py shell can also be used if you have IPython installed, with the advantage that it will properly initialise Django for you.

This works fine if you want to explore writing some “top level” code, for example a new bit of functionality where the entry points have not been created yet. However, most code I write is not like that. Most of the time, I find myself wanting to write code when I am already 10 levels of function calls down.

For these cases, I use the second method:
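One common way to get a full IPython prompt from deep inside a running program is IPython.embed(), which opens an interactive shell right at the call site. A minimal sketch, with a hypothetical process_order function standing in for your own code:

```python
def process_order(order_id, user):
    # ... imagine this is 10 levels of function calls down ...
    from IPython import embed  # needs `pip install ipython`
    embed()  # opens an IPython prompt with order_id and user in scope
```

When execution reaches embed(), you land in IPython with all the local variables available for inspection; exit the prompt (Ctrl-D) and the program resumes.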

Note that you can write and edit multiline code at this REPL. It’s not quite as comfortable as an editor, but it’s OK and has good history support. There’s much more to say about IPython and its features that I won’t cover here; you can learn about them in the docs.

For those with a background in other languages, it might also be worth pointing out that a Python REPL is not a different thing from normal Python. Everything you can do in normal Python, like defining functions and classes, is possible right there in the REPL.

Once I’m done with my exploring, I can copy any useful snippets from the REPL back into my real code, using the history to scan back through what I typed.

Advantages

The advantages of this method are:

  1. You can explore APIs and objects much more easily when you actually have the object, rather than docs about the object or what your editor’s auto-complete tools believe to be true about the object. For example, what attributes and methods are available on Django’s HttpRequest? You don’t have to ensure you’ve got correct type annotations and hope they are complete, or make assumptions about what the values are: you’ve got the object right there, you can inspect it with extensive and correct tab completion, and you can actually call functions and see what they do.

    For example, Django’s request object typically has a user attribute which is not part of the HttpRequest definition, because of how it is added later. It’s visible in a REPL though.

  2. You can directly explore the state of the system. This can be a huge advantage for both exploratory programming and debugging.

    For debugging, pdb and similar debugging tools and environments will often provide you with “the state of the system”, and they are much better at being able to step through multiple layers of code. But I often find that the power and comfort of an IPython prompt is much nicer for exploring and finding solutions.

The feel of this kind of environment is not quite as smooth as REPL-driven programming in Lisp, but I still find it hugely enjoyable and productive. Compared to many other methods, like iterating on your code followed by manual or automated testing, it cuts the latency of the feedback loop from seconds or minutes to milliseconds, and that is huge.

Tips and gotchas

End

That’s it, I hope you found it useful. Do you have any other tips for using this technique?

Links

May 04, 2022 06:26 AM UTC


scikit-learn

Interview with Lucy Liu, scikit-learn Team Member

Authors: Reshama Shaikh, Lucy Liu

Lucy Liu joined the scikit-learn Team in September 2020. In this interview, learn more about Lucy’s journey through open source, from rstats to scikit-learn.

  1. Tell us about yourself.

    My name is Lucy, I grew up in New Zealand and I am culturally Chinese. I currently live in Australia and work for Quansight labs.

  2. How did you first become involved in open source?

    I first discovered open source when I started a research Masters, after finding my clinical Optometry job unfulfilling. I loved learning to program but was initially not game enough to contribute as I was only a beginner. After my Masters, while working as a bioinformatician, I wrote some R packages for the analysis of niche biomedical data and put them on GitHub. My first contribution to an existing open source project came later, when I worked at INRIA (French National Institute for Research in Digital Science and Technology) alongside the INRIA scikit-learn core developers. They helped me put up my first pull request and I have been contributing ever since!

  3. How did you get involved in scikit-learn? Can you share a few of the pull requests to scikit-learn that resonate with you?

    I’m very interested in statistics and code, so I was super keen to contribute to scikit-learn. Being a relative beginner in both areas, I started by contributing to documentation, then bug fixes and features. My first PR to scikit-learn was submitted in October 2019 to improve the multiclass classification documentation. I have contributed the most to the calibration module in scikit-learn (including refactoring CalibratedClassifierCV), which has been very interesting and useful for when I later worked on post-processing of weather forecasts at the Bureau of Meteorology in Australia.

    Reference: Lucy’s list of pull requests

  4. To which OSS projects and communities do you contribute?

    I contribute to Sphinx-Gallery and scikit-learn. Sphinx-Gallery was a great introduction to open source for me as it is a small package that does not get a large number of issues and pull requests (unlike scikit-learn!).

  5. What do you find alluring about OSS?

    I think the ability to see the source code and contribute back to the project are the best parts. If there is a feature you are interested in you can suggest and add it yourself, all the while learning from code reviews during the process!

  6. What pain points do you observe in community-led OSS?

    I think some of the positive aspects of the OSS community can also lead to pain. While it is great that you are able to get many different perspectives from people of various backgrounds, it also makes forming a consensus more difficult, slowing progress. People from any geographical location can work together asynchronously, but this can also mean people work in their own silos, making it difficult to have a cohesive direction for the project. Large projects also have a steep learning curve, which is hard on new contributors and on contributors interested in becoming core developers. The latter is a problem when the project lacks core-developer time for project maintenance and reviewing PRs.

  7. If we discuss how far OS has evolved in 10 years, what would you like to see happen?

    Some system that enables continuity of funding, which can combine funds from public and private sources. This would enable long term planning of OS projects and give developers more job stability. Better coordination between projects within the same area (e.g., scientific Python) would allow a better experience for users using Python for their projects.

  8. What are your favorite resources, books, courses, conferences, etc?

    Real Python has great tutorials, and regex101 makes regular expressions so much easier to write and review!

    I also love the YouTube channel StatQuest, which explains statistical concepts in a very accessible manner and introduces videos with a jingle: what more could you want?

  9. What are your hobbies, outside of work and open source?

    I love cycling and feel strongly about designing cities for people instead of cars. I also enjoy rock climbing (indoors and outdoors), though sadly have not had much time for this recently.

May 04, 2022 12:00 AM UTC

May 03, 2022


PyCoder’s Weekly

Issue #523 (May 3, 2022)

#523 – MAY 3, 2022
View in Browser »



Dunder Methods in Python: The Ugliest Awesome Sauce

Double-underscore methods, also known as “dunder methods” or “magic methods,” are an ugly way of bringing beauty to your code. Learn about constructors, __repr__, __str__, operator overloading, and getting your classes working with Python functions like len().
JOHN LOCKWOOD
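As a quick taste of what the article covers, here is a minimal sketch; the Money class is an invented example, not taken from the article:

```python
class Money:
    """A tiny value type demonstrating a few dunder methods."""

    def __init__(self, cents):  # constructor
        self.cents = cents

    def __repr__(self):  # unambiguous developer-facing representation
        return f"Money(cents={self.cents})"

    def __str__(self):  # friendly user-facing representation
        return f"${self.cents / 100:.2f}"

    def __add__(self, other):  # operator overloading: makes `a + b` work
        return Money(self.cents + other.cents)

print(Money(150) + Money(250))  # $4.00
```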

Why Is It Important to Close Files in Python?

Model citizens use context managers to open and close file resources in Python, but have you ever wondered why it’s important to close files? In this tutorial, you’ll take a deep dive into the reasons why it’s important to close files and what can happen if you don’t.
REAL PYTHON
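The idiom the tutorial advocates, sketched here with a throwaway file in the system temp directory:

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "demo.txt")

# The with-statement closes the file even if the block raises an
# exception, which guarantees buffered writes are flushed to disk.
with open(path, "w") as f:
    f.write("hello")

print(f.closed)  # True: the context manager closed it for us
```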

Try atoti, A Free Collaborative Python BI Analytics Platform


atoti is a BI analytics platform combining a Python library and a web application, helping quants, data analysts, data scientists, and business users collaborate, analyze, and translate their data into business KPIs →
ACTIVEVIAM sponsor

When Python Can’t Thread: A Deep-Dive Into the GIL’s Impact

Python’s Global Interpreter Lock (GIL) stops threads from running in parallel or concurrently. Learn how to determine the impact of the GIL on your code.
ITAMAR TURNER-TRAURING

micro:bit Python Editor Beta 3 Released

MICROBIT.ORG

2022 “Call for Code” Global Challenge Accepting Entries

CALLFORCODE.ORG

Jupyter Community Workshops: Call for Proposals

JUPYTER.ORG

Discussions

Python Shouldn’t Be the Top Programming Language

Discussion of the controversial article Python Is Now Top Programming Language — But Shouldn’t Be
HACKER NEWS

When Would You Use the Lambda Function?

REDDIT

Python Jobs

Senior Software Engineer - Python Full Stack (USA)

Blenderbox

Gameful Learning Developer (Ann Arbor, MI, USA)

University of Michigan

Data & Operations Engineer (Ann Arbor, MI, USA)

University of Michigan

Python Technical Architect (USA)

Blenderbox

Academic Innovation Software Developer (Ann Arbor, MI, USA)

University of Michigan

Software Development Lead (Ann Arbor, MI, USA)

University of Michigan

Lead Software Engineer (Anywhere)

Right Side Up

Data Engineer (Chicago, IL, USA)

Aquatic Capital Management

More Python Jobs >>>

Articles & Tutorials

Python Testing With doctest

Python’s doctest module allows you to write unit tests as REPL-like sessions in your docstrings. Learn how to write and execute doctest code. Also available in video.
MIKE DRISCOLL

Handling Retries in Python Requests

When coding with requests and urllib3, you can automatically retry failed connections using requests.adapters.HTTPAdapter and urllib3’s Retry. Don’t code retry loops manually; learn how to take advantage of the features these libraries already provide.
MARKKU LEINIÖ • Shared by Markku Leiniö
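The basic wiring looks roughly like this; the retry count, backoff factor, and status codes below are illustrative choices, not taken from the article:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry up to 3 times on connection errors and on 502/503/504 responses,
# sleeping 0.5s, 1s, 2s between attempts (exponential backoff).
retry = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503, 504])

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))

# Any request made through this session now retries automatically, e.g.:
# session.get("https://example.com/api")
```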

Merge Faster with WorkerB for PRs Chrome Extension


The average pull request sits idle for two days! Add context to your PR & merge faster with WorkerB magic links. Get estimated review time, number of changes, & related issues in one click! Install it today →
LINEARB sponsor

Understanding Train Test Split

The train-test split methodology is useful for supervised machine learning with a given data set. It helps you estimate how well your model will perform on data it wasn’t trained on. Learn how to use it with Python and scikit-learn.
MICHAEL GALARNYK • Shared by Michael Galarnyk
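The core idea can be sketched in a few lines of plain Python; scikit-learn’s train_test_split does the same job with stratification and NumPy support on top, so treat this helper as a simplified stand-in:

```python
import random

def simple_train_test_split(data, test_size=0.25, seed=0):
    """Shuffle a copy of the data, then slice off a held-out test set."""
    rng = random.Random(seed)  # seeded for reproducible splits
    shuffled = list(data)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_size)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

train, test = simple_train_test_split(range(100), test_size=0.2)
print(len(train), len(test))  # 80 20
```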

Pagination for a User-Friendly Django App

In this tutorial, you’ll learn how to serve paginated content in your Django apps. Using Django pagination can significantly improve your website’s performance and give your visitors a better user experience.
REAL PYTHON

Code Quality Tools in Python

The article describes what code quality means and introduces some cool tools to improve your Python, including a variety of linters, formatters, and IDE tools.
DOLLAR DHINGRA • Shared by Dollar Dhingra

Notes on Debugging

All programmers have to learn how to do it, and like all skills it takes practice. Learn some hints and approaches to the bane of us all: debugging.
IONEL CRISTIAN MĂRIEȘ

CData Software — The Easiest Way to Connect Python With Data

Connect, Integrate, & Automate your data from any other application or tool in real-time, on-premise or cloud, with simple data access to more than 250 cloud applications and data sources. Learn more at cdata.com
CDATA SOFTWARE sponsor

MicroPython in Docker Containers

Want to play with MicroPython without a board? Learn how to use the Unix port of MicroPython in a Docker container to test out your code.
BHAVESH KAKWANI

Incorporating Julia Into Python Programs

Learn what you need to get Julia running inside your Python programs, using PyJulia, PyCall, and how to set up your environments.
PETER BAUMGARTNER

Projects & Code

pet-python-startrek: 1977 Commodore PET Star Trek Remake

GITHUB.COM/BLOGMYWIKI

MNE: Explore and Visualize Neurophysiological Data

MNE.TOOLS

slipcover: Near Zero-Overhead Python Code Coverage Tracking

GITHUB.COM/PLASMA-UMASS

exceptionite: Make Prettier Exceptions a Cinch

GITHUB.COM/MASONITEFRAMEWORK

Real Time Multiplayer Bingo Game Using Django Channels

GITHUB.COM/LEARNINGNOOBI

Events

PyCon US 2022

April 27 to May 6, 2022
PYCON.ORG

STL Python

May 4, 2022
MEETUP.COM

Weekly Real Python Office Hours Q&A (Virtual)

May 4, 2022
REALPYTHON.COM

Heidelberg Python Meetup

May 4, 2022
MEETUP.COM

Canberra Python Meetup

May 5, 2022
MEETUP.COM

PyCon Kenya Conference 2022

May 6 to May 8, 2022
PYCONKE.ORG


Happy Pythoning!
This was PyCoder’s Weekly Issue #523.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

May 03, 2022 07:30 PM UTC


Mike Driscoll

Announcing: The Python 101 Video Course

I am happy to announce that I am creating a Python 101 video course, which is based on Python 101: 2nd Edition.

The course will eventually include videos covering all the chapters in the book. It is launching with 13 videos that run 168+ minutes!

The following link will give you $10 off!

Purchase Now

Python 101 Video Series

What You Get

Purchase Now

The post Announcing: The Python 101 Video Course appeared first on Mouse Vs Python.

May 03, 2022 02:15 PM UTC


Real Python

Testing Your Code With pytest

Testing your code brings a wide variety of benefits. It increases your confidence that the code behaves as you expect and ensures that changes to your code won’t cause regressions. Writing and maintaining tests is hard work, so you should leverage all the tools at your disposal to make it as painless as possible. pytest is one of the best tools you can use to boost your testing productivity.
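Part of pytest’s appeal is how little ceremony a test needs: a test is just a function whose name starts with test_ containing a bare assert (the multiply function here is invented for illustration):

```python
# test_calc.py: pytest collects files named test_*.py and runs
# every function whose name starts with test_.
def multiply(a, b):
    return a * b

def test_multiply():
    # pytest rewrites this assert to report both operands on failure
    assert multiply(3, 4) == 12
```

Running `pytest test_calc.py` discovers and executes the test with no boilerplate, no test classes, and no special assertion methods required.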

In this video course, you’ll learn:



May 03, 2022 02:00 PM UTC


EuroPython

EuroPython April 2022 Newsletter

Hello fellow Pythonistas,

We hope you all are enjoying the longer daylight and the warmer weather that April has brought to us (in the northern hemisphere anyway). April also brings a new newsletter packed with updates!
We are just over 70 days away from the conference, and our volunteers are working hard to put together the best EuroPython ever. Without further ado, here is our update.

📝EuroPython Society Update

✈Visa

If you are planning to attend EuroPython in person, you might need a visa to enter Ireland. As a note, Ireland is part of the EU but it is outside of the Schengen zone. This means if you are from outside of the EU but you have a Schengen visa, you may still need a visa to get in.

Please, double check your case and make sure you have all the documentation in order before travelling to Ireland. If the visa process requires a support letter, we can do that too! Just head to https://ep2022.europython.eu/visa to request the support letter for your visa application!

🚸Childcare service

If you would like to attend EuroPython in person but worry about childcare, fear not, because we have the right solution for you!

We will be providing childcare service at the venue. The best part is that the service is free! Make sure to specify how many children will require childcare and we will take care of them, well, not we, but qualified professionals.

Also keep in mind that we are planning a family friendly mini Makers Fest. Now you have no excuses, you can bring your kids and hack a project together!

🍀EuroPython 2022 Conference Update

📜Programme

The Call for Proposals closed on April 3rd, and we are excited to announce that we received a record-breaking 429 proposals! Thanks to all the submitters for their time and effort in giving us so much to look forward to.

After closing the CfP, we kicked off two parallel reviews of these proposals:

The community voting closed in the third week of April, and we were amazed by the community response: there were a whopping 24,000 votes! This surpassed all our expectations, and we’d like to thank everyone for putting in the time to cast their votes!

In parallel, 35 experienced reviewers have put in ~700 reviews across all our proposals. The programme team will run another round of reviews once the first round of acceptances is sent out.

The programme team is now consolidating the community’s preferences and panel feedback to curate talks for the first round of acceptances.

🔥Panel discussions @ EuroPython

The programme team is working double time to put together collaborations to engage with the broader community. One such collaboration is with the core developers to put together a panel discussion on all things CPython and beyond. They are working hard to iron out the details of the panel. More details soon.

P.S. All of our Early Bird tickets are now sold out. We’re only left with 7 education tickets!

🚨
Feeling the FOMO? Grab your ticket to EuroPython now!! https://ep2022.europython.eu/tickets

🚀Financial Aid

Towards our commitment to diversity & inclusion, we’re running a Financial Aid Programme to help individuals who would otherwise not be able to attend/speak at the conference. If you need help attending the conference, don’t hesitate to apply for help: https://europython.eu/finaid

👋
The Finaid team will make every effort to send a timely decision to applicants who need to apply for a visa. Got any questions? Hit us up at finaid@europython.eu

💶Call for Sponsors

Big shoutout to our first confirmed sponsors Sendcloud, Ebury and Channable! Thank you for your support and we cannot wait to see you at your booths!

We are privileged to have many other fantastic companies who are interested in sponsoring EuroPython this year.

🔍
We are still looking for a Keystone sponsor. Could your company be the next Keystone sponsor and achieve the highest visibility in front of one of the largest and most diverse Python communities?

Apart from the standard sponsor packages, there are so many other ways you can support the conference: be a childcare or Financial Aid sponsor, help us organise a Django Girls Workshop, or sponsor a gourmet coffee stall for a day! You can find out all the fantastic standalone options here.

If you are interested in sponsoring EuroPython 2022, head to https://ep2022.europython.eu/sponsor and dig into the details of every sponsorship level.

If you still have questions, write to us at sponsoring@europython.eu

❣Irish Community Mixer @ Dublin

Ireland thrives with communities and EuroPython wouldn’t be the same without the support of these Community Partners:

The ever-growing list of Community Partners can be found at https://ep2022.europython.eu/community-partners

🗣Events @ EuroPython

EuroPython is much more than *just* training, talks & keynotes. Every year we run multiple varied events for our community and this year is no exception. Read on below to get a glimpse of what’s waiting for you in Dublin!

🥙Community Mixer Lunch @ Dublin

Organisers of community conferences and events across Europe: we invite you to grab lunch with us in Dublin! Let’s all get together and share the joys and pains of running events. In our experience, conversations flow better over a nice meal. Join us to share ideas and cultivate cross-community relationships.

The lunch is planned for 14 or 15 July. Watch this space!

👧Django Girls Workshop

If you identify as a woman and want to learn how to make websites, we have good news for you! We are holding a one-day workshop for beginners!

It will take place on Monday 11th July at Convention Centre Dublin in the heart of Dublin city: https://djangogirls.org/en/dublin/  

Applications to attend are open until July 2nd. We’ve only got 30 slots, so don’t wait until the last minute!

If you are a Django expert, why not join us as a coach? Submit your interest via this form, and we will be in touch. For any other questions, contact us at dublin@djangogirls.org.

Please note that you do not need a EuroPython conference ticket to attend the Django Girls workshop.

Makers Fest

Learning new libraries and new features of Python is great. But what about showing off cool things you’ve been working on, running a demo, or simply talking about a pet project you’re passionate about? If you’re a Maker, an Educator, or just someone who is interested in breaking and building things, then this fest is for you!

There are no limits to the shape of the project: it could be an automation put together with a Raspberry Pi, or an AI-powered music composer. Bottom line: any and every project is encouraged.

Register your interest by filling out this form: https://forms.gle/xTdpFJ2rV8iqmMCb9

🤗Trans*Code


After nearly 3 years of virus hiatus, Trans*Code will be returning! We are delighted that EuroPython will be hosting a Trans*Code event in Dublin in July!

Trans*Code is an international hack event series focused on drawing attention to transgender issues and opportunities through informal, topic-focused hackdays. Coders, designers, activists, and community members not currently working in technology are all encouraged to participate.

It is a free full day workshop & hackday open to trans and non-binary folk, allies, coders, designers and visionaries of all sorts. Stay tuned for more details.

Beginners Day

If you are new to Python or not so familiar with it, don’t worry: we’ve got you covered. Join us at Beginner’s Day: a day covering the basics of Python so you can fully enjoy the conference ahead.

Please bear with us while we define the bells and whistles of this workshop including how to join. Stay tuned!

💡Pew Pew Workshop

Following the huge fun and success of the PewPew workshop at EuroPython 2019, you again have the chance to join the PewPew game console’s creator, Radomir Dopieralski, who will be running a workshop on how to program the PewPew with CircuitPython.

If you are an experienced developer, we are looking for two more coaches to help run the workshop. Please drop us an email at programme@europython.eu.

💖EuroPython @ PyCon US

PyCon US just wrapped up, and some of our EuroPython team members (Cheuk Ting Ho, Naomi Ceder, Patrick Armino, Sangarshanan Veera, Sebastian Zeeff) attended and presented at the conference. Here are some captures from the conference (in no particular order):


Did you come across them during the hallway track, or enjoy their talks? Let them and us know on Twitter or by email!

🎗️Upcoming Events

🗓️
If you have a cool Python event and want to be featured, hit the reply button and write to us!

Python Ireland Monthly Meetup

The next Python Ireland monthly meetup will be held online on Wed 11th May, 6:30 PM to 8:30 PM.

The speaker is Jeremiah Paige, and he will talk about “Invisible Walls: Isolating Your Python”: stop building projects that only “work on my machine”, and learn how to isolate your Python application by executing it in an isolated, reproducible environment that extends beyond the code you write.

More details of the event can be found here: https://www.meetup.com/pythonireland/events/kqwjvrydchbpb/

PyLadies Dublin May Meetup

The next PyLadies Dublin meetup will be on Tue 21st May, a collaboration with Women in AI Ireland showcasing projects from the recent WaiPRACTICE programme. This will be our first in-person event since Covid restrictions began two years ago, and it will be hosted by Dogpatch Labs. Food will be provided thanks to Inscribe AI.

More details: https://www.meetup.com/PyLadiesDublin/events/285567337/

TOG Hackerspace (Dublin)

It’s “Bring your laptop to TOG’s Open Coding Night” on Tue 3rd May.

More details: https://www.meetup.com/Tog-Dublin-Hackerspace/events/ggzmtsydchbfb/

Cambridge Python User Group

At this virtual event on Tue 3rd May, Alexandre Faget will be leading a session on testing with pytest.

More details: https://www.meetup.com/CamPUG/events/285277862/

PyLadies Berlin Meetup

The next PyLadies Berlin meetup will be on Tue May 17 with talks from “Exploring our community” by Jessica Greene, “Reproducible machine learning projects with DVC and Poetry” by Doreen and “Python’s tale of concurrency” by Pradhvan Bisht.

Details: https://www.meetup.com/PyLadies-Berlin/events/285313817/

🌍Special Past Event Recap

PyCamp - to go camping in the pythonic way 🐍

Nature, OSS, and Python friends - this defines what PyCamp is.

After many years of PyCamp events in Argentina, the event had its first European edition this year. It brought together 25 people spending four days doing what they love most: collaborating with others through code. The location chosen was the beautiful region of Girona near Barcelona, Spain. Surrounded by nature, our Pythonistas could choose any project or workshop to participate in.

From translating Japanese manga to building bots and making music with Python, the projects were interesting and fun. If no project is to your taste, you can always propose one yourself. Not a Pythonista yourself? No problem at all. In this edition, we had people talking about DevOps tools and JavaScript frameworks, so feel free to bring different topics to the camp. The evenings were filled with games, karaoke, and good chats. Living and collaborating with others makes this, it goes without saying, a true community event. So, see you at the next PyCamp?

🐍Cool Python & Friends Projects

📢 Know a cool Python project? Hit the reply button and write to us!

Memray - Memray is a memory profiler for Python. It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself.

Gooey - Turn (almost) any Python 3 console program into a GUI application with one line.

Polars - Polars is a blazingly fast DataFrames library implemented in Rust using Apache Arrow Columnar Format as a memory model.

DeepMind AUX - AUX, built on top of JAX, provides audio processing functions and tools for JAX. It is a sister library of PIX, which is designed for image processing in JAX. Likewise, all operations in AUX can be optimised through jax.jit.

PyScript - PyScript is a Pythonic alternative to Scratch, JSFiddle, or other "easy to use" programming frameworks, making the web a friendly, hackable place where anyone can author interesting and interactive applications.

May 03, 2022 11:40 AM UTC


Python Bytes

#282 Don't Embarrass Me in Front of The Wizards

Watch the live stream: https://www.youtube.com/watch?v=tOA5uJthE14

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training: https://training.talkpython.fm/
  • Test & Code podcast: https://testandcode.com/
  • Patreon supporters: https://www.patreon.com/pythonbytes

Brian #1: pyscript - https://www.pyscript.net/

  • Python in the browser, from Anaconda. Repo: https://github.com/pyscript/pyscript
  • Announced at PyCon US
  • “During a keynote speech at PyCon US 2022, Anaconda’s CEO Peter Wang unveiled quite a surprising project — PyScript. It is a JavaScript framework that allows users to create Python applications in the browser using a mix of Python and standard HTML. The project’s ultimate goal is to allow a much wider audience (for example, front-end developers) to benefit from the power of Python and its various libraries (statistical, ML/DL, etc.).” — from a nice article on it, “PyScript — unleash the power of Python in your browser”: https://towardsdatascience.com/pyscript-unleash-the-power-of-python-in-your-browser-6e0123c6dc3f
  • PyScript is built on Pyodide (https://pyodide.org/en/stable/), a port of CPython based on WebAssembly.
  • Demos are cool.
  • Note included in the README: “This is an extremely experimental project, so expect things to break!”

Michael #2: Memray from Bloomberg - https://github.com/bloomberg/memray

  • Memray is a memory profiler for Python.
  • It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself.
  • Works both via the CLI and focused app calls.
  • Memray can help with the following problems:
      • Analyze allocations in applications to help discover the cause of high memory usage.
      • Find memory leaks.
      • Find hotspots in code which cause a lot of allocations.
  • Notable features:
      • 🕵️‍♀️ Traces every function call so it can accurately represent the call stack, unlike sampling profilers.
      • ℭ Also handles native calls in C/C++ libraries so the entire call stack is present in the results.
      • 🏎 Blazing fast! Profiling causes minimal slowdown in the application. Tracking native code is somewhat slower, but this can be enabled or disabled on demand.
      • 📈 It can generate various reports about the collected memory usage data, like flame graphs.
      • 🧵 Works with Python threads.
      • 👽🧵 Works with native threads (e.g. C++ threads in native extensions).
  • Has a live view in the terminal: https://bloomberg.github.io/memray/run.html#id3
  • Linux only

Brian #3: pytest-parallel - https://github.com/browsertron/pytest-parallel

  • I’ve often sped up tests that can be run in parallel by using -n from pytest-xdist.
  • I was recommending this to someone on Twitter, and Bruno Oliveira suggested a couple of alternatives. One was pytest-parallel, so I gave it a try.
  • pytest-xdist runs using multiprocessing.
  • pytest-parallel uses both multiprocessing and multithreading.
  • This is especially useful for test suites containing threadsafe tests, that is, mostly pure software tests. Lots of unit tests are like this; system tests are often not.
  • Use the --workers flag for multiple processes; --workers auto works great.
  • Use --tests-per-worker for multithreading; --tests-per-worker auto lets it pick.
  • Very cool alternative to xdist.

Michael #4: Pooch: A friend for data files - https://www.fatiando.org/pooch/v1.6.0/index.html

  • via Matthew Fieckert
  • Just want to download a file without messing with requests and urllib?
  • Who is it for? Scientists/researchers/developers looking to simply download a file.
  • Pooch makes it easy to download a file (one function call). On top of that, it also comes with some bonus features:
      • Download and cache your data files locally (so they’re only downloaded once).
      • Make sure everyone running the code has the same version of the data files by verifying cryptographic hashes.
      • Multiple download protocols (HTTP/FTP/SFTP) and basic authentication.
      • Download from Digital Object Identifiers (DOIs) issued by repositories like figshare and Zenodo.
      • Built-in utilities to unzip/decompress files upon download.
  • file_path = pooch.retrieve(url)

Extras

Michael:

  • New course! Up and Running with Git - A Pragmatic, UI-based Introduction: https://training.talkpython.fm/courses/up-and-running-with-git-a-pragmatic-ui-based-introduction

Joke:

  • Don’t embarrass me in front of the wizards: https://www.reddit.com/r/ProgrammerHumor/comments/uh8rsb/happens_to_the_best_of_us/
  • Michael’s crashing GitHub (https://twitter.com/mkennedy/status/1520181145261928448) is embarrassing him in front of the wizards!

May 03, 2022 08:00 AM UTC


Tryton News

Tryton Release 6.4

We are proud to announce the 6.4 release of Tryton.
This release provides many bug fixes, performance improvements and some fine tuning. What is also significant is the addition of 9 new modules.
You can give it a try on the demo server, use the docker image or download it here.
As usual migration from previous series is fully supported. No manual operations are required.

Here is a list of the most noticeable changes:

Changes for the User

It is now possible for modules to display a notification message from the server while the user is filling in a form. This is already used by the sale_stock_quantity module to display a message when the user selects a product whose forecast quantity is not enough for the order.

Users can now choose which optional columns are displayed on list and tree views. All modules have been reviewed to make non-essential columns optional and thus provide a lean interface by default.

Some views can now be used to edit but not create new records. This can be used, for example, to set up an editable list that allows the data to be modified, while creating a new record will always use the form view.

The CSV import now skips empty rows inside One2Many fields. This makes it possible to import several One2Many fields of different lengths in the same file.

The CSV import error messages have been improved to include the model, field and column. This makes it much easier to find and solve problems with the CSV data.

More (click for more details)

Web Client

Reference fields can now be opened from the list and tree views like Many2One fields. They are rendered as a link which opens a new tab using the form of the target model.

Desktop Client

CSV exports encoded in UTF-8 now include, by default, the Byte Order Mark to increase compatibility with other systems.

The multi-selection widget now uses the same default selection behavior as other lists. This resolves an inconsistency in the behavior.

Accounting

The reconciliation wizard now has an option to automatically reconcile the default suggestions. This speeds up the process for accounting with a lot of entries when the system is well configured.

Similar to the debit type, we now also have an optional credit type on accounts. Of course an account can only have one optional debit or credit type.

The general ledger now displays, by default, debit/credit columns only when there are actually lines in the account for the period. It also displays the number of lines.

We now use the invoice date (instead of the accounting date) to enforce the sequence order for customer invoices. This is more flexible and is still consistent with most countries’ rules.

When interactively validating a supplier invoice with the same reference as an existing one, Tryton raises a warning as the user may be entering the same invoice twice.

Lines in a payable or receivable account can now only be added to a payment if they have a maturity date. This avoids creating payments for purely accounting lines.

The receivable payments can now be processed without needing to be approved first, just submitted. This simplifies the workflow for receiving payments like checks where there is no need for a second approval.
It is also now possible to edit the amount of a payment that is in a processing state. This is because sometimes a different amount is read from a check compared to the amount read by the bank.

We no longer create dunnings for lines with a pending payment.

It is no longer possible to select reconciled payments or groups when entering a statement. This simplifies the selection task for the user, and for the rare case where they still need to select a reconciled payment, they can still unreconcile them before selection.

The clearing line of a payment is now automatically reconciled with all the statement lines linked to it.

The user can now choose the allocation method to apply to the shipment cost.

More (click for more details)

Banking

Tryton can now fill in or create the related bank from an IBAN.

When searching for a bank name, Tryton also searches on the BIC.

Party

The country name on a printed address is always shown in English in order to follow the international standard.

The SIREN and SIRET codes are now managed as identifiers on the party.

A party identifier can now be linked to one of the addresses of the party. The SIRET number uses this new feature.

The “autonomous city” is now allowed as subdivision for Spain.

All the lines of the street are now used as part of the record name of an address.

Product

It is now forbidden to decrease the number of digits of a unit of measure. This prevents invalidating existing quantities linked to the unit.

We now warn users who try to deactivate a product that still has stock.

Production

The stock move form now also shows, where applicable, the production it is linked to.

Purchase

It is now possible to define a default currency for each supplier.

Sale

It is now possible to define a default currency for each customer.

The origin name of an invoice line for advance payment is now filled in with the advance payment condition name.
The advance payments are now recalled with a negative quantity instead of a negative price.

The opportunities reports now use real date fields to display the month instead of two separate year and month fields. This improves the search possibilities.

Sales made by POS are now included in the general sales reports.

When registering cash change given using the POS, we use a negative debit or credit in the accounts. This prevents artificially increasing the totals.

A notification is now displayed directly to the user when entering in a sale of goods whose forecast quantity is not high enough to cover the sale.

Stock

Tryton now also recalculates the cost price on the moves of drop shipments.

The assignation process now uses the lot number as a criterion when it is populated.

Upward and downward traces have been added to stock lots to improve lot traceability.

It is now possible to select the UPS notification service.

The forecasts are now applied to all the stock supplies instead of only the purchase requests.

More (click for more details)

Web Shop

Tryton now supports editing orders from Shopify.

New Modules

Account Spanish SII

The Account Spanish SII Module allows sending invoices to the SII portal. This is a legal requirement for some Spanish companies.

Account Invoice Watermark

The Account Invoice Watermark Module adds a draft or paid watermark to the printed invoice.

Account Receivable Rule

The Account Receivable Rule Module defines rules to reconcile receivables between accounts.

Account Stock Shipment Cost Weight

The Account Stock Shipment Cost Weight Module adds “By Weight” as an allocation method for shipment costs.

Account Tax Non-Deductible

The Account Tax Non-Deductible Module allows the definition of non-deductible taxes and adds reports for them.

Purchase Product Quantity

The Purchase Product Quantity Module permits enforcing minimum and rounded quantities to be purchased per supplier from purchase requests.

Sale Invoice Date

The Sale Invoice Date Module fills in the invoice date for invoices created by sales.

Sale Product Quantity

The Sale Product Quantity Module permits enforcing minimum and rounded quantities to be sold per product.

Stock Shipment Cost Weight

The Stock Shipment Cost Weight Module adds “By Weight” as an allocation method for shipment costs on the carrier.

Changes for the System Administrator

The CORS configuration is now also applied to the root path.

Tryton now automatically retries sending emails if there is a temporary failure status.

We removed the password validation based on entropy. It was not a good measure of password robustness.

The login methods receive the options ip_address and device_cookie.

Country

The script to load postal codes now uses the Tryton CDN for more reliability.

We now support pycountry 22.1.10.

Changes for the Developer

We now use the unittest discover method as a replacement for the deprecated test command from setuptools.

The documentation now contains a tutorial on how to create a Tryton module.

The models now have an on_change_notify method which is used to display messages in the client while the user is filling in a record.

ModelStorage now has a validate_fields method which permits validation to occur only when specific fields have been modified. This is useful if the validation is expensive. All the modules have been reviewed to take advantage of this new feature.

The depends of a Field are now Python sets. It is also no longer necessary to define depends for the states, domain and digits expressions; Tryton calculates them automatically. However, it is still needed for the context if you want to be sure that it will always be set.

We include in the view only the depends fields that are needed for that type of view (editable or not).

We prevent creating or deleting singletons. The corresponding actions are disabled in the clients.

The Reference fields use a dictionary for domain and search_order. The keys are the names of the target models.

It is possible to define a field on a tree view as optional. The user will then have the choice of whether to display it or not.

The creatable attribute on tree and form views allows defining whether the view can be used to create new records or not. If not the client will switch to another view from which records can be created.

The local cache of instances created by Function fields is now populated with the values that have already been read. This speeds up the calculation of these fields.

The values of Function fields are now cached for the duration of the transaction if the transaction is read-only.

We use the JSONB column type to store Dict fields on PostgreSQL backend.

More (click for more details)

Web Client

We now use ECMAScript version 6.

Desktop Client

We added an option to define the logging output location.

Accounting

A date must always be set in order to calculate taxes.

We enforce the same type for the children of an account with a type set.

A unique Reference field is now used on the statement line instead of multiple Many2One fields.

More (click for more details)

Banking

We enforce the uniqueness of the IBAN and permit only one per bank account.

More (click for more details)

Country

The script to load subdivisions no longer fails for unknown subdivision types.

Web Shop

A route has been added to support the Shopify webhook for orders. This allows quicker order synchronization.

We removed the API backoff time and now support request retries using the Shopify Retry-After header.

We modify only the metafields managed by Tryton.

3 posts - 1 participant

Read full topic

May 03, 2022 06:00 AM UTC

May 02, 2022


Glyph Lefkowitz

Inbox Zero, Cost: Zero

One consistent bit of feedback that I’ve received on my earlier writing about email workflow is that I didn’t include a concrete enough set of instructions for getting started with task-management workflow, particularly with low-friction options that are available for people who don’t necessarily have $100 per year to drop on the cadillac of task-management applications.

Given that the piece seems to be enjoying a small resurgence of attention, I’ve significantly expanded the “Make A Place For Tasks” section of that article, with:

It was nice to be doing this update now, because in the years since that piece was published, almost every major email application has added task-management features, or upgraded them into practical usability; gone are the times when properly filing your emails into clearly-described tasks was an esoteric feature that you needed expensive custom software for.

May 02, 2022 11:27 PM UTC


Python Morsels

Unicode character encodings

When working with text files in Python, it's considered a best practice to specify the character encoding that you're working with.
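As a quick, self-contained sketch of that best practice (the temporary file name here is just an example), passing encoding explicitly to open() keeps the program from depending on whatever the platform's default encoding happens to be:

```python
import os
import tempfile

# A throwaway file for demonstration purposes
path = os.path.join(tempfile.mkdtemp(), "my_file.txt")

# Name the encoding explicitly, for writing and reading alike
with open(path, mode="w", encoding="utf-8") as f:
    f.write("This is a file ✨\n")

with open(path, encoding="utf-8") as f:
    print(f.read())  # This is a file ✨
```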

Table of contents

  1. All input starts as raw bytes
  2. Encoding strings into bytes
  3. Decoding bytes into strings
  4. Specifying a character encoding when opening files
  5. Be careful with your character encodings
  6. Summary

All input starts as raw bytes

When you open a file in Python, the default mode is r or rt, for read text mode:

>>> with open("my_file.txt") as f:
...     contents = f.read()
...
>>> f.mode
'r'

This means that when we read our file, we'll get back strings that represent text:

>>> contents
'This is a file ✨\n'

But that's not what Python actually reads from disk.

If we open a file with the mode rb and read from it, we'll see what Python sees, that is, bytes:

>>> with open("my_file.txt", mode="rb") as f:
...     contents = f.read()
...
>>> contents
b'This is a file \xe2\x9c\xa8\n'
>>> type(contents)
<class 'bytes'>

Bytes are what Python decodes to make strings.

Encoding strings into bytes

If you have a string …
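The core of that idea is the str.encode method, which is the inverse of bytes.decode. A minimal sketch of the round trip, reusing the string from the examples above:

```python
text = "This is a file ✨\n"

# str.encode produces the raw bytes that end up on disk
raw = text.encode("utf-8")
print(raw)  # b'This is a file \xe2\x9c\xa8\n'

# bytes.decode turns those bytes back into a string
print(raw.decode("utf-8") == text)  # True
```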

Read the full article: https://www.pythonmorsels.com/unicode-character-encodings-in-python/

May 02, 2022 03:00 PM UTC


Real Python

Python's min() and max(): Find Smallest and Largest Values

Python’s built-in min() and max() functions come in handy when you need to find the smallest and largest values in an iterable or in a series of regular arguments. Even though these might seem like fairly basic computations, they turn out to have many interesting use cases in real-world programming. You’ll try out some of those use cases here.

In this tutorial, you’ll learn how to:

  • Use Python’s min() and max() to find smallest and largest values in your data
  • Call min() and max() with a single iterable or with any number of regular arguments
  • Use min() and max() with strings and dictionaries
  • Tweak the behavior of min() and max() with the key and default arguments
  • Use comprehensions and generator expressions as arguments to min() and max()

Once you have this knowledge under your belt, then you’ll be prepared to write a bunch of practical examples that will showcase the usefulness of min() and max(). Finally, you’ll code your own versions of min() and max() in pure Python, which can help you understand how these functions work internally.

Free Bonus: 5 Thoughts On Python Mastery, a free course for Python developers that shows you the roadmap and the mindset you’ll need to take your Python skills to the next level.

To get the most out of this tutorial, you should have some previous knowledge of Python programming, including topics like for loops, functions, list comprehensions, and generator expressions.

Getting Started With Python’s min() and max() Functions

Python includes several built-in functions that make your life more pleasant and productive because they mean you don’t need to reinvent the wheel. Two examples of these functions are min() and max(). They mostly apply to iterables, but you can use them with multiple regular arguments as well. What’s their job? They take care of finding the smallest and largest values in their input data.

Whether you’re using Python’s min() or max(), you can use the function to achieve two slightly different behaviors. The standard behavior for each is to return the minimum or maximum value through straightforward comparison of the input data as it stands. The alternative behavior is to use a single-argument function to modify the comparison criteria before finding the smallest and largest values.

To explore the standard behavior of min() and max(), you can start by calling each function with either a single iterable as an argument or with two or more regular arguments. That’s what you’ll do right away.
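As a quick sketch of those two behaviors with an arbitrary list of words: the standard comparison orders strings alphabetically, while a key function such as len switches the criterion to string length.

```python
words = ["grape", "fig", "banana"]

# Standard behavior: strings compare alphabetically
print(min(words))           # 'banana'

# Alternative behavior: compare by a derived criterion instead
print(min(words, key=len))  # 'fig'
print(max(words, key=len))  # 'banana'
```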

Calling min() and max() With a Single Iterable Argument

The built-in min() and max() have two different signatures that allow you to call them either with an iterable as their first argument or with two or more regular arguments. The signature that accepts a single iterable argument looks something like this:

min(iterable, *[, default, key]) -> minimum_value

max(iterable, *[, default, key]) -> maximum_value

Both functions take a required argument called iterable and return the minimum and maximum values respectively. They also take two optional keyword-only arguments: default and key.

Note: In the above signatures, the asterisk (*) means that the following arguments are keyword-only arguments, while the square brackets ([]) denote that the enclosed content is optional.

Here’s a summary of what the arguments to min() and max() do:

Argument | Description | Required
iterable | Takes an iterable object, like a list, tuple, dictionary, or string | Yes
default | Holds a value to return if the input iterable is empty | No
key | Accepts a single-argument function to customize the comparison criteria | No

Later in this tutorial, you’ll learn more about the optional default and key arguments. For now, just focus on the iterable argument, which is a required argument that leverages the standard behavior of min() and max() in Python:

>>> min([3, 5, 9, 1, -5])
-5

>>> min([])
Traceback (most recent call last):
    ...
ValueError: min() arg is an empty sequence

>>> max([3, 5, 9, 1, -5])
9

>>> max([])
Traceback (most recent call last):
    ...
ValueError: max() arg is an empty sequence

In these examples, you call min() and max() with a list of integer numbers and then with an empty list. The first call to min() returns the smallest number in the input list, -5. In contrast, the first call to max() returns the largest number in the list, 9. If you pass an empty iterable to min() or max(), then you get a ValueError because there’s nothing to do with an empty iterable.
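As the argument table above notes, the default argument provides a fallback value for exactly this empty-iterable situation. A quick sketch:

```python
# Without default, an empty iterable raises ValueError
try:
    min([])
except ValueError as e:
    print(e)  # min() arg is an empty sequence

# With default, the fallback value is returned instead
print(min([], default=0))   # 0
print(max([], default=-1))  # -1

# default is ignored when the iterable isn't empty
print(min([3, 5], default=0))  # 3
```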

An important detail to note about min() and max() is that all the values in the input iterable must be comparable. Otherwise, you get an error. For example, numeric values work okay:

>>> min([3, 5.0, 9, 1.0, -5])
-5

>>> max([3, 5.0, 9, 1.0, -5])
9

These examples combine int and float numbers in the calls to min() and max(). You get the expected result in both cases because these data types are comparable.

However, what would happen if you mixed strings and numbers? Check out the following examples:

>>> min([3, "5.0", 9, 1.0, "-5"])
Traceback (most recent call last):
    ...
TypeError: '<' not supported between instances of 'str' and 'int'

>>> max([3, "5.0", 9, 1.0, "-5"])
Traceback (most recent call last):
    ...
TypeError: '>' not supported between instances of 'str' and 'int'
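One way past this TypeError is the key argument mentioned earlier: supply a function that maps every item to a comparable value. For this particular list, float does the job, and note that min() and max() still return the original, unconverted item:

```python
mixed = [3, "5.0", 9, 1.0, "-5"]

# Compare everything as floats; the original items are returned as-is
print(min(mixed, key=float))  # '-5'
print(max(mixed, key=float))  # 9
```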

Read the full article at https://realpython.com/python-min-and-max/ »



May 02, 2022 02:00 PM UTC


Codementor

Functions in Python

In this article, we are going to learn how functions work. Let's first discuss what exactly a Python function is. In Python, a function is a group of related statements that performs a specific...

May 02, 2022 01:59 PM UTC


Mike Driscoll

PyDev of the Week: Jyotika Singh

The PyDev of the Week this week is Jyotika Singh (@JyotikaSingh_). Jyotika is the maintainer of pyAudioProcessing and a speaker at multiple conferences. You can check out what Jyotika is up to by going to her GitHub profile.

Let's spend a few minutes getting to know Jyotika better!

Can you tell us a little about yourself (hobbies, education, etc): 

I work as the Director of Data Science at Placemakr and volunteer as a mentor at Data Science Nigeria and Women Impact Tech. I actively participate in conferences and webinars to share my knowledge and experiences with the Python and Data Science community, and with students aspiring to a career in software development and data science. As part of my research, I have been granted multiple patents in data science, algorithms, and marketing optimization techniques.

I graduated with a Master of Science degree from the University of California, Los Angeles (UCLA), specializing in Signals and Systems.

When not engaged in technology and coding, I enjoy sketching, painting, playing musical instruments, and at times enjoy my evenings at the beach.

Why did you start using Python? 

The first time I used Python was for a data scraping assignment during my MS in 2015. While the course didn't require me to use Python, I chose to take on the new language given its ease of writing and the availability of ML libraries, which I was just beginning to look into. As my MS progressed, I kept learning more Python and got hooked on its ease, functionality, and compatibility with other tools.

What other programming languages do you know and which is your favorite? 

I have primarily worked with MATLAB, C, Java, R, Golang, and Python. Python has hands-down been my favorite for a while now.

What projects are you working on now? 

For the next few months, I'm working on a few different projects.

  1. Pricing recommendation and optimization models
  2. Content-based recommendation systems
  3. Consumer review analysis and classification
  4. A reinforcement learning-based model for ROI optimization

Furthermore, I'm in the process of writing a book on industrial applications and implementations of Natural Language Processing.

How did you decide to write a book about NLP with Python? 

There is a known gap between what data science graduates master versus the needs of a data scientist in the industry. The majority of the projects start with the availability of data in a state that does not require much thought behind where the data comes from or what data is required. The most commonly available learning resources have attributes far from real-world applications.

For individuals in software development, technology, product management, or those that are new to data science or Natural Language Processing, learning about what to solve and how to proceed can be a challenging problem. Keeping this in mind, I decided to write this book that explains the application across 15 industry verticals. It is set to dig into practical implementations of many popular applications and contains actual code examples. This book will guide users to build applications with Python and highlight the reality of data problems and solutions in the real world.

Which Python libraries are your favorite (core or 3rd party)? 

os, collections, pandas, numpy, sklearn, pytorch, tensorflow, keras, spacy, nltk, matplotlib, and my own pyAudioProcessing.

How did the pyAudioProcessing library come about? 

I was working on a unique audio classification problem. Given the popularity of ML tooling in Python, I was looking to build my audio models in Python. I noticed a large gap between the research and development happening in MATLAB versus the state of 3rd party tooling in Python around audio.

Different types of non-numeric data have different feature formation techniques that work well for representing the data numerically in a meaningful way. An example for text data would be TF-IDF. Similarly, audio data has its own, completely different set of feature formation techniques that represent the information numerically. I took it upon myself to mathematically construct audio features from raw audio in Python, and decided to open-source my work after recognizing the need. This gave rise to pyAudioProcessing. Today, you can extract features such as GFCC, MFCC, spectral features, and chroma features using pyAudioProcessing. Its integration with other 3rd party libraries helps with extracting audio visualizations, converting audio formats, building audio classification models using sklearn models, and using off-the-shelf audio classification models for some common tasks.

What challenges do you face as a maintainer of a Python package? 

Given a packed schedule with my full-time job, research, conferences, and mentorship volunteering, the most challenging bit is taking the time out for continuous development and keeping the library up-to-date with new research, needs, features, and compatibility with the latest Python releases. Contributors are always welcome!

Is there anything else you’d like to say? 

I would like to thank all the people who open-source their work that the community is able to leverage for their personal and professional projects. Also, a huge shout out to people who volunteer and make efforts to share their knowledge and findings via events organized by the Python and Data community.

I'm on Twitter at jyotikasingh_, follow me to catch my latest talks, work, and findings.

Thanks for doing the interview, Jyotika!

The post PyDev of the Week: Jyotika Singh appeared first on Mouse Vs Python.

May 02, 2022 12:30 PM UTC


Tryton News

Release of python-sql 1.4.0

We are proud to announce the release of the version 1.4.0 of python-sql.

python-sql is a library to write SQL queries in a pythonic way. It is mainly developed for Tryton but it has no external dependencies and is agnostic to any framework or SQL database.

In addition to bug-fixes, this release contains the following improvements:

python-sql is available on PyPI: python-sql · PyPI

1 post - 1 participant

Read full topic

May 02, 2022 10:43 AM UTC

Release of Relatorio 0.10.1

We are proud to announce the release of Relatorio version 0.10.1.

Relatorio is a templating library mainly for OpenDocument using also OpenDocument as source format.

This is a bug-fix release which:

The package is available at https://pypi.org/project/relatorio/0.10.1/
The documentation is available at https://relatorio.readthedocs.io/en/0.10.1/

1 post - 1 participant

Read full topic

May 02, 2022 10:39 AM UTC


Podcast.__init__

Accelerate Your Machine Learning Experimentation With Automatic Checkpoints Using FLOR


Summary

The experimentation phase of building a machine learning model requires a lot of trial and error. One of the limiting factors of how many experiments you can try is the length of time required to train the model which can be on the order of days or weeks. To reduce the time required to test different iterations Rolando Garcia Sanchez created FLOR which is a library that automatically checkpoints training epochs and instruments your code so that you can bypass early training cycles when you want to explore a different path in your algorithm. In this episode he explains how the tool works to speed up your experimentation phase and how to get started with it.
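The checkpoint-and-resume idea can be sketched in a few lines of plain Python; this is only an illustration of the concept, not FLOR's actual API.

```python
import os
import pickle

def train(epochs, checkpoint_dir, resume_from=None):
    """Toy training loop that checkpoints after every epoch and can
    resume from any saved checkpoint, skipping completed epochs."""
    os.makedirs(checkpoint_dir, exist_ok=True)
    state = {"epoch": 0, "weights": 0.0}
    if resume_from is not None:
        # Restore the saved state so the loop starts where it left off.
        with open(resume_from, "rb") as f:
            state = pickle.load(f)
    for epoch in range(state["epoch"], epochs):
        state["weights"] += 0.1          # stand-in for one epoch of training
        state["epoch"] = epoch + 1
        with open(os.path.join(checkpoint_dir, f"epoch-{epoch}.pkl"), "wb") as f:
            pickle.dump(state, f)
    return state
```

Resuming from the epoch-1 checkpoint and training to epoch 5 then costs only three epochs instead of five; automating that bookkeeping across experiments is the kind of saving described in the episode.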

Announcements

  • Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Your host as usual is Tobias Macey and today I’m interviewing Rolando Garcia about FLOR, a suite of machine learning tools for hindsight logging that lets you speed up model experimentation by checkpointing training data

Interview

  • Introductions
  • How did you get introduced to Python?
  • Can you describe what FLOR is and the story behind it?
  • What is the core problem that you are trying to solve for with FLOR?
    • What are the fundamental challenges in model training and experimentation that make it necessary?
    • How do machine learning researchers and engineers address this problem in the absence of something like FLOR?
  • Can you describe how FLOR is implemented?
    • What were the core engineering problems that you had to solve for while building it?
  • What is the workflow for integrating FLOR into your model development process?
  • What information are you capturing in the log structures and epoch checkpoints?
    • How does FLOR use that data to prime the model training to a given state when backtracking and trying a different approach?
  • How does the presence of FLOR change the costs of ML experimentation and what is the long-range impact of that shift?
    • Once a model has been trained and optimized, what is the long-term utility of FLOR?
  • What are the opportunities for supporting e.g. Horovod for distributed training of large models or with large datasets?
  • What does the maintenance process for research-oriented OSS projects look like?
  • What are the most interesting, innovative, or unexpected ways that you have seen FLOR used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on FLOR?
  • When is FLOR the wrong choice?
  • What do you have planned for the future of FLOR?

Keep In Touch

Picks

Closing Announcements

  • Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast for the latest on modern data management.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA

May 02, 2022 10:14 AM UTC


Zato Blog

Integrating with Salesforce in Python

Overview

Salesforce connections are one of the newest additions to Zato 3.2, allowing you to look up and manage Salesforce records and other business data. To showcase them, this article will create a sample Salesforce marketing campaign using nothing more than basic REST APIs combined with plain Python objects, such as dicts.

If you have not done it already, you can download Zato here.

Basic workflow

The scope of our work will be:

Creating Salesforce credentials

To be able to create a connection to Salesforce in the next step, we need a few credentials.

At runtime, based on this information, Zato will obtain the necessary authentication and authorization tokens itself, which means that you can focus only on the business side of the integration, not on its low-level aspects.

The process of obtaining the credentials needs to be coordinated with an administrator of your organization. To assist in that, the screenshots below explain where to find them.

The credentials are:

The username and password are simply the same credentials that can be used to log in to Salesforce:

Logging in to Salesforce

Consumer key and secret are properties of a connected app - this is a term that Salesforce uses for API clients that invoke its services. If you are already an experienced Salesforce REST API user, you may know the key and secret under their aliases of “client_id” and “client_secret” - these are the same objects.
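For reference, the consumer key and secret map onto the client_id and client_secret fields of Salesforce's OAuth2 username-password token request, which Zato issues for you behind the scenes. The sketch below only builds that request without sending it; the credential values are placeholders.

```python
from urllib.parse import urlencode

def build_token_request(username, password, consumer_key, consumer_secret,
                        domain="login.salesforce.com"):
    """Return the URL and form-encoded body of the OAuth2 password-grant
    request that exchanges Salesforce credentials for an access token."""
    url = f"https://{domain}/services/oauth2/token"
    body = urlencode({
        "grant_type": "password",
        "client_id": consumer_key,        # the connected app's consumer key
        "client_secret": consumer_secret, # the connected app's consumer secret
        "username": username,
        "password": password,
    })
    return url, body

# Placeholder values for illustration only.
url, body = build_token_request(
    "user@example.com", "secret", "MY_CONSUMER_KEY", "MY_CONSUMER_SECRET")
```

With a Zato connection definition in place you never build this request yourself, but it is useful to know which credential ends up in which field when coordinating with your Salesforce administrator.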

Finding the app manager in Salesforce

Note that when a connected app already exists and you would like to retrieve the key and secret, they will be available under the “View” menu option for the app, not under “Edit” or “Manage”.

Consumer key and secret for a connected app

Defining a Salesforce connection in Zato

With all the credentials in place, we can create a new Salesforce connection in Zato Dashboard, as below.

Salesforce connections menu in Zato Dashboard A form to define a Salesforce connection in Zato Dashboard

Authoring an integration service in Python

Above, we created a connection definition that lets Zato obtain session tokens and establish connections to Salesforce. Now, we can create an API service that will make use of such connections.

In the example below, we are using the POST REST method to invoke an endpoint that creates new Salesforce campaigns. In your own integrations, you can invoke any other Salesforce endpoint, using any REST method as needed, by following the same pattern: create a model with input fields, build a Python dict for the request to Salesforce, invoke the endpoint, and map the required fields from the Salesforce response to what your own service returns to its callers.

Note that we use a dataclass-based SimpleIO definition for the service. Among other things, although we are not going to do it here, this would let us offer API schema definitions for this and other services.

# -*- coding: utf-8 -*-

# stdlib
from dataclasses import dataclass

# Zato
from zato.server.service import Model, Service

# ###########################################################################

if 0:
    from zato.server.connection.salesforce import SalesforceClient

# ###########################################################################

@dataclass(init=False)
class CreateCampaignRequest(Model):
    name:    str
    segment: str

# ###########################################################################

@dataclass(init=False)
class CreateCampaignResponse(Model):
    campaign_id: str

# ###########################################################################

class CreateCampaign(Service):

    class SimpleIO:
        input  = CreateCampaignRequest
        output = CreateCampaignResponse

    def handle(self):

        # This is our input data
        input = self.request.input # type: CreateCampaignRequest

        # Salesforce REST API endpoint to invoke - note that Zato
        # will add a prefix to it containing the API version.
        path = '/sobjects/Campaign/'

        # Build the request to Salesforce based on what we received
        request = {
          'Name': input.name,
          'Segment__c': input.segment,
        }

        # .. create a reference to our connection definition ..
        salesforce = self.cloud.salesforce['My Salesforce Connection']

        # .. obtain a client to Salesforce ..
        with salesforce.conn.client() as client: # type: SalesforceClient

            # .. create the campaign now ..
            response = client.post(path, request)

        # .. and return its ID to our caller.
        self.response.payload.campaign_id = response['id']

# ###########################################################################

Creating a REST channel

Note that we assign HTTP Basic Auth credentials to the channel. In this manner, it is possible for clients of this REST channel to authenticate using a method that they are already familiar with, which simplifies everyone's work: it is Zato that deals with how to authenticate against Salesforce, whereas your API clients use the ubiquitous HTTP Basic Auth method.

REST connections menu in Zato Dashboard

Testing

The last step is to invoke the newly created channel:

$ curl http://api:password@localhost:17010/api/campaign/create -d '{"name":"Hello", "segment":"123"}'
{"campaign_id":"8901Z3VHXDTebEJWs"}
$

That is everything - you have just integrated with Salesforce and exposed a REST channel for external applications to integrate with!

Next steps

May 02, 2022 09:31 AM UTC


John Ludhi/nbshare.io

PySpark GroupBy Examples


In this notebook, we will go through the PySpark GroupBy method. For this exercise, I will be using the following data from Kaggle...
https://www.kaggle.com/code/kirichenko17roman/recommender-systems/data

If you don't have PySpark installed, install PySpark on Linux by clicking here.

In [ ]:
from pyspark.sql.functions import sum, col, desc, avg, round, count
import pyspark.sql.functions as F
from pyspark.sql import SparkSession
from pyspark.sql.types import *
spark = SparkSession \
    .builder \
    .appName("Purchase") \
    .config('spark.ui.showConsoleProgress', False) \
    .getOrCreate()

Let us look at the data first.

In [2]:
df = spark.read.csv(
    "/home/notebooks/kz.csv", 
    header=True, sep=",")
#show 3 rows of our DataFrame
df.show(3)
+--------------------+-------------------+-------------------+-------------------+--------------------+-------+------+-------------------+
|          event_time|           order_id|         product_id|        category_id|       category_code|  brand| price|            user_id|
+--------------------+-------------------+-------------------+-------------------+--------------------+-------+------+-------------------+
|2020-04-24 11:50:...|2294359932054536986|1515966223509089906|2268105426648170900|  electronics.tablet|samsung|162.01|1515915625441993984|
|2020-04-24 11:50:...|2294359932054536986|1515966223509089906|2268105426648170900|  electronics.tablet|samsung|162.01|1515915625441993984|
|2020-04-24 14:37:...|2294444024058086220|2273948319057183658|2268105430162997728|electronics.audio...| huawei| 77.52|1515915625447879434|
+--------------------+-------------------+-------------------+-------------------+--------------------+-------+------+-------------------+
only showing top 3 rows

In [3]:
df.columns
Out[3]:
['event_time',
 'order_id',
 'product_id',
 'category_id',
 'category_code',
 'brand',
 'price',
 'user_id']

This is transaction data.

PySpark Groupby Count

Let us count the number of unique transactions by category.

In [4]:
df.groupBy(['category_code']).count().show(5)
+----------------+-----+
|   category_code|count|
+----------------+-----+
|           13.87|11075|
|          350.67|    5|
|computers.ebooks|  884|
|           98.59|    2|
|            3.89| 6997|
+----------------+-----+
only showing top 5 rows

PySpark groupBy and count can be run on multiple columns.

In [5]:
df.groupBy(['category_code','brand']).count().show(5)
+--------------------+-------------------+-----+
|       category_code|              brand|count|
+--------------------+-------------------+-----+
|electronics.smart...|               oppo|36349|
|appliances.enviro...|            airline|   52|
|computers.periphe...|               sanc|  584|
|appliances.enviro...|            insight|   11|
|               11.55|1515915625481232307|    1|
+--------------------+-------------------+-----+
only showing top 5 rows

PySpark drop null follow by GroupBy

In [6]:
dfg = df.dropna().groupBy(['category_code'])
In [7]:
dfg.count().show(2)
+--------------------+-----+
|       category_code|count|
+--------------------+-----+
|    computers.ebooks|  398|
|computers.periphe...| 3053|
+--------------------+-----+
only showing top 2 rows

PySpark GroupBy and Aggregate

Most of the time, groupBy is followed by an aggregate method. Let us say we want to find the average price for each category. Here is how it can be done.

In [8]:
df.dropna().groupBy(['category_code']).agg({'price':'avg'}).show(5)
+--------------------+------------------+
|       category_code|        avg(price)|
+--------------------+------------------+
|    computers.ebooks| 199.6687185929649|
|computers.periphe...| 71.94989518506395|
|construction.tool...|  18.2120273065784|
|appliances.kitche...|43.298406940063074|
|electronics.video...| 401.3619130434783|
+--------------------+------------------+
only showing top 5 rows
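Conceptually, agg({'price': 'avg'}) computes a per-group sum and count; in plain Python, with made-up rows standing in for the DataFrame, the same reduction looks like this:

```python
from collections import defaultdict

# Made-up (category_code, price) rows standing in for the DataFrame.
rows = [
    ("computers.ebooks", 150.0),
    ("computers.ebooks", 250.0),
    ("electronics.tablet", 162.01),
]

totals = defaultdict(lambda: [0.0, 0])   # category -> [sum of prices, count]
for category, price in rows:
    totals[category][0] += price
    totals[category][1] += 1

avg_price = {cat: s / n for cat, (s, n) in totals.items()}
# avg_price["computers.ebooks"] -> 200.0
```

Spark runs this same sum-and-count reduction in parallel across partitions and then merges the partial results per group.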

Note that PySpark has named the average price column avg(price). We can rename the column after the aggregate method with the withColumnRenamed method.

In [9]:
df.dropna().groupBy(['category_code']).agg({'price':'avg'}).withColumnRenamed("avg(price)", "price").show(5)
+--------------------+------------------+
|       category_code|             price|
+--------------------+------------------+
|    computers.ebooks| 199.6687185929649|
|computers.periphe...| 71.94989518506395|
|construction.tool...|  18.2120273065784|
|appliances.kitche...|43.298406940063074|
|electronics.video...| 401.3619130434783|
+--------------------+------------------+
only showing top 5 rows

Another way to rename a column in PySpark is with the alias method.

In [10]:
df.dropna().groupBy(['category_code']).agg(avg('price').alias("avg_price")).show(3)
+--------------------+-----------------+
|       category_code|        avg_price|
+--------------------+-----------------+
|    computers.ebooks|199.6687185929649|
|computers.periphe...|71.94989518506395|
|construction.tool...| 18.2120273065784|
+--------------------+-----------------+
only showing top 3 rows

Pyspark Multiple Aggregate functions

We can also run multiple aggregate methods after groupBy. Note F.avg and F.max, which we imported above from pyspark.sql.functions:
import pyspark.sql.functions as F

In [11]:
df.dropna().groupBy(['category_code']).agg(F.avg('price'),F.max('price')).show(2)
+--------------------+------------------+----------+
|       category_code|        avg(price)|max(price)|
+--------------------+------------------+----------+
|     accessories.bag| 20.63646942148758|     97.20|
|accessories.umbrella|110.71249999999998|     99.28|
+--------------------+------------------+----------+
only showing top 2 rows

We can rename multiple columns using the toDF() method as shown below.

In [12]:
Data_list = ["category_code","avg_price","max_price"]
df.dropna().groupBy(['category_code']).agg(F.avg('price'),F.max('price')).toDF(*Data_list).show(2)
+--------------------+------------------+---------+
|       category_code|         avg_price|max_price|
+--------------------+------------------+---------+
|     accessories.bag| 20.63646942148758|    97.20|
|accessories.umbrella|110.71249999999998|    99.28|
+--------------------+------------------+---------+
only showing top 2 rows

Or we can use the alias method this way...

In [13]:
df.dropna().groupBy(['category_code']).agg(avg('price').alias("avg_price"),F.max('price').alias("max_price")).show(3)
+--------------------+------------------+---------+
|       category_code|         avg_price|max_price|
+--------------------+------------------+---------+
|     accessories.bag| 20.63646942148758|    97.20|
|accessories.umbrella|110.71249999999998|    99.28|
|     apparel.costume|21.384999999999998|    27.75|
+--------------------+------------------+---------+
only showing top 3 rows

PySpark GroupBy follow by Aggregate and Sort Method

Let us sort the table by max_price.

In [14]:
df.dropna().groupBy(['category_code']).agg(F.avg('price'),F.max('price')).toDF(*Data_list).sort('max_price').show(2)
+--------------+------------------+---------+
| category_code|         avg_price|max_price|
+--------------+------------------+---------+
|    kids.swing|            115.72|   115.72|
|apparel.tshirt|21.384516129032253|    23.13|
+--------------+------------------+---------+
only showing top 2 rows

PySpark GroupBy follow by Aggregate and Filter method

We can filter results using the filter method. The code below keeps only the categories whose average price is greater than 500.

In [15]:
dfg = df.dropna().groupBy(['category_code']).agg(F.avg('price').alias("avg_price"))
dfg.filter(dfg.avg_price> 500).show(4)
+--------------------+-----------------+
|       category_code|        avg_price|
+--------------------+-----------------+
|electronics.camer...| 670.243984962406|
|construction.tool...|513.4461206896547|
|  computers.notebook|571.6449383765361|
+--------------------+-----------------+

Conclusion


PySpark GroupBy is a very powerful method for data analysis. I hope the above examples give you enough to get started with PySpark GroupBy. Please email me if you would like me to add more examples on PySpark GroupBy.

May 02, 2022 07:39 AM UTC

May 01, 2022


Nikola

Nikola v8.2.2 is out!

On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v8.2.2. This is a bugfix release, whose only change is support for the latest version of Pygments.

What is Nikola?

Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and input in many popular markup formats, such as reStructuredText and Markdown — and can even turn Jupyter Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which is rebuilding only what has been changed).

Find out more at the website: https://getnikola.com/

Downloads

Install using pip install Nikola.

Changes

  • Compatibility with Pygments 2.12.0 (Issue #3617, #3618)

May 01, 2022 05:02 PM UTC


Zero to Mastery

Python Monthly Newsletter 💻🐍

29th issue of the Python Monthly Newsletter! Read by 25,000+ Python developers every month. This monthly Python newsletter covers the latest Python news so that you stay up-to-date with the industry and keep your skills sharp.

May 01, 2022 10:00 AM UTC


Python GUIs

Packaging PySide6 applications into a macOS app with PyInstaller (updated for 2022)

There is not much fun in creating your own desktop applications if you can't share them with other people, whether that means publishing it commercially, sharing it online, or just giving it to someone you know. Sharing your apps allows other people to benefit from your hard work!

The good news is there are tools available to help you do just that with your Python applications which work well with apps built using PySide6. In this tutorial we'll look at the most popular tool for packaging Python applications: PyInstaller.

This tutorial is broken down into a series of steps, using PyInstaller to build first simple, and then more complex PySide6 applications into distributable macOS app bundles. You can choose to follow it through completely, or skip to the parts that are most relevant to your own project.

We finish off by building a macOS Disk Image, the usual method for distributing applications on macOS.

You always need to build your app on your target system. So, if you want to create a Mac .app you need to do this on a Mac; for a Windows EXE you need to use Windows.

Example Disk Image Installer for macOS

If you're impatient, you can download the Example Disk Image for macOS first.

Requirements

PyInstaller works out of the box with PySide6 and as of writing, current versions of PyInstaller are compatible with Python 3.6+. Whatever project you're working on, you should be able to package your apps.

You can install PyInstaller using pip.

bash
pip3 install PyInstaller

If you experience problems packaging your apps, your first step should always be to update PyInstaller and the hooks package to the latest versions using

bash
pip3 install --upgrade PyInstaller pyinstaller-hooks-contrib

The hooks module contains package-specific packaging instructions for PyInstaller which is updated regularly.

Install in virtual environment (optional)

You can also opt to install PySide6 and PyInstaller in a virtual environment (or your application's virtual environment) to keep your environment clean.

bash
python3 -m venv packenv

Once created, activate the virtual environment by running from the command line —

bash
source packenv/bin/activate

Finally, install the required libraries. For PySide6 you would use —

bash
pip3 install PySide6 PyInstaller

Getting Started

It's a good idea to start packaging your application from the very beginning so you can confirm that packaging is still working as you develop it. This is particularly important if you add additional dependencies. If you only think about packaging at the end, it can be difficult to debug exactly where the problems are.

For this example we're going to start with a simple skeleton app, which doesn't do anything interesting. Once we've got the basic packaging process working, we'll extend the application to include icons and data files. We'll confirm the build as we go along.

To start with, create a new folder for your application and then add the following skeleton app in a file named app.py. You can also download the source code and associated files.

python
from PySide6 import QtWidgets

import sys

class MainWindow(QtWidgets.QMainWindow):

    def __init__(self):
        super().__init__()

        self.setWindowTitle("Hello World")
        l = QtWidgets.QLabel("My simple app.")
        l.setMargin(10)
        self.setCentralWidget(l)
        self.show()

if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    w = MainWindow()
    app.exec()

This is a basic bare-bones application which creates a custom QMainWindow and adds a simple widget QLabel to it. You can run this app as follows.

bash
python app.py

This should produce the following window (on macOS).

Simple skeleton app in PySide6

Building a basic app

Now we have our simple application skeleton in place, we can run our first build test to make sure everything is working.

Open your terminal (command prompt) and navigate to the folder containing your project. You can now run the following command to run the PyInstaller build.

bash
pyinstaller --windowed app.py

The --windowed flag is necessary to tell PyInstaller to build a macOS .app bundle.

You'll see a number of messages output, giving debug information about what PyInstaller is doing. These are useful for debugging issues in your build, but can otherwise be ignored. The output that I get for running the command on my system is shown below.

bash
martin@MacBook-Pro pyside6 % pyinstaller --windowed app.py
74 INFO: PyInstaller: 4.8
74 INFO: Python: 3.9.9
83 INFO: Platform: macOS-10.15.7-x86_64-i386-64bit
84 INFO: wrote /Users/martin/app/pyside6/app.spec
87 INFO: UPX is not available.
88 INFO: Extending PYTHONPATH with paths
['/Users/martin/app/pyside6']
447 INFO: checking Analysis
451 INFO: Building because inputs changed
452 INFO: Initializing module dependency graph...
455 INFO: Caching module graph hooks...
463 INFO: Analyzing base_library.zip ...
3914 INFO: Processing pre-find module path hook distutils from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks/pre_find_module_path/hook-distutils.py'.
3917 INFO: distutils: retargeting to non-venv dir '/usr/local/Cellar/python@3.9/3.9.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9'
6928 INFO: Caching module dependency graph...
7083 INFO: running Analysis Analysis-00.toc
7091 INFO: Analyzing /Users/martin/app/pyside6/app.py
7138 INFO: Processing module hooks...
7139 INFO: Loading module hook 'hook-PyQt6.QtWidgets.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7336 INFO: Loading module hook 'hook-xml.etree.cElementTree.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7337 INFO: Loading module hook 'hook-lib2to3.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7360 INFO: Loading module hook 'hook-PyQt6.QtGui.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7397 INFO: Loading module hook 'hook-PyQt6.QtCore.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7422 INFO: Loading module hook 'hook-encodings.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7510 INFO: Loading module hook 'hook-distutils.util.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7513 INFO: Loading module hook 'hook-pickle.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7515 INFO: Loading module hook 'hook-heapq.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7517 INFO: Loading module hook 'hook-difflib.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7519 INFO: Loading module hook 'hook-PyQt6.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7564 INFO: Loading module hook 'hook-multiprocessing.util.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7565 INFO: Loading module hook 'hook-sysconfig.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7574 INFO: Loading module hook 'hook-xml.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7677 INFO: Loading module hook 'hook-distutils.py' from '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks'...
7694 INFO: Looking for ctypes DLLs
7712 INFO: Analyzing run-time hooks ...
7715 INFO: Including run-time hook '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks/rthooks/pyi_rth_subprocess.py'
7719 INFO: Including run-time hook '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py'
7722 INFO: Including run-time hook '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py'
7726 INFO: Including run-time hook '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks/rthooks/pyi_rth_inspect.py'
7727 INFO: Including run-time hook '/usr/local/lib/python3.9/site-packages/PyInstaller/hooks/rthooks/pyi_rth_pyqt6.py'
7736 INFO: Looking for dynamic libraries
7977 INFO: Looking for eggs
7977 INFO: Using Python library /usr/local/Cellar/python@3.9/3.9.9/Frameworks/Python.framework/Versions/3.9/Python
7987 INFO: Warnings written to /Users/martin/app/pyside6/build/app/warn-app.txt
8019 INFO: Graph cross-reference written to /Users/martin/app/pyside6/build/app/xref-app.html
8032 INFO: checking PYZ
8035 INFO: Building because toc changed
8035 INFO: Building PYZ (ZlibArchive) /Users/martin/app/pyside6/build/app/PYZ-00.pyz
8390 INFO: Building PYZ (ZlibArchive) /Users/martin/app/pyside6/build/app/PYZ-00.pyz completed successfully.
8397 INFO: EXE target arch: x86_64
8397 INFO: Code signing identity: None
8398 INFO: checking PKG
8398 INFO: Building because /Users/martin/app/pyside6/build/app/PYZ-00.pyz changed
8398 INFO: Building PKG (CArchive) app.pkg
8415 INFO: Building PKG (CArchive) app.pkg completed successfully.
8417 INFO: Bootloader /usr/local/lib/python3.9/site-packages/PyInstaller/bootloader/Darwin-64bit/runw
8417 INFO: checking EXE
8418 INFO: Building because console changed
8418 INFO: Building EXE from EXE-00.toc
8418 INFO: Copying bootloader EXE to /Users/martin/app/pyside6/build/app/app
8421 INFO: Converting EXE to target arch (x86_64)
8449 INFO: Removing signature(s) from EXE
8484 INFO: Appending PKG archive to EXE
8486 INFO: Fixing EXE headers for code signing
8496 INFO: Rewriting the executable's macOS SDK version (11.1.0) to match the SDK version of the Python library (10.15.6) in order to avoid inconsistent behavior and potential UI issues in the frozen application.
8499 INFO: Re-signing the EXE
8547 INFO: Building EXE from EXE-00.toc completed successfully.
8549 INFO: checking COLLECT
WARNING: The output directory "/Users/martin/app/pyside6/dist/app" and ALL ITS CONTENTS will be REMOVED! Continue? (y/N)y
On your own risk, you can use the option `--noconfirm` to get rid of this question.
10820 INFO: Removing dir /Users/martin/app/pyside6/dist/app
10847 INFO: Building COLLECT COLLECT-00.toc
12460 INFO: Building COLLECT COLLECT-00.toc completed successfully.
12469 INFO: checking BUNDLE
12469 INFO: Building BUNDLE because BUNDLE-00.toc is non existent
12469 INFO: Building BUNDLE BUNDLE-00.toc
13848 INFO: Moving BUNDLE data files to Resource directory
13901 INFO: Signing the BUNDLE...
16049 INFO: Building BUNDLE BUNDLE-00.toc completed successfully.

If you look in your folder you'll notice you now have two new folders dist and build.

build & dist folders created by PyInstaller

Below is a truncated listing of the folder content, showing the build and dist folders.

bash
.
├── app.py
├── app.spec
├── build
│   └── app
│       ├── Analysis-00.toc
│       ├── COLLECT-00.toc
│       ├── EXE-00.toc
│       ├── PKG-00.pkg
│       ├── PKG-00.toc
│       ├── PYZ-00.pyz
│       ├── PYZ-00.toc
│       ├── app
│       ├── app.pkg
│       ├── base_library.zip
│       ├── warn-app.txt
│       └── xref-app.html
└── dist
    ├── app
    │   ├── libcrypto.1.1.dylib
    │   ├── PySide6
    │   ...
    │   ├── app
    │   └── Qt5Core
    └── app.app

The build folder is used by PyInstaller to collect and prepare the files for bundling, it contains the results of analysis and some additional logs. For the most part, you can ignore the contents of this folder, unless you're trying to debug issues.

The dist (for "distribution") folder contains the files to be distributed. This includes your application, bundled as an executable file, together with any associated libraries (for example PySide6) and binary .so files.

Since we provided the --windowed flag above, PyInstaller has actually created two builds for us. The folder app is a simple folder containing everything you need to be able to run your app. PyInstaller also creates an app bundle app.app which is what you will usually distribute to users.

The app folder is a useful debugging tool, since you can easily see the libraries and other packaged data files.

You can try running your app yourself now, either by double-clicking on the app bundle, or by running the executable file, named app, inside the dist/app folder. In either case, after a short delay you'll see the familiar window of your application pop up as shown below.

Simple app, running after being packaged

In the same folder as your Python file, alongside the build and dist folders PyInstaller will have also created a .spec file. In the next section we'll take a look at this file, what it is and what it does.

The Spec file

The .spec file contains the build configuration and instructions that PyInstaller uses to package up your application. Every PyInstaller project has a .spec file, which is generated based on the command line options you pass when running pyinstaller.

When we ran pyinstaller with our script, we didn't pass in anything other than the name of our Python application file and the --windowed flag. This means our spec file currently contains only the default configuration. If you open it, you'll see something similar to what we have below.

python
# -*- mode: python ; coding: utf-8 -*-


block_cipher = None


a = Analysis(['app.py'],
             pathex=[],
             binaries=[],
             datas=[],
             hiddenimports=[],
             hookspath=[],
             hooksconfig={},
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             noarchive=False)
pyz = PYZ(a.pure, a.zipped_data,
             cipher=block_cipher)

exe = EXE(pyz,
          a.scripts,
          [],
          exclude_binaries=True,
          name='app',
          debug=False,
          bootloader_ignore_signals=False,
          strip=False,
          upx=True,
          console=False,
          disable_windowed_traceback=False,
          target_arch=None,
          codesign_identity=None,
          entitlements_file=None )
coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas,
               strip=False,
               upx=True,
               upx_exclude=[],
               name='app')
app = BUNDLE(coll,
             name='app.app',
             icon=None,
             bundle_identifier=None)

The first thing to notice is that this is a Python file, meaning you can edit it and use Python code to calculate values for the settings. This is mostly useful for complex builds, for example when you are targeting different platforms and want to conditionally define additional libraries or dependencies to bundle.
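As a minimal sketch of this idea (the file names here are hypothetical placeholders, not part of PyInstaller itself), you could compute a value near the top of your .spec file and pass it into the Analysis block:

```python
# Hypothetical example: because the .spec file is plain Python, values can
# be computed before they are passed to Analysis(). Here we pick extra data
# files depending on the platform the build is running on.
import sys

if sys.platform == "darwin":          # macOS
    extra_datas = [("icons", "icons")]
elif sys.platform.startswith("win"):  # Windows
    extra_datas = [("icons", "icons"), ("win_extras", "win_extras")]
else:                                 # Linux and others
    extra_datas = []

# ...then, inside the Analysis block, use: datas=extra_datas
```

Each entry is a (source, destination) tuple, matching the format of the datas list shown later in this tutorial.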

Because we used the --windowed command line flag, the EXE(console=) attribute is set to False. If this is True a console window will be shown when your app is launched -- not what you usually want for a GUI application.

Once a .spec file has been generated, you can pass this to pyinstaller instead of your script to repeat the previous build process. Run this now to rebuild your executable.

bash
pyinstaller app.spec

The resulting build will be identical to the build used to generate the .spec file (assuming you have made no changes). For many PyInstaller configuration changes you have the option of passing command-line arguments, or modifying your existing .spec file. Which you choose is up to you.

Tweaking the build

So far we've created a simple first build of a very basic application. Now we'll look at a few of the most useful options that PyInstaller provides to tweak our build. Then we'll go on to look at building more complex applications.

Naming your app

One of the simplest changes you can make is to provide a proper "name" for your application. By default the app takes the name of your source file (minus the extension), for example main or app. This isn't usually what you want.

You can provide a nicer name for PyInstaller to use for the app (and dist folder) by editing the .spec file to add a name= under the EXE, COLLECT and BUNDLE blocks.

python
exe = EXE(pyz,
          a.scripts,
          [],
          exclude_binaries=True,
          name='Hello World',
          debug=False,
          bootloader_ignore_signals=False,
          strip=False,
          upx=True,
          console=False
         )
coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas,
               strip=False,
               upx=True,
               upx_exclude=[],
               name='Hello World')
app = BUNDLE(coll,
             name='Hello World.app',
             icon=None,
             bundle_identifier=None)

The name under EXE is the name of the executable file, the name under BUNDLE is the name of the app bundle.

Alternatively, you can re-run the pyinstaller command and pass the -n or --name configuration flag along with your app.py script.

bash
pyinstaller -n "Hello World" --windowed app.py
# or
pyinstaller --name "Hello World" --windowed app.py

The resulting app file will be given the name Hello World.app and the unpacked build placed in the folder dist/Hello World/.

Application with custom name "Hello World"

The name of the .spec file is taken from the name passed in on the command line, so this will also create a new spec file for you, called Hello World.spec in your root folder.

Make sure you delete the old app.spec file to avoid getting confused editing the wrong one.

Application icon

By default PyInstaller app bundles come with the following icon in place.

Default PyInstaller application icon, on app bundle

You will probably want to customize this to make your application more recognisable. This can be done easily by passing the --icon command line argument, or editing the icon= parameter of the BUNDLE section of your .spec file. For macOS app bundles you need to provide an .icns file.

python
app = BUNDLE(coll,
             name='Hello World.app',
             icon='Hello World.icns',
             bundle_identifier=None)

To create macOS icons from images you can use the image2icon tool.

If you now re-run the build (by using the command line arguments, or running with your modified .spec file) you'll see the specified icon file is now set on your application bundle.

Custom application icon (a hand) on the app bundle

On macOS application icons are taken from the application bundle. If you repackage your app and run the bundle you will see your app icon on the dock!

Custom application icon in the dock

Data files and Resources

So far our application consists of just a single Python file, with no dependencies. Most real-world applications are a bit more complex, and typically ship with associated data files such as icons or UI design files. In this section we'll look at how we can accomplish this with PyInstaller, starting with a single file and then bundling complete folders of resources.

First let's update our app with some more buttons and add icons to each.

python
from PySide6.QtWidgets import QMainWindow, QApplication, QLabel, QVBoxLayout, QPushButton, QWidget
from PySide6.QtGui import QIcon

import sys

class MainWindow(QMainWindow):

    def __init__(self):
        super().__init__()

        self.setWindowTitle("Hello World")
        layout = QVBoxLayout()
        label = QLabel("My simple app.")
        label.setMargin(10)
        layout.addWidget(label)

        button1 = QPushButton("Hide")
        button1.setIcon(QIcon("icons/hand.png"))
        button1.pressed.connect(self.lower)
        layout.addWidget(button1)

        button2 = QPushButton("Close")
        button2.setIcon(QIcon("icons/lightning.png"))
        button2.pressed.connect(self.close)
        layout.addWidget(button2)

        container = QWidget()
        container.setLayout(layout)

        self.setCentralWidget(container)

        self.show()

if __name__ == '__main__':
    app = QApplication(sys.argv)
    w = MainWindow()
    app.exec_()

In the folder with this script, add a folder icons which contains two icons in PNG format, hand.png and lightning.png. You can create these yourself, or get them from the source code download for this tutorial.

Run the script now and you will see a window showing two buttons with icons.

Window with two buttons with icons.

Even if you don't see the icons, keep reading!

Dealing with relative paths

There is a gotcha here, which might not be immediately apparent. To demonstrate it, open up a shell and change to the folder where our script is located. Run it with

bash
python3 app.py

If the icons are in the correct location, you should see them. Now change to the parent folder, and try and run your script again (change <folder> to the name of the folder your script is in).

bash
cd ..
python3 <folder>/app.py

Window with two buttons with icons missing.

The icons don't appear. What's happening?

We're using relative paths to refer to our data files. These paths are relative to the current working directory -- not the folder your script is in. So if you run the script from elsewhere it won't be able to find the files.

One common reason for icons not to show up, is running examples in an IDE which uses the project root as the current working directory.

This is a minor issue before the app is packaged, but once it's installed it will be started with its current working directory as the root (/) folder -- your app won't be able to find anything. We need to fix this before we go any further, which we can do by making our paths relative to our application folder.

In the updated code below, we define a new variable basedir, using os.path.dirname to get the containing folder of __file__ which holds the full path of the current Python file. We then use this to build the relative paths for icons using os.path.join().

Since our app.py file is in the root of our folder, all other paths are relative to that.

python
from PySide6.QtWidgets import QMainWindow, QApplication, QLabel, QVBoxLayout, QPushButton, QWidget
from PySide6.QtGui import QIcon

import sys, os

basedir = os.path.dirname(__file__)

class MainWindow(QMainWindow):

    def __init__(self):
        super().__init__()

        self.setWindowTitle("Hello World")
        layout = QVBoxLayout()
        label = QLabel("My simple app.")
        label.setMargin(10)
        layout.addWidget(label)

        button1 = QPushButton("Hide")
        button1.setIcon(QIcon(os.path.join(basedir, "icons", "hand.png")))
        button1.pressed.connect(self.lower)
        layout.addWidget(button1)

        button2 = QPushButton("Close")
        button2.setIcon(QIcon(os.path.join(basedir, "icons", "lightning.png")))
        button2.pressed.connect(self.close)
        layout.addWidget(button2)

        container = QWidget()
        container.setLayout(layout)

        self.setCentralWidget(container)

        self.show()

if __name__ == '__main__':
    app = QApplication(sys.argv)
    w = MainWindow()
    app.exec_()

Try and run your app again from the parent folder -- you'll find that the icons now appear as expected on the buttons, no matter where you launch the app from.

Packaging the icons

So now we have our application showing icons, and they work wherever the application is launched from. Package the application again with pyinstaller "Hello World.spec" and then try and run it again from the dist folder as before. You'll notice the icons are missing again.

Window with two buttons with icons missing.

The problem now is that the icons haven't been copied to the dist/Hello World folder -- take a look in it. Our script expects the icons to be in a specific location relative to it, and if they are not, then nothing will be shown.

This same principle applies to any other data files you package with your application, including Qt Designer UI files, settings files or source data. You need to ensure that relative path structures are replicated after packaging.

Bundling data files with PyInstaller

For the application to continue working after packaging, the files it depends on need to be in the same relative locations.

To get data files into the dist folder we can instruct PyInstaller to copy them over. PyInstaller accepts a list of individual paths to copy, together with a folder path relative to the dist/<app name> folder where it should copy them to. As with other options, this can be specified by command line arguments or in the .spec file.

Files specified on the command line are added using --add-data, passing the source file and destination folder separated by a colon :.

The path separator is platform-specific: on Linux and macOS use :, on Windows use ;

bash
pyinstaller --windowed --name="Hello World" --icon="Hello World.icns" --add-data="icons/hand.png:icons" --add-data="icons/lightning.png:icons" app.py

Here we've specified the destination location as icons. The path is relative to the root of our application's folder in dist -- so dist/Hello World with our current app. The path icons means a folder named icons under this location, so dist/Hello World/icons. Putting our icons right where our application expects to find them!
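Since the separator differs by platform, a small helper (a hypothetical sketch, not part of PyInstaller) can build the argument portably if you ever script your builds:

```python
# Build an --add-data argument with the right separator for the current OS:
# ";" on Windows, ":" on macOS and Linux.
import os

def add_data_arg(src, dest):
    sep = ";" if os.name == "nt" else ":"
    return f"--add-data={src}{sep}{dest}"

# On macOS this prints: --add-data=icons/hand.png:icons
print(add_data_arg("icons/hand.png", "icons"))
```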

You can also specify data files via the datas list in the Analysis section of the spec file, shown below.

python
a = Analysis(['app.py'],
             pathex=[],
             binaries=[],
             datas=[('icons/hand.png', 'icons'), ('icons/lightning.png', 'icons')],
             hiddenimports=[],
             hookspath=[],
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             noarchive=False)

Then rebuild from the .spec file with

bash
pyinstaller "Hello World.spec"

In both cases we are telling PyInstaller to copy the specified files to the location ./icons/ in the output folder, meaning dist/Hello World/icons. If you run the build, you should see your .png files are now in the dist output folder, under a folder named icons.

The icon file copied to the dist folder

If you run your app from dist you should now see the icons in your window as expected!

Window with two buttons with icons, finally!

Bundling data folders

Usually you will have more than one data file you want to include with your packaged file. The latest PyInstaller versions let you bundle folders just like you would files, keeping the sub-folder structure.

Let's update our configuration to bundle our icons folder in one go, so it will continue to work even if we add more icons in future.

To copy the icons folder across to our build application, we just need to add the folder to our .spec file Analysis block. As for the single file, we add it as a tuple with the source path (from our project folder) and the destination folder under the resulting folder in dist.

python
# ...
a = Analysis(['app.py'],
             pathex=[],
             binaries=[],
             datas=[('icons', 'icons')],   # tuple is (source_folder, destination_folder)
             hiddenimports=[],
             hookspath=[],
             hooksconfig={},
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             noarchive=False)
# ...

If you run the build using this spec file you'll see the icons folder copied across to the dist/Hello World folder. If you run the application from the folder, the icons will display as expected -- the relative paths remain correct in the new location.

Alternatively, you can bundle your data files using Qt's QResource architecture. See our tutorial for more information.

Building the App bundle into a Disk Image

So far we've used PyInstaller to bundle the application into a macOS app, along with the associated data files. The output of this bundling process is a folder and a macOS app bundle, named Hello World.app.

If you try and distribute this app bundle, you'll notice a problem: the app bundle is actually just a special folder. While macOS displays it as an application, if you try and share it, you'll actually be sharing hundreds of individual files. To distribute the app properly, we need some way to package it into a single file.

The easiest way to do this is to use a .zip file. You can zip the folder and give this to someone else to unzip on their own computer, giving them a complete app bundle they can copy to their Applications folder.

However, if you've installed macOS applications before, you'll know this isn't the usual way to do it. Usually you get a Disk Image .dmg file, which when opened shows the application bundle, and a link to your Applications folder. To install the app, you just drag it across to the target.

To make our app look as professional as possible, we should copy this expected behaviour. Next we'll look at how to take our app bundle and package it into a macOS Disk Image.

Making sure the build is ready

If you've followed the tutorial so far, you'll already have your app ready in the /dist folder. If not, or if yours isn't working, you can download the source code files for this tutorial, which include a sample .spec file. As above, you can run the same build using the provided Hello World.spec file.

bash
pyinstaller "Hello World.spec"

This packages everything up as an app bundle in the dist/ folder, with a custom icon. Run the app bundle to ensure everything is bundled correctly, and you should see the same window as before with the icons visible.

Window with two icons, and a button.

Creating a Disk Image

Now that we've successfully bundled our application, we'll look at how to take our app bundle and use it to create a macOS Disk Image for distribution.

To create our Disk Image we'll be using the create-dmg tool. This is a command-line tool which provides a simple way to build disk images automatically. If you are using Homebrew, you can install create-dmg with the following command.

bash
brew install create-dmg

...otherwise, see the Github repository for instructions.

The create-dmg tool takes a lot of options, but below are the most useful.

bash
create-dmg --help
create-dmg 1.0.9

Creates a fancy DMG file.

Usage:  create-dmg [options] <output_name.dmg> <source_folder>

All contents of <source_folder> will be copied into the disk image.

Options:
  --volname <name>
      set volume name (displayed in the Finder sidebar and window title)
  --volicon <icon.icns>
      set volume icon
  --background <pic.png>
      set folder background image (provide png, gif, or jpg)
  --window-pos <x> <y>
      set position the folder window
  --window-size <width> <height>
      set size of the folder window
  --text-size <text_size>
      set window text size (10-16)
  --icon-size <icon_size>
      set window icons size (up to 128)
  --icon file_name <x> <y>
      set position of the file's icon
  --hide-extension <file_name>
      hide the extension of file
  --app-drop-link <x> <y>
      make a drop link to Applications, at location x,y
  --no-internet-enable
      disable automatic mount & copy
  --add-file <target_name> <file>|<folder> <x> <y>
      add additional file or folder (can be used multiple times)
  -h, --help
        display this help screen

The most important thing to notice is that the command requires a <source_folder>, and all contents of that folder will be copied into the Disk Image. So to build the image, we first need to put our app bundle in a folder by itself.

Rather than doing this manually each time you want to build a Disk Image, I recommend creating a shell script. This ensures the build is reproducible and makes it easier to configure.

Below is a working script to create a Disk Image from our app. It creates a temporary folder dist/dmg where we'll put the things we want to go in the Disk Image -- in our case, this is just the app bundle, but you can add other files if you like. Then we make sure the folder is empty (in case it still contains files from a previous run). We copy our app bundle into the folder, and finally check to see if there is already a .dmg file in dist and if so, remove it too. Then we're ready to run the create-dmg tool.

bash
#!/bin/sh
# Create a folder (named dmg) to prepare our DMG in (if it doesn't already exist).
mkdir -p dist/dmg
# Empty the dmg folder.
rm -r dist/dmg/*
# Copy the app bundle to the dmg folder.
cp -r "dist/Hello World.app" dist/dmg
# If the DMG already exists, delete it.
test -f "dist/Hello World.dmg" && rm "dist/Hello World.dmg"
create-dmg \
  --volname "Hello World" \
  --volicon "Hello World.icns" \
  --window-pos 200 120 \
  --window-size 600 300 \
  --icon-size 100 \
  --icon "Hello World.app" 175 120 \
  --hide-extension "Hello World.app" \
  --app-drop-link 425 120 \
  "dist/Hello World.dmg" \
  "dist/dmg/"

The options we pass to create-dmg set the dimensions of the Disk Image window when it is opened, and positions of the icons in it.

Save this shell script in the root of your project, named e.g. builddmg.sh. To make it possible to run, you first need to set the execute bit.

bash
chmod +x builddmg.sh

With that, you can now build a Disk Image for your Hello World app with the command.

bash
./builddmg.sh

This will take a few seconds to run, producing quite a bit of output.

bash
 No such file or directory
Creating disk image...
...............................................................
created: /Users/martin/app/dist/rw.Hello World.dmg
Mounting disk image...
Mount directory: /Volumes/Hello World
Device name:     /dev/disk2
Making link to Applications dir...
/Volumes/Hello World
Copying volume icon file 'Hello World.icns'...
Running AppleScript to make Finder stuff pretty: /usr/bin/osascript "/var/folders/yf/1qvxtg4d0vz6h2y4czd69tf40000gn/T/createdmg.tmp.XXXXXXXXXX.RvPoqdr0" "Hello World"
waited 1 seconds for .DS_STORE to be created.
Done running the AppleScript...
Fixing permissions...
Done fixing permissions
Blessing started
Blessing finished
Deleting .fseventsd
Unmounting disk image...
hdiutil: couldn't unmount "disk2" - Resource busy
Wait a moment...
Unmounting disk image...
"disk2" ejected.
Compressing disk image...
Preparing imaging engine…
Reading Protective Master Boot Record (MBR : 0)…
   (CRC32 $38FC6E30: Protective Master Boot Record (MBR : 0))
Reading GPT Header (Primary GPT Header : 1)…
   (CRC32 $59C36109: GPT Header (Primary GPT Header : 1))
Reading GPT Partition Data (Primary GPT Table : 2)…
   (CRC32 $528491DC: GPT Partition Data (Primary GPT Table : 2))
Reading  (Apple_Free : 3)…
   (CRC32 $00000000:  (Apple_Free : 3))
Reading disk image (Apple_HFS : 4)…
...............................................................................
   (CRC32 $FCDC1017: disk image (Apple_HFS : 4))
Reading  (Apple_Free : 5)…
...............................................................................
   (CRC32 $00000000:  (Apple_Free : 5))
Reading GPT Partition Data (Backup GPT Table : 6)…
...............................................................................
   (CRC32 $528491DC: GPT Partition Data (Backup GPT Table : 6))
Reading GPT Header (Backup GPT Header : 7)…
...............................................................................
   (CRC32 $56306308: GPT Header (Backup GPT Header : 7))
Adding resources…
...............................................................................
Elapsed Time:  3.443s
File size: 23178950 bytes, Checksum: CRC32 $141F3DDC
Sectors processed: 184400, 131460 compressed
Speed: 18.6Mbytes/sec
Savings: 75.4%
created: /Users/martin/app/dist/Hello World.dmg
hdiutil does not support internet-enable. Note it was removed in macOS 10.15.
Disk image done

While it's building, the Disk Image will pop up. Don't get too excited yet, it's still building. Wait for the script to complete, and you will find the finished .dmg file in the dist/ folder.

The Disk Image created in the dist folder

Running the installer

Double-click the Disk Image to open it, and you'll see the usual macOS install view. Click and drag your app across to the Applications folder to install it.

The Disk Image contains the app bundle and a shortcut to the Applications folder

If you open Launchpad (press F4) you will see your app installed. If you have a lot of apps, you can search for it by typing "Hello".

The app installed on macOS

Repeating the build

Now you have everything set up, you can create a new app bundle & Disk Image of your application any time, by running the two commands from the command line.

bash
pyinstaller "Hello World.spec"
./builddmg.sh

It's that simple!

Wrapping up

In this tutorial we've covered how to build your PySide6 applications into a macOS app bundle using PyInstaller, including adding data files along with your code. Then we walked through the process of creating a Disk Image to distribute your app to others. Following these steps you should be able to package up your own applications and make them available to other people.

For a complete view of all PyInstaller bundling options take a look at the PyInstaller usage documentation.

For more, see the complete PySide6 tutorial.

May 01, 2022 09:00 AM UTC

April 30, 2022


William Minchin

Static Comments Plugin 2.1.1 for Pelican Released

Static Comments is a plugin for Pelican, a static site generator written in Python. It is meant as a drop in replacement for the Pelican Comment System.

Static Comments allows you to have a comment section on your Pelican blog, while maintaining your blog as a completely static webpage and without relying on any external services or servers; just an email address is required. Comments are stored as text files, similar in structure to Pelican articles. This gives you complete control over the comments appearing on your site and allows you to back them up with the rest of your site.

This Release

This release takes the existing Pelican Comment System codebase and upgrades it to work with Pelican 4 (and should continue to work with Pelican 3). A few changes are needed in your configuration, but no changes to your comments files should be needed.

Installation

The simplest way to install the Python code of Static Comments is to use pip:

pip install minchin.pelican.plugins.static-comments --upgrade

If you are using Pelican 4.5+, the plugin will automatically be loaded (although not activated).

If you are using an earlier version of Pelican, or aren't using namespace plugins, you will need to add the auto-loader to your list of plugins:

# pelicanconf.py

PLUGINS = [
    # others
    "minchin.pelican.plugins.autoloader",
]

Activate the plugin by adding the following line to your pelicanconf.py:

# pelicanconf.py

PELICAN_COMMENT_SYSTEM = True

and then set the email you want to receive comment emails at:

# pelicanconf.py

PELICAN_COMMENT_SYSTEM_EMAIL_USER = "your.email"
PELICAN_COMMENT_SYSTEM_EMAIL_DOMAIN = "gmail.com"

Finally, modify the article.html of your theme (if your theme doesn’t support Static Comments out of the box) to both display comments already submitted and to have a comment submission form. The sample submission form works by using JavaScript to convert the form contents (the commenter’s name, site, and comment body) to an email the user then sends to you. Note, this is an example of the code you might use for your theme, but please feel free to modify it to suit your needs.

{% macro comments_styles() %}
{% if PELICAN_COMMENT_SYSTEM %}
{# NOTE:
 # Instead of using this macro copy these styles in your main css file
 # This macro is only here to allow a quickstart with nice styles
 #}

#pcs-comment-form input,
#pcs-comment-form textarea {
    width: 100%;
}
#pcs-comment-form-display-replyto {
    border: solid 1px black;
    padding: 2px;
}
#pcs-comment-form-display-replyto p {
    margin-top: 0.5em;
    margin-bottom: 0.5em;
}
#pcs-comments ul {
    list-style: none;
}
#pcs-comments .comment-left {
    display: table-cell;
    padding-right: 10px;
}
#pcs-comments .comment-body {
    display: table-cell;
    vertical-align: top;
    width: 100%;
}

{% endif %}
{% endmacro %}

{% macro comments_form() %}
{% if PELICAN_COMMENT_SYSTEM %}
{# Your "Add a Comment" form markup goes here: fields for the commenter's
 # Name, Website, and "Your Comment", plus the note "You can use the
 # Markdown syntax to format your comment."
 #}
{% if PELICAN_COMMENT_SYSTEM_FEED and article %}
{# Link to the Comment Atom Feed #}
{% endif %}
{% endif %}
{% endmacro %}

{% macro comments_with_form() %}
{% if PELICAN_COMMENT_SYSTEM %}
{# "Comments" heading #}
{% if article.comments %}
  {% for comment in article.comments recursive %}
  {# Markup for each comment goes here: the avatar image
   # ({{ SITEURL }}/{{ comment.avatar }}, sized with
   # PELICAN_COMMENT_SYSTEM_IDENTICON_SIZE), a permalink, the author
   # (linked to comment.metadata['website'] when set), "Posted on
   # {{ comment.locale_date }}", and {{ comment.content }}.
   #}
  {% if comment.replies %}
    {{ loop(comment.replies) }}
  {% endif %}
  {% endfor %}
{% else %}
{# "There are no comments yet." #}
{% endif %}
{{ comments_form() }}
{% endif %}
{% endmacro %}

{% macro comments_js(user, domain, includeJquery=True) %}
{% if PELICAN_COMMENT_SYSTEM %}
{% if includeJquery %}
{# jQuery script tag goes here #}
{% endif %}
{# The JavaScript that turns the form contents into an email goes here #}
{% endif %}
{% endmacro %}

{% macro comments_quickstart(user, domain) %}
{{ comments_styles() }}
{{ comments_with_form() }}
{{ comments_js(user, domain) }}
{% endmacro %}

What A Comment File Looks Like

When a user submits a comment, you will get an email with the details. You then take those details from your email and create a text file within your Pelican site, one for each comment. By default, the plugin will look for comments in a folder comments in your root content folder (probably the same one you have your Pelican articles in), and then in subfolders that match the slug of the article the comment applies to.

The actual comment file will look something like this:

email: noreplay@blogger.com
date: 2019-07-15T12:20+01:00
author: Mahassine
replyto: comment-slug-2382md

Sample comment body.


The replyto tag is only needed if this comment is indeed a reply to another comment. The value of the replyto tag is the slug of the comment, which is the filename plus the file extension, but not the period between them.
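To make the slug rule concrete, here is a small sketch (a hypothetical helper, not part of the plugin) that derives a comment's slug from its filename:

```python
# A comment's slug is its filename plus extension, with the period between
# them removed, e.g. "comment-slug-2382.md" -> "comment-slug-2382md".
import os

def comment_slug(filename):
    root, ext = os.path.splitext(os.path.basename(filename))
    return root + ext.lstrip(".")

print(comment_slug("comments/my-article/comment-slug-2382.md"))
# -> comment-slug-2382md
```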

The comment files can be in any format Pelican is set up to read (typically Markdown and reStructuredText, but many other formats are supported).

I realize that this is fairly involved to activate as far as Pelican plugins go, so if you run into issues, please leave a comment on this post!

Upgrading (from the Pelican Comment System)

Upgrading from the Pelican Comment System should be seamless, and should be as simple as uninstalling the Pelican Comment System (and removing it from your pelicanconf.py) and installing Static Comments.

pip uninstall pelican-comment-system
pip install minchin.pelican.plugins.static-comments --upgrade

Existing comment files should work out of the box, and the settings haven't been renamed.

Known Issues

Future Plans

At this point, the plugin seems feature complete. I expect future changes will be about fixing code errors or keeping it working as Pelican progresses.

Personal Thoughts

I’m excited to get this updated and out into the world. I’m a little sad that the old Pelican Comment System seems to no longer be updated, although it looks like it got stuck halfway through a “version 2” complete rewrite, so add this as another warning about ground-up rewrites. But this is the wonder of Open Source: I can take the existing codebase, fix the errors and issues, and release a new working version back into the world.

There is also a more general question of whether comments are worth keeping around. Considering that you’re reading this, and I released this plugin, I think we are both in agreement that the answer is “yes”. I have definitely seen the number of comments posted to my blog drop over the years (this blog has been up since 2006!), but I suspect that is mostly tied to lower traffic volumes. As for comments generally, I think Twitter and Reddit provide proof that people still want to add their two cents on things. Personally, I would rather have the conversation here, where I control it and have the ability to back it up, than on another site (like Reddit) that I don’t control.

Overall, I’m pretty satisfied with this solution. The biggest downside is that comments don’t post automatically and so can take some time to appear, since I have to post them manually, but I think that tradeoff is worth not having to maintain a separate server just for commenting.

As with all my plugins, if the pelican-plugins group wants to adopt these, I’d be happy to have the community support there.

April 30, 2022 08:17 PM UTC


STX Next

Python vs. C++: A Comparison of Key Features and Differences

C++ and Python, two of the most popular and commonly used programming languages, aren’t only versatile and object-oriented; they can also be used to create a wide array of programs and functional code.

April 30, 2022 06:26 PM UTC

April 29, 2022


Real Python

Real Python at PyCon US 2022

PyCon US is back as an in-person conference. PyCon US 2022 is happening in Salt Lake City April 29 to May 1, and Real Python is there as well. Come join us at our booth and at the open space on Saturday.

In this article, you’ll learn where you can find Real Python at PyCon in Salt Lake City, and get to know what some of our team members are excited about at the conference.

Meet Real Python at PyCon US 2022

The PyCon US conference has been an annual meeting place for the Python community since 2003. Because of the COVID pandemic, the conference went virtual in 2020 and 2021. At Real Python, we’re excited about being able to meet in person this year. Come say hello if you’re in Salt Lake City!

Visit the Real Python Booth

The exhibit hall is a lively place at any PyCon conference. Here, you can stroll around and chat with other attendees while exploring what sponsors and exhibitors have brought to the table. It’s a great place to hang out and make new friends!

Real Python has a booth at this year’s conference. We’re excited about having our own place to hang out and show our content to everyone. You can find us at booth 228, which is just opposite Microsoft and AWS. Look for our logo and friendly faces—we’ll be smiling with our eyes!

Real Python team members at PyCon 2022

Stop by the booth to hear about all the content that we offer, or have a chat about your favorite packages, square roots, or the latest developments in Python.

Join Our Open Space

The open spaces are a unique staple of PyCon. These are self-organized one-hour meetup-like events that are happening throughout the conference. Check out the Open Space board to see if there’s anything that you’d like to join!

Read the full article at https://realpython.com/real-python-pycon-us-2022/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

April 29, 2022 05:15 PM UTC