
Planet Python

Last update: March 19, 2024 01:43 AM UTC

March 18, 2024


Data School

Jupyter & IPython terminology explained 💡

Are you trying to understand the differences between Jupyter Notebook, JupyterLab, IPython, Colab, and other related terms? You're in the right place!

I'll explain by walking through a brief history of the IPython and Jupyter projects:

IPython

IPython was first released in 2001 as an "interactive" version of the Python shell. Whereas the Python shell uses the >>> prompt, you can recognize IPython by its In [1] and Out[1] notation, which indicates input/output and line numbers:

[Screenshot: an IPython session showing the In/Out prompts]

IPython includes many features not present in the default Python shell, such as object introspection, "magic" commands, system shell access, and more.
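For a rough flavor of those features, here's a minimal illustrative IPython session (output trimmed; details vary by version):

In [1]: len?                      # object introspection: show signature and docstring
In [2]: %timeit sum(range(1000))  # "magic" command: micro-benchmark an expression
In [3]: !ls                       # system shell access: run a shell command
In [4]: 2 + 2
Out[4]: 4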

IPython Notebook

In 2011, the IPython Notebook was released. It was known as a "computational notebook" because it allowed you to weave together code, plots, and narrative text into a single document:

[Screenshot: a notebook weaving together code, plots, and narrative text]

It was called the IPython Notebook (and not the Python Notebook) because it used IPython as the "kernel", which is the language-specific process that runs the code in a notebook.

Jupyter Notebook

In 2015, the IPython Notebook introduced support for programming languages other than Python.

Also in 2015, IPython split into two projects: IPython (for Python-specific components) and Jupyter (for language-agnostic components).

As part of that split, the IPython Notebook was renamed the Jupyter Notebook. The name "Jupyter" was inspired by the open languages of science: Julia, Python, and R:

[Image: the Jupyter name, drawn from Julia, Python, and R]

To be clear, "Jupyter Notebook" was the name of both the coding environment and the files created by that environment. In other words, you would open "the Jupyter Notebook" to create "a Jupyter notebook".

Jupyter notebook files used the extension ".ipynb", which was the extension (and file format) originally created for IPython notebooks.
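Under the hood, an .ipynb file is just JSON, so you can peek inside one with the standard library (a minimal sketch; "notebook.ipynb" is a hypothetical file name):

import json

# A notebook file is JSON: notebook-level metadata plus a list of cells
with open("notebook.ipynb") as f:
    nb = json.load(f)

print(nb["nbformat"])                               # format version, e.g. 4
print([cell["cell_type"] for cell in nb["cells"]])  # e.g. ['markdown', 'code']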

JupyterLab

At this point, the Jupyter Notebook was a lightweight coding environment with far fewer features than a traditional IDE (integrated development environment).

In 2018, JupyterLab (one word) was released as a more full-featured alternative to the Jupyter Notebook:

[Screenshot: the JupyterLab interface]

Notebooks created within JupyterLab are still called "Jupyter notebooks", they still use the extension ".ipynb", and they&aposre compatible with notebooks created by the Jupyter Notebook.

JupyterLab was originally designed to replace the Jupyter Notebook environment. However, due to the continued popularity of the "classic" Notebook environment, JupyterLab and Jupyter Notebook continue to be developed as separate applications (as of 2024).

Summary

Here are a few related terms that I didn't mention above:

Are there any other Jupyter-related terms you want me to explain? Please let me know in the comments! 👇

March 18, 2024 06:58 PM UTC


Real Python

Model-View-Controller (MVC) in Python Web Apps: Explained With Lego

If you’re curious about web development, then you’ve likely encountered the abbreviation MVC, which stands for Model-View-Controller. You may know that it’s a common design pattern that’s fundamental to many Python web frameworks and even desktop applications.

But what exactly does it mean? If you’ve had a hard time wrapping your head around the concept, then keep on reading.

In this tutorial, you’ll:

  • Approach understanding the MVC pattern through a Lego-based analogy
  • Learn what models, views, and controllers are conceptually
  • Tie your conceptual understanding back to concrete web development examples
  • Investigate Flask code snippets to drive the point home

Maybe you built things with Lego as a kid, or maybe you’re still a Lego-aficionado today. But even if you’ve never pieced two Lego blocks together, keep on reading because the analogy might still be a good building block for your understanding.

Get Your Code: Click here to download an example Flask app that will help you understand MVC in Python web apps.

Take the Quiz: Test your knowledge with our interactive “Model-View-Controller (MVC) in Python Web Apps: Explained With Lego” quiz. Upon completion you will receive a score so you can track your learning progress over time:

Take the Quiz »

Explaining the Model-View-Controller Pattern With Lego

Imagine that you’re ten years old and sitting on your family room floor. In front of you is a big bucket of Lego, or similar modular building blocks. There are blocks of all different shapes and sizes:

  • 🟦🟦🟦 Some are blue, tall, and long.
  • 🟥 Some are red and cube-shaped.
  • 🟨🟨 Some are yellow, big, and wide.

With all of these different Lego pieces, there’s no telling what you could build!

Just as your mind is filling with the endless possibilities, you hear something coming from the direction of the couch. It’s your older brother, voicing a specific request. He’s saying, “Hey! Build me a spaceship!”

“Alright,” you think, “that could actually be pretty cool.” A spaceship it is!

So you get to work. You start pulling out the Lego blocks that you think you’re going to need. Some big, some small. Different colors for the outside of the spaceship, different colors for the engines.

Now that you have all of your building blocks in place, it’s time to assemble the spaceship. And after a few hours of hard work, you now have in front of you—a spaceship:

                                  🟦
                                🟦🟥🟦
                              🟦🟥🟥🟥🟦
                            🟦🟥🟥🟥🟥🟥🟦
                            🟦🟥🟥🟥🟥🟥🟦
                            🟦🟥🟩🟩🟩🟥🟦
                            🟦🟥🟩🟦🟩🟥🟦
                            🟦🟥🟩🟩🟩🟥🟦
                            🟦🟥🟥🟥🟥🟥🟦
                            🟦🟥🟥🟥🟥🟥🟦
                            🟦🟥🟥🟥🟥🟥🟦
                        🟦🟥🟥🟥🟥🟥🟥🟥🟥🟥🟦
                        🟦🟥🟥🟥🟥🟥🟥🟥🟥🟥🟦
                        🟦🟥🟨🟨🟥🟥🟥🟨🟨🟥🟦
                            🟨🟨       🟨🟨

You run to find your brother and show him the finished product. “Wow, nice work!”, he says. Then he quietly adds:

Huh, I just asked for that a few hours ago, I didn’t have to do a thing, and here it is. I wish everything was that easy.

Your Brother

What if I told you that building a web application using the MVC pattern is exactly like building something with Lego blocks?
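Before the full walkthrough, here's a minimal Flask sketch of the three roles (an illustrative example, not the article's downloadable code; the model here is a stub standing in for a real data layer):

from flask import Flask, render_template_string

app = Flask(__name__)

# Model: owns the data (a stub standing in for a database layer)
def get_spaceship_parts():
    return ["blue brick", "red cube", "yellow wide brick"]

# View: presents the data (an inline template for brevity)
TEMPLATE = "<ul>{% for part in parts %}<li>{{ part }}</li>{% endfor %}</ul>"

# Controller: takes the request, asks the model, hands the result to the view
@app.route("/spaceship")
def spaceship():
    parts = get_spaceship_parts()
    return render_template_string(TEMPLATE, parts=parts)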

User Sends a Request

Read the full article at https://realpython.com/lego-model-view-controller-python/ »



March 18, 2024 02:00 PM UTC


ListenData

Python for Data Science: Beginner's Guide

This tutorial will help you learn data science with Python by example. It is designed for beginners who want to get started with data science in Python. Python is an open-source, high-level, general-purpose programming language that has gained enormous popularity in the data science world. In the PYPL (PopularitY of Programming Language) index, Python leads with a 29 percent share, and in the advanced and predictive analytics market it ranks among the top three programming languages.


Introduction

Python is widely used and very popular for a variety of software engineering tasks, such as website development, cloud architecture, and back-end development. It is equally popular in the data science world. In advanced analytics, there have been several debates about R vs. Python. In some areas, such as the number of libraries for statistical analysis, R wins over Python, but Python is catching up fast. With the popularity of big data and data science, Python has become the first programming language of many data scientists.

There are several reasons to learn Python. Some of them are as follows:

  1. Python works well for automating the various steps of a predictive model.
  2. Python has robust libraries for machine learning, natural language processing, deep learning, big data, and artificial intelligence.
  3. Python wins over R when it comes to deploying machine learning models in production.
  4. It can be easily integrated with big data frameworks such as Spark and Hadoop.
  5. Python has great online community support.
Did you know these sites were developed in Python?
  1. YouTube
  2. Instagram
  3. Reddit
  4. Dropbox
  5. Disqus

Python 2 vs. 3

Google yields thousands of articles on this topic: some bloggers argue against Python 2.7 and some in favor of it. If you filter your search to recent articles, you will find that Python 2 is no longer supported by the Python Software Foundation, so it makes no sense to learn 2.7 if you are starting today. All the major packages now support Python 3, which is cleaner and faster, and it fixed major issues with the Python 2 series. Python 3 was first released in 2008, and robust versions of the Python 3 series have been released for over 12 years now. It is the language for the future; you should go for the latest version of Python 3.
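A couple of those fixes, sketched from the Python 3 side:

# Python 3 behavior (comments note what Python 2 did differently)
print("hello")   # print is a function in Python 3; it was a statement in Python 2
print(3 / 2)     # 1.5 -> true division in Python 3 (Python 2 returned 1)
print(3 // 2)    # 1   -> floor division in both versions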

How to install Python?

There are two ways to download and install Python:

  1. Download Anaconda. It comes with Python along with many popular preinstalled libraries.
  2. Download Python from its official website. You then have to install libraries manually.

Recommended: go for the first option and download Anaconda. It saves a lot of time when learning and coding in Python.

Coding Environments

Anaconda comes with two popular IDEs:

  1. Jupyter (IPython) Notebook
  2. Spyder

Spyder is like RStudio for Python: it provides a user-friendly environment for writing Python code. If you are a SAS user, you can think of it as SAS Enterprise Guide / SAS Studio. It comes with a syntax editor where you can write programs and a console to check each line of code, and under the 'Variable explorer' you can access the data files and functions you have created. I highly recommend Spyder!

[Screenshot: the Spyder coding environment]

Jupyter (IPython) Notebook is similar to R Markdown in R. It is useful when you need to present your work to others or to create a step-by-step project report, as it can combine code, output, narrative text, and graphics.

March 18, 2024 12:32 PM UTC

15 Free Open Source ChatGPT Alternatives (with Code)

In this article, we will explain how open-source ChatGPT alternatives work and how you can use them to build your own ChatGPT clone for free. By the end of this article, you will have a good understanding of these models and will be able to compare and use them.

Benefits of Open Source ChatGPT Alternatives

There are various benefits of using open source large language models which are alternatives to ChatGPT. Some of them are listed below.

  1. Data privacy: Many companies want control over their data and don't want any third party to have access to it.
  2. Customization: Developers can train large language models on their own data and apply filtering on certain topics if needed.
  3. Affordability: Open-source GPT models let you train sophisticated large language models without worrying about expensive hardware.
  4. Democratizing AI: It opens room for further research that can be used to solve real-world problems.

Llama

Introduction : Llama

Llama stands for Large Language Model Meta AI. It includes a range of model sizes, from 7 billion to 65 billion parameters. Meta AI researchers focused on scaling the model's performance by increasing the volume of training data rather than the number of parameters, and claimed that the 13-billion-parameter model outperformed the 175-billion-parameter GPT-3. Llama uses the transformer architecture and was trained on 1.4 trillion tokens drawn from sources such as Wikipedia, GitHub, Stack Exchange, books from Project Gutenberg, and scientific papers on ArXiv.

Python Code : Llama

# Install the package first (from a shell): pip install llama-cpp-python

from llama_cpp import Llama

# Load Llama 7B weights in GGML format from the local models folder
llm = Llama(model_path="./models/7B/ggml-model.bin")

# Generate a completion, stopping at the next "Q:" or newline and echoing the prompt
output = llm("Q: Name the planets in the solar system? A: ", max_tokens=128, stop=["Q:", "\n"], echo=True)
print(output)

For the model path, you need the Llama weights in GGML format stored in the models folder; you can find converted weights on the Hugging Face website.

Llama 2

What's New in Llama 2

Here are some of the key differences between Llama 2 and Llama:

Python Code : Llama 2

Llama 2: 7 Billion Parameters

To run the Llama 2 7B model, refer to the code below. It uses a 4-bit quantization technique that reduces the size of the LLM, making it easier to deploy and use on systems with limited memory.

%cd /content
!apt-get -y install -qq aria2

!git clone -b v1.3 https://github.com/camenduru/text-generation-webui
%cd /content/text-generation-webui
!pip install -r requirements.txt
!pip install -U gradio==3.28.3

!mkdir /content/text-generation-webui/repositories
%cd /content/text-generation-webui/repositories
!git clone -b v1.2 https://github.com/camenduru/GPTQ-for-LLaMa.git
%cd GPTQ-for-LLaMa
!python setup_cuda.py install

!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-7b-Chat-GPTQ/raw/main/config.json -d /content/text-generation-webui/models/Llama-2-7b-Chat-GPTQ -o config.json
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-7b-Chat-GPTQ/raw/main/generation_config.json -d /content/text-generation-webui/models/Llama-2-7b-Chat-GPTQ -o generation_config.json
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-7b-Chat-GPTQ/raw/main/special_tokens_map.json -d /content/text-generation-webui/models/Llama-2-7b-Chat-GPTQ -o special_tokens_map.json
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-7b-Chat-GPTQ/resolve/main/tokenizer.model -d /content/text-generation-webui/models/Llama-2-7b-Chat-GPTQ -o tokenizer.model
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-7b-Chat-GPTQ/raw/main/tokenizer_config.json -d /content/text-generation-webui/models/Llama-2-7b-Chat-GPTQ -o tokenizer_config.json
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-7b-Chat-GPTQ/resolve/main/gptq_model-4bit-128g.safetensors -d /content/text-generation-webui/models/Llama-2-7b-Chat-GPTQ -o gptq_model-4bit-128g.safetensors

%cd /content/text-generation-webui
!python server.py --share --chat --wbits 4 --groupsize 128 --model_type llama
Llama 2: 13 Billion Parameters

To run the Llama 2 13B model, refer to the code below.

%cd /content
!apt-get -y install -qq aria2

!git clone -b v1.8 https://github.com/camenduru/text-generation-webui
%cd /content/text-generation-webui
!pip install -r requirements.txt

!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-13b-chat-hf/resolve/main/model-00001-of-00003.safetensors -d /content/text-generation-webui/models/Llama-2-13b-chat-hf -o model-00001-of-00003.safetensors
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-13b-chat-hf/resolve/main/model-00002-of-00003.safetensors -d /content/text-generation-webui/models/Llama-2-13b-chat-hf -o model-00002-of-00003.safetensors
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-13b-chat-hf/resolve/main/model-00003-of-00003.safetensors -d /content/text-generation-webui/models/Llama-2-13b-chat-hf -o model-00003-of-00003.safetensors
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-13b-chat-hf/raw/main/model.safetensors.index.json -d /content/text-generation-webui/models/Llama-2-13b-chat-hf -o model.safetensors.index.json
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-13b-chat-hf/raw/main/special_tokens_map.json -d /content/text-generation-webui/models/Llama-2-13b-chat-hf -o special_tokens_map.json
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-13b-chat-hf/resolve/main/tokenizer.model -d /content/text-generation-webui/models/Llama-2-13b-chat-hf -o tokenizer.model
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-13b-chat-hf/raw/main/tokenizer_config.json -d /content/text-generation-webui/models/Llama-2-13b-chat-hf -o tokenizer_config.json
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-13b-chat-hf/raw/main/config.json -d /content/text-generation-webui/models/Llama-2-13b-chat-hf -o config.json
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/4bit/Llama-2-13b-chat-hf/raw/main/generation_config.json -d /content/text-generation-webui/models/Llama-2-13b-chat-hf -o generation_config.json

%cd /content/text-generation-webui
!python server.py --share --chat --load-in-8bit --model /content/text-generation-webui/models/Llama-2-13b-chat-hf

Alpaca

Introduction : Alpaca

A team of researchers from Stanford University developed an open-source language model called Alpaca. It is based on Meta's large-scale language model Llama. The team used OpenAI's GPT API (text-davinci-003) to fine-tune Llama's 7-billion-parameter (7B) model. The team's goal is to make AI available to everyone for free, so that academics can do further research without worrying about the expensive hardware needed to run these memory-intensive algorithms. Although these open-source models are not available for commercial use, small businesses can still utilize them for building their own chatbots.

How does Alpaca work?

The Stanford team began with the smallest of the Llama models, the Llama 7B model, which had been pre-trained on 1 trillion tokens. They started with the 175 human-written instruction-output pairs from the self-instruct seed set, then used the OpenAI API to ask ChatGPT to generate more instructions based on that seed set, obtaining roughly 52,000 sample conversations. The team used these to further fine-tune the Llama model with Hugging Face's training framework.


March 18, 2024 12:50 AM UTC

4 Ways to Correct Grammar with Python

This tutorial explains various methods for checking and correcting grammar using Python. Automatic grammar correction helps students, professionals, and content creators make sure their writing follows proper grammar rules.
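One commonly used option is the third-party language_tool_python package (a sketch under that assumption; the package downloads LanguageTool, which requires a Java runtime):

import language_tool_python

tool = language_tool_python.LanguageTool("en-US")  # start a local LanguageTool instance

text = "He go to school every days."
matches = tool.check(text)                         # detected grammar issues
print(len(matches))                                # how many issues were found
print(language_tool_python.utils.correct(text, matches))  # apply the suggested fixes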


March 18, 2024 12:45 AM UTC

March 17, 2024


Brett Cannon

State of WASI support for CPython: March 2024

The biggest update since June 2023 is that WASI is now a tier 2 platform for CPython! This means that the main branch of CPython should never be broken for more than 24 hours for WASI and that a release will be blocked if WASI support is broken. This only applies to Python 3.13 and later, although I have been trying to keep Python 3.11 and 3.12 working with WASI as well.

To help make this support easier, the devguide has build instructions for WASI. There is also now a WASI step in CI to help make things easier for core developers.

Starting in wasmtime 14, a new command line interface was introduced. All the relevant bits of code that call wasmtime have been updated to use the new CLI in Python 3.11, 3.12, and 3.13/main.

Lastly, 3.13/main and 3.12 now support WASI SDK 21 – which is the official name of the project – and 3.11 is one bug fix away in the test suite from also having support.

At this point I think CPython has caught up to what's available in WASI 0.2 and wasi-libc via WASI SDK. The open issues are mostly feature requests or checking if assumptions related to what's supported still hold.

I'm on parental leave at this point, so future WASI work from me is on hold until I return to work in June. Another side effect of me becoming a parent soon is I stepped down as the sponsor of Emscripten support in CPython. That means CPython 3.13 does not officially support Emscripten and probably starting in 3.14, I will be removing any code that complicates supporting WASI. The Pyodide project already knows about this and they don't expect it to be a major hindrance for them since they are already used to patching CPython source code.

March 17, 2024 10:43 PM UTC


Peter Bengtsson

How do you thousands-comma AND whitespace format an f-string in Python

Use {my_large_integer:20,}
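A quick demonstration of that format spec (width 20 with comma grouping):

n = 1234567
print(f"{n:20,}")  # '           1,234,567' (right-aligned in a 20-character field, with commas)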

March 17, 2024 01:58 PM UTC

March 15, 2024


Robin Wilson

How to speed up appending to PostGIS tables with ogr2ogr

Summary: If appending to a PostGIS table with GDAL/OGR is taking a long time, try setting the PG_USE_COPY config option to YES (e.g. adding --config PG_USE_COPY YES to your command line). This should speed it up, but beware that if there are concurrent writes to your table at the same time as OGR is accessing it then there could be issues with unique identifiers.

As with many of my blog posts, I’m writing this in the hope that it will appear in searches when someone else has the same problem that I ran into recently. In the past I’ve found myself Googling problems that I’ve had before and finding a link to my blog with an explanation in a post that I didn’t even remember writing.

Anyway, the problem I’m talking about today is one I ran into when working with a client a few weeks ago.

I was using the ogr2ogr command-line tool (part of the GDAL software suite) to import data from a local GeoPackage file into a PostGIS database (i.e. a PostgreSQL database with the PostGIS extension).

I had multiple files of data that I wanted to put into one Postgres table. Specifically, I was using the lovely data collated by Alasdair Rae on the resources page of his website. Even more specifically, I was using some of the Local Authority GIS data to get buildings data for various areas of the UK. I downloaded multiple GeoPackage files (for example, for Southampton City Council, Hampshire County Council and Portsmouth City Council) and wanted to import them all to a buildings table.

I originally tested this with a Postgres server running on my local machine, and ran the following ogr2ogr commands:

ogr2ogr --debug ON \
  -f PostgreSQL PG:"host=localhost user=postgres password=blah dbname=test_db" \
  buildings1.gpkg -nln buildings

ogr2ogr -append -update --debug ON \
  -f PostgreSQL PG:"host=localhost user=postgres password=blah dbname=test_db" \
  buildings2.gpkg -nln buildings

Here I’m using the -f switch and the arguments following it to tell ogr2ogr to export to PostgreSQL and how to connect to the server, giving it the input file of buildings1.gpkg and using the -nln parameter to tell it what layer name (i.e. table name) to use as the output. In the second command I do exactly the same with buildings2.gpkg but also add -append and -update to tell it to append to the existing table rather than overwriting it.

This all worked fine. Great!

A few days later I tried the same thing with a Postgres server running on Azure (using Azure Database for PostgreSQL). The first command ran fine, but the second command seemed to hang.

I was expecting that it would be a bit slower when connecting to a remote database, but I left it running for 10 minutes and it still hadn’t finished. I then tried importing the second file to a new table and it completed quickly – therefore suggesting it was some sort of problem with appending the data.

I worked round this for the time being (using the ogrmerge.py script to merge my buildings1.gpkg and buildings2.gpkg into one file and then importing that file), but resolved to get to the bottom of it when I had time.

Recently, I had that time, and posted on the GDAL mailing list about this. The maintainer of GDAL got back to me to tell me about something I’d missed in the documentation. This was that when importing to a brand new table, the Postgres COPY mode is used, but when appending to an existing table individual INSERT statements are used instead, which can be a lot slower.

Let’s look into this in a bit more detail. The PostgreSQL COPY command is a fast way of importing data into Postgres which involves copying a whole file of data into Postgres in one go, rather than dealing with each row of data individually. This can be significantly faster than iterating through each row of the data and running a separate INSERT statement for each row.

So, ogr2ogr hadn’t hung, it was just running extremely slowly, as inserting my buildings layer involved running an INSERT statement separately for each row, and there were hundreds of thousands of rows. Because the server was hosted remotely on Azure, this involved sending the INSERT command from my computer to the server, waiting for the server to process it, and then the server sending back a result to my computer – a full round-trip for each row of the table.

So, I was told, the simple way to speed this up was to use a configuration setting to turn COPY mode on when appending to tables. This can be done by adding --config PG_USE_COPY YES to the ogr2ogr command. This did the job, and the append commands now completed nice and quickly. If you’re using GDAL/OGR from within a programming language, then have a look at the docs for the GDAL bindings for your language – there should be a way to set GDAL configuration options in your code.
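In the GDAL Python bindings, for example, that looks something like this (a minimal sketch with illustrative connection details):

from osgeo import gdal

# Same effect as --config PG_USE_COPY YES on the command line
gdal.SetConfigOption("PG_USE_COPY", "YES")

# Append a GeoPackage layer to an existing PostGIS table
gdal.VectorTranslate(
    "PG:host=localhost user=postgres password=blah dbname=test_db",
    "buildings2.gpkg",
    options="-append -update -nln buildings",
)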

The only remaining part of this was to understand why the COPY method isn’t used all the time, given that it’s so much quicker. Even Rouault, the GDAL maintainer, explained that this is because of potential issues with other connections to the database updating the table at the same time as GDAL is accessing it. It is a fairly safe assumption that if you’re creating a brand new table then no one else will be accessing it yet, but you can’t assume the same for an existing table. The COPY mode can’t ensure that unique identifiers remain unique when other connections may be accessing the data, whereas individual INSERT statements can cope with this. Therefore it’s safer to default to INSERT statements when there is any risk of data corruption.

As a nice follow-up for this, and on the maintainer’s advice, I submitted a PR to the GDAL docs, which adds a new section explaining this and giving guidance on setting the config option. I’ve copied that section below:

When data is appended to an existing table (for example, using the -append option in ogr2ogr) the driver will, by default, emit an INSERT statement for each row of data to be added. This may be significantly slower than the COPY-based approach taken when creating a new table, but ensures consistency of unique identifiers if multiple connections are accessing the table simultaneously.

If only one connection is accessing the table when data is appended, the COPY-based approach can be chosen by setting the config option PG_USE_COPY to YES, which may significantly speed up the operation.

March 15, 2024 05:28 PM UTC


Python Software Foundation

Introducing PSF Grants Program Office Hours

In October 2023, we acknowledged the situation surrounding DjangoCon Africa and noted our intent to make ongoing improvements to the Grants Program. We also recognize that we are in a new world of hybrid programming since the onset of the pandemic which comes with different funding and cost challenges. One step we are taking to refresh the Grants Program (we’ll be reporting on other steps soon) is to establish PSF Grants Program Office Hours.

The office hours will be hosted on the Python Software Foundation Discord once a month at 1-2PM UTC (9AM Eastern) on the third Tuesday of the month. (Check what time that is for you.) We invite the Python community to join in to receive support for Grant-related questions and inquiries! If you have urgent or immediate questions related to the Grants Program, please email grants@pyfound.org.

Direct line of communication

As we sat down to address the challenges and issues raised around the Grants Program and how to better support the Python community, we came to realize that refreshing the program would not be an easy and quick task. We need a two-way communication channel dedicated to the topic of grants with the PSF Board, the Grants Working Group, the D&I Working Group, the Code of Conduct Working Group, and most importantly, our vibrant and diverse community.

We believe a direct line of communication between the PSF and the worldwide Python community is the best first step. In order to create that direct line, gather your feedback, and collaborate on the future of the program, we are establishing regular PSF Grants Program Office Hours!

What’s the goal?

There are a couple of goals we hope to accomplish with the Grants Program Office Hours. In the short term, we believe recurring time supporting communication between the community and the PSF is key. In other words, a place for folks to come with questions and ideas regarding the Grants Program, with an understanding that we don’t have it perfect yet. If we have the answer, or we can point you to the right resource - amazing! If we don’t, that’s an area we know needs more work and will be added to our “To Do.”

We hope to see the office hours evolve over time as we work through feedback and make updates to our process, documentation, and resources. In the long term, the PSF hopes Grants Program Office Hours will create a place for our community to ask questions about the Grants Program and for us to have (almost) all the answers. We’d like the office hours to continue to be a place where we receive feedback from the community and continuously improve what we can do for Pythonistas around the world.

PSF Grants Program Office Hour Hosts

The PSF Grants Program Office Hours will be hosted by members of the PSF Staff. This will change over time, but for now you can expect to see Laura Graves, Senior Accountant, and Marie Nordin, Community Communications Manager, hosting the sessions. When needed, other PSF staff members will sub in for Laura and Marie.  

This sounds great! How can I join?

The PSF Grants Program Office Hours will be a text-only chat based office hour hosted on the Python Software Foundation Discord at 1-2PM UTC (9AM Eastern) on the third Tuesday of the month. The server is moderated by PSF Staff and is locked in between office hour sessions. If you’re new to Discord, check out some Discord Basics to help you get started. And as always, if you have urgent or immediate questions related to the Grants Program, please email grants@pyfound.org.

Come prepared to the Office Hours with questions and shareable links to your Grant applications drafts in progress via Google docs, etherpad, pastebin, etc. We hope to see you there!

March 15, 2024 04:30 PM UTC


Real Python

The Real Python Podcast – Episode #196: Exploring Duck Typing in Python & Dynamics of Monkey Patching

What are the advantages of determining the type of an object by how it behaves? What coding circumstances are not a good fit for duck typing? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
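As a quick illustration of the episode's topic (a generic sketch, not from the show): with duck typing, any object with the right behavior works, regardless of its class.

class Duck:
    def speak(self):
        return "quack"

class Robot:
    def speak(self):
        return "beep"

# No isinstance checks: anything with a .speak() method is acceptable
def announce(speaker):
    print(speaker.speak())

announce(Duck())   # quack
announce(Robot())  # beep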



March 15, 2024 12:00 PM UTC


PyCharm

Pytest vs. Unittest: Which Is Better?

Python, being a versatile and widely used programming language, offers several testing frameworks to facilitate the testing process. Two prominent choices are pytest and unittest, both of which come with their own sets of features and advantages. In this article, we’ll be covering several sections that will help you determine which testing framework is […]
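For a flavor of the difference (a generic sketch, not from the article): the same check written both ways.

# unittest style: test classes and camelCase assert methods
import unittest

class TestMath(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# pytest style: plain functions and bare assert statements
def test_addition():
    assert 1 + 1 == 2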

March 15, 2024 10:47 AM UTC

March 14, 2024


Paolo Melchiorre

Pybites Podcast #155

Episode 155 of the Pybites podcast, titled “Django, Open Source & Pycon Conferences, Paolo Melchiorre’s Developer Odyssey”.

March 14, 2024 11:00 PM UTC


Python Does What?!

Wisdom for the ages

>>> type(type) is type
True

March 14, 2024 10:09 PM UTC


Peter Bengtsson

Leibniz formula for π in Python, JavaScript, and Ruby

Different ways to calculate the value of π using the Leibniz formula
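The Python version is a one-liner over the series pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...) (a minimal sketch of the formula the post covers):

# Approximate pi with the Leibniz series
def leibniz_pi(terms):
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

print(leibniz_pi(1_000_000))  # about 3.1415917; off from math.pi by roughly 1e-6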

March 14, 2024 08:24 PM UTC


Mike Driscoll

Python 3.13 Allows Disabling of the GIL + subinterpreters

Python 3.13 adds the ability to remove the Global Interpreter Lock (GIL) per PEP 703. Just this past week, a PR was merged that allows disabling the GIL using a command-line flag or an environment variable in free-threaded builds. Note that Python must be built using the Py_GIL_DISABLED flag for this to work.

What is the GIL?

The Global Interpreter Lock makes threading easier in Python and prevents race conditions inside the interpreter. It also makes it impossible for multiple threads to run Python code in parallel across CPU cores. However, you can use the multiprocessing module to better utilize the cores on your CPU, as sketched below.
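For example, a CPU-bound task can be spread across cores with a process pool (a generic sketch):

from multiprocessing import Pool

def cpu_bound(n):
    # A CPU-heavy computation that threads could not parallelize under the GIL
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool() as pool:  # one worker process per CPU core by default
        results = pool.map(cpu_bound, [10_000_000] * 4)
    print(results)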

Removing the GIL

Removing the GIL will likely make Python buggy initially, so they are making the GIL-less version a build flag. That means that the GIL will still be on by default in Python 3.13 while they test the GIL-less version and find all the bugs the change causes. Hopefully, by 3.14 or so, they will know all the bugs and have them fixed to some degree or another.

Subinterpreters Are Coming

A related enhancement is subinterpreters, which Eric Snow has worked on for quite some time. PEP 554 and PEP 734 cover what subinterpreters are and how they are being exposed so you can use them.

Here’s the abstract from PEP 734 to give you an idea of what they are:

This PEP proposes to add a new module, interpreters, to support inspecting, creating, and running code in multiple interpreters in the current process. This includes Interpreter objects that represent the underlying interpreters. The module will also provide a basic Queue class for communication between interpreters. Finally, we will add a new concurrent.futures.InterpreterPoolExecutor based on the interpreters module.

Both of these enhancements have lots of potential for making Python code faster, more efficient, or both. Peter Sobot announced he tried out the GIL removal flag for the pedalboard package and got a 17x speed up for certain processes.

The post Python 3.13 Allows Disabling of the GIL + subinterpreters appeared first on Mouse Vs Python.

March 14, 2024 06:53 PM UTC


Python Does What?!

sequence unpack a dict

>>> a, b = {"a": 1, "b": 2}
>>> a
'a'

March 14, 2024 12:45 PM UTC


Programiz

Python Program to Convert Bytes to a String

In this example, you will learn to convert bytes to a string.
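A minimal sketch of the usual approach:

data = b"hello"
text = data.decode("utf-8")  # decode the bytes into a str using UTF-8
print(text)                  # hello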

March 14, 2024 11:22 AM UTC

Python Program to Remove Duplicate Element From a List

In this example, you will learn to remove duplicate elements from a list.
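One common approach (a sketch that also preserves the original order):

items = [1, 2, 2, 3, 1]
unique = list(dict.fromkeys(items))  # dict keys keep insertion order and drop repeats
print(unique)                        # [1, 2, 3]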

March 14, 2024 11:21 AM UTC

Python Program to Count the Number of Occurrence of a Character in String

In this example, you will learn to count the number of occurrences of a character in a string.
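A one-line sketch using str.count:

sentence = "programiz"
print(sentence.count("r"))  # 2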

March 14, 2024 11:21 AM UTC

Python Program to Create a Countdown Timer

In this example, you will learn to create a countdown timer.
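A minimal sketch using time.sleep:

import time

def countdown(seconds):
    while seconds > 0:
        print(seconds)
        time.sleep(1)  # pause one second between ticks
        seconds -= 1
    print("Time's up!")

countdown(3)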

March 14, 2024 11:20 AM UTC

Python Program to Check If Two Strings are Anagram

In this example, you will learn to check if two strings are anagrams.
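A common sketch: two strings are anagrams if their sorted characters match.

def is_anagram(a, b):
    return sorted(a.lower()) == sorted(b.lower())

print(is_anagram("listen", "silent"))  # True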

March 14, 2024 11:19 AM UTC

Python Program to Count the Number of Digits Present In a Number

In this example, you will learn to count the number of digits present in a number.
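A quick sketch using the length of the number's string form:

number = 34567
print(len(str(abs(number))))  # 5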

March 14, 2024 11:19 AM UTC

Python Program to Compute the Power of a Number

In this example, you will learn to compute the power of a number.
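A sketch of the two usual options:

base, exp = 3, 4
print(base ** exp)     # 81, using the exponentiation operator
print(pow(base, exp))  # 81, using the built-in pow()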

March 14, 2024 11:18 AM UTC

Python List

In this tutorial, we will learn about Python lists (creating lists, changing list items, removing items, and other list operations) with the help of examples.
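A few of the basic list operations such a tutorial covers (a quick sketch):

fruits = ["apple", "banana"]
fruits.append("cherry")  # add an item
fruits[0] = "apricot"    # change an item
fruits.remove("banana")  # remove an item
print(fruits)            # ['apricot', 'cherry']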

March 14, 2024 05:39 AM UTC


Mike Driscoll

NEW COURSE: Python 101 Video Course on Udemy and TutorialsPoint

I recently put my Python 101 Video Course up on Udemy and TutorialsPoint.

Python 101 Video Series

There are one thousand free copies of the course available on Udemy by using the following link:

If you prefer TutorialsPoint, you can get a free copy here:

The Python 101 video course is also available on TeachMePython and my Gumroad store.

I am slowly adding quizzes to the Udemy version of the course and plan to add the same or similar quizzes on TeachMePython. Unfortunately, TutorialsPoint doesn’t support quizzes in the same way because they use a video quiz format, so that site won’t have any quizzes.

The post NEW COURSE: Python 101 Video Course on Udemy and TutorialsPoint appeared first on Mouse Vs Python.

March 14, 2024 03:45 AM UTC