Planet Python
Last update: March 06, 2021 04:46 AM UTC
March 06, 2021
Test and Code
147: Testing Single File Python Applications/Scripts with pytest and coverage
Have you ever written a single file Python application or script?
Have you written tests for it?
Do you check code coverage?
This is the topic of this week's episode, spurred on by a listener question.
The questions:
- For single file scripts, I'd like to have the test code included right there in the file. Can I do that with pytest?
- If I can, can I use code coverage on it?
The example code discussed in the episode: script.py
def foo():
    return 5

def main():
    x = foo()
    print(x)

if __name__ == '__main__':  # pragma: no cover
    main()
## test code
# To test:
# pip install pytest
# pytest script.py
# To test with coverage:
# put this file (script.py) in a directory by itself, say foo
# then from the parent directory of foo:
# pip install pytest-cov
# pytest --cov=foo foo/script.py
# To show missing lines
# pytest --cov=foo --cov-report=term-missing foo/script.py
def test_foo():
    assert foo() == 5

def test_main(capsys):
    main()
    captured = capsys.readouterr()
    assert captured.out == "5\n"
Sponsored By:
- PyCharm Professional: Try PyCharm Pro for 4 months and learn how PyCharm will save you time. Promo Code: TESTANDCODE21
Support Test & Code : Python Testing
March 05, 2021
PyBites
Don't Blame Yourself at Work
A workplace/career thought for you to consider today.
There are times in your career when things are going to feel pretty miserable.
You may feel underappreciated, feel that you're being micromanaged, ignored, etc.
It's natural that when this situation inevitably arises you'll start to doubt yourself and think that you're doing something wrong.
You'll ask yourself, "What am I doing wrong?", "Why do they hate me?", or "Am I even good enough to be doing this?".
In these moments it's important to take a step back and consider your situation from a distance. Take the emotion out of it and really analyse what's going on.
There's likely going to be some sort of change that's occurred in your life or around you to cause the degradation of your work environment. If it's something on your end, then take the necessary steps to fix it. Hold yourself to a high standard, own the change and get things back on track.
On the other hand though, it's important to check the temperature around you. By this I mean tactfully speak with people on your team or in your immediate work environment.
Quite often, and most likely, the problem is not you.
It's so easy for us to go down a path of self-destruction thinking we're at fault in these situations. It's further exacerbated by the loneliness that you'll feel. You don't naturally want to share your perceived "failings" with your colleagues so it might take quite a while before you realise you weren't the issue in the first place.
Finding someone you can trust and speak confidentially with on your team is crucial to finding out where the problem really lies.
Is it your manager? A new process? A shift in company culture? There are many things that can influence your day-to-day at work and it's so important not to jump to the conclusion that you're the "root of all evil" if things are feeling bleak.
My point here is don't blame yourself unnecessarily. Don't do it to yourself. Take the step back, analyse the situation and give it some earnest thought. Speak with those around you about how you're feeling and you'll likely find you're not alone. There's almost always a common denominator and I'd be willing to bet it's not you.
Just remember this if you ever find yourself feeling out of it at work.
-- Julian
To receive a career tip every Thursday, subscribe here.
Andre Roberge
Friendly-traceback will have a new name
tl;dr: I plan to change the name from friendly_traceback to friendly.
When I started working on Friendly-traceback, I had a simple goal in mind:
Given an error message in a Python traceback, parse it and reformulate it into something easier to understand by beginners and that could be easily translated into languages other than English.
A secondary goal was to help users learn how to decipher a normal Python traceback and use the information provided by Python to understand what went wrong and how to fix it.
Early on, I quickly realised that this would not be helpful when users are faced with arguably the most frustrating error message of them all:
SyntaxError: invalid syntax
Encouraged by early adopters, I then began a quest to go much beyond simply interpreting a given error message, and trying to find a more specific cause of a given traceback. As Friendly-traceback was able to provide more and more information to users, I was faced with the realisation that too much information presented all at once could be counter-productive. Thus, it was broken down and could be made available in a console by asking what(), where(), why(), etc. If Friendly-traceback does not recognize a given error message, one can now simply type www() [name subject to change] and an Internet search for that specific message will be done using the default web browser.
By default, Friendly-traceback uses a custom exception hook to replace sys.excepthook: this definitely works with a standard Python interpreter. However, it does not work with IPython, Jupyter notebooks, IDLE (at least, not for Python 3.9 and older), etc. So, custom modules now exist and users have to write:
- from friendly_traceback.idle import ...
- from friendly_traceback.jupyter import ...
- from friendly_traceback.ipython import ...
- from friendly_traceback.mu import ...
- from friendly_traceback import ... # generic case
Back to the name change. I have typed "friendly_traceback" many, many times. It is long and annoying to type. When I work at a console, I often do:
import friendly_traceback as ft
and proceed from there.
I suspect that not too many potential users would be fond of friendly_traceback as a name. Furthermore, I wonder how convenient it is to type a name with an underscore character when using a non-English keyboard. Finally, whenever I write about Friendly-traceback, it is a hyphen that is used between the two names, and not an underscore character: one more possible source of confusion.
For all these reasons, I plan to soon change the name to be simply "friendly". This will almost certainly be done as the version number will increase from 0.2.xy to 0.3.0 ... which is going to happen "soon".
Such a name change will mean a major editing job to the extensive documentation which currently includes 76 screenshots, most of which have "friendly_traceback" in them. This means that they will all have to be redone. Of course, the most important work to be done will be changing the source code itself; however, this should be fairly easy to do with a global search/replace.
Stack Abuse
Python: Check if Array/List Contains Element/Value
Introduction
In this tutorial, we'll take a look at how to check if a list contains an element or value in Python. We'll use a list of strings, containing a few animals:
animals = ['Dog', 'Cat', 'Bird', 'Fish']
Check if List Contains Element With for Loop
A simple and rudimentary method to check if a list contains an element is looping through it and checking if the item we're on matches the one we're looking for. Let's use a for loop for this:
for animal in animals:
    if animal == 'Bird':
        print('Chirp!')
This code will result in:
Chirp!
Check if List Contains Element With in Operator
Now, a more succinct approach would be to use the built-in in operator, paired with an if statement instead of a for loop. It returns True if an element exists in a sequence. The syntax of the in operator looks like this:
element in list
Making use of this operator, we can shorten our previous code into a single statement:
if 'Bird' in animals: print('Chirp')
This code fragment will output the following:
Chirp
This approach has the same efficiency as the for loop, since the in operator, used like this, calls the list.__contains__ method, which inherently loops through the list - though, it's much more readable.
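The delegation to __contains__ can be seen directly: calling the dunder method by hand gives the same result as the in operator. A quick sketch:

```python
animals = ['Dog', 'Cat', 'Bird', 'Fish']

# The `in` operator delegates to list.__contains__ under the hood,
# so both of these checks are equivalent:
print('Bird' in animals)             # True
print(animals.__contains__('Bird'))  # True
```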
Check if List Contains Element With not in Operator
By contrast, we can use the not in operator, which is the logical opposite of the in operator. It returns True if the element is not present in a sequence.
Let's rewrite the previous code example to utilize the not in operator:
if 'Bird' not in animals: print('Chirp')
Running this code won't produce anything, since 'Bird' is present in our list.
But if we try it out with 'Wolf':
if 'Wolf' not in animals: print('Howl')
This code results in:
Howl
Check if List Contains Element With Lambda
Another way you can check if an element is present is to filter out everything other than that element, just like sifting through sand and checking if there are any shells left in the end. The built-in filter() function accepts a function and a list as its arguments. We can use a lambda function here to check for our 'Bird' string in the animals list.
Then, we wrap the results in a list() since the filter() function returns a filter object, not the results. If we pack the filter object in a list, it'll contain the elements left after filtering:
retrieved_elements = list(filter(lambda x: x == 'Bird', animals))
print(retrieved_elements)
This code results in:
['Bird']
Now, this approach isn't the most efficient. It's fairly slower than the previous three approaches we've used. The filter() function itself is equivalent to the generator expression:
(item for item in iterable if function(item))
The slowed down performance of this code, amongst other things, comes from the fact that we're converting the results into a list in the end, as well as executing a function on the item on each iteration.
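This equivalence is easy to verify in a short sketch: filtering with filter() and with the equivalent generator expression produces the same elements (the helper name is_bird is made up for the illustration):

```python
animals = ['Dog', 'Cat', 'Bird', 'Fish']
is_bird = lambda x: x == 'Bird'

# filter() and the equivalent generator expression yield the same results
from_filter = list(filter(is_bird, animals))
from_genexp = list(item for item in animals if is_bird(item))

print(from_filter)                  # ['Bird']
print(from_filter == from_genexp)   # True
```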
Check if List Contains Element Using any()
Another great built-in approach is to use the any() function, which is just a helper function that checks if there is at least one instance of an element in a list. It returns True or False based on the presence or absence of an element:
if any(element == 'Bird' for element in animals):
    print('Chirp')
Since this results in True, our print() statement is called:
Chirp
This approach is also an efficient way to check for the presence of an element. It's as efficient as the first three.
Check if List Contains Element Using count()
Finally, we can use the count() method to check if an element is present or not:
list.count(element)
This method returns the number of occurrences of the given element in a sequence. If it's greater than 0, we can be assured a given item is in the list.
Let's check the results of the count() method:
if animals.count('Bird') > 0:
    print("Chirp")
The count() method inherently loops through the list to count the occurrences, and this code results in:
Chirp
Conclusion
In this tutorial, we've gone over several ways to check if an element is present in a list or not. We've used the for loop, the in and not in operators, as well as the filter(), any() and count() methods.
Real Python
The Real Python Podcast – Episode #50: Consuming APIs With Python and Building Microservices With gRPC
Have you wanted to get your Python code to consume data from web-based APIs? Maybe you've dabbled with the requests package, but you don't know what steps to take next. This week on the show, David Amos is back, and he's brought another batch of PyCoder's Weekly articles and projects.
Talk Python to Me
#306 Scaling Python and Jupyter with ZeroMQ
When we talk about scaling software threading and async get all the buzz. And while they are powerful, using asynchronous queues can often be much more effective. You might think this means creating a Celery server, maybe running RabbitMQ or Redis as well. <br/> <br/> What if you wanted this async ability and many more message exchange patterns like pub/sub. But you wanted to do zero of that server work? Then you should check out ZeroMQ. <br/> <br/> ZeroMQ is to queuing what Flask is to web apps. A powerful and simple framework for you to build just what you need. You're almost certain to learn some new networking patterns and capabilities in this episode with our guest Min Ragan-Kelley to discuss using ZeroMQ from Python as well as how ZeroMQ is central to the internals of Jupyter Notebooks.<br/> <br/> <strong>Links from the show</strong><br/> <br/> <div><b>Min on Twitter</b>: <a href="https://twitter.com/minrk" target="_blank" rel="noopener">@minrk</a><br/> <b>Simula Lab</b>: <a href="https://www.simula.no/research" target="_blank" rel="noopener">simula.no</a><br/> <b>Talk Python Binder episode</b>: <a href="https://talkpython.fm/256" target="_blank" rel="noopener">talkpython.fm/256</a><br/> <b>The ZeroMQ Guide</b>: <a href="https://zguide.zeromq.org/" target="_blank" rel="noopener">zguide.zeromq.org</a><br/> <b>Binder</b>: <a href="https://mybinder.org" target="_blank" rel="noopener">mybinder.org</a><br/> <b>IPython for parallel computing</b>: <a href="https://ipyparallel.readthedocs.io" target="_blank" rel="noopener">ipyparallel.readthedocs.io</a><br/> <b>Messaging in Jupyter</b>: <a href="https://jupyter-client.readthedocs.io/en/stable/messaging.html" target="_blank" rel="noopener">jupyter-client.readthedocs.io</a><br/> <b>DevWheel Package</b>: <a href="https://pypi.org/project/delvewheel/" target="_blank" rel="noopener">pypi.org</a><br/> <b>cibuildwheel</b>: <a href="https://pypi.org/project/cibuildwheel/" target="_blank" rel="noopener">pypi.org</a><br/> 
<br/> <b>YouTube Live Stream</b>: <a href="https://www.youtube.com/watch?v=AIq4fO5t_ks" target="_blank" rel="noopener">youtube.com</a><br/> <b>PyCon Ticket Contest</b>: <a href="https://talkpython.fm/pycon2021" target="_blank" rel="noopener">talkpython.fm/pycon2021</a><br/></div><br/> <strong>Sponsors</strong><br/> <br/> <a href='https://talkpython.fm/linode'>Linode</a><br> <a href='https://talkpython.fm/mito'>Mito</a><br> <a href='https://talkpython.fm/training'>Talk Python Training</a>
Python Pool
Python Shutil Module: 10 Methods You Should Know
The Python shutil module provides many functions to perform high-level operations on files and collections of files. It is an inbuilt module that automates the process of copying and removing files and directories. The module also takes care of low-level semantics, like creating and closing files once they are copied, so you can focus on the business logic.
How does the python shutil module work?
The basic syntax to use shutil module is as follows:
import shutil
shutil.function_name(arguments)
File-Directory operations
1. Python shutil.copy()
shutil.copy(): This function copies the content of the source file to the destination file or directory. It preserves the file's permission mode, but other metadata, such as the file's creation and modification times, is not preserved.
import os
import shutil

# Path of the directory to inspect
path = '/home/User'

# List all the files and directories in the given path
print("Before copying file:")
print(os.listdir(path))

# Source path
source = "/home/User/file.txt"

# Print the file permission mode of the source
perms = os.stat(source).st_mode
print("File Permission mode:", perms, "\n")

# Destination path
destination = "/home/User/file(copy).txt"

# Copy the content of the source file to the destination file
dest = shutil.copy(source, destination)

# List files and directories of the path again
print("After copying file:")
print(os.listdir(path))

# Print the file permission mode of the copy
perms = os.stat(destination).st_mode
print("File Permission mode:", perms)

# Print the path of the file which was created
print("Destination path:", dest)
Output:
Before copying file:
['hrithik.png', 'test.py', 'file.text', 'copy.cpp']
File permission mode: 33188
After copying file:
['hrithik.png', 'test.py', 'file.text', 'file(copy).txt', 'copy.cpp']
File permission mode: 33188
Destination path: /home/User/file(copy).txt
Explanation:
In this code, we first list the files present in the directory. Then we print the file permissions and give the source path of the file. Next, we give the destination path to copy the content into a new file. Finally, we list all the files in the directory again to check whether a copy of the file was created.
2. Python shutil.copy2()
This function is just like the copy() function, except that it also maintains the metadata of the source file.
import os
import shutil
import time

def show_file_info(filename):
    stat_info = os.stat(filename)
    print('\tMode    :', stat_info.st_mode)
    print('\tCreated :', time.ctime(stat_info.st_ctime))
    print('\tAccessed:', time.ctime(stat_info.st_atime))
    print('\tModified:', time.ctime(stat_info.st_mtime))

os.mkdir('example')
print('SOURCE time:')
show_file_info('shutil_copy2.py')
shutil.copy2('shutil_copy2.py', 'example')
print('DESTINATION time:')
show_file_info('example/shutil_copy2.py')
Output:
SOURCE time:
Mode : 33188
Created : Sat Jul 16 12:28:43 2020
Accessed: Thu Feb 21 06:36:54 2021
Modified: Sat Feb 19 19:18:23 2021
DESTINATION time:
Mode : 33188
Created : Mon Mar 1 06:36:54 2021
Accessed: Mon Mar 1 06:36:54 2021
Modified: Tue Mar 2 19:18:23 2021
Explanation:
In this code, we use copy2(), which works the same as copy() but performs one extra operation: it preserves the metadata.
3. Python shutil.copyfile()
With this function, the content of the source file is copied to the destination file name. In the example, the original file is copied under a new name in the same directory, so a duplicate of the file ends up alongside the original.
import os
import shutil

print('BEFORE LIST:', os.listdir('.'))
shutil.copyfile('file_copy.py', 'file_copy.py.copy')
print('AFTER LIST:', os.listdir('.'))
Output:
Latracal:shutil Latracal$ python file_copy.py
BEFORE LIST:
['.DS_Store', 'file_copy.py']
AFTER LIST:
['.DS_Store', 'file_copy.py', 'file_copy.py.copy']
Explanation:
In this code, we use copyfile(): the same file name is used for the new file, with just ".copy" appended to it, as seen in the output.
4. Python shutil.copytree()
This function copies the files and subdirectories in one directory to another directory, so the files end up present in the source as well as the destination. Both parameters must be strings.
import os
import pprint
import shutil

shutil.copytree('../shutil', './Latracal')
pprint.pprint(os.listdir('./Latracal'))
Output:
Latracal:shutil Latracal$ python clone_directory.py
['.DS_Store',
 'file_copy.py',
 'file_copy_new.py',
 'file_with_metadata.py',
 'clone_directory.py']
Explanation:
In this code, we use copytree() so that we get a duplicate of the whole directory.
5. Python shutil.rmtree()
This function is used to remove a directory and all of its files and subdirectories, which means that the entire directory tree is deleted from the system.
import os
import pprint
import shutil

print('BEFORE:')
pprint.pprint(os.listdir('.'))
shutil.rmtree('Latracal')
print('\nAFTER:')
pprint.pprint(os.listdir('.'))
Output:
Latracal:shutil Latracal$ python remove_dir.py
BEFORE:
['.DS_Store',
 'file_copy.py',
 'file_copy_new.py',
 'remove_dir.py',
 'copy_with_metadata.py',
 'Latracal',
 'clone_directory.py']

AFTER:
['.DS_Store',
 'file_copy.py',
 'file_copy_new.py',
 'remove_dir.py',
 'copy_with_metadata.py',
 'clone_directory.py']
Explanation:
In this code, we use rmtree(), which removes a directory tree. First we list all the files, then apply the function, and finally list the files again to check whether the directory was deleted.
6. shutil.which()
The which() function is an excellent tool for finding the path of an executable on your machine, making it easy to locate a particular program by its path.
import shutil

print(shutil.which('bsondump'))
print(shutil.which('no-such-program'))
Output:
Latracal:shutil Latracal$ python find_file.py
/usr/local/mongodb@3.2/bin/bsondump
None
Explanation:
In this code, we use which() so that we can find the path of any program when required.
7. Python shutil.disk_usage()
This function is used to find out how much disk space is total, used, and free in our file system, with a single call to disk_usage().
import shutil

total_mem, used_mem, free_mem = shutil.disk_usage('.')
gb = 10 ** 9
print('Total: {:6.2f} GB'.format(total_mem / gb))
print('Used : {:6.2f} GB'.format(used_mem / gb))
print('Free : {:6.2f} GB'.format(free_mem / gb))
Output:
shubhm:shutil shubhm$
Total: 499.90 GB
Used : 187.72 GB
Free : 308.26 GB
Explanation:
In this code, we use disk_usage() to find out the total, used, and free disk space.
8. Python shutil.move()
This function moves a file or directory from one directory to another and removes it from the previous location. It can also be used to rename a file or directory.
import shutil

shutil.move('hello.py', 'newdir/')
Output:
'newdir/hello.py'
Explanation:
In this code, we use move() to move the file or directory from one place to another.
9. Python shutil.make_archive()
This function is used to build an archive (such as zip or tar) of the files in a root directory.
import shutil

root_directory = 'newdir'
shutil.make_archive("newdirabcd", "zip", root_directory)
output:
'C:\\python\\latracal\\newdirabcd.zip'
Explanation:
In this code, we use make_archive(), giving it the archive name, the format, and the root directory, to build an archive of the files in the root directory.
10. Python shutil.get_archive_formats()
This function returns all of the archive formats supported by the module.
import shutil

print(shutil.get_archive_formats())
output:
[('bztar', "bzip2'ed tar-file"), ('gztar', "gzip'ed tar-file"), ('tar', 'uncompressed tar file'), ('xztar', "xz'ed tar-file"), ('zip', 'ZIP file')]
Explanation:
In this code, we use get_archive_formats() to list the archive formats supported by the module.
Advantages
- The shutil module helps you in the automation of copying files and directories.
- This module saves the steps of opening, reading, writing, and closing files when there is no actual processing, simply moving files.
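To illustrate the second point, here is a small sketch comparing a manual copy (open, read, write, close) with a single shutil.copyfile() call; the temporary file names are created on the fly just for the demonstration:

```python
import os
import shutil
import tempfile

# Create a small source file to copy (demo setup only)
fd, src = tempfile.mkstemp(suffix='.txt')
with os.fdopen(fd, 'wb') as f:
    f.write(b'hello')

# Manual approach: open, read, write, close
manual_dst = src + '.manual'
with open(src, 'rb') as fin, open(manual_dst, 'wb') as fout:
    fout.write(fin.read())

# shutil does the same thing in one call
shutil_dst = src + '.shutil'
shutil.copyfile(src, shutil_dst)

with open(shutil_dst, 'rb') as f:
    print(f.read() == b'hello')  # True
```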
Conclusion
In this article, we have studied many types of high-level file operations, such as copying the contents of a file and creating a new copy of a file, without diving into complex file handling operations, using the shutil module in Python.
However, if you have any doubts or questions, do let me know in the comment section below. I will try to help you as soon as possible.
Happy Pythoning!
The post Python Shutil Module: 10 Methods You Should Know appeared first on Python Pool.
The Insider’s Guide to A* Algorithm in Python
The A* Algorithm, in Python or in general, is an artificial intelligence algorithm used for pathfinding (from point A to point B) and graph traversals. This algorithm is flexible and can be used in a wide range of contexts. The A* search algorithm uses the heuristic path cost, the starting point's cost, and the ending point. This algorithm was first published by Peter Hart, Nils Nilsson, and Bertram Raphael in 1968.
Why A* Algorithm?
This algorithm is an advanced form of the BFS algorithm (breadth-first search), which explores shorter paths before longer ones. It is a complete as well as an optimal solution for solving path and grid problems.
Optimal – it finds the least-cost path from the starting point to the ending point. Complete – it will find a path from start to end whenever one exists.
Basic concepts of A*

f (n) = g (n) + h (n)
Where
g (n) : The actual cost of the path from the start node to the current node.
h (n) : The heuristic (estimated) cost of the path from the current node to the goal node.
f (n) : The estimated total cost of the path from the start node to the goal node through the current node.
For the implementation of A* algorithm we have to use two arrays namely OPEN and CLOSE.
OPEN:
An array that contains the nodes that have been generated but have not yet been examined.
CLOSE:
An array that contains the nodes that have already been examined.
Algorithm
1: Firstly, place the starting node into OPEN and find its f (n) value.
2: Then remove from OPEN the node with the smallest f (n) value. If it is the goal node, then stop and return success.
3: Else, find all the successors of the removed node.
4: Find the f (n) value of all the successors, place them into OPEN, and place the removed node into CLOSE.
5: Go to Step 2.
6: Exit.
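The steps above can be sketched compactly by keeping OPEN as a priority queue ordered by f (n). This is a minimal illustrative sketch using Python's heapq, not the article's own implementation (which appears below); the function and parameter names are made up for the illustration:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Minimal A* sketch. `neighbors(n)` yields (node, step_cost) pairs;
    `h(n)` is the heuristic estimate from n to the goal."""
    # OPEN as a heap of (f, g, node, path); CLOSE as a set of examined nodes
    open_heap = [(h(start), 0, start, [start])]
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)  # node with smallest f
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in neighbors(node):
            if nbr not in closed:
                new_g = g + cost
                heapq.heappush(open_heap, (new_g + h(nbr), new_g, nbr, path + [nbr]))
    return None

# Tiny weighted graph; with h = 0 the search behaves like Dijkstra's algorithm
graph = {'A': [('B', 1), ('C', 3), ('D', 7)], 'B': [('D', 5)], 'C': [('D', 12)], 'D': []}
print(a_star('A', 'D', graph.__getitem__, lambda n: 0))  # ['A', 'B', 'D']
```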
Advantages of A* Algorithm in Python
- It is fully complete and optimal.
- This is the best one of all the other techniques. We use to solve all the complex problems through this algorithm.
- The algorithm is optimally efficient, i.e., there is no other optimal algorithm that is guaranteed to expand fewer nodes than A*.
Disadvantages of A* Algorithm in Python
- This algorithm is complete only if the branching factor is finite and every action has a fixed cost.
- The speed of execution of A* search is highly dependent on the accuracy of the heuristic used to compute h (n), and it can be slower than other algorithms.
- It can be memory-intensive, since all generated nodes are kept in memory.
Pseudo-code of A* algorithm
let openList equal empty list of nodes
let closedList equal empty list of nodes
put startNode on the openList (leave its f at zero)
while openList is not empty
    let currentNode equal the node with the least f value
    remove currentNode from the openList
    add currentNode to the closedList
    if currentNode is the goal
        You've found the exit!
    let children of the currentNode equal the adjacent nodes
    for each child in the children
        if child is in the closedList
            continue to beginning of for loop
        child.g = currentNode.g + distance b/w child and current
        child.h = distance from child to end
        child.f = child.g + child.h
        if child.position is in the openList's nodes positions
            if child.g is higher than the openList node's g
                continue to beginning of for loop
        add the child to the openList
A* Algorithm code for Graph
The A* algorithm is at its best when it comes to finding paths from one place to another. It always makes sure that the path found is the most efficient. This is an implementation of A* on a graph structure:
class Graph:
    def __init__(self, adjac_lis):
        self.adjac_lis = adjac_lis

    def get_neighbors(self, v):
        return self.adjac_lis[v]

    # This is the heuristic function, which has equal values for all nodes
    def h(self, n):
        H = {
            'A': 1,
            'B': 1,
            'C': 1,
            'D': 1
        }
        return H[n]

    def a_star_algorithm(self, start, stop):
        # open_lst is a list of nodes which have been visited, but whose
        # neighbors haven't all been inspected; it starts off with the start node.
        # closed_lst is a list of nodes which have been visited
        # and whose neighbors have all been inspected.
        open_lst = set([start])
        closed_lst = set([])

        # poo has the present distances from start to all other nodes;
        # the default value is +infinity
        poo = {}
        poo[start] = 0

        # par contains the parent mapping of all nodes
        par = {}
        par[start] = start

        while len(open_lst) > 0:
            n = None

            # find a node with the lowest value of f()
            for v in open_lst:
                if n is None or poo[v] + self.h(v) < poo[n] + self.h(n):
                    n = v

            if n is None:
                print('Path does not exist!')
                return None

            # if the current node is the stop node,
            # reconstruct the path from start to stop
            if n == stop:
                reconst_path = []
                while par[n] != n:
                    reconst_path.append(n)
                    n = par[n]
                reconst_path.append(start)
                reconst_path.reverse()
                print('Path found: {}'.format(reconst_path))
                return reconst_path

            # for all the neighbors of the current node
            for (m, weight) in self.get_neighbors(n):
                # if the current node is not present in open_lst or closed_lst,
                # add it to open_lst and note n as its parent
                if m not in open_lst and m not in closed_lst:
                    open_lst.add(m)
                    par[m] = n
                    poo[m] = poo[n] + weight
                # otherwise, check if it's quicker to first visit n, then m;
                # if it is, update the par data and poo data,
                # and if the node was in closed_lst, move it to open_lst
                else:
                    if poo[m] > poo[n] + weight:
                        poo[m] = poo[n] + weight
                        par[m] = n

                        if m in closed_lst:
                            closed_lst.remove(m)
                            open_lst.add(m)

            # remove n from open_lst, and add it to closed_lst,
            # because all of its neighbors were inspected
            open_lst.remove(n)
            closed_lst.add(n)

        print('Path does not exist!')
        return None
INPUT:
adjac_lis = {
    'A': [('B', 1), ('C', 3), ('D', 7)],
    'B': [('D', 5)],
    'C': [('D', 12)]
}
graph1 = Graph(adjac_lis)
graph1.a_star_algorithm('A', 'D')
OUTPUT:
Path found: ['A', 'B', 'D']
['A', 'B', 'D']
Explanation:
In this code, we have made a class named Graph, whose methods perform the different operations; each method is annotated with comments describing what it does. Conditional statements then perform the operations required to find the minimum-cost path for traversal from one node to another. Finally, we get the shortest path to travel from one node to another as output.
Conclusion
A* in Python is a powerful and beneficial algorithm with great potential. However, it is only as good as its heuristic function, which varies greatly with the nature of the problem. It has found applications in software systems ranging from machine learning and search optimization to game development.
The post The Insider’s Guide to A* Algorithm in Python appeared first on Python Pool.
March 04, 2021
Patrick Kennedy
Server-side Sessions in Flask with Redis
I wrote a blog post on TestDriven.io about how server-side sessions can be implemented in Flask with Flask-Session and Redis:
https://testdriven.io/blog/flask-server-side-sessions/
This blog post looks at how to implement server-side sessions in Flask by covering the following topics:
- What is a session?
- Client-side vs. server-side sessions
- Flask-Session overview
- Example Flask application that implements server-side sessions using Flask-Session and Redis
Python Morsels
Inheriting one class from another
Watch first
Need a bit more background? Or want to dive deeper?
Watch other class-related screencasts.
Transcript:
How does class inheritance work in Python?
Creating a class that inherits from another class
We have a class called FancyCounter, which inherits from another class, Counter (from the collections module in the Python standard library):
from collections import Counter
class FancyCounter(Counter):
def commonest(self):
(value1, count1), (value2, count2) = self.most_common(2)
if count1 == count2:
raise ValueError("No unique most common value")
return value1
We know we're inheriting from the Counter class because when we defined FancyCounter, just after the class name we put parentheses and wrote Counter inside them.
To create a class that inherits from another class, after the class name you'll put parentheses and then list any classes that your class inherits from.
In a function definition, parentheses after the function name represent arguments that the function accepts. In a class definition the parentheses after the class name instead represent the classes being inherited from.
Usually when practicing class inheritance in Python, we inherit from just one class. You can inherit from multiple classes (that's called multiple inheritance), but it's a little bit rare. We'll only discuss single-class inheritance right now.
Methods are inherited from parent classes
To use our FancyCounter class, we can call it (just like any other class):
>>> from fancy_counter import FancyCounter
>>> letters = FancyCounter("Hello there!")
Our class will accept a string when we call it because the Counter class has implemented an __init__ method (an initializer method).
Our class also has a __repr__ method for a nice string representation:
>>> letters
FancyCounter({'e': 3, 'l': 2, 'H': 1, 'o': 1, ' ': 1, 't': 1, 'h': 1, 'r': 1, '!': 1})
It even has a bunch of other functionality too. For example, it has overridden what happens when you use square brackets to assign key-value pairs on class instances:
>>> letters['l'] = -2
>>> letters
FancyCounter({'e': 3, 'H': 1, 'o': 1, ' ': 1, 't': 1, 'h': 1, 'r': 1, '!': 1, 'l': -2})
We can assign key-value pairs because our parent class, Counter, creates dictionary-like objects.
All of that functionality was inherited from the Counter class.
Adding new functionality while inheriting
So our FancyCounter class inherited all of the functionality that the Counter class has, but we've also extended it by adding an additional method, commonest, which gives us the most common item in our counter.
When we call the commonest method, we'll get the letter e (which occurs three times in the string we originally gave to our FancyCounter object):
>>> letters.commonest()
'e'
Our commonest method relies on the most_common method, which we didn't define but which our parent class, Counter, did define:
def commonest(self):
(value1, count1), (value2, count2) = self.most_common(2)
if count1 == count2:
raise ValueError("No unique most common value")
return value1
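It's worth exercising the other branch of commonest too: when the two most common values are tied, the method raises the ValueError. Re-using the class exactly as defined above:

```python
from collections import Counter

class FancyCounter(Counter):
    def commonest(self):
        (value1, count1), (value2, count2) = self.most_common(2)
        if count1 == count2:
            raise ValueError("No unique most common value")
        return value1

print(FancyCounter("aab").commonest())  # a

try:
    FancyCounter("aabb").commonest()    # 'a' and 'b' are tied (2 each)
except ValueError as e:
    print(e)  # No unique most common value
```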
Our FancyCounter class has a most_common method because our parent class, Counter, defined it for us!
Overriding inherited methods
If we wanted to customize what happens when we assign a key-value pair on this class, we could do that by overriding the __setitem__ method.
For example, let's make it so that if we assign a key to a negative value, it instead assigns it to 0.
Earlier we assigned letters['l'] to -2; we'd like it to be set to 0 instead of -2 (it's -2 here because we haven't customized this yet):
>>> letters['l'] = -2
>>> letters['l']
-2
To customize this behavior we'll make a __setitem__ method that accepts self, key, and value, because that's what __setitem__ is given by Python when it's called:
def __setitem__(self, key, value):
value = max(0, value)
The above __setitem__ method basically says: if value is negative, set it to 0.
If we stopped writing our __setitem__ at this point, it wouldn't be very useful.
In fact, that __setitem__ method would do nothing at all: it wouldn't give an error, but it wouldn't actually do anything either!
In order to do something useful, we need to call our parent class's __setitem__ method.
We can call our parent class's __setitem__ method by using super:
def __setitem__(self, key, value):
value = max(0, value)
return super().__setitem__(key, value)
We're calling super().__setitem__(key, value), which will call the __setitem__ method on our parent class (Counter) with key and our new non-negative value.
Here's a full implementation of this new version of our FancyCounter class:
from collections import Counter
class FancyCounter(Counter):
def commonest(self):
(value1, count1), (value2, count2) = self.most_common(2)
if count1 == count2:
raise ValueError("No unique most common value")
return value1
def __setitem__(self, key, value):
value = max(0, value)
return super().__setitem__(key, value)
To use this class we'll call it and pass in a string again:
>>> from fancy_counter import FancyCounter
>>> letters = FancyCounter("Hello there!")
But this time, if we assign a key to a negative value, we'll see that it will be assigned to 0 instead:
>>> letters['l'] = -2
>>> letters['l']
0
Summary
If you want to extend another class in Python, taking all of its functionality and adding more functionality to it, you can put some parentheses after your class name and then write the name of the class that you're inheriting from.
If you want to override any of the existing functionality in that class, you'll make a method with the same name as an existing method in your parent class.
Usually (though not always) when overriding an existing method, you'll want to call super in order to extend the functionality of your parent class rather than completely overriding it.
Using super allows you to delegate back up to your parent class, so you can essentially wrap around the functionality that it has and tweak it a little bit for your own class's use.
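This extend-with-super pattern isn't specific to Counter. As a hypothetical illustration (the TrackingDict name and behavior are made up for this example), here's a dict subclass that wraps __setitem__ to remember every key assigned through item assignment:

```python
class TrackingDict(dict):
    """Hypothetical example: remembers every key set via d[key] = value."""

    def __init__(self, *args, **kwargs):
        self.assigned_keys = []
        super().__init__(*args, **kwargs)

    def __setitem__(self, key, value):
        # Record the key, then delegate the actual storage to dict.
        self.assigned_keys.append(key)
        super().__setitem__(key, value)

d = TrackingDict()
d['a'] = 1
d['b'] = 2
d['a'] = 3
print(d.assigned_keys)  # ['a', 'b', 'a']
print(d)                # {'a': 3, 'b': 2}
```

One subtlety: keys passed to the initializer bypass the overridden __setitem__ (dict's built-in __init__ stores them directly), which is a good reminder that delegating with super only wraps the calls that actually route through your method.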
That's the basics of class inheritance in Python.
PyBites
There is NO Competition, Stop Comparing Yourself to Others
The sure way to feel less fulfilled and increase imposter syndrome?
Comparing yourself to others.
Don't do it, just don't.
There will always be better developers and it's a myth you have to be in the top x% to call yourself a developer.
It's just not true. You bring UNIQUE and valuable skills to the table that go far beyond just Python skills.
You can become very good at something if you start focusing on your OWN journey.
Even if you think everything has been done/invented (which is another myth: did you know that ice cream preceded the cone for millennia?)
Whatever you do will have your unique stamp on it. And people will appreciate that.
When we built our coding platform we deliberately ignored all the other amazing solutions out there (Codewars, LeetCode, HackerRank, etc).
It would have demotivated us from the start or worse: we would just have built a lame / inferior copy.
No! We focused on building ours the PyBites way, making it Pythonic and focusing on getting people great results.
And it's now been doing that for years. That would not have happened, though, if we had gotten into the comparison game!
Remember, your only competitor is your yesterday's self.
Focus on what you can control. Become the best version of yourself.
-- Bob
To receive a mindset tip every Monday, subscribe here.
Tryton News
Foundation Budget for 2021
The Foundation has decided to publish a budget for 2021. This is an exercise in transparency so everyone can see our plans. Note that the Foundation's income comes only from donations, so we cannot guarantee that everything will get done. We have ordered the points by priority. Each point will be worked on once total donations reach the stated amount.
Budget points
- 1700€: Infrastructure maintenance (rental and services to maintain our servers).
- 2300€: Create a public overview of how all the current infrastructure is setup.
- 3100€: Buy a new Mac mini to support Apple Silicon.
- 4300€: Improve the contents of the current website by writing more details about supported features and including more success stories.
- 7300€: Build a new code review system.
The amounts do not represent the cost of each individual point but the total amount of donations we need before we can work on it. The cost of each point can be calculated by subtracting the previous point's amount from its amount.
If you want to help make these things happen please consider donating to the foundation. Any amount will be appreciated. We would also like to thank everyone who has already donated to the foundation. Last but not least, we would like to receive enough in donations to buy the Mac mini before the next Tryton release, scheduled for 3rd of May, so we can include support for new Apple devices.
About maintenance and infrastructure cost
If you have been following Tryton for some time you will have noticed that the maintenance budget has increased this year from 500€ to 1700€. The main reason for this is that we have agreed to also include all the services related to maintenance in this cost. Until now B2CK have been providing these services for free, but as it is a time consuming task, and sometimes needs to be done in a hurry when something isn’t working properly, we agreed that it should be paid.
During this year, we are also relying on B2CK to provide the maintenance services, but our plan is to allow other companies to also offer these services. This will allow us to choose which one is best. For this reason we added the second point on the budget, which will allow everyone to have an overview of what needs to be maintained.
1 post - 1 participant
March 03, 2021
Ben Cook
NumPy where: Understanding np.where()
The NumPy where function is like a vectorized switch that you can use to combine two arrays.
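A minimal sketch of that "vectorized switch" idea (the example arrays are made up here, not taken from the linked post): np.where(cond, a, b) selects elementwise from a where the condition is True and from b elsewhere:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([10, 20, 30, 40])

# Where `a` is even, take the value from `a`; otherwise take it from `b`.
combined = np.where(a % 2 == 0, a, b)
print(combined.tolist())  # [10, 2, 30, 4]
```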
Mike Driscoll
Python Packaging Index Removes 3,653 Malicious Libraries
Once again the Python Packaging Index (PyPI) has been hit with malicious libraries. Over 3500 of them in fact. You can read more about this at The Register or the Sonatype Blog. The administrators at PyPI were quick to remove these libraries and minimize the risk of people installing them.
On the plus side, these libraries seemed to be mostly making
The only specific malicious package I have seen being reported is a variant of CuPy, a Python package that uses NumPy for Nvidia’s parallel computing platform.
While this may have been an attempt to warn developers of weaknesses in their supply chain, there have been several other typosquatting incidents on PyPI in the past that were more insidious.
As always, be sure you understand what you are installing when you use pip. It is on you to make sure that you are downloading and installing the correct packages.
The post Python Packaging Index Removes 3,653 Malicious Libraries appeared first on Mouse Vs Python.
Real Python
New Features: Article Bookmarks, Completion Status, and Search Improvements
With close to 2,000 Python tutorials and video lessons in the Real Python content library, it was getting harder and harder for learners to find the right resources at the right time.
To fix that, we've just launched several new features to help you easily find and review the learning resources you're looking for.
Here’s what’s new:
Article Completion Status and Bookmarks
Just like with courses and course lessons, you can now bookmark written tutorials and mark them as completed to track your learning progress.
This makes it super easy to save tutorials you want to read, or to keep tutorials you found valuable around for future reference:
Read the full article at https://realpython.com/article-bookmarks-search-improvements/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Stack Abuse
Seaborn Line Plot - Tutorial and Examples
Introduction
Seaborn is one of the most widely used data visualization libraries in Python, as an extension to Matplotlib. It offers a simple, intuitive, yet highly customizable API for data visualization.
In this tutorial, we'll take a look at how to plot a Line Plot in Seaborn - one of the most basic types of plots.
Line Plots display numerical values on one axis, and categorical values on the other.
They can typically be used in much the same way Bar Plots can be used, though, they're more commonly used to keep track of changes over time.
Plot a Line Plot with Seaborn
Let's start out with the most basic form of populating data for a Line Plot, by providing a couple of lists for the X-axis and Y-axis to the lineplot() function:
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme(style="darkgrid")
x = [1, 2, 3, 4, 5]
y = [1, 5, 4, 7, 4]
sns.lineplot(x, y)
plt.show()
Here, we have two lists of values, x and y. The x list acts as our categorical variable list, while the y list acts as the numerical variable list.
This code results in:
To that end, we can use other data types, such as strings for the categorical axis:
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme(style="darkgrid")
x = ['day 1', 'day 2', 'day 3']
y = [1, 5, 4]
sns.lineplot(x, y)
plt.show()
And this would result in:
Note: If you're using integers as your categorical list, such as [1, 2, 3, 4, 5], but then proceed to go to 100, all values between 5..100 will be null:
import seaborn as sns
sns.set_theme(style="darkgrid")
x = [1, 2, 3, 4, 5, 10, 100]
y = [1, 5, 4, 7, 4, 5, 6]
sns.lineplot(x, y)
plt.show()
This is because a dataset might simply be missing numerical values on the X-axis. In that case, Seaborn simply lets us assume that those values are missing and plots away. However, when you work with strings, this won't be the case:
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme(style="darkgrid")
x = ['day 1', 'day 2', 'day 3', 'day 100']
y = [1, 5, 4, 5]
sns.lineplot(x, y)
plt.show()
However, more typically, we don't work with simple, hand-made lists like this. We work with data imported from larger datasets or pulled directly from databases. Let's import a dataset and work with it instead.
Import Data
Let's use the Hotel Bookings dataset and use the data from there:
import pandas as pd
df = pd.read_csv('hotel_bookings.csv')
print(df.head())
Let's take a look at the columns of this dataset:
hotel is_canceled reservation_status ... arrival_date_month stays_in_week_nights
0 Resort Hotel 0 Check-Out ... July 0
1 Resort Hotel 0 Check-Out ... July 0
2 Resort Hotel 0 Check-Out ... July 1
3 Resort Hotel 0 Check-Out ... July 1
4 Resort Hotel 0 Check-Out ... July 2
This is a truncated view, since there are a lot of columns in this dataset. For example, let's explore this dataset by using arrival_date_month as our categorical X-axis, while we use stays_in_week_nights as our numerical Y-axis:
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
sns.set_theme(style="darkgrid")
df = pd.read_csv('hotel_bookings.csv')
sns.lineplot(x = "arrival_date_month", y = "stays_in_week_nights", data = df)
plt.show()
We've used Pandas to read in the CSV data and pack it into a DataFrame. Then, we can assign the x and y arguments of the lineplot() function as the names of the columns in that dataframe. Of course, we'll have to specify which dataset we're working with by assigning the dataframe to the data argument.
Now, this results in:
We can clearly see that weeknight stays tend to be longer during the months of June, July and August (summer vacation), while they're the lowest in January and February, right after the chain of holidays leading up to New Year.
Additionally, you can see the confidence interval as the area around the line itself, which surrounds the estimated central tendency of our data. Since we have multiple y values for each x value (many people stayed in each month), Seaborn calculates the central tendency of these records and plots that line, as well as a confidence interval for that tendency.
In general, people stay ~2.8 weeknights in July, but the confidence interval spans from 2.78 to 2.84.
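What the line itself represents can be sketched in plain Python (the numbers here are made up, not taken from hotel_bookings.csv): group the y values by their x category and average each group, which is the default estimator lineplot() applies:

```python
from statistics import mean

# Hypothetical stand-in records: (month, stays_in_week_nights) pairs.
records = [
    ('July', 2), ('July', 3), ('July', 3),
    ('January', 1), ('January', 2),
]

# Group the y values by x category...
by_month = {}
for month, nights in records:
    by_month.setdefault(month, []).append(nights)

# ...then average each group; this mean is the "central tendency"
# that Seaborn plots as the line for each category.
central_tendency = {m: mean(v) for m, v in by_month.items()}
print(central_tendency)  # {'July': 2.666..., 'January': 1.5}
```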
Plotting Wide-Form Data
Now, let's take a look at how we can plot wide-form data, rather than the tidy-form we've worked with so far. We'll want to visualize the stays_in_week_nights variable over the months, but we'll also want to take the year of that arrival into consideration. This will result in a Line Plot for each year, over the months, on a single figure.
Since the dataset isn't well-suited for this by default, we'll have to do some data pre-processing on it.
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
df = pd.read_csv('hotel_bookings.csv')
# Truncate
df = df[['arrival_date_year', 'arrival_date_month', 'stays_in_week_nights']]
# Save the order of the arrival months
order = df['arrival_date_month']
# Pivot the table to turn it into wide-form
df_wide = df.pivot_table(index='arrival_date_month', columns='arrival_date_year', values='stays_in_week_nights')
# Reindex the DataFrame with the `order` variable to keep the same order of months as before
df_wide = df_wide.reindex(order, axis=0)
print(df_wide)
Here, we've first truncated the dataset to a few relevant columns. Then, we've saved the order of arrival date months so we can preserve it for later. You can put in any order here, though.
Then, to turn the narrow-form data into wide-form, we've pivoted the table around the arrival_date_month feature, turning arrival_date_year into columns and stays_in_week_nights into values. Finally, we've used reindex() to enforce the same order of arrival months as we had before.
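The same reshaping can be sketched on a tiny made-up frame (the numbers are invented, only the column names match the hotel dataset): pivot_table aggregates duplicate index/column pairs with the mean by default, which is how the central tendencies above were produced:

```python
import pandas as pd

# Small stand-in for the hotel dataset.
df = pd.DataFrame({
    'arrival_date_year':    [2015, 2015, 2016, 2016],
    'arrival_date_month':   ['July', 'July', 'July', 'August'],
    'stays_in_week_nights': [2, 4, 3, 5],
})

# Years become columns, months become the index, and duplicate
# (month, year) pairs are averaged (the default aggfunc is 'mean').
wide = df.pivot_table(index='arrival_date_month',
                      columns='arrival_date_year',
                      values='stays_in_week_nights')
print(wide)
```

Here July 2015 becomes the mean of 2 and 4 (i.e. 3.0), and August 2015, which has no rows, becomes NaN.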
Let's take a look at how our dataset looks now:
arrival_date_year 2015 2016 2017
arrival_date_month
July 2.789625 2.836177 2.787502
July 2.789625 2.836177 2.787502
July 2.789625 2.836177 2.787502
July 2.789625 2.836177 2.787502
July 2.789625 2.836177 2.787502
... ... ... ...
August 2.654153 2.859964 2.956142
August 2.654153 2.859964 2.956142
August 2.654153 2.859964 2.956142
August 2.654153 2.859964 2.956142
August 2.654153 2.859964 2.956142
Great! Our dataset is now correctly formatted for wide-form visualization, with the central tendency of stays_in_week_nights calculated. Now that we're working with a wide-form dataset, all we have to do to plot it is:
sns.lineplot(data=df_wide)
plt.show()
The lineplot() function can natively recognize wide-form datasets and plots them accordingly. This results in:
Customizing Line Plots with Seaborn
Now that we've explored how to plot manually inserted data, how to plot simple dataset features, as well as manipulated a dataset to conform to a different type of visualization - let's take a look at how we can customize our line plots to provide more easy-to-digest information.
Plotting Line Plot with Hues
Hues can be used to segregate a dataset into multiple individual line plots, based on a feature you'd like them to be grouped (hued) by. For example, we can visualize the central tendency of the stays_in_week_nights feature over the months, but take the arrival_date_year into consideration as well and group individual line plots based on that feature.
This is exactly what we've done in the previous example - manually. We've converted the dataset into a wide-form dataframe and plotted it. However, we could've grouped the years into hues as well, which would net us the exact same result:
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
df = pd.read_csv('hotel_bookings.csv')
sns.lineplot(x = "arrival_date_month", y = "stays_in_week_nights", hue='arrival_date_year', data = df)
plt.show()
By setting the arrival_date_year feature as the hue argument, we've told Seaborn to segregate each X-Y mapping by the arrival_date_year feature, so we'll end up with three different line plots:
This time around, we've also got confidence intervals marked around our central tendencies.
Customize Line Plot Confidence Interval with Seaborn
You can fiddle around, enable/disable and change the type of confidence intervals easily using a couple of arguments. The ci argument can be used to specify the size of the interval, and can be set to an integer, 'sd' (standard deviation) or None if you want to turn it off.
The err_style argument can be used to specify the style of the confidence intervals - band or bars. We've seen how bands work so far, so let's try out a confidence interval that uses bars instead:
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
df = pd.read_csv('hotel_bookings.csv')
sns.lineplot(x = "arrival_date_month", y = "stays_in_week_nights", err_style='bars', data = df)
plt.show()
This results in:
And let's change the confidence interval, which is by default set to 95, to display standard deviation instead:
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
df = pd.read_csv('hotel_bookings.csv')
sns.lineplot(x = "arrival_date_month", y = "stays_in_week_nights", err_style='bars', ci='sd', data = df)
plt.show()
Conclusion
In this tutorial, we've gone over several ways to plot a Line Plot in Seaborn. We've taken a look at how to plot simple plots, with numerical and categorical X-axes, after which we've imported a dataset and visualized it.
We've explored how to manipulate datasets and change their form to visualize multiple features, as well as how to customize Line Plots.
If you're interested in Data Visualization and don't know where to start, make sure to check out our book on Data Visualization in Python.
Data Visualization in Python, a book for beginner to intermediate Python developers, will guide you through simple data manipulation with Pandas, cover core plotting libraries like Matplotlib and Seaborn, and show you how to take advantage of declarative and experimental libraries like Altair.
PyCharm
PyCharm and AWS Toolkit Tutorial
Cloud development with Python is a hot topic right now. Amazon recently started shipping their AWS Toolkit for PyCharm, and we already have a tutorial on it from Mukul Mantosh in the PyCharm Guide.
Calvin Hendryx-Parker is a familiar guest on our PyCharm webinars, and he is an expert when it comes to talking about AWS. He joins us to review Mukul’s tutorial, demonstrate it in action, and connect the topics with the wider world of AWS.
This webinar will be extra-interactive! We will be taking lots of questions from the audience about AWS, showing audience suggestions sent in beforehand, and there will be a surprise or two to look forward to in the intermissions.
Day: Tuesday
Date: March 16, 2021
Time: 17:00 CET
Asking questions
If you have any questions on this topic, you can submit them now or during the live stream. To ask your questions now, post them as comments to this blog post. To ask them during the live stream, please use the chat window.
The host will try to answer all your questions during the session. If we run out of time, we’ll post the answers to any remaining questions in a follow-up blog post. We’ll do our best to try to answer all your questions.
About the presenter
Calvin Hendryx-Parker
https://twitter.com/calvinhp
Co-Founder, CTO, Six Feet Up
AWS Community Hero
Calvin Hendryx-Parker is the co-founder and CTO of Six Feet Up, a Python web application development company focused on deploying content management systems, intranets and portals, as well as custom web apps using Django, Pyramid and Plone. Under Calvin’s technical leadership, Six Feet Up has served organizations like Amtrak, Eli Lilly, NASA, UCLA and the United Nations.
As an advocate of open source, Calvin is also the founder and organizer of the IndyPy meetup group and Pythology training series in Indianapolis. In 2016 Calvin was nominated for a MIRA Tech Educator of the Year Award.
Cusy
New: Pattern Matching in Python 3.10
Python, originally an object-oriented programming language, is to receive a new feature in version 3.10 that is mainly known from functional languages: pattern matching. The change is controversial in the Python community and has triggered a heated debate.
Pattern matching is a symbol-processing method that uses a pattern to identify discrete structures or subsets, e.g. strings, trees or graphs. This procedure is found in functional or logical programming languages where a match expression is used to process data based on its structure, e.g. in Scala, Rust and F#. A match statement takes an expression and compares it to successive patterns specified as one or more cases. This is superficially similar to a switch statement in C, Java or JavaScript, but much more powerful.
Python 3.10 is now also to receive such a match expression. The implementation is described in PEP (Python Enhancement Proposal) 634. [1] Further information on the plans can be found in PEP 635 [2] and PEP 636 [3]. How pattern matching is supposed to work in Python 3.10 is shown by this very simple example, where a value is compared with several literals:
def http_error(status):
    match status:
        case 400:
            return "Bad request"
        case 401:
            return "Unauthorized"
        case 403:
            return "Forbidden"
        case 404:
            return "Not found"
        case 418:
            return "I'm a teapot"
        case _:
            return "Something else"
In the last case of the match statement, an underscore _ acts as a placeholder that intercepts everything. This has caused irritation among developers because an underscore is usually used in Python before variable names to declare them for internal use. While Python does not distinguish between private and public variables as strictly as Java does, it is still a very widely used convention that is also specified in the Style Guide for Python Code [4].
However, the proposed match statement can not only check patterns, i.e. detect a match between the value of a variable and a given pattern, it also rebinds the variables that match the given pattern.
This leads to the fact that in Python we suddenly have to deal with Schrödinger constants, which only remain constant until we take a closer look at them in a match statement. The following example is intended to explain this:
NOT_FOUND = 404
retcode = 200

match retcode:
    case NOT_FOUND:
        print('not found')
print(f"Current value of {NOT_FOUND=}")
This results in the following output:
not found
Current value of NOT_FOUND=200
This behaviour leads to harsh criticism of the proposal from experienced Python developers such as Brandon Rhodes, author of «Foundations of Python Network Programming»:
If this poorly-designed feature is really added to Python, we lose a principle I’ve always taught students: “if you see an undocumented constant, you can always name it without changing the code’s meaning.” The Substitution Principle, learned in algebra? It’ll no longer apply.
— Brandon Rhodes on 12 February 2021, 2:55 pm on Twitter [5]
Many long-time Python developers, however, are not only grumbling about the structural pattern-matching that is to come in Python 3.10. They generally regret developments in recent years in which more and more syntactic sugar has been sprinkled over the language. Original principles, as laid down in the Zen of Python [6], would be forgotten and functional stability would be lost.
Although Python has defined a sophisticated process with the Python Enhancement Proposals (PEPs) [7] that can be used to collaboratively steer the further development of Python, there is always criticism on Twitter and other social media, as is the case now with structural pattern matching. In fact, the topic has already been discussed intensively in the Python community. The Python Steering Council [8] recommended adoption of the Proposals as early as December 2020. Nevertheless, the topic only really boiled up with the adoption of the Proposals. The reason for this is surely the size and diversity of the Python community. Most programmers are probably only interested in discussions about extensions that solve their own problems. The other developments are overlooked until the PEPs are accepted. This is probably the case with structural pattern matching. It opens up solutions to problems that were hardly possible in Python before. For example, it allows data scientists to write matching parsers and compilers for which they previously had to resort to functional or logical programming languages.
With the adoption of the PEP, the discussion has now been taken into the wider Python community. Incidentally, Brett Cannon, a member of the Python Steering Council, pointed out in an interview [9] that the last word has not yet been spoken: until the first beta version, there is still time for changes if problems arise in practically used code. He also held out the possibility of changing the meaning of _ once again.
So maybe we will be spared Schrödinger’s constants.
[1] PEP 634: Specification
[2] PEP 635: Motivation and Rationale
[3] PEP 636: Tutorial
[4] https://pep8.org/#descriptive-naming-styles
[5] @brandon_rhodes
[6] PEP 20 – The Zen of Python
[7] Index of Python Enhancement Proposals (PEPs)
[8] Python Steering Council
[9] Python Bytes Episode #221
Python Bytes
#223 Beware: A ninja is shadowing Sebastian from FastAPI
<p>Sponsored by Datadog: <a href="http://pythonbytes.fm/datadog"><strong>pythonbytes.fm/datadog</strong></a></p> <p>Special guest: <a href="https://twitter.com/tiangolo"><strong>Sebastián Ramírez</strong></a></p> <p><strong>Live stream</strong></p> <a href='https://www.youtube.com/watch?v=OP54N64AEVU' style='font-weight: bold;'>Watch on YouTube</a><br> <br> <p><strong>Brian #1:</strong> <a href="https://www.jetbrains.com/lp/python-developers-survey-2020/"><strong>Python Developers Survey 2020 Results</strong></a></p> <ul> <li>Using Python for? <ul> <li>Lots of reductions in percentages. </li> <li>Increases in Education, Desktop, Games, Mobile, and Other</li> </ul></li> <li>Python 3 vs 2 <ul> <li>94% Python3 vs 90% last year</li> <li>Python 3.8 has 44% of Python 3 usage, 3.5 or lower down to 3%</li> </ul></li> <li>environment isolation <ul> <li>54% virtualenv (I assume that includes venv)</li> <li>32% Docker </li> <li>22% Conda</li> </ul></li> <li>Web frameworks <ul> <li>46% Flask</li> <li>43% Django</li> <li>12% FastAPI</li> <li>…</li> <li>2% Pyramid :(</li> <li>…</li> </ul></li> <li>Unit testing <ul> <li>49% pytest</li> <li>28% unittest</li> <li>13% mock</li> </ul></li> <li>OS <ul> <li>68% Linux, 48% Windows, 29% Mac, 2% BSD, 1% other</li> </ul></li> <li>CI: Gitlab, Jenkins, Travis, CircleCI … (Where’s GH Actions?)</li> <li>Editors: PyCharm, VS Code, Vim, …</li> <li>Lots of other great stuff in there</li> </ul> <p><strong>Michael #2:</strong> <a href="https://django-ninja.rest-framework.com/"><strong>Django Ninja - Fast Django REST Framework</strong></a></p> <ul> <li>via Marcus Sharp and Adam Parkin (Codependent Codr) independently</li> <li>Django Ninja is a web framework for building APIs with Django and Python 3.6+ type hints.</li> <li>This project was heavily inspired by <a href="https://fastapi.tiangolo.com/">FastAPI</a> (developed by <a href="https://github.com/tiangolo">Sebastián Ramírez</a>)</li> <li>Key features: <ul> <li><strong>Easy</strong>: Designed to 
be easy to use and intuitive.</li> <li><strong>FAST execution</strong>: Very high performance thanks to <a href="https://pydantic-docs.helpmanual.io"><strong>Pydantic</strong></a> and <a href="https://django-ninja.rest-framework.com/async-support/"><strong>async support</strong></a>.</li> <li><strong>Fast to code</strong>: Type hints and automatic docs lets you focus only on business logic.</li> <li><strong>Standards-based</strong>: Based on the open standards for APIs: <strong>OpenAPI</strong> (previously known as Swagger) and <strong>JSON Schema</strong>.</li> <li><strong>Django friendly</strong>: (obviously) has good integration with the Django core and ORM.</li> <li><strong>Production ready</strong>: Used by multiple companies on live projects.</li> </ul></li> <li>Benchmarks are interesting</li> <li>Example</li> </ul> <pre><code> api = NinjaAPI() @api.get("/add") def add(request, a: int, b: int): return {"result": a + b} </code></pre> <p><strong>Sebastian #3:</strong> <a href="https://pydantic-docs.helpmanual.io/changelog/#v18-2021-02-26"><strong>Pydantic 1.8</strong></a></p> <ul> <li>Hypothesis plugin (for property-based testing).</li> <li>Support for <code>[NamedTuple](https://pydantic-docs.helpmanual.io/usage/types/#namedtuple)</code> and <code>[TypedDict](https://pydantic-docs.helpmanual.io/usage/types/#typeddict)</code> in models.</li> <li>Support for <code>[Annotated](https://pydantic-docs.helpmanual.io/usage/schema/#typingannotated-fields)</code> types, e.g.:</li> </ul> <pre><code> def some_func(name: Annotated[str, Field(max_length=256)] = 'Bar'): pass </code></pre> <p><code>Annotated</code> makes default and required values more “correct” in terms of types. E.g. 
the editor won't assume that a function's parameter is optional because it has a default value of <code>Field('Bar', max_length=256)</code>; this will be especially useful for FastAPI dependency functions that could be called directly in other places in the code.</p> <p><strong>Michael #4:</strong> <a href="https://searchapparchitecture.techtarget.com/news/252496553/Google-Microsoft-back-Python-and-Rust-programming-languages?utm_source=flipboard&utm_medium=syndication&utm_campaign=searchAppArchitecture&utm_term=0&utm_content=image-y"><strong>Google, Microsoft back Python and Rust programming languages</strong></a></p> <ul> <li>Partially via Will Shanks</li> <li>Google and Microsoft join and strengthen forces with the foundations behind the Python and Rust programming languages</li> <li>The companies will get to help shape their future.</li> <li>Microsoft has joined Mozilla, AWS, Huawei and Google as founding members of the Rust Foundation.</li> <li>Google donated $350,000 to the Python Software Foundation (PSF), making the company the organization's first visionary sponsor.</li> <li>Google is investing in improved PyPI malware detection, better foundational Python tools and services, and hiring a CPython Developer-in-Residence for 2021.</li> <li>Other PSF sponsors include Salesforce, a sustainability sponsor contributing $90,000. 
Microsoft, Fastly, Bloomberg and Capital One are maintaining sponsors contributing $60,000 apiece.</li> <li>You’ll find Talk Python Training over at the PSF Sponsors as well.</li> <li>Microsoft has shown an interest in Rust, particularly for writing secure code: “Rust programming changes the game when it comes to writing safe systems software”</li> <li>Microsoft is forming a Rust programming team to contribute engineering efforts to the language's ecosystem, focusing on the compiler, core tooling, documentation and more.</li> </ul> <p><strong>Brian #5:</strong> <a href="https://hynek.me/articles/semver-will-not-save-you/"><strong>Semantic Versioning Will Not Save You</strong></a></p> <ul> <li>Hynek Schlawack</li> <li>Version numbers are usually three numbers separated by dots. </li> <li>SemVer is Major.Minor.Micro</li> <li>Implied promise is that if you depend on something and anything other than the Major version changes, your code won’t break.</li> <li>In practice, you have to be proactive <ul> <li>Have tests with good coverage</li> <li>Pin your dependencies</li> <li>Regularly try to update your dependencies and retest</li> <li>If they pass, pin new versions</li> <li>If not, notify the maintainer of a bug or fix your code</li> <li>Block the versions that don’t work</li> </ul></li> <li>Consequences: <ul> <li>ZeroVer</li> <li>Version conflicts</li> <li>mayhem</li> </ul></li> <li>Consider CalVer</li> </ul> <p><strong>Sebastian #6:</strong> <a href="https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.1.0.md"><strong>OpenAPI 3.1.0</strong></a></p> <ul> <li>It was released in February.</li> <li>Now the OpenAPI schemas are in sync and based on the latest version of JSON Schema. That improves compatibility with other tools. E.g. 
frontend components auto-generated from JSON Schema.</li> <li>Very small details to adjust in Pydantic and FastAPI, but they are actually more “strictly compatible” with OpenAPI 3.1.0, as they were made with the most recent JSON Schema available at the moment. The differences are mainly in one or two very specific corner cases.</li> </ul> <p><strong>Note</strong>: OpenAPI 3.1.0 might not be Python-specific enough, so, in that case, I have an alternative topic: <a href="https://github.com/idom-team/idom">IDOM</a>, which is more or less React in Python on the server with live syncing with the browser.</p> <p><strong>Extras</strong></p> <p>Michael</p> <ul> <li>Installing Python - <a href="https://training.talkpython.fm/installing-python"><strong>training.talkpython.fm/installing-python</strong></a></li> <li>boto3 types update (via Dean Langsam) - seems like boto3 type annotations are not maintained anymore, and the rabbit hole of GitHub links sends you to <a href="https://github.com/vemel/mypy_boto3_builder"><strong>mypy_boto3_builder</strong></a> (they have a gif example).</li> <li><a href="https://github.com/brettcannon/python-launcher/issues/75">Traverse up from the cwd to look for <code>.venv</code> virtual environments #75 (<strong>CLOSED</strong>)</a></li> <li><a href="https://docs.google.com/forms/d/e/1FAIpQLScFtfHLsjxExgwvO_jQ9pwb8IpNezEdSjsIwnEQz0vb5il16w/viewform">Talk Python: AMA 2021 Episode</a></li> </ul> <p>Brian</p> <ul> <li>Thanks to Matthew Casari and NOAA for the great shirts.</li> </ul> <p><strong>Joke</strong></p> <p>More <a href="https://betterprogramming.pub/56-funny-code-comments-that-people-actually-wrote-6074215ab387">code comments</a> jokes</p> <pre><code> try { } finally { // should never happen } </code></pre> <pre><code> /* You may think you know what the following code does. 
* But you don't. Trust me. * Fiddle with it, and you'll spend many a sleepless * night cursing the moment you thought you'd be clever * enough to "optimize" the code below. * Now close this file and go play with something else. */ </code></pre> <pre><code> const int TEN=10; // As if the value of 10 will fluctuate... </code></pre> <pre><code> // I am not responsible for this code. // They made me write it, against my will. </code></pre> <pre><code> // If this code works, it was written by Paul DiLascia. // If not, we don't know who wrote it </code></pre> <pre><code> options.BatchSize = 300; //Madness? THIS IS SPARTA! </code></pre>
Montreal Python User Group
Introduction to programming with Python
Have you always wanted to try out programming and find out if it’s for you?
Come to our Introduction to Programming with Python workshop, taking place Saturday, March 13, at 1 PM Montreal time (13:00 EST). It’s free!
The workshop is designed for adults and will be held in French, though you’re welcome to ask questions in English. No previous programming experience is presumed or required. :)
To sign up, simply confirm your presence on Meetup. Places are limited.
The workshop will be a mix of commented examples and hands-on programming practice in small groups.
Call for volunteers

Perhaps you have a good (or even great!) mastery of Python and would like to contribute to the workshop? Would you like to become a mentor and help the workshop participants learn? Contact Edith Viau on Montréal-Python’s Slack to find out more!
Get your Slack invitation here: mtlpy.org/en/slackin
See you soon!
Kushal Das
Get a TLS certificate for your onion service
For a long time, I wanted to have a certificate for the onion address of my blog. DigiCert was the only CA providing those certificates, and only with Extended Validation. Those are costly and suitable for an organization to get, but not for me personally.
A few days ago, on IRC, I found out that Harica is providing Domain validation for the onion sites for around €30 per year. I jumped in to get one. At the same time, ahf was also getting his certificate. He helped me with the configuration for nginx.
How to get your own certificate?
- Make sure you have your site running as Tor v3 onion service
- Create an account at https://cm.harica.gr/
- Go to server certificates on the left bar, and make a new request for your domain, providing the onion address as requested in the form.
- It will give you the option to upload a CSR (Certificate Signing Request). You can generate one with: openssl req -newkey rsa:4096 -keyout kushaldas.in.onion.key -out csr.csr. For the common name, provide the same onion address.
- After that step on the website, it will ask you to download a file and put it in your web root inside of the .well-known/pki-validation/ directory. Make sure that you can access the file over Tor Browser.
- When you click the final submission button, the system will take some time to verify the domain. After payment, you should be able to download the certificate with the full chain (the file ending with .p7b). There are 3 options on the webpage, so please remember to download the correct file :)
- You will have to convert it into PEM format; I used the command ahf showed me:
openssl pkcs7 -inform pem -in kushaldas.in.p7b -print_certs -out kushaldas.in.onion.chain.pem -outform pem
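To double-check that the conversion produced a usable bundle, a few lines of Python (not from the original post; the filename is the one from the command above) can count the certificate blocks in the resulting PEM file:

```python
def count_pem_certs(pem_path):
    """Return the number of certificate blocks in a PEM bundle."""
    with open(pem_path) as f:
        return f.read().count("-----BEGIN CERTIFICATE-----")
```

A full chain should report more than one certificate; a count of zero means the pkcs7 conversion went wrong, e.g. `count_pem_certs("kushaldas.in.onion.chain.pem")`.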
Setting up nginx
This part will be the same as any other standard nginx configuration. The following is what I use. Please uncomment the Strict-Transport-Security header line only after you are sure everything is working fine.
server {
listen unix:/var/run/tor-hs-kushal.sock;
server_name kushal76uaid62oup5774umh654scnu5dwzh4u2534qxhcbi4wbab3ad.onion;
access_log /var/log/nginx/kushal_onion-access.log;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen unix:/var/run/tor-hs-kushal-https.sock ssl http2;
server_name kushal76uaid62oup5774umh654scnu5dwzh4u2534qxhcbi4wbab3ad.onion;
access_log /var/log/nginx/kushal_onion-access.log;
ssl_certificate /etc/pki/kushaldas.in.onion.chain.pem;
ssl_certificate_key /etc/pki/kushaldas.in.onion.open.key;
#add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
# Turn on OCSP stapling as recommended at
# https://community.letsencrypt.org/t/integration-guide/13123
# requires nginx version >= 1.3.7
ssl_stapling on;
ssl_stapling_verify on;
# modern configuration. tweak to your needs.
ssl_protocols TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers on;
index index.html;
root /var/www/kushaldas.in;
location / {
try_files $uri $uri/ =404;
}
}
I also have the following configuration in the /etc/tor/torrc file to use the unix socket files.
HiddenServiceDir /var/lib/tor/hs-kushal/
HiddenServiceVersion 3
HiddenServicePort 80 unix:/var/run/tor-hs-kushal-me.sock
HiddenServicePort 443 unix:/var/run/tor-hs-kushal-https.sock
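As a small debugging aid (my own addition, not part of the original setup), a Python helper can confirm that the socket paths referenced in torrc and nginx actually exist as unix sockets once both daemons are running:

```python
import os
import stat


def is_unix_socket(path):
    """True if path exists and is a unix domain socket."""
    try:
        return stat.S_ISSOCK(os.stat(path).st_mode)
    except FileNotFoundError:
        return False
```

For example, `is_unix_socket("/var/run/tor-hs-kushal-https.sock")` should return True after tor has created the hidden-service sockets.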
In case you want to know more about why you need a certificate for your onion address, the Tor Project has a very nice explanation.
Mike Driscoll
Python GUI Frameworks (Video)
In this tutorial, I talk about some of Python’s most popular GUI frameworks. You will learn the basics of graphical user interfaces. Then you will learn how to create a simple image viewer using wxPython. Finally, you will see how to rewrite the image viewer using PySimpleGUI.
Related Reading
- Creating an Image Viewer with PySimpleGUI
- Creating a Cross-Platform Image Viewer with wxPython (Video)
The post Python GUI Frameworks (Video) appeared first on Mouse Vs Python.
Python⇒Speed
The security scanner that cried wolf
If you run a security scanner on your Docker image, you might be in for a shock: often you’ll be warned of dozens of security vulnerabilities, even on the most up-to-date image. After the third or fourth time you get this result, you’ll start tuning the security scanner out.
Eventually, you won’t pay attention to the security scanner at all—and you might end up missing a real security vulnerability that slipped through.
This is not your fault: the problem is the way many security scanners report their results. So let’s see what they output, why it’s problematic, and how to get more useful security scanner results.
Read more...
March 02, 2021
Ben Cook
Finding the mode of an empirical continuous distribution
You can find the mode of an empirical continuous distribution by plotting the histogram and looking for the maximum bin.
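The idea can be sketched in a few lines of pure Python (the function name and defaults here are illustrative, not from the original post): bin the samples, then return the center of the fullest bin.

```python
def histogram_mode(samples, bins=50):
    """Estimate the mode of a continuous sample as the center of the fullest histogram bin."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in samples:
        # Clamp the top edge value into the last bin.
        i = min(int((x - lo) / width), bins - 1)
        counts[i] += 1
    peak = max(range(bins), key=counts.__getitem__)
    return lo + (peak + 0.5) * width
```

Note the estimate is only as fine as the bin width, so the bin count trades noise against resolution.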
PyCoder’s Weekly
Issue #462 (March 2, 2021)
#462 – MARCH 2, 2021
View in Browser »
Semantic Versioning Will Not Save You
Semantic versioning aims to both communicate the version of software as well as promise that certain versions won’t break anything. Sounds great, right? In a lot of cases it is, but a blind reliance on semantic versioning can come back to haunt you.
HYNEK SCHLAWACK
Python and MongoDB: Connecting to NoSQL Databases
Learn how to use Python to interface with the NoSQL database system MongoDB. You’ll get an overview of the differences between SQL and NoSQL, and you’ll also learn about related tools, including PyMongo and MongoEngine.
REAL PYTHON
Automate Python Profiling and Performance Testing
Performance is a feature, make sure it is tested as such. Integrate performance testing in CI/CD. Validate production deploys. Run tests upon any event. Blackfire offers a robust way to run test scenarios and validate code changes, automatically. Discover Blackfire Builds now. Free 15 days trial →
BLACKFIRE sponsor
Generate Customizable PDF Reports With Python
Learn how to generate custom PDF reports using reportlab and pdfrw with a PyQt GUI.
MARTIN FITZPATRICK
Python 3.10.0a6 Is Now Available for Testing
Now including structural pattern matching!
CPYTHON DEV BLOG
Discussions
In Python’s near future, indexing may support keyword arguments
For example, you could do matrix[row=20, col=40]. Read more about it in PEP 637.
RAYMOND HETTINGER
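For context, today's subscription syntax passes a single object (a tuple, when there are multiple indices) to __getitem__; a toy example of what currently works, with the PEP 637 spelling noted in a comment:

```python
class Matrix:
    """Toy class showing how multi-item subscription works today."""

    def __getitem__(self, key):
        # m[20, 40] arrives here as the tuple (20, 40).
        row, col = key
        return f"cell({row}, {col})"


m = Matrix()
m[20, 40]            # today's spelling
# m[row=20, col=40]  # proposed by PEP 637 (not valid syntax yet)
```

Keyword arguments in subscripts would let classes like this name their axes instead of relying on positional order.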
Python Jobs
Senior Backend Developer (Berlin, Germany)
Advanced Python Engineer (Newport Beach, CA, USA)
Python Tutorial Authors Wanted (Remote)
Full-Stack Django Developer (Oslo, Norway)
Articles & Tutorials
Navigating Namespaces and Scope in Python
Learn about Python namespaces, the structures used to store and organize the symbolic names created during the execution of a Python program. You’ll learn when namespaces are created, how they are implemented, and how they define variable scope.
REAL PYTHON course
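As a quick taste of what the course covers, name lookup follows the LEGB rule (Local, Enclosing, Global, Built-in); a small illustrative example:

```python
x = "global"


def outer():
    x = "enclosing"

    def inner():
        x = "local"  # shadows the enclosing and global x
        return x

    # inner() resolves its own local x; outer still sees its enclosing x.
    return inner(), x


print(outer())  # ('local', 'enclosing')
print(x)        # 'global'
```

Each assignment creates a name in the innermost scope unless `global` or `nonlocal` says otherwise.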
Friendly-traceback: Testing With Real Python
See how friendly-traceback improves syntax error reporting by comparing the output from friendly-traceback with examples in the Real Python tutorial Invalid Syntax in Python: Common Reasons for SyntaxError.
ANDRÉ ROBERGE
Free SQL 101 Workshop with Metis
Register to attend Metis’s next One Hour at Bootcamp workshop on March 3rd at 6pm ET! Our data science team will teach you the core components of SQL queries and how to write moderately complex SQL queries to aggregate data →
METIS sponsor
The Challenges of Developing Into a Python Professional
What’s the difference between writing code for yourself and developing for others? What new considerations do you need to take into account as a professional Python developer? This week on the show, we talk to Dane Hillard about his book “Practices of the Python Pro”.
REAL PYTHON podcast
Brython: Python in Your Browser
Learn how to use Brython to run Python code in the browser. Although most front-end web applications are written in JavaScript, you can use Brython to access JavaScript libraries and APIs and deploy Python-based applications to the web.
REAL PYTHON
Make Tests a Part of Your App
Have you ever written a test that re-implements a library-specific case? What if that test was just a part of the library code? See how tightly integrating tests into your library code can save users time and help them find bugs.
NIKITA SOBOLEV
Spend Less Time Debugging and More Time Building with Scout APM
Scout APM uses tracing logic to tie bottlenecks to source code to help developers identify and resolve performance issues at only $39 a month! Start your free 14-day trial today and we’ll donate $5 to the OSS project of your choice when you deploy!
SCOUT APM sponsor
Efficient Postgres Full Text Search in Django
Learn how to optimize a Full Text Search implementation with Django and Postgres. Even on a small table, you can reduce the query execution time from 0.045 seconds to 0.001 seconds!
ADEYINKA ADEGBENRO • Shared by Manuel Weiss
Profiling Python code with line_profiler
Use line_profiler to see line-level execution time for your Python code. It may surprise you where your code is slow and what it takes to speed it up!
MATT WRIGHT
Projects & Code
absolufy-imports: Automatically Convert Your Relative Imports to Absolute
GITHUB.COM/MARCOGORELLI • Shared by Marco Gorelli
NBShare: Share Your Python Notebooks
NBSHARE.IO • Shared by John Ludhi
Events
Real Python Office Hours (Virtual)
March 3, 2021
REALPYTHON.COM
Python Web Conference 2021 (Virtual)
March 22 – 26, 2021
PYTHONWEBCONF.COM
PyCon Israel 2021 (Virtual)
May 2 – 3, 2021
PYCON.ORG.IL
PyCon 2021 (Virtual)
May 12 – 18, 2021
PYCON.ORG
DjangoCon Europe 2021 (Virtual)
June 2 – 6, 2021
DJANGOCON.EU
Happy Pythoning!
This was PyCoder’s Weekly Issue #462.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]