
Planet Python

Last update: August 18, 2019 07:48 AM UTC

August 17, 2019


TechBeamers Python

Python Filter()

The Python filter() function applies another function to a given iterable (list, string, dictionary, etc.) to test which of its items to keep and which to discard. In simple words, it filters out the items that don't pass the test and returns the rest as a filter object. The filter object is iterable, and it retains those elements for which the passed function returned True. We can also convert it to a list, tuple, or other type using their factory functions. In this tutorial, you'll learn how to use the filter() function with different types of sequences. Also, you can refer to the examples
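For instance, here's a minimal illustration of filtering a list with a test function:

nums = [1, 2, 3, 4, 5, 6]
evens = filter(lambda n: n % 2 == 0, nums)  # returns a filter object (an iterator)
print(list(evens))  # [2, 4, 6]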

The post Python Filter() appeared first on Learn Programming and Software Testing.

August 17, 2019 06:54 AM UTC


Codementor

Creating a Docker Swarm Stack with Terraform (Terrascript Python), Persistent Volumes and Dynamic HAProxy.

This article demonstrates how to create a Docker Swarm cluster with volumes, firewall, DNS, and load balancing, using Terraform wrapped in a Python script.

August 17, 2019 03:20 AM UTC

August 16, 2019


Brett Cannon

How do you verify that PyPI can be trusted?

A co-worker of mine attended a technical talk about how Go's module mirror works and he asked me whether there was something there that Python should do.

Now Go's packaging story is rather different from Python's, since in Go you specify the location of a module by the URL you fetch it from, e.g. github.com/you/hello specifies the hello module as found at https://github.com/you/hello. This means Go's module ecosystem is distributed, which leads to interesting problems of caching so code doesn't disappear off the internet (e.g. a left-pad incident), and of verifying that a module's provider isn't suddenly replacing the code they provide with something malicious.

But since the Python community has PyPI, our problems are slightly different: we just have to worry about a single point of failure (which has its own downsides). Now obviously you can run your own mirror of PyPI (and plenty of companies do), but for the general community no one wants to bother setting something like that up and keeping it maintained (do you really need your own mirror to download some dependencies for the script you just wrote to help clean up your photos from your latest trip?). But we should still care about whether PyPI has been compromised, i.e. whether packages hosted there have been tampered with somewhere between when the project owner uploaded their release's files and when you download them.

Verifying PyPI is okay

So the first thing we can do is see if we can tell whether PyPI has been compromised somehow. This takes on two different levels of complexity. One is checking whether anything nefarious has occurred post-release. The fancier step is to provide a way for project owners to tell other folks what they are giving PyPI, so that those folks can act as auditors.

Post-release trust

In a post-release scenario you're trusting that PyPI received a release from a project owner successfully and safely. What you're worrying about here is that at some later point PyPI gets compromised and someone, e.g., swaps out the files in requests so that they can steal some Bitcoin. So what are some options here?

Trust PyPI

The simplest one is to not worry about it. 😁 PyPI is run by some very smart, dedicated folks, so if you feel comfortable trusting them not to mess up then you can simply not stress about compromises.

Trust PyPI up to when you froze your dependencies

Now perhaps you do generally trust the PyPI administrators and don't think anything has happened yet, but you wouldn't mind a fairly cheap way, available today, to make sure nothing fishy happens in the future. In that case you can record the hashes of your locked dependencies. (If you're an app developer, you are locking your dependencies, right?)

Basically, you have whatever tool you use to lock your dependencies – e.g. pip-tools, pipenv, poetry – record the hashes of the files you depend on at the time of locking. That way, in the future you can check for yourself that the files you download from PyPI match bit-for-bit what you previously downloaded and used. Now this doesn't guarantee that what you initially downloaded when you froze your dependencies didn't contain compromised code, but at least you know that nothing questionable has occurred since.
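As a concrete example, pip supports a hash-checking mode: you record each dependency's expected hash in your requirements file, and pip refuses to install any download that doesn't match. A minimal sketch (the wheel filename is just an example):

$ python -m pip hash requests-2.22.0-py2.py3-none-any.whl  # prints the file's sha256
$ python -m pip install --require-hashes -r requirements.txt  # fails on any mismatch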

Trust PyPI or an independent 3rd-party since they started running

Now we're into the "someone would have to do work to make this happen" realm; everything up until now you can do today, but this idea requires money (although PyPI still requires money to simply function as well, so please have your company donate if you use PyPI at work).

What one could do is run a 3rd-party service that records the hashes of all files that end up on PyPI. That way, if you wanted to check that a file's hash hasn't changed since the 3rd-party service started running, you could simply ask the 3rd-party service for the hash of whatever file you want from PyPI, ask PyPI what it thinks the hash should be, and then check that the hashes match. If they do match then you should be able to trust the hashes, but if they differ then either PyPI or the 3rd-party service has been compromised.

Now this is predicated on the idea that the 3rd-party service is truly 3rd-party. If any staff is shared between the 3rd-party service and PyPI then that's a potential point of compromise. This also assumes that PyPI has not already been compromised. But at least in this scenario your trust in PyPI starts from when the 3rd-party service began running, not from when you locked your dependencies.

You can also extend this out to multiple 3rd parties recording file hashes so that you can compare hashes against multiple sources. This not only forces someone to compromise multiple services in order to cover up a file change, but if someone is compromised you could use a quorum to decide who's right and who's wrong.

Auditing what everyone claims

This entire blog post started because of a Twitter thread about how to be able to validate what PyPI claims. At some point I joked that I was shocked no one had mentioned the blockchain yet. And that's when I was informed that Certificate Transparency logs are basically what we would want and they use something called Merkle hash trees that started with P2P networks and have been used in blockchains.

I'm not going to go into all the details of how Certificate Transparency works, but basically it uses an append-only log that can be cryptographically verified as not having been manipulated (and you could totally treat recording hashes of files on PyPI as an append-only log).

There are two very nice properties of these hash trees. One is that when an update has been made, it is very cheap to verify that all the previous entries in the log have not changed. Basically, all you need is a few key values from the previous version of the hash tree; then, when you add new values to the tree and re-balance it, it's easy to verify the old stuff is still the same. This is great for monitoring for manipulation of previous data while also making it easy to add to the log.

The second property is that checking that an entry hasn't been tampered with can be done without having the entire tree available. Basically, you only need the nodes along the path from that leaf node to the root, plus the immediate siblings of those nodes. This means that even if your hash tree has a massive number of leaf nodes, it doesn't take much to audit that a single leaf node has not changed.
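To make that concrete, here's a minimal sketch in Python (not Certificate Transparency's exact encoding) of verifying a single leaf against a published root hash using only the sibling hashes along its path:

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_leaf(leaf_data: bytes, audit_path, expected_root: bytes) -> bool:
    # audit_path is a list of (sibling_hash, sibling_is_left) pairs,
    # ordered from the leaf up to the root.
    node = sha256(leaf_data)
    for sibling_hash, sibling_is_left in audit_path:
        # Concatenate in the correct left/right order before hashing upward.
        node = sha256(sibling_hash + node) if sibling_is_left else sha256(node + sibling_hash)
    return node == expected_root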

So all of this leads to a nice system to help keep PyPI honest if you can assume the initial hashes are reliable.

Release-in-progress trust

So all of the above scenarios assume PyPI was secure at the time of initially receiving a file but then potentially was compromised later. But how could we check that PyPI isn't already compromised?

One idea I had was that twine could upload a project release's hashes to some trusted 3rd parties as well as to PyPI. Then the 3rd parties could either directly compare the hashes PyPI claims to have against what they were given independently, or they could use their data to create that release's entry in the append-only hash tree log and see whether the final hash matches what PyPI claims. And if a 3rd party wasn't given some hashes by the project owner then it could simply fill in with what PyPI has. But the key point is that by having the project owner directly share hashes with 3rd parties that are monitoring PyPI, we would have a way to detect whether PyPI is providing files as the project owner expected.

Making PyPI harder to hack

Now obviously it would be best if PyPI was as hard to compromise as possible as well as detecting compromises on its own. There are actually two PEPs on the topic: PEP 458 and PEP 480. I'm not going to go into details since that's why we have PEPs, but people have thought through how to make PyPI hard to compromise as well as how to detect it.

But knowing that a design is available, you may be wondering why it hasn't been implemented.

What can you do to help?

There is a major reason why the ideas above have not been implemented: money. People using Python for personal projects typically don't worry about this sort of stuff because it just isn't a big concern, so people are not chomping at the bit to implement any of this for fun in their spare time. But for any business relying on packages coming from PyPI, it should be a concern, since that business relies on the integrity of PyPI and the Python ecosystem. And so if you work for a company that uses packages from PyPI, then please consider having the company donate to the Packaging WG (you can also find the link by going to PyPI and clicking the "Donate" button). Previous donations got us the current back-end and look of PyPI as well as the recent work to add two-factor authentication and API tokens, so the WG already knows how to handle donations and turn them into results. So if anything I talked about here sounds worth doing, then please consider donating to help make it happen.

August 16, 2019 11:26 PM UTC


Vinta Software

PyBay 2019: Talking about Python in SF

We are back in San Francisco! Our team will be joining PyBay's conference, one of the biggest Python events in the Bay Area. This year, we'll be giving the talk: Building effective Django queries with expressions. PyBay has been a fantastic place to meet new people, connect with new ideas, and engage with this thriving community.

August 16, 2019 07:47 PM UTC


Quansight Labs Blog

Spyder 4.0 beta4: Kite integration is here

Kite is sponsoring the work discussed in this blog post, and in addition supports Spyder 4.0 development through a Quansight Labs Community Work Order.

As part of our next release, we are proud to announce an additional completion client for Spyder: Kite. Kite is a novel completion client that uses machine learning techniques to find and predict the best autocompletion for a given text. Additionally, it collects improved documentation for compiled packages, e.g., Matplotlib, NumPy, and SciPy, whose documentation cannot be obtained easily by traditional code analysis packages such as Jedi.


August 16, 2019 07:19 PM UTC


Stack Abuse

Basics of Memory Management in Python

Introduction

Memory management is the process of efficiently allocating, de-allocating, and coordinating memory so that all the different processes run smoothly and can optimally access different system resources. Memory management also involves cleaning memory of objects that are no longer being accessed.

In Python, the memory manager is responsible for these kinds of tasks by periodically running to clean up, allocate, and manage the memory. Unlike C, Java, and other programming languages, Python manages objects by using reference counting. This means that the memory manager keeps track of the number of references to each object in the program. When an object's reference count drops to zero, which means the object is no longer being used, the garbage collector (part of the memory manager) automatically frees the memory from that particular object.

The user need not worry about memory management, as the process of allocating and de-allocating memory is fully automatic. The reclaimed memory can be used by other objects.

Python Garbage Collection

As explained earlier, Python deletes objects that are no longer referenced in the program to free up memory space. This process in which Python frees blocks of memory that are no longer used is called garbage collection. The Python Garbage Collector (GC) runs during program execution and is triggered when a reference count drops to zero. The reference count increases when an object is assigned a new name or is placed in a container, like a tuple or dictionary. Similarly, the reference count decreases when the reference to an object is reassigned, when the object's reference goes out of scope, or when the object is deleted.
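You can watch reference counts change using sys.getrefcount (note that the count it reports is one higher than you might expect, because the function's own argument temporarily holds a reference):

import sys

data = [1, 2, 3]
print(sys.getrefcount(data))  # 2: the name `data` plus getrefcount's argument

alias = data                  # binding a new name increases the count
print(sys.getrefcount(data))  # 3

del alias                     # deleting the name decreases it again
print(sys.getrefcount(data))  # 2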

The memory is a heap that contains objects and other data structures used in the program. The allocation and de-allocation of this heap space is controlled by the Python Memory manager through the use of API functions.

Python Objects in Memory

Each variable in Python acts as a reference to an object. Objects can either be simple (numbers, strings, etc.) or containers (dictionaries, lists, or user-defined classes). Furthermore, Python is a dynamically typed language, which means that we do not need to declare variables or their types before using them in a program.

For example:

>>> x = 5
>>> print(x)
5
>>> del x
>>> print(x)
Traceback (most recent call last):
  File "<mem_manage>", line 1, in <module>
    print(x)
NameError: name 'x' is not defined

If you look at the first two lines of the above program, the object x is known. When we delete the object x and then try to use it, we get an error stating that the name x is not defined.

You can see that garbage collection in Python is fully automated and the programmer does not need to worry about it, unlike in languages like C.

Modifying the Garbage Collector

The Python garbage collector classifies objects into three generations. A new object starts its life cycle in the first generation of the garbage collector. If the object survives a garbage collection, it is moved up to the next generation. Each of the three generations of the garbage collector has a threshold: when the number of allocations minus the number of de-allocations exceeds the threshold, that generation runs garbage collection.

Earlier generations are also garbage collected more often than the higher generations. This is because newer objects are more likely to be discarded than old objects.

The gc module includes functions to change the threshold values, trigger a garbage collection manually, disable the garbage collection process, etc. We can check the threshold values of the different generations of the garbage collector using the get_threshold() function:

import gc
print(gc.get_threshold())

Sample Output:

(700, 10, 10)

As you can see, here we have a threshold of 700 for the first generation, and 10 for each of the other two generations.

We can alter the threshold values for triggering the garbage collection process using the set_threshold() function of the gc module:

gc.set_threshold(900, 15, 15)

In the above example, we have increased the threshold value for all three generations. Increasing the threshold value will decrease the frequency with which the garbage collector runs. As a developer, you normally don't need to think much about Python's garbage collection, but tuning it may be useful when optimizing the Python runtime for your target system. One of the key benefits is that Python's garbage collection mechanism handles a lot of low-level details for the developer automatically.

Why Perform Manual Garbage Collection?

We know that the Python interpreter keeps track of references to objects used in a program. In earlier versions of Python (until version 1.6), the interpreter used only the reference counting mechanism to handle memory. When the reference count drops to zero, the interpreter automatically frees the memory. This classical reference counting mechanism is very effective, except that it fails when the program has reference cycles. A reference cycle occurs when one or more objects reference each other, so their reference counts never reach zero.

Let's consider an example.

>>> def create_cycle():
...     list = [8, 9, 10]
...     list.append(list)
...     return list
... 
>>> create_cycle()
[8, 9, 10, [...]]

The above code creates a reference cycle, where the object list refers to itself. Hence, the memory for the object list will not be freed automatically when the function returns. The reference cycle problem can't be solved by reference counting alone. However, it can be solved by changing the behavior of the garbage collector in your Python application.

To do so, we can use the gc.collect() function of the gc module.

import gc
n = gc.collect()
print("Number of unreachable objects collected by GC:", n)

The gc.collect() function returns the number of objects it has collected and de-allocated.

There are two ways to perform manual garbage collection: time-based or event-based garbage collection.

Time-based garbage collection is pretty simple: the gc.collect() function is called after a fixed time interval.

Event-based garbage collection calls the gc.collect() function after an event occurs (e.g. when the application exits or has been idle for a specific time period).
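As a minimal sketch of the time-based variant (the 60-second interval is an arbitrary choice):

import gc
import threading

def collect_periodically(interval_seconds=60.0):
    # Collect now, then schedule the next collection.
    gc.collect()
    timer = threading.Timer(interval_seconds, collect_periodically, [interval_seconds])
    timer.daemon = True  # don't keep the process alive just for this timer
    timer.start()

collect_periodically()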

Let's see how manual garbage collection works by creating a few reference cycles.

import sys, gc

def create_cycle():
    list = [8, 9, 10]
    list.append(list)

def main():
    print("Creating garbage...")
    for i in range(8):
        create_cycle()

    print("Collecting...")
    n = gc.collect()
    print("Number of unreachable objects collected by GC:", n)
    print("Uncollectable garbage:", gc.garbage)

if __name__ == "__main__":
    main()
    sys.exit()

The output is as below:

Creating garbage...
Collecting...
Number of unreachable objects collected by GC: 8
Uncollectable garbage: []

The script above creates a list object that is referred to by a variable, creatively named list. The last element of the list object refers to the list itself. The reference count of the list object is always greater than zero, even after it is deleted or goes out of scope in the program. Hence, the list object is not cleaned up by reference counting, due to the circular reference. The garbage collector mechanism in Python will automatically check for, and collect, circular references periodically.

In the above code, since the reference count is at least 1 and can never reach 0, we have forcefully garbage collected the objects by calling gc.collect(). However, remember not to force garbage collection frequently: even though it frees memory, the GC takes time to evaluate each object's eligibility for collection, which costs processor time and resources. Also, remember to manually manage the garbage collector only after your app has started completely.

Conclusion

In this article, we discussed how memory management in Python is handled automatically using reference counting and garbage collection strategies. Without garbage collection, implementing a successful memory management mechanism in Python would be impossible. Also, programmers need not worry about deleting allocated memory, as that is taken care of by the Python memory manager. This leads to fewer memory leaks and better performance.

August 16, 2019 12:57 PM UTC


PyCharm

PyCharm 2019.2.1 RC

PyCharm 2019.2.1 release candidate is available now!

Getting the New Version

Download the RC from Confluence.

The release candidate (RC) is not an early access program (EAP) build, and does not bundle an EAP license. If you get the PyCharm Professional Edition RC, you will either need a currently active PyCharm subscription, or you will receive a 30-day free trial.

August 16, 2019 12:48 PM UTC


Test and Code

83: PyBites Code Challenges behind the scenes - Bob Belderbos

Bob Belderbos and Julian Sequeira started PyBites a few years ago.
They started doing code challenges along with people around the world and writing about it.

Then came the codechalleng.es platform, where you can do code challenges in the browser and have your answer checked by pytest tests. But how does it all work?

Bob joins me today to go behind the scenes and share the tech stack running the PyBites Code Challenges platform.

We talk about the technology, the testing, and how it went from a cool idea to a working platform.

Special Guest: Bob Belderbos.

Sponsored By:

  • PyCharm Professional: https://testandcode.com/pycharm (PyCharm is designed by programmers, for programmers, to provide all the tools you need for productive Python development.)

Support Test & Code - Python Testing & Development: https://www.patreon.com/testpodcast

Links:

  • PyBites: https://pybit.es/
  • PyBites Code Challenges coding platform: https://codechalleng.es/
  • Learning Paths: https://codechalleng.es/bites/paths
  • Julian's article on whiteboard interviews: https://pybit.es/whiteboard-interviews.html
  • Selenium running on CodeChalleng.es: https://www.youtube.com/watch?v=Jpwn2yOppPo

August 16, 2019 07:00 AM UTC

August 15, 2019


Twisted Matrix Labs

Twisted 19.7.0 Released

On behalf of Twisted Matrix Laboratories and our long-suffering release manager Amber Brown, I am honored to announce[1] the release of Twisted 19.7.0!


The highlights of this release include:
  • A full description on the PyPI page!  Check it out here: https://pypi.org/project/Twisted/19.7.0/ (and compare to the slightly sad previous version, here: https://pypi.org/project/Twisted/19.2.1/)
  • twisted.test.proto_helpers has been renamed to "twisted.internet.testing"
    • This removes the gross special-case carve-out where it was the only "public" API in a test module, and now the rule is that all test modules are private once again.
  • Conch's SSH server now supports hmac-sha2-512.
  • The XMPP server in Twisted Words will now validate certificates!
  • A nasty data-corruption bug in the IOCP reactor was fixed. If you're doing high-volume I/O on Windows you'll want to upgrade!
  • Twisted Web no longer gives clients a traceback by default, both when you instantiate Site and when you use twist web on the command line.  You can turn this behavior back on for local development with twist web --display-tracebacks.
  • Several bugfixes and documentation fixes resolving bytes/unicode type confusion in twisted.web.
  • Python 3.4 is no longer supported.
Run pip install -U twisted[tls] and enjoy all these enhancements today!

Thanks for using Twisted,

-glyph

[1]: somewhat belatedly: it came out 10 days ago. Oops!

August 15, 2019 11:38 PM UTC


Python Insider

Inspect PyPI event logs to audit your account's and project's security

To help you check for security problems, PyPI is adding an advanced audit log of user actions, beyond the existing journal. This will, for instance, allow publishers to track all actions taken by third-party services on their behalf.

This beta feature is live now on PyPI and on Test PyPI.

Background:
We're further increasing the security of the Python Package Index with another new beta feature: an audit log of sensitive actions that affect users and projects. This is thanks to a grant from the Open Technology Fund, coordinated by the Packaging Working Group of the Python Software Foundation.

Details:

[Screenshot: project security history display, listing events (such as "file removed from release version 1.0.1") with user, date/time, and IP address for each event.]

We're adding a display so you can look at things that have happened in your user account or project, and check for signs someone's stolen your credentials.

In your account settings, you can view a log of sensitive actions from the last two weeks that are relevant to your user account, and if you are an Owner of at least one project on PyPI, you can go to that project's Manage Project page to view a log of sensitive actions (performed by any user) relevant to that project. (And PyPI site administrators are able to view the full audit log for all users and all projects.)

Please help us test this, and report issues.

[Screenshot: user security history display, listing events (such as "API token added") with additional details (such as token scope), date/time, and IP address for each event.]
In beta:
We're still refining this and may fail to log, or to properly display, events in the audit log. Also, sensitive event logging and display started on 16 August 2019, so you won't see sensitive events from before that date. (Read more technical details about implementation in the GitHub issue.)

Next:
We're continuing to refine all our beta features, while working on accessibility improvements and starting to work on localization on PyPI. Follow our progress reports in more detail on Discourse.

August 15, 2019 05:45 PM UTC


Continuum Analytics Blog

4 Machine Learning Use Cases in the Automotive Sector

From parts suppliers to vehicle manufacturers, service providers to rental car companies, the automotive and related mobility industries stand to gain significantly from implementing machine learning at scale. We see the big automakers investing in…

The post 4 Machine Learning Use Cases in the Automotive Sector appeared first on Anaconda.

August 15, 2019 01:54 PM UTC


Kushal Das

git checkout to previous branch

We regularly move between git branches while working on projects. I always used to type the full branch name, say, to go back to the develop branch and then come back to the feature branch. That takes a lot of typing (for the branch names, etc.). I found out that we can use -, just as we use cd - to go back to the previous directory we were in.

git checkout -
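For example (the branch names here are hypothetical):

$ git checkout develop
$ git checkout feature-x
$ git checkout -   # back on develop
$ git checkout -   # back on feature-x again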

Here is a small video for demonstration.

I hope this will be useful for some people.

August 15, 2019 11:26 AM UTC

August 14, 2019


Malthe Borch

Using built-in transparent compression on MacOS

Ever since DriveSpace on MS-DOS (or really, Stacker), we've had transparent file compression with varying degrees of automation; in fact, while the DriveSpace compression on MS-DOS was a fully automated affair, the built-in transparent compression in newer filesystems such as ZFS, Btrfs, APFS (and even HFS+) is engaged manually on a per-file or per-folder basis.

But no one's using it!

On my system, compressing /Applications saved 18GB (38.7%).

MacOS doesn't actually come with a utility to do this even though the core functionality is included, so you'll need to install an open source tool in order to use it.

$ brew install afsctool

To compress a file or folder, use the -c flag like so:

$ afsctool -c /Applications

(You might need to use root for some application and/or system files).

August 14, 2019 09:41 PM UTC


Continuum Analytics Blog

Accessing Remote Data with a Generalized File System

Originally created for the needs of Dask, we have spun out a general file system implementation and specification, to provide all users with simple access to many local, cluster, and remote storage media. Dask and Intake…

The post Accessing Remote Data with a Generalized File System appeared first on Anaconda.

August 14, 2019 07:31 PM UTC


TechBeamers Python

Append Vs. Extend in Python List

In this tutorial, you'll explore the difference between the append and extend methods of Python lists, with a quick sketch of each below. Both methods are used to manipulate lists in their own specific way. The append method adds a single item, or a group of items (a sequence), as one element at the tail of a list. On the other hand, the extend method appends the input elements individually to the end of the original list. After reading the above description of append() and extend(), it may still seem a bit confusing to you. So, we'll explain each of these methods with examples and show the difference between
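Here's the difference in practice:

nums = [1, 2, 3]
nums.append([4, 5])   # the whole list is added as one element
print(nums)           # [1, 2, 3, [4, 5]]

nums = [1, 2, 3]
nums.extend([4, 5])   # each item is added individually
print(nums)           # [1, 2, 3, 4, 5]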

The post Append Vs. Extend in Python List appeared first on Learn Programming and Software Testing.

August 14, 2019 06:51 PM UTC


Real Python

An Effective Python Environment: Making Yourself at Home

When you’re first learning a new programming language, a lot of your time and effort go into understanding the syntax, code style, and built-in tooling. This is just as true for Python as it is for any other language. Once you gain enough familiarity to be comfortable with the ins and outs of Python, you can start to invest time into building a Python environment that will foster your productivity.

Your shell is more than a prebuilt program provided to you as-is. It’s a framework on which you can build an ecosystem. This ecosystem will come to fit your needs so that you can spend less time fiddling and more time thinking about the next big project you’re working on.

Although no two developers have the same setup, there are a number of choices everyone faces when cultivating their Python environment. It’s important to understand each of these decisions and the options available to you!

By the end of this article, you’ll be able to answer questions like:

Once you’ve answered these questions for yourself, you can embark on the journey of creating a Python environment to call your very own. Let’s get started!


Shells

When you use a command-line interface (CLI), you execute commands and see their output. A shell is a program that provides this (usually text-based) interface to you. Shells often provide their own programming language that you can use to manipulate files, install software, and so on.

There are more shells than could reasonably be listed here, so you'll see a few prominent ones. Others differ in syntax or enhanced features, but they generally provide the same core functionality.

Unix Shells

Unix is a family of operating systems first developed in the early days of computing. Unix’s popularity has lasted through today, heavily inspiring Linux and macOS. The first shells were developed for use with Unix and Unix-like operating systems.

Bourne Shell (sh)

The Bourne shell—developed by Stephen Bourne for Bell Labs in 1979—was one of the first to incorporate the idea of environment variables, conditionals, and loops. It has provided a strong basis for many other shells in use today and is still available on most systems at /bin/sh.

Bourne-Again Shell (bash)

Built on the success of the original Bourne shell, bash introduced improved user-interaction features. With bash, you get Tab completion, history, and wildcard searching for commands and paths. The bash programming language provides more data types, like arrays.

Z Shell (zsh)

zsh combines many of the best features from other shells along with a few of its own tricks into one experience. zsh offers autocorrection of misspelled commands, shorthand for manipulating multiple files, and advanced options for customizing your command prompt.

zsh also provides a framework for deep customization. The Oh My Zsh project supplies a rich set of themes and plugins, and is often used hand in hand with zsh.

macOS will ship with zsh as its default shell starting with Catalina, speaking to the shell’s popularity. Consider acquainting yourself with zsh now so that you’ll be comfortable with it going forward.

Xonsh

If you’re feeling particularly adventurous, you can give Xonsh a try. Xonsh is a shell that combines some features of other Unix-like shells with the power of Python syntax. You can use the language you already know to accomplish tasks on your filesystem and so on.

Although Xonsh is powerful, it lacks the compatibility other shells tend to share. You might not be able to run many existing shell scripts in Xonsh as a result. If you find that you like Xonsh, but compatibility is a concern, then you can use Xonsh as a supplement to your activities in a more widely used shell.

Windows Shells

Similarly to Unix-like operating systems, Windows also offers a number of options when it comes to shells. The shells offered in Windows vary in features and syntax, so you may need to try several to find one you like best.

CMD (cmd.exe)

CMD (short for “command”) is the default CLI shell for Windows. It’s the successor to COMMAND.COM, the shell built for DOS (disk operating system).

Because DOS and Unix evolved independently, the commands and syntax in CMD are markedly different from shells built for Unix-like systems. However, CMD still provides the same core functionality for browsing and manipulating files, running commands, and viewing output.

PowerShell

PowerShell was released in 2006 and also ships with Windows. It provides Unix-like aliases for most commands, so if you’re coming to Windows from macOS or Linux or have to use both, then PowerShell might be great for you.

PowerShell is vastly more powerful than CMD. With PowerShell you can:

Windows Subsystem for Linux

Microsoft has released a Windows subsystem for Linux (WSL) for running Linux directly on Windows. If you install WSL, then you can use zsh, bash, or any other Unix-like shell. If you want strong compatibility across your Windows and macOS or Linux environments, then be sure to give WSL a try. You may also consider dual-booting Linux and Windows as an alternative.

See this comparison of command shells for exhaustive coverage.

Terminal Emulators

Early developers used terminals to interact with a central mainframe computer. These were devices with a keyboard and a screen or printer that would display computed output.

Today, computers are portable and don’t require separate devices to interact with them, but the terminology still remains. Whereas a shell provides the prompt and interpreter you use to interface with text-based CLI tools, a terminal emulator (often shortened to terminal) is the graphical application you run to access the shell.

Almost any terminal you encounter should support the same basic features:

macOS Terminals

The terminal options available for macOS are all full-featured, differing mostly in aesthetics and specific integrations with other tools.

Terminal

If you’re using a Mac, then you may have used the built-in Terminal app before. Terminal supports all the usual functionality, and you can also customize the color scheme and a few hotkeys. It’s a nice enough tool if you don’t need many bells and whistles. You can find the Terminal app in Applications → Utilities → Terminal on macOS.

iTerm2

I’ve been a long-time user of iTerm2. It takes the developer experience on Mac a step further, offering a much wider palette of customization and productivity options that enable you to:

A Python API ships with the latest versions of iTerm2, so you can even improve your Python chops by developing more intricate customizations!

iTerm2 is popular enough to enjoy first-class integration with several other tools, and has a healthy community building plugins and so on. It’s a good choice because of its more frequent release cycle compared to Terminal, which only updates as often as macOS does.

Hyper

A relative newcomer, Hyper is a terminal built on Electron, a framework for building desktop applications using web technologies. Electron apps are heavily customizable because they’re “just JavaScript” under the hood. You can create any functionality that you can write the JavaScript for.

On the other hand, JavaScript is a high-level programming language and won’t always perform as well as low-level languages like Objective-C or Swift. Be mindful of the plugins you install or create!

Windows Terminals

As with the shell options, Windows terminal options vary widely in utility. Some are tightly bound to a particular shell as well.

Command Prompt

Command Prompt is the graphical application you can use to work with CMD in Windows. Like CMD, it’s a bare-bones tool for getting a few small things done. Although Command Prompt and CMD provide fewer features than other alternatives, you can be confident that they’ll be available on nearly every Windows installation and in a consistent place.

Cygwin

Cygwin is a third-party suite of tools for Windows that provides a Unix-like wrapper. This was my preferred setup when I was in Windows, but you may consider adopting the Windows Subsystem for Linux as it receives more traction and polish.

Windows Terminal

Microsoft recently released an open source terminal for Windows 10 called Windows Terminal. It lets you work in CMD, PowerShell, and even the Windows Subsystem for Linux. If you need to do a fair amount of shell work in Windows, then Windows Terminal is probably your best bet! Windows Terminal is still in late beta, so it doesn’t ship with Windows yet. Check the documentation for instructions on getting access.

Python Version Management

With your choice of terminal and shell made, you can focus your attention on your Python environment specifically.

Something you’ll eventually run into is the need to run multiple versions of Python. Projects you use may only run on certain versions, or you may be interested in creating a project that supports multiple Python versions. You can configure your Python environment to accommodate these needs.

macOS and most Unix operating systems come with a version of Python installed by default. This is often called the system Python. The system Python works just fine, but it’s usually out of date. As of this writing, macOS High Sierra still ships with Python 2.7.10 as the system Python.

Note: You’ll almost certainly want to install the latest version of Python at a minimum, so you’ll have at least two versions of Python already.

It’s important that you leave the system Python as the default, because many parts of the system rely on the default Python being a specific version. This is one of many great reasons to customize your Python environment!

How do you navigate this? Tooling is here to help.

pyenv

pyenv is a mature tool for installing and managing multiple Python versions on macOS. I recommend installing it with Homebrew. If you’re using Windows, you can use pyenv-win. After you’ve got pyenv installed, you can install multiple versions of Python into your Python environment with a few short commands:

$ pyenv versions
* system
$ python --version
Python 2.7.10
$ pyenv install 3.7.3  # This may take some time
$ pyenv versions
* system
  3.7.3

You can manage which Python you’d like to use in your current session, globally, or on a per-project basis as well. pyenv will make the python command point to whichever Python you specify. Note that none of these overrides the default system Python for other applications, so you’re safe to use them however they work best for you within your Python environment:

$ pyenv global 3.7.3
$ pyenv versions
  system
* 3.7.3 (set by /Users/dhillard/.pyenv/version)

$ pyenv local 3.7.3
$ pyenv versions
  system
* 3.7.3 (set by /Users/dhillard/myproj/.python-version)

$ pyenv shell 3.7.3
$ pyenv versions
  system
* 3.7.3 (set by PYENV_VERSION environment variable)

$ python --version
Python 3.7.3

Because I use a specific version of Python for work, the latest version of Python for personal projects, and multiple versions for testing open source projects, pyenv has proven to be a fairly smooth way for me to manage all these different versions within my own Python environment. See Managing Multiple Python Versions with pyenv for a detailed overview of the tool.

conda

If you’re in the data science community, you might already be using Anaconda (or Miniconda). Anaconda is a sort of one-stop shop for data science software that supports more than just Python.

If you don’t need the data science packages or all the things that come pre-packaged with Anaconda, pyenv might be a better lightweight solution for you. Managing Python versions is pretty similar in each, though. You can install Python versions similarly to pyenv, using the conda command:

$ conda install python=3.7.3

You’ll see a verbose list of all the dependent software conda will install, and it will ask you to confirm.

conda doesn’t have a way to set the “default” Python version or even a good way to see which versions of Python you’ve installed. Rather, it hinges on the concept of “environments,” which you can read more about in the following sections.

Virtual Environments

Now you know how to manage multiple Python versions. Often, you’ll be working on multiple projects that need the same Python version.

Because each project has its own set of dependencies, it’s a good practice to avoid mixing them. If all the dependencies are installed together in a single Python environment, then it will be difficult to discern where each one came from. In the worst cases, two different projects may depend on two different versions of a package, but with Python you can only have one version of a package installed at one time. What a mess!

Enter virtual environments. You can think of a virtual environment as a carbon copy of a base version of Python. If you’ve installed Python 3.7.3, for example, then you can create many virtual environments based off of it. When you install a package in a virtual environment, you do it in isolation from other Python environments you may have. Each virtual environment has its own copy of the python executable.

Tip: Most virtual environment tooling provides a way to update your shell’s command prompt to show the current active virtual environment. Make sure to do this if you frequently switch between projects so you’re sure you’re working inside the correct virtual environment.

venv

venv ships with Python versions 3.3+. You can create virtual environments just by passing it a path at which to store the environment’s python, installed packages, and so on:

$ python -m venv ~/.virtualenvs/my-env

You activate a virtual environment by sourcing its activate script:

$ source ~/.virtualenvs/my-env/bin/activate

You exit the virtual environment using the deactivate command, which is made available when you activate the virtual environment:

(my-env)$ deactivate

venv is built on the wonderful work and successes of the independent virtualenv project. virtualenv still provides a few interesting features of its own, but venv is nice because it provides the utility of virtual environments without requiring you to install additional software. You can probably get pretty far with it if you’re working mostly in a single Python version in your Python environment.

If you’re already managing multiple Python versions (or plan to), then it could make sense to integrate with that tooling to simplify the process of making new virtual environments with specific versions of Python. The pyenv and conda ecosystems both provide ways to specify the Python version to use when you create new virtual environments, covered in the following sections.

pyenv-virtualenv

If you’re using pyenv, then pyenv-virtualenv enhances pyenv with a subcommand for managing virtual environments:

// Create virtual environment
$ pyenv virtualenv 3.7.3 my-env

// Activate virtual environment
$ pyenv activate my-env

// Exit virtual environment
(my-env)$ pyenv deactivate

I switch contexts between a large handful of projects on a day-to-day basis. As a result, I have at least a dozen distinct virtual environments to manage in my Python environment. What’s really nice about pyenv-virtualenv is that you can configure a virtual environment using the pyenv local command and have pyenv-virtualenv auto-activate the right environments as you switch to different directories:

$ pyenv virtualenv 3.7.3 proj1
$ pyenv virtualenv 3.7.3 proj2
$ cd /Users/dhillard/proj1
$ pyenv local proj1
(proj1)$ cd ../proj2
$ pyenv local proj2
(proj2)$ pyenv versions
  system
  3.7.3
  3.7.3/envs/proj1
  3.7.3/envs/proj2
  proj1
* proj2 (set by /Users/dhillard/proj2/.python-version)

pyenv and pyenv-virtualenv have provided a particularly fluid workflow in my Python environment.

conda

You saw earlier that conda treats environments, rather than Python versions, as the main method of working. conda has built-in support for managing virtual environments:

// Create virtual environment
$ conda create --name my-env python=3.7.3

// Activate virtual environment
$ conda activate my-env

// Exit virtual environment
(my-env)$ conda deactivate

conda will install the specified version of Python if it isn’t already installed, so you don’t have to run conda install python=3.7.3 first.

pipenv

pipenv is a relatively new tool that seeks to combine package management (more on this in a moment) with virtual environment management. It mostly abstracts the virtual environment management from you, which can be great as long as things go smoothly:

$ cd /Users/dhillard/myproj

// Create virtual environment
$ pipenv install
Creating a virtualenv for this project…
Pipfile: /Users/dhillard/myproj/Pipfile
Using /path/to/pipenv/python3.7 (3.7.3) to create virtualenv…
✔ Successfully created virtual environment!
Virtualenv location: /Users/dhillard/.local/share/virtualenvs/myproj-nAbMEAt0
Creating a Pipfile for this project…
Pipfile.lock not found, creating…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (a65489)!
Installing dependencies from Pipfile.lock (a65489)…
  🐍   ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0/0 — 00:00:00
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.

// Activate virtual environment (uses a subshell)
$ pipenv shell
Launching subshell in virtual environment…
 . /Users/dhillard/.local/share/virtualenvs/test-nAbMEAt0/bin/activate

// Exit virtual environment (by exiting subshell)
(myproj-nAbMEAt0)$ exit

pipenv does all the heavy lifting of creating a virtual environment and activating it for you. If you look carefully, you can see that it also creates a file called Pipfile. After you first run pipenv install, this file contains just a few things:

[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[dev-packages]

[packages]

[requires]
python_version = "3.7"

In particular, note that it shows python_version = "3.7". By default, pipenv creates a virtual Python environment using the same Python version it was installed under. If you want to use a different Python version, then you can create the Pipfile yourself before running pipenv install and specify the version you want. If you have pyenv installed, then pipenv will use it to install the specified Python version if necessary.

Abstracting virtual environment management is a noble goal of pipenv, but it does get hung up with hard-to-read errors occasionally. Give it a try, but don’t worry if you feel confused or overwhelmed by it. The tool, documentation, and community will grow and improve around it as it matures.

To get an in-depth introduction to virtual environments, be sure to read Python Virtual Environments: A Primer.

Package Management

For many of the projects you work on, you’ll probably need some number of third-party packages. Those packages may have their own dependencies in turn. In the early days of Python, using packages involved manually downloading files and pointing Python at them. Today, we’re fortunate to have a variety of package management tools available to us.

Most package managers work in tandem with virtual environments, isolating the packages you install in one Python environment from another. Using the two together is where you really start to see the power of the tools available to you.

pip

pip (pip installs packages) has been the de facto standard for package management in Python for several years. It was heavily inspired by an earlier tool called easy_install. Python incorporated pip into the standard distribution starting in version 3.4. pip automates the process of downloading packages and making Python aware of them.

If you have multiple virtual environments, then you can see that they’re isolated by installing a few packages in one:

$ pyenv virtualenv 3.7.3 proj1
$ pyenv activate proj1
(proj1)$ pip list
Package    Version
---------- ---------
pip        19.1.1
setuptools 40.8.0

(proj1)$ python -m pip install requests
Collecting requests
  Downloading .../requests-2.22.0-py2.py3-none-any.whl (57kB)
    100% |████████████████████████████████| 61kB 2.2MB/s
Collecting chardet<3.1.0,>=3.0.2 (from requests)
  Downloading .../chardet-3.0.4-py2.py3-none-any.whl (133kB)
    100% |████████████████████████████████| 143kB 1.7MB/s
Collecting certifi>=2017.4.17 (from requests)
  Downloading .../certifi-2019.6.16-py2.py3-none-any.whl (157kB)
    100% |████████████████████████████████| 163kB 6.0MB/s
Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests)
  Downloading .../urllib3-1.25.3-py2.py3-none-any.whl (150kB)
    100% |████████████████████████████████| 153kB 1.7MB/s
Collecting idna<2.9,>=2.5 (from requests)
  Downloading .../idna-2.8-py2.py3-none-any.whl (58kB)
    100% |████████████████████████████████| 61kB 26.6MB/s
Installing collected packages: chardet, certifi, urllib3, idna, requests
Successfully installed chardet-3.0.4 certifi-2019.6.16 urllib3-1.25.3 idna-2.8 requests-2.22.0

$ pip list
Package    Version
---------- ---------
certifi    2019.6.16
chardet    3.0.4
idna       2.8
pip        19.1.1
requests   2.22.0
setuptools 40.8.0
urllib3    1.25.3

pip installed requests, along with several packages it depends on. pip list shows you all the currently installed packages and their versions.

Warning: You can uninstall packages using pip uninstall requests, for example, but this will only uninstall requests—not any of its dependencies.

A common way to specify project dependencies for pip is with a requirements.txt file. Each line in the file specifies a package name and, optionally, the version to install:

scipy==1.3.0
requests==2.22.0

You can then run python -m pip install -r requirements.txt to install all of the specified dependencies at once. For more on pip, see What is Pip? A Guide for New Pythonistas.

pipenv

pipenv has most of the same basic operations as pip but thinks about packages a bit differently. Remember the Pipfile that pipenv creates? When you install a package, pipenv adds that package to Pipfile and also adds more detailed information to a new lock file called Pipfile.lock. Lock files act as a snapshot of the precise set of packages installed, including direct dependencies as well as their sub-dependencies.

You can see pipenv sorting out the package management when you install a package:

$ pipenv install requests
Installing requests…
Adding requests to Pipfile's [packages]…
✔ Installation Succeeded
Pipfile.lock (444a6d) out of date, updating to (a65489)…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
✔ Success!
Updated Pipfile.lock (444a6d)!
Installing dependencies from Pipfile.lock (444a6d)…
  🐍   ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 5/5 — 00:00:00

pipenv will use this lock file, if present, to install the same set of packages. You can ensure that you always have the same set of working dependencies in any Python environment you create using this approach.

pipenv also distinguishes between development dependencies and production (regular) dependencies. You may need some tools during development, such as black or flake8, that you don’t need when you run your application in production. You can specify that a package is for development when you install it:

$ pipenv install --dev flake8
Installing flake8…
Adding flake8 to Pipfile's [dev-packages]…
✔ Installation Succeeded
...

pipenv install (without any arguments) will only install your production packages by default, but you can tell it to install development dependencies as well with pipenv install --dev.

poetry

poetry addresses additional facets of package management, including creating and publishing your own packages. After installing poetry, you can use it to create a new project:

$ poetry new myproj
Created package myproj in myproj
$ ls myproj/
README.rst    myproj    pyproject.toml    tests

Similarly to how pipenv creates the Pipfile, poetry creates a pyproject.toml file. This recent standard contains metadata about the project as well as dependency versions:

[tool.poetry]
name = "myproj"
version = "0.1.0"
description = ""
authors = ["Dane Hillard <github@danehillard.com>"]

[tool.poetry.dependencies]
python = "^3.7"

[tool.poetry.dev-dependencies]
pytest = "^3.0"

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

You can install packages with poetry add (or as development dependencies with poetry add --dev):

$ poetry add requests
Using version ^2.22 for requests

Updating dependencies
Resolving dependencies... (0.2s)

Writing lock file


Package operations: 5 installs, 0 updates, 0 removals

  - Installing certifi (2019.6.16)
  - Installing chardet (3.0.4)
  - Installing idna (2.8)
  - Installing urllib3 (1.25.3)
  - Installing requests (2.22.0)

poetry also maintains a lock file, and it has a benefit over pipenv because it keeps track of which packages are subdependencies. As a result, you can uninstall requests and its dependencies with poetry remove requests.

conda

With conda, you can use pip to install packages as usual, but you can also use conda install to install packages from different channels, which are collections of packages provided by Anaconda or other providers. To install requests from the conda-forge channel, you can run conda install -c conda-forge requests.

Learn more about package management in conda in Setting Up Python for Machine Learning on Windows.

Python Interpreters

If you’re interested in further customization of your Python environment, you can choose the command line experience you have when interacting with Python. The Python interpreter provides a read-eval-print loop (REPL), which is what comes up when you type python with no arguments in your shell:

Python 3.7.3 (default, Jun 17 2019, 14:09:05)
[Clang 10.0.1 (clang-1001.0.46.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 2 + 2
4
>>> exit()

The REPL reads what you type, evaluates it as Python code, and prints the result. Then it waits to do it all over again. This is about as much as the default Python REPL provides, which is sufficient for a good portion of typical work.

IPython

Like Anaconda, IPython is a suite of tools supporting more than just Python, but one of its main features is an alternative Python REPL. IPython’s REPL numbers each command and explicitly labels each command’s input and output. After installing IPython (python -m pip install ipython), you can run the ipython command in place of the python command to use the IPython REPL:

Python 3.7.3
Type 'copyright', 'credits' or 'license' for more information
IPython 6.0.0.dev -- An enhanced Interactive Python. Type '?' for help.

In [1]: 2 + 2
Out[1]: 4

In [2]: print("Hello!")
Hello!

IPython also supports Tab completion, more powerful help features, and strong integration with other tooling such as matplotlib for graphing. IPython provided the foundation for Jupyter, and both have been used extensively in the data science community because of their integration with other tools.

The IPython REPL is highly configurable too, so while it falls just shy of being a full development environment, it can still be a boon to your productivity. Its built-in and customizable magic commands are worth checking out.
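For instance, %timeit times a snippet for you and %history shows what you've typed so far. Here's a sketch of such a session; the timing output is illustrative and will vary on your machine:

In [1]: %timeit sum(range(1000))
13.1 µs ± 102 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

In [2]: %history
%timeit sum(range(1000))
%history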

bpython

bpython is another alternative REPL that provides inline syntax highlighting, tab completion, and even auto-suggestions as you type. It provides quite a few of the quick benefits of IPython without altering the interface much. Without the weight of the integrations and so on, bpython might be good to add to your repertoire for a while to see how it improves your use of the REPL.

Text Editors

You spend a third of your life sleeping, so it makes sense to invest in a great bed. As a developer, you spend a great deal of your time reading and writing code, so it follows that you should invest time in setting up your Python environment’s text editor just the way you like it.

Each editor offers a different set of key bindings and model for manipulating text. Some require a mouse to interact with them effectively, whereas others can be controlled with only the keyboard. Some people consider their choice of text editor and customizations some of the most personal decisions they make!

There are so many options to choose from in this arena, so I won’t attempt to cover it in detail here. Check out Python IDEs and Code Editors (Guide) for a broad overview. A good strategy is to find a simple, small text editor for quick changes and a full-featured IDE for more involved work. Vim and PyCharm, respectively, are my editors of choice.

Python Environment Tips and Tricks

Once you’ve made the big decisions about your Python environment, the rest of the road is paved with little tweaks to make your life a little easier. Each of these tweaks saves only seconds or minutes on its own, but together they save you hours of time.

Making a certain activity easier reduces your cognitive load so you can focus on the task at hand instead of the logistics surrounding it. If you notice yourself performing an action over and over, then consider automating it. Use this wonderful chart from XKCD to determine if it’s worth automating a particular task.

Here are a few final tips.

Know your current virtual environment

As mentioned earlier, it’s a great idea to display the active Python version or virtual environment in your command prompt. Most tools will do this for you, but if not (or if you want to customize the prompt), the value is usually contained in the VIRTUAL_ENV environment variable.
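If your tooling doesn't surface it, here is a minimal sketch of checking the variable yourself from Python:

import os

# VIRTUAL_ENV holds the path to the active virtual environment, if any.
venv = os.environ.get("VIRTUAL_ENV")
print(os.path.basename(venv) if venv else "no virtual environment active")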

Disable unnecessary, temporary files

Have you ever noticed *.pyc files all over your project directories? These files are pre-compiled Python bytecode—they help Python start your application faster. In production, these are a great idea because they’ll give you some performance gain. During local development, however, they’re rarely useful. Set PYTHONDONTWRITEBYTECODE=1 to disable this behavior. If you find use cases for them later, then you can easily remove this from your Python environment.
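On a Unix-like shell, for example, you could add a line like this to your shell startup file (a sketch; adapt it to your own shell):

export PYTHONDONTWRITEBYTECODE=1  # skip writing *.pyc files during local development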

Customize your Python interpreter

You can affect how the REPL behaves using a startup file. Python will read this startup file and execute the code it contains before entering the REPL. Set the PYTHONSTARTUP environment variable to the path of your startup file. (Mine’s at ~/.pystartup.) If you’d like to hit Up for command history and Tab for completion like your shell provides, then give this startup file a try.
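As a rough sketch of what such a startup file can contain (assuming a Unix-like system where Python's readline module is available; the history file path is an arbitrary choice), here is a minimal ~/.pystartup that binds Tab to completion and persists command history between sessions:

import atexit
import os
import readline
import rlcompleter  # noqa: F401 -- makes readline completion Python-aware

# Bind Tab to completion.
readline.parse_and_bind("tab: complete")

# Persist command history between sessions (file location is arbitrary).
history_path = os.path.expanduser("~/.python_history")
if os.path.exists(history_path):
    readline.read_history_file(history_path)
atexit.register(readline.write_history_file, history_path)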

Conclusion

You learned about many facets of the typical Python environment. Armed with this knowledge, you can choose a Python installation, manage your projects' dependencies, and customize your REPL, text editor, and shell to fit the way you work.

When you’ve got your Python environment just so, I hope you’ll share screenshots, screencasts, or blog posts about your perfect setup ✨


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

August 14, 2019 02:00 PM UTC


Catalin George Festila

Python 3.7.3 : Using the flask - part 014.

Today I worked on a YouTube search project using Flask and the Google API. The source code is simple to understand, and you can test any Google API this way. I created a new Google project with the YouTube API version 3 and an API key, which I use to connect from the Flask Python module. I also used the isodate Python module. You can see the source code in my GitHub repo named flask_yt.

August 14, 2019 10:20 AM UTC


Python Bytes

#143 Spike the robot, powered by Python!

August 14, 2019 08:00 AM UTC

August 13, 2019


Python Engineering at Microsoft

What’s New for Python in Visual Studio (16.3 Preview 2)

Today, we are releasing Visual Studio 2019 (16.3 Preview 2), which contains an updated testing experience for Python developers. We are happy to announce that the popular Python testing framework pytest is now supported. Additionally, we have re-worked the unittest experience for Python users in this release. 

Continue reading to learn more about how you can enable and configure pytest and/or unittest for your development environment. What’s even better is that each testing framework is supported in both project mode and in Open Folder scenarios.   


Enabling and Configuring Testing for Projects

Configuring and working with Python tests in Visual Studio is easier than ever before.

For users who are new to the testing experience within Visual Studio 2019 for Python projects, right-click on the project name and select the ‘Properties’ option. This option opens the project designer, which allows you to configure tests by going to the ‘Test’ tab.

From this tab, simply click the ‘Test Framework’ dropdown box to select the testing framework you wish to use:


Once you select and save your changes in the window, test discovery is initiated in the Test Explorer. If the Test Explorer is not open, navigate to the Toolbar and select Test > Windows > Test Explorer. Test discovery can take up to 60 seconds to complete.

Once in the Test Explorer window, you have the ability to re-run your tests (by clicking the ‘Run All’ button or pressing CTRL + R,A) as well as view the status of your test runs.  Additionally, you can see the total number of tests your project contains and the duration of test runs:


If you wish to keep working while tests are running in the background but want to monitor the progress of your test run, you can go to the Output window and choose ‘Show output from: Tests’:


We have also made it simple for users with pre-existing projects that contain test files to quickly continue working with their code in Visual Studio 2019. When you open a project that contains testing configuration files (e.g. a .ini file for pytest), but you have not installed or enabled pytest, you will be prompted to install the necessary packages and configure them for the Python environment in which you are working:


For open folder scenarios (described below), these informational bars will also be triggered if you have not configured your workspace for pytest or unittest.


Configuring Tests for Open Folder Scenarios

In this release of Visual Studio 2019, users can configure tests to work in our popular open folder scenario.

To configure and enable tests, navigate to the Solution Explorer, click the “Show All Files” icon to show all files in the current folder, and select the PythonSettings.json file within the ‘Local Settings’ folder. (If this file doesn’t exist, create one in the ‘Local Settings’ folder.) Next, add the field TestFramework: “pytest” or TestFramework: “unittest” to your settings file, depending on the testing framework you wish to use; a sketch of the resulting file appears below.



For the unittest framework, if UnitTestRootDirectory and/or UnitTestPattern are not specified in PythonSettings.json, they are added and assigned default values of “.” and “test*.py”, respectively.

As in project mode, editing and saving any file triggers test discovery for the test framework that you specified. If you already have the Test Explorer window open, pressing CTRL + R,A also triggers discovery.

Note: If your folder contains a ‘src’ directory which is separate from the folder that contains your tests, you’ll need to specify the path to the src folder in your PythonSettings.json with the setting SearchPaths:

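Putting the settings above together, a minimal PythonSettings.json might look roughly like this (a sketch assembled from the fields named in this post; the exact shape of SearchPaths and the values will depend on your project layout):

{
  "TestFramework": "unittest",
  "UnitTestRootDirectory": ".",
  "UnitTestPattern": "test*.py",
  "SearchPaths": ["src"]
}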


Debugging Tests

In this latest release, we’ve updated test debugging to use our new ptvsd 4 debugger, which is faster and more reliable than ptvsd 3. We’ve added an option so that you can use the legacy debugger if you run into issues. To enable it, go to Tools > Options > Python > Debugging > Use Legacy Debugger and check the box to enable it.

As in previous releases, if you wish to debug a test, set an initial breakpoint in your code, then right-click the test (or a selection) in Test Explorer and select Debug Selected Tests. Visual Studio starts the Python debugger as it would for application code.


Note: if you debug a test, you will find that the debug session does not automatically end when the test run completes. This is a known issue, and the current workaround is to click ‘Stop Debugging’ (Shift + F5).

There’s also the ‘Test Detail Summary’ view, which lets you see the stack trace of a failed test and makes troubleshooting failed tests even easier. To access this view, simply click on the test within the Test Explorer that you wish to inspect, and the ‘Test Detail Summary’ window will appear.



Try it Out!

Be sure to download Visual Studio 2019 (16.3 Preview 2), install the Python Workload, and give feedback or view a list of existing issues on our GitHub repo.

The post What’s New for Python in Visual Studio (16.3 Preview 2) appeared first on Python.

August 13, 2019 11:41 PM UTC


Roberto Alsina

Episodio 5: Muchos Pythons

A pseudo-sequel to "Puede Fallar" showing several things:

  • Anvil: a way to build full-stack web applications with Python!
  • Skulpt

And much more!

The application I show in the video: on Anvil

The code: you can clone it

Detail: "the Twitter thing" ended up as just a button inside the application, but it served as the trigger :-)

August 13, 2019 08:55 PM UTC


PyCoder’s Weekly

Issue #381 (Aug. 13, 2019)

#381 – AUGUST 13, 2019


Traditional Face Detection With Python

In this course on face detection with Python, you’ll learn about a historically important algorithm for object detection that can be successfully applied to finding the location of a human face within an image.
REAL PYTHON video

Three Techniques for Inverting Control, in Python

Inversion of Control, in which code delegates control using plugins, is a powerful way of modularising software. It may sound complicated, but it can be achieved in Python with very little work. This article examines three different techniques for handling IOC in Python.
DAVID SEDDON

Save 40% on Your Order at manning.com


Take the time to learn something new! Manning Publications are offering 40% off everything at manning.com, including everything Pythonic. Just enter the code pycoders40 at the cart before you checkout to save →
MANNING PUBLICATIONS sponsor

NumPy 1.17.0 Drops Python 2 Support

“The 1.17.0 release contains a number of new features that should substantially improve its performance and usefulness. The Python versions supported are 3.5-3.7, note that Python 2.7 has been dropped.”
MAIL-ARCHIVE.COM

print() Function Deep Dive (17,000 Words Guide)

Learn all there is to know about the print() function in Python and discover some of its lesser-known features. Avoid common mistakes, take your “hello world” to the next level, and know when to use a better alternative.
REAL PYTHON

An Overview of the Python Tooling Landscape

An opinionated guide to tooling in Python covering pyenv, poetry, black, flake8, isort, pre-commit, pytest, coverage, tox, Azure Pipelines, sphinx, and readthedocs.
ADITHYA BALAJI

Learn How Python Async Web Frameworks Work by Writing One From Scratch

OLEH KUCHUK

Discussions

Tools for Remote/Pair-Programming In-The-Cloud?

MAIL.PYTHON.ORG

Who Says Python Programmers Don’t Have a Sense of Humor?

RAYMOND HETTINGER

As an Expert, Which Bad Habits Would You Advise Beginner Python Programmers to Avoid?

REDDIT

Python Is Eating the World (Discussion)

HACKER NEWS

Python Jobs

Senior Python Developer (Austin, TX)

InQuest

Backend and DataScience Engineers (London, Relocation & Visa Possible)

Citymapper Ltd

Software Engineering Lead (Houston, TX)

SimpleLegal

Software Engineer (Multiple US Locations)

Invitae

Python Software Engineer (Munich, Germany)

Stylight GmbH

Senior Software Developer (Edmonton, AB)

Levven Electronics Ltd.

Lead Data Scientist (Buffalo, NY)

Utilant LLC

Python Developer (Remote)

418 Media

More Python Jobs >>>

Articles & Tutorials

Inheritance and Composition: A Python OOP Guide

In this step-by-step tutorial, you’ll learn about inheritance and composition in Python. You’ll improve your object oriented programming skills by understanding how to use inheritance and composition and how to leverage them in their design.
REAL PYTHON

An Intro to Flake8

Flake8 is a style guide enforcement tool for Python that you can use in place of PyLint to help you find errors in your code and more closely follow PEP8. This article shows you how to get up and running with Flake8.
MIKE DRISCOLL

Python Developers Are in Demand on Vettery


Vettery is an online hiring marketplace that’s changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today →
VETTERY sponsor

Organizing PythonPune Meetups

PythonPune is a meetup group in Pune, India. This blog post is about how the author got involved in organizing the meetup and what the process looks like.
BHAVIN GANDHI

Keras for Beginners: Implementing a Convolutional Neural Network

A beginner-friendly guide on using Keras to implement a simple Convolutional Neural Network (CNN) in Python.
VICTOR ZHOU

What Every Developer Should Learn Early On

“These are a few of the things I wish they were teaching at university instead of pure theory.”
RYLAND GOLDSTEIN

Making an Interactive Projected Surface With Python + OpenCV

“Computer vision + music = life-sized rhythm games”
TSUKURU.CLUB

Python Libraries for Interpretable Machine Learning

REBECCA VICKERY

Using Django Signals to Simplify and Decouple Code

ROBLEY GORI

Testing Scientific Code

PHILIPP JUNG

Projects & Code

austin: Frame Stack Sampler for CPython

“The most interesting use of Austin is probably in conjunction with FlameGraph to profile Python applications while they are running, without the need of instrumentation. This means that Austin can be used on production code with little or even no impact on performance.”
GITHUB.COM/P403N1X87

jupyter-black: Black Formatter for Jupyter Notebook

GITHUB.COM/DRILLAN

moviepy: Video Editing With Python

GITHUB.COM/ZULKO

LibCST: Concrete Syntax Tree Parser and Serializer Library

Parses Python 3.7 source code as a CST tree that keeps all formatting details (comments, whitespaces, parentheses, etc). It’s useful for building automated refactoring (codemod) applications and linters.
GITHUB.COM/INSTAGRAM

chart: Python Charts With 0 Dependencies

GITHUB.COM/MAXHUMBER • Shared by Max Humber

mintotp: Minimal TOTP Generator in 20 Lines of Python

GITHUB.COM/SUSAM

mesapy: Memory-Safe Python Based on PyPy

GITHUB.COM/MESALOCK-LINUX

pip-tools: Set of Tools to Keep Your Pinned Python Dependencies Fresh

GITHUB.COM/JAZZBAND

Poetry: Python Dependency Management and Packaging Made Easy

EUSTACE.IO

scalpl: Seamlessly Operate on Nested Dictionaries

GITHUB.COM/DUCDETRONQUITO

Events

PyBay

August 15 to August 19, 2019
PYBAY.COM

PyCon Korea 2019

August 15 to August 19, 2019
PYCON.KR

Karlsruhe Python User Group (KaPy)

August 16, 2019
BL0RG.NET

PyDelhi User Group Meetup

August 17, 2019
MEETUP.COM

Dominican Republic Python User Group

August 20, 2019
PYTHON.DO

Kiwi PyCon X

August 23 to August 26, 2019
PYTHON.NZ

IndyPy Web Conf 2019

August 23 to August 24, 2019
INDYPY.ORG


Happy Pythoning!
This was PyCoder’s Weekly Issue #381.


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

August 13, 2019 07:30 PM UTC


Mike Driscoll

An Intro to Flake8

Python has several linters that you can use to help you find errors in your code or warnings about style. For example, one of the most thorough linters is Pylint.

Flake8 is described as a tool for style guide enforcement. It is also a wrapper around PyFlakes, pycodestyle and Ned Batchelder’s McCabe script. You can use Flake8 as a way to lint your code and enforce PEP8 compliance.


Installation

Installing Flake8 is pretty easy when you use pip. If you’d like to install flake8 to your default Python location, you can do so with the following command:

python -m pip install flake8

Now that Flake8 is installed, let’s learn how to use it!


Getting Started

The next step is to use Flake8 on your code base. Let’s write up a small piece of code to run Flake8 against.

Put the following code into a file named hello.py:

from tkinter import *
 
class Hello:
    def __init__(self, parent):
        self.master = parent
        parent.title("Hello Tkinter")
 
        self.label = Label(parent, text="Hello there")
        self.label.pack()
 
        self.close_button = Button(parent, text="Close",
                                   command=parent.quit)
        self.close_button.pack()
 
    def greet(self):
        print("Greetings!")
 
root = Tk()
my_gui = Hello(root)
root.mainloop()

Here you write a small GUI application using Python’s tkinter library. A lot of tkinter code on the internet uses the from tkinter import * pattern, which you should normally avoid. The reason is that you don’t know everything you are importing, and you could accidentally overwrite an import with your own variable name.
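A safer pattern, sketched below, is to import the module under a short alias so every name stays traceable to its source:

import tkinter as tk

# Every widget is explicitly namespaced, so nothing gets shadowed silently.
root = tk.Tk()
label = tk.Label(root, text="Hello there")
label.pack()
root.mainloop()

For the demonstration below, though, the star import stays in place so Flake8 has something to complain about.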

Let’s run Flake8 against this code example.

Open up your terminal and run the following command:

flake8 hello.py

You should see the following output:


hello.py:1:1: F403 'from tkinter import *' used; unable to detect undefined names
hello.py:3:1: E302 expected 2 blank lines, found 1
hello.py:8:22: F405 'Label' may be undefined, or defined from star imports: tkinter
hello.py:11:29: F405 'Button' may be undefined, or defined from star imports: tkinter
hello.py:18:1: E305 expected 2 blank lines after class or function definition, found 1
hello.py:18:8: F405 'Tk' may be undefined, or defined from star imports: tkinter
hello.py:20:16: W292 no newline at end of file

The items that start with “F” are PyFlakes error codes and point out potential errors in your code. The other errors are from pycodestyle. You should check out those two links to see the full error code listings and what they mean.

You can also run Flake8 against a directory of files rather than one file at a time.

If you want to limit which types of errors get reported, you can do something like the following:

flake8 --select E123 my_project_dir

This will only show the E123 error for any files that have them in the specified directory and ignore all other types of errors.

For a full listing of the command line arguments you can use with Flake8, check out their documentation.

Finally, Flake8 allows you to change its configuration. For example, your company might only adhere to parts of PEP8, so you don’t want Flake8 to flag things that your company doesn’t care about. Flake8 also supports using plugins.
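For example, a minimal configuration could live in a setup.cfg (or a dedicated .flake8 file) at your project root. The option names below are real Flake8 settings, but the specific values are just an illustration:

[flake8]
max-line-length = 100
ignore = E123, W503
exclude = .git, __pycache__, build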


Wrapping Up

Flake8 might be just the tool for you to use to help keep your code clean and free of errors. If you use a continuous integration system, like TravisCI or Jenkins, you can combine Flake8 with Black to automatically format your code and flag errors.

This is definitely a tool worth checking out and it’s a lot less noisy than PyLint. Give it a try and see what you think!


Related Reading

The post An Intro to Flake8 appeared first on The Mouse Vs. The Python.

August 13, 2019 05:15 PM UTC


Bhavin Gandhi

Organizing PythonPune Meetups

One thing I like most about meetups is that you get to meet new people. Talking with people and hearing what they are working on helps a lot in gaining more knowledge. It is also a good platform for making connections with people who have similar areas of interest. I have been attending the PythonPune meetup for the last 2 years. In this blog post, I will share some history of this group and how I got involved in organizing the meetups.

August 13, 2019 03:23 PM UTC


Continuum Analytics Blog

Getting Started with Machine Learning in the Enterprise

Machine learning (ML) is a subset of artificial intelligence (AI) in which data scientists use algorithms and statistical models to predict outcomes and/or perform specific tasks. ML models can automatically “learn from” data sets to…

The post Getting Started with Machine Learning in the Enterprise appeared first on Anaconda.

August 13, 2019 02:47 PM UTC

What’s in a Name? Clarifying the Anaconda Metapackage

The name “Anaconda” is overloaded in many ways. There’s our company, Anaconda, Inc., the Anaconda Distribution, the anaconda metapackage, Anaconda Enterprise, and several other, sometimes completely unrelated projects (like Red Hat’s Anaconda). Here we hope…

The post What’s in a Name? Clarifying the Anaconda Metapackage appeared first on Anaconda.

August 13, 2019 02:40 PM UTC