
Planet Python

Last update: October 23, 2018 07:47 PM UTC

October 23, 2018


Codementor

Skills Series: A Beginner’s Guide to Python

Python is the #1 most popular programming language used by data analysts, data scientists, and software developers to automate processes, build functionality...

October 23, 2018 06:12 PM UTC

5 Steps to Prepare for a Data Science Job

What are the key steps in preparing for a data science job? The basic things you absolutely must do? We answer all of these questions in this article.

October 23, 2018 06:10 PM UTC


"Morphex's Blogologue"

Taking a look at my Python surveil(lance) app

So, I created this surveillance app in Python, to surveil (https://github.com/morphex/surveil) the room where I spend most of my time, just to make sure that nobody else visits it, without my approval.

Before I wrote this app, I saw there were different applications out there that could do some sort of surveillance, but I recognized early on that I could easily mail images to myself, and that this was a good approach as it mostly disconnects the surveil app from outside dependencies; at least it doesn't have to have an internet connection up absolutely all the time to function.

Another benefit of mailing myself images compiled into videos is that as soon as a video reaches (in my case) Gmail's system, there is a record of it, and because of that it is difficult to manipulate the data once a mail has been delivered.

Python was the language of choice because I wanted to make things easy for myself; Python is the language I've worked with the most, and it is easy to read and write things in Python.

Before this, I had dabbled a bit with ffmpeg, playing around with videos, adding effects to them and so on.

I'd say it was fortunate that I dabbled with ffmpeg, because it is quite a powerful video processing package. Its command line is not intuitive or user-friendly, but once the command line is right, ffmpeg seems to be pretty stable.

I read about Python and the GIL (Global Interpreter Lock) today, and I remember this from years back. I guess over 90% of the programming work I've done to date is in Python and Zope 2. Zope 2 had a way around the GIL that exploited all CPU cores: run a main database server, then have several database clients/applications, each in its own process, which effectively sidestepped the GIL, as each application was one process to the operating system. This worked well, as the system was built for few writes to the database and many reads.

So, fortunately I dabbled with ffmpeg before this project, because I soon realized that threading could be a bit of a headache, so I opted for running ffmpeg as a subprocess of the surveil app, and using shell scripting and files to pass messages; so when ffmpeg is done processing a set of images into a video, it creates a file, and a thread with a loop in the surveil app monitors for these files, then mails the video and deletes the files related to that video afterwards.
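The hand-off described above can be sketched like this. All names here are hypothetical (this is not the actual surveil code), and a small Python subprocess stands in for ffmpeg: the worker writes its output and then creates a ".done" sentinel file, while a watcher thread polls for sentinels, "mails" the video, and cleans up.

```python
import os
import subprocess
import sys
import tempfile
import threading
import time

workdir = tempfile.mkdtemp()
mailed = []  # stand-in for the mailer; records what would have been sent

def watcher(stop):
    # Poll the work directory for ".done" sentinel files.
    while not stop.is_set():
        for name in os.listdir(workdir):
            if name.endswith(".done"):
                video = os.path.join(workdir, name[:-len(".done")])
                mailed.append(os.path.basename(video))  # "mail" the video
                os.remove(os.path.join(workdir, name))  # remove sentinel
                if os.path.exists(video):
                    os.remove(video)                    # remove video
        time.sleep(0.1)

stop = threading.Event()
t = threading.Thread(target=watcher, args=(stop,))
t.start()

# Stand-in for "ffmpeg compiles images into a video, then creates a sentinel".
video = os.path.join(workdir, "clip.mp4")
subprocess.run([sys.executable, "-c",
                "import sys, pathlib;"
                "pathlib.Path(sys.argv[1]).write_bytes(b'video');"
                "pathlib.Path(sys.argv[1] + '.done').touch()",
                video], check=True)

# Wait until the watcher has picked the video up, then shut down.
deadline = time.time() + 5
while not mailed and time.time() < deadline:
    time.sleep(0.05)
stop.set()
t.join()
print(mailed)
```

The sentinel-file convention is what keeps the two processes decoupled: ffmpeg never needs to know about the mailer, and the watcher never needs to know how the video was produced.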

Python is simple and intuitive, and that's the main reason, I think, that it became my programming language of choice. It was easy to get started, it was interesting to explore Python via the interpreter, and there was an advanced web system which used Python as the main programming language.

Years back, I was pretty much obsessed with cross-platform portability, but with this surveil app, I'm creating a temporary (RAM-based) file system which is Linux-specific, and I'd be happy if it runs on *nix/POSIX systems. It's all in keeping things simple.

So I guess the point of this post was to underline that, yes, Python has a GIL, but in most instances it is a non-issue, and if there are CPU-intensive things, these can be forked out to a command-line tool, or you could even have an extension in C which does the heavy lifting. If I hadn't learnt ffmpeg, I could easily have spent a lot more time writing this app.

Another point worth mentioning is that the config.py file for this project is easy to read, follows the Python style of simplicity, and goes one step beyond being a config file: it also has some (easy to understand) logic.

This project is a mix of different systems; Python to glue it all together, *nix file systems and scripting in a middle layer, and the ffmpeg application for the heavy-duty work. As far as I can tell, this is all good and stable, although the surveil app reboots about every 24 hours, so it is hard to know how well it runs with 100% uptime.

October 23, 2018 03:49 PM UTC

Adding (mandatory) SMTP authentication to my surveil app

So, I was thinking a bit lately, about adding a small feature to my surveillance app, so that it would send a mail whenever it was started.

I went about adding that this evening, when I discovered that there were large gaps in time between mailed videos in my Surveillance folder/label.

After some searching and debugging, I found that the SMTP server (Gmail's in this case) was rejecting emails from my IP. I guess that's just an automated response: over time I had been sending emails to myself, morphex@gmail.com, from myself without being logged in, and at some point that gets flagged as spam.

Anyway, I guess it was naive to try to run something without logging into an SMTP server, spam being what it is, so I added (mandatory) support for logging into the outgoing SMTP server today.
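A minimal sketch of what mandatory SMTP authentication looks like with the standard library. The host, port, and credentials here are placeholders, and this is not the surveil app's actual code; only the message-building step is exercised below:

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    # Build a plain-text message to hand to the SMTP server.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_authenticated(msg, host="smtp.gmail.com", port=587,
                       user="user@example.com", password="app-password"):
    # STARTTLS plus a login before sending, mirroring the change above.
    with smtplib.SMTP(host, port) as server:
        server.starttls()
        server.login(user, password)
        server.send_message(msg)

msg = build_message("me@example.com", "me@example.com",
                    "surveil started", "The surveillance app just booted.")
print(msg["Subject"])
```

With Gmail specifically, the password would typically be an app-specific password rather than the account password.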

In addition to this, I guess it is nice to have the ability to reboot the host regularly, as a host system might for some reason become bogged down. So I added a config option to specify the amount of time between each reboot, in config.py:

https://github.com/morphex/surveil/blob/d1ff83091d9f33533ce9...

I think the config file is quite neat, because Python is neat; one can add config options, and even some logic to deal with them, in one file, and it seems natural.

At the end of the config file as it was on that commit, there is a statement that adds 5-60 minutes of delay to the reboot interval specified above - so that it is a bit harder to predict when a surveillance camera is rebooted.
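The randomised delay could be expressed with something like the following. The names are hypothetical, sketching the kind of logic the config file contains:

```python
import random

# Base interval between reboots: 24 hours, in seconds.
REBOOT_INTERVAL = 24 * 60 * 60

# Add 5-60 minutes of random delay so the reboot time is harder to predict.
jitter = random.randint(5 * 60, 60 * 60)
next_reboot = REBOOT_INTERVAL + jitter
```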

October 23, 2018 03:26 PM UTC


Wallaroo Labs

Introducing Connectors: Wallaroo’s Window to the World

Introduction We’re excited today to introduce you to a preview release of a new Wallaroo feature: Connectors. Connectors make inputting and receiving data from Wallaroo even easier. In this post, we’ll briefly go over what Wallaroo is, the role connectors now play as Sources and Sinks for Wallaroo, how to get started with connectors, and what is coming next. If you’re familiar with what Wallaroo is, feel free to skip the next section.

October 23, 2018 03:00 PM UTC


Mike Driscoll

Creating Jupyter Notebook Widgets with interact

The Jupyter Notebook has a feature known as widgets. If you have ever created a desktop user interface, you may already know and understand the concept of widgets. They are basically the controls that make up the user interface. In your Jupyter Notebook you can create sliders, buttons, text boxes and much more.

We will learn the basics of creating widgets in this chapter. If you would like to see some pre-made widgets, you can go to the following URL:

These widgets are Notebook extensions that can be installed in the same way that we learned about in my Jupyter extensions article. They are really interesting and well worth your time if you’d like to study how more complex widgets work by looking at their source code.


Getting Started

To create your own widgets, you will need to install the ipywidgets extension.

Installing with pip

Here is how you would install the widget extension with pip:

pip install ipywidgets
jupyter nbextension enable --py widgetsnbextension

If you are using virtualenv, you may need to add the --sys-prefix option to the enable command (i.e. jupyter nbextension enable --py widgetsnbextension --sys-prefix) to keep your environment isolated.

Installing with conda

Here is how you would install the widgets extension with conda:

conda install -c conda-forge ipywidgets

Note that when installing with conda, the extension will be automatically enabled.


Learning How to Interact

There are a number of methods for creating widgets in Jupyter Notebook. The first and easiest is the interact function from ipywidgets, which automatically generates user interface controls (or widgets) that you can then use to explore your code and interact with data.

Let’s start out by creating a simple slider. Start up a new Jupyter Notebook and enter the following code into the first cell:

from ipywidgets import interact
 
def my_function(x):
    return x
 
# create a slider
interact(my_function, x=20)

Here we import the interact function from ipywidgets. Then we create a simple function called my_function that accepts a single argument and returns it. Finally we call interact, passing it our function along with the value that we want interact to feed to it. Since we passed in an integer (i.e. 20), interact will automatically create a slider.

Try running the cell that contains the code above and you should end up with something that looks like this:

That was pretty neat! Try moving the slider around with your mouse. If you do, you will see that the slider updates interactively and the output from the function is also automatically updated.

You can also create a FloatSlider by passing in floating point numbers instead of integers. Give that a try to see how it changes the slider.

Checkboxes

Once you are done playing with the slider, let’s find out what else we can do with interact. Add a new cell in the same Jupyter Notebook with the following code:

interact(my_function, x=True)

When you run this code you will discover that interact has created a checkbox for you. Since you set “x” to True, the checkbox is checked. This is what it looked like on my machine:

You can play around with this widget as well by just checking and un-checking the checkbox. You will see its state change and the output from the function call will also get printed on-screen.

Textboxes

Let’s change things up a bit and try passing a string to our function. Create a new cell and enter the following code:

interact(my_function, x='Jupyter Notebook!')

When you run this code, you will find that interact generates a textbox with the string we passed in as its value:

Try editing the textbox’s value. When I tried doing that, I saw that the output text also changed.

Comboboxes / Drop-downs

You can also create a combobox or drop-down widget by passing a list or a dictionary to your function in interact. Let’s try passing in a list of tuples and see how that behaves. Go back to your Notebook and enter the following code into a new cell:

languages = [('Python', 'Rocks!'), ('C++', 'is hard!')]
interact(my_function, x=languages)

When you run this code, you should see “Python” and “C++” as items in the combobox. If you select one, the Notebook will display the second element of the tuple to the screen. Here is how mine rendered when I ran this example:

If you’d like to try out a dictionary instead of a list, here is an example:

languages = {'Python': 'Rocks!', 'C++': 'is hard!'}
interact(my_function, x=languages)

The output of running this cell is very similar to the previous example.


More About Sliders

Let’s back up a minute so we can talk a bit more about sliders. We can actually do a bit more with them than I had originally let on. When you first created a slider, all you needed to do was pass our function an integer. Here’s the code again for a refresher:

from ipywidgets import interact
 
def my_function(x):
    return x
 
# create a slider
interact(my_function, x=20)

The value, 20, here is technically an abbreviation for creating an integer-valued slider. The code:

interact(my_function, x=20)

is actually the equivalent of the following:

import ipywidgets as widgets

interact(my_function, x=widgets.IntSlider(value=20))

Technically, booleans are an abbreviation for checkboxes, lists / dicts are abbreviations for comboboxes, etc.

Anyway, back to sliders again. There are actually two other ways to create integer-valued sliders. You can also pass in a tuple of two or three items:

  • (min, max)
  • (min, max, step)

This allows us to make the slider more useful, as now we get to control the min and max values of the slider as well as set the step, which is the amount the value changes per slider increment. If you want to set an initial value, then you need to change your code to be like this:

def my_function(x=5):
    return x
 
interact(my_function, x=(0, 20, 5))

The x=5 in the function definition is what sets the initial value. I personally found that a little counter-intuitive, as when you create an IntSlider yourself, the value is passed to the constructor directly:

IntSlider(value=5, min=0, max=20, step=5)

So the interact function does not instantiate IntSlider the same way that you would if you were creating one yourself.

Note that if you want to create a FloatSlider, all you need to do is pass in a float to any of the three arguments: min, max or step. Setting the function’s argument to a float will not change the slider to a FloatSlider.


Using interact as a decorator

The interact function can also be used as a Python decorator. For this example, we will also add a second argument to our function. Go back to your running Notebook and add a new cell with the following code:

from ipywidgets import interact
 
@interact(x=5, y='Python')
def my_function(x, y):
    return (x, y)

You will note that in this example, we do not need to pass in the function name explicitly. In fact, if you did so, you would see an error raised. Decorators call functions implicitly. The other item of note here is that we are passing in two arguments instead of one: an integer and a string. As you might have guessed, this will create a slider and a text box respectively:

As with all the previous examples, you can interact with these widgets in the browser and see their outputs.


Fixed Arguments

There are many times when you will want to hold one of the arguments at a fixed value rather than allowing it to be manipulated through a widget. The ipywidgets package supports this via the fixed function. Let’s take a look at how you can use it:

from ipywidgets import interact, fixed
 
@interact(x=5, y=fixed('Python'))
def my_function(x, y):
    return (x, y)

Here we import the fixed function from ipywidgets. Then in our interact decorator, we set the second argument as “fixed”. When you run this code, you will find that it only creates a single widget: a slider. That is because we don’t want or need a widget to manipulate the second argument.

In this screenshot, you can see that we have just the one slider and the output is a tuple. If you change the slider’s value, you will see just the first value in the tuple change.


The interactive function

There is also a second function worth covering in this chapter, called interactive. This function is useful when you want to reuse widgets or access the data that is bound to them. The biggest difference between interactive and interact is that with interactive, the widgets are not displayed on-screen automatically. If you want the widget to be shown, then you need to do so explicitly.

Let’s take a look at a simple example. Open up a new cell in your Jupyter Notebook and enter the following code:

from ipywidgets import interactive
 
def my_function(x):
    return x
 
widget = interactive(my_function, x=5)
type(widget)

When you run this code, you should see the following output:

ipywidgets.widgets.interaction.interactive

But you won’t see a slider widget as you did when you used the interact function. Just to demonstrate, here is a screenshot of what I got when I ran the cell:

If you’d like the widget to be shown, you need to import the display function. Let’s update the code in the cell to be the following:

from ipywidgets import interactive
from IPython.display import display
 
def my_function(x):
    return x
 
widget = interactive(my_function, x=5)
display(widget)

Here we import the display function from IPython.display and then we call it at the end of the code. When I ran this cell, I got the slider widget:

Why is this helpful? Why wouldn’t you just use interact instead of jumping through extra hoops? Well the answer is that the interactive function gives you additional information that interact does not. You can access the widget’s keyword arguments and its result. Add the following two lines to the end of the cell that you just edited:

print(widget.kwargs)
print(widget.result)

Now when you run the cell, it will print out the arguments that were passed to the function and the return value (i.e. result) of calling the function.


Wrapping Up

We learned a lot about Jupyter Notebook widgets in this chapter. We covered the basics of using the `interact` function as well as the `interactive` function. However, there is more that you can learn about these functions by checking out the documentation.

Even this is just scratching the surface of what you can do with widgets in Jupyter. In the next chapter we will dig into creating widgets by hand, outside of using the interact / interactive functions we learned about in this chapter. We will learn much more about how widgets work and how you can use them to make your Notebooks much more interesting and potentially much more powerful.



October 23, 2018 02:31 PM UTC


Robin Wilson

I give talks – on science, programming and more

The quick summary of this post is: I give talks. You might like them. Here are some details of talks I’ve done. Feel free to invite me to speak to your group – contact me at robin@rtwilson.com. Read on for more details.

I enjoy giving talks on a variety of subjects to a range of groups. I’ve mentioned some of my programming talks on my blog before, but I haven’t mentioned anything about my other talks so far. I’ve spoken at amateur science groups (Cafe Scientifique or U3A science groups and similar), programming conferences (EuroSciPy, PyCon UK etc), schools (mostly to sixth form students), unconferences (including short talks made up on the day) and at academic conferences.

Feedback from audiences has been very good. I’ve won the ‘best talk’ prize at a number of events including the Computational Modelling Group at the University of Southampton, the Student Conference on Complexity Science, and EuroSciPy. A local science group recently wrote:

“The presentation that Dr Robin Wilson gave on Complex systems in the world around us to our Science group was excellent. The clever animated video clips, accompanied by a clear vocal description gave an easily understood picture of the underlining principles involved. The wide range of topics taken from situations familiar to everyone made the examples pertinent to all present and maintained their interest throughout. A thoroughly enjoyable and thought provoking talk.”

A list of talks I’ve done, with a brief summary for each talk, is at the end of this post. I would be happy to present any of these talks at your event – whether that is a science group, a school Geography class, a programming meet-up or something else appropriate. Just get in touch on robin@rtwilson.com.

Science talks

All of these are illustrated with lots of images and videos – and one even has live demonstrations of complex system models. They’re designed for people with an interest in science, but they don’t assume any specific knowledge – everything you need is covered from the ground up.

Monitoring the environment from space

Hundreds of satellites orbit the Earth every day, collecting data that is used for monitoring almost all aspects of the environment. This talk will introduce you to the world of satellite imaging, take you beyond the ‘pretty pictures’ to the scientific data behind them, and show you how the data can be applied to monitor plant growth, air pollution and more.

From segregation to sand dunes: complex systems in the world around us

‘Complex’ systems are all around us, and are often difficult to understand and control. In this talk you will be introduced to a range of complex systems including segregation in cities, sand dune development, traffic jams, weather forecasting, the cold war and more, and I will show how looking at these systems in a decentralised way can be useful in understanding and controlling them.

I’m also working on a talk for a local science and technology group on railway signalling, which should be fascinating. I’m happy to come up with new talks in areas that I know a lot about – just ask.

Programming talks

These are illustrated with code examples, and can be made suitable for a range of events including local programming meet-ups, conferences, keynotes, schools and more.

Writing Python to process millions of rows of mobile data – in a weekend

In April 2015 there was a devastating earthquake in Nepal, killing thousands and displacing hundreds of thousands more. Robin Wilson was working for the Flowminder Foundation at the time, and was given the task of processing millions of rows of mobile phone call records to try and extract useful information on population displacement due to the disaster. The aid agencies wanted this information as quickly as possible – so he was given the unenviable task of trying to produce preliminary outputs in one bank-holiday weekend… This talk is the story of how he wrote code in Python to do this, and what can be learnt from his experience. Along the way he’ll show how Python enables rapid development, introduce some lesser-used built-in data structures, explain how strings and dictionaries work, and show a slightly different approach to data processing.

xarray: the power of pandas for multidimensional arrays

“I wish there was a way to easily manipulate this huge multi-dimensional array in Python…”, I thought, as I stared at a huge chunk of satellite data on my laptop. The data was from a satellite measuring air quality – and I wanted to slice and dice the data in some supposedly simple ways. Using pure numpy was just such a pain. What I wished for was something like pandas – with datetime indexes, fancy ways of selecting subsets, group-by operations and so on – but something that would work with my huge multi-dimensional array.

The solution: xarray – a wonderful library which provides the power of pandas for multi-dimensional data. In this talk I will introduce the xarray library by showing how just a few lines of code can answer questions about my data that would take a lot of complex code to answer with pure numpy – questions like ‘What is the average air quality in March?’, ‘What is the time series of air quality in Southampton?’ and ‘What is the seasonal average air quality for each census output area?’.

After demonstrating how these questions can be answered easily with xarray, I will introduce the fundamental xarray data types, and show how indexes can be added to raw arrays to fully utilise the power of xarray. I will discuss how to get data in and out of xarray, and how xarray can use dask for high-performance data processing on multiple cores, or distributed across multiple machines. Finally I will leave you with a taster of some of the advanced features of xarray – including seamless access to data via the internet using OpenDAP, complex apply functions, and xarray extension libraries.

recipy: effortless provenance in Python

Imagine the situation: You’ve written some wonderful Python code which produces a beautiful output: a graph, some wonderful data, a lovely musical composition, or whatever. You save that output, naturally enough, as awesome_output.png. You run the code a couple of times, each time making minor modifications. You come back to it the next week/month/year. Do you know how you created that output? What input data? What version of your code? If you’re anything like me then the answer will often, frustratingly, be “no”.

This talk will introduce recipy, a Python module that will save you from this situation! With the addition of a single line of code to the top of your Python files, recipy will log each run of your code to a database, keeping track of all of your input files, output files and the code that was used – as well as a lot of other useful information. You can then query this easily and find out exactly how that output was created.

In this talk you will hear how to install and use recipy and how it will help you, how it hooks into Python and how you can help with further development.

 

School talks/lessons

Decentralised systems, complexity theory, self-organisation and more

This talk/lesson is very similar to my complex systems talk described above, but is altered to make it more suitable for use in schools. So far I have run this as a lesson in the International Baccalaureate Theory of Knowledge (TOK) course, but it would also be suitable for A-Level students studying a wide range of subjects.

GIS/Remote sensing for geographers

I’ve run a number of lessons for sixth form geographers introducing them to the basics of GIS and remote sensing. These topics are often included in the curriculum for A-Level or equivalent qualifications, but it’s often difficult to teach them without help from outside experts. In this lesson I provide an easily-understood introduction to GIS and remote sensing, taking the students from no knowledge at all to a basic understanding of the methods involved, and then run a discussion session looking at potential uses of GIS/RS in topics they have recently covered. This discussion session really helps the content stick in their minds and relates it to the rest of their course.

Computing

As an experienced programmer, and someone with formal computer science education, I have provided input to a range of computing lessons at sixth-form level. This has included short talks and part-lessons covering various programming topics, including examples of ‘programming in the real world’ and discussions on structuring code for larger projects. Recently I have provided one-on-one support to A-Level students on their coursework projects, including guidance on code structure, object-oriented design, documentation and GUI/backend interfaces.

October 23, 2018 10:38 AM UTC


PyBites

Code Challenge 56 - Calculate the Duration of a Directory of Audio Files

There is an immense amount to be learned simply by tinkering with things. - Henry Ford

Hey Pythonistas,

It's time for another code challenge! This week we're asking you to work with directories, files and audio metadata!

The Challenge

Write a script that receives a directory name and retrieves all mp3 (or mp4 or m4a) files. It then sums up the durations of each file and prints them in a nice table with a total duration.

This could look like the following:

    $ module_duration.py ~/Music/iTunes/iTunes\ Media/Music/Manu\ Chao/Manu\ Chao\ -\ Esperanza/
    Manu Chao - Bixo.m4a                    : 112
    Manu Chao - Denia.m4a                   : 279
    Manu Chao - El Dorrado 1997.m4a         : 89
    Manu Chao - Homens.m4a                  : 198
    Manu Chao - Infinita Tristeza.m4a       : 236
    Manu Chao - La Chinita.m4a              : 93
    Manu Chao - La Marea.m4a                : 136
    Manu Chao - La Primavera.m4a            : 112
    Manu Chao - La Vacaloca.m4a             : 143
    Manu Chao - Le Rendez Vous.m4a          : 116
    Manu Chao - Me Gustas Tu.m4a            : 240
    Manu Chao - Merry Blues.m4a             : 216
    Manu Chao - Mi Vida.m4a                 : 152
    Manu Chao - Mr Bobby.m4a                : 229
    Manu Chao - Papito.m4a                  : 171
    Manu Chao - Promiscuity.m4a             : 96
    Manu Chao - Trapped by Love.m4a         : 114
    --------------------------------------------------
    Total                                   : 0:45:32

What will you learn?

Why do we think this is cool? There are a couple of subtasks here:

  1. You learn how to do a common sysadmin task of listing files in a directory (check out the os, glob and pathlib modules).

  2. You learn how to convert and calculate mm:ss (minutes/seconds) timings, which will hone your datetime skills.

  3. As far as we know Python cannot extract audio metadata natively (yet), so you probably want to try a tool like FFmpeg, which is cool because then you need to know how to call an external command with Python and parse its output. You probably want to check out subprocess for this:

    The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. - docs
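One possible shape for subtasks 2 and 3 is sketched below. It assumes ffprobe (which ships with FFmpeg) is installed; the helper names are our own, and only the formatting function is exercised here:

```python
import subprocess
from datetime import timedelta

def duration_seconds(path):
    """Ask ffprobe for a media file's duration, in seconds."""
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

def total_as_hms(seconds):
    """Render a second count like the table's total, e.g. 0:45:32."""
    return str(timedelta(seconds=int(seconds)))

print(total_as_hms(2732))  # → 0:45:32
```

Summing duration_seconds over every file in the directory and feeding the total to total_as_hms would produce the last line of the example table.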

Good luck and have fun!

PyBites Community

A few more things before we take off:


>>> from pybites import Bob, Julian

Keep Calm and Code in Python!

October 23, 2018 09:15 AM UTC

Code Challenge 55 - #100DaysOfCode Curriculum Generator - Review

In this article we review last week's #100DaysOfCode Curriculum Generator code challenge.

Final reminder: Hacktoberfest

8 days left to have your PRs to our challenges repo count towards Hacktoberfest No. 5.

Community Pull Requests

Another 10+ PRs this week, cool!

Check out the awesome PRs by our community for PCC55 (or from fork: git checkout community && git merge upstream/community):

Featured

vipinreyo wrote a script that requests and parses our platform and collects meta data from our challenges and bites. From that list it will randomly create a task list for 100 days and return it as a JSON string.

danshorstein built 100 Days of Awesome Python. Using the Awesome Python repo it creates a curriculum of 100 days of python awesomeness. Each day you get a new library to explore. It started as a random selection, but he then wrote a second version to sort the libraries on the lowest number of stars so the libraries selected are lesser known.

bbelderbos made a #100DaysOfCode Reading Planner, a script that takes one or more book IDs from our reading list app, which allowed him to put his curriculum to the test, starting his long-desired #100DaysOfData.

PCC55 Lessons

I had fun with list comprehensions and JSON

Requests and Beautifulsoup are awesome.

The JSON part was easy; the complex part was how to evenly split 4 or 5 books over 100 days: some days you finish a book but still have pages left (from "avg pages per day") to spend on the new book. Going with generators / itertools.islice made this easier to accomplish, which was a great learning exercise.

Read Code for Fun and Profit

You can look at all submitted code here and/or on our Community branch.

Other learnings we spotted in Pull Requests for other challenges this week:

(PCC02) I had more practice with generators and itertools, learning how to use them effectively.

(PCC04) yield and Tweepy

(PCC14) More knowledge about decorators

(PCC42) I improved my regular expression skills, learning about capturing groups.


Thanks to everyone for your participation in our blog code challenges! Keep the PRs coming and include a README.md with one or more screenshots if you want to be featured in this weekly review post.

Become a Python Ninja

Master Python through Code Challenges:


Keep Calm and Code in Python!

-- Bob and Julian

October 23, 2018 09:10 AM UTC


Codementor

Leveraging Starlette in Django Applications.

An incremental switch to leveraging asynchronous web frameworks in a Django application.

October 23, 2018 06:01 AM UTC

October 22, 2018


"Morphex's Blogologue"

Changes to my Python surveillance (webcam, web camera) app

So, I created a surveillance app a week ago, because I felt it would be comforting to be able to see if somebody had been into my room.

Since then, I had to make the mailer code, the code that mails a compiled video to the given email address, a bit more robust:

https://github.com/morphex/surveil/commit/a02dbc05d78b71aea2...

One day I discovered that the last mail sent to me with a video had gone out at 03:34, and looking at the log for surveil, I could see that DNS had stopped working.

Today I added another mailer feature, which is simply moving the mailer code into its own function, and then running that code as a loop forever, in a separate thread:

https://github.com/morphex/surveil/commit/146f1fe88511f94358...
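The idea can be sketched roughly like this (a hedged sketch, not the app's actual code; `send_video` is a placeholder for the compile-and-mail step):

```python
import threading
import time

def send_video():
    # Placeholder for compiling recent images with ffmpeg and mailing them.
    print("video compiled and mailed")

def mailer_loop(interval=600):
    """Run the mail step forever on its own schedule; a failure is
    logged and retried on the next round instead of killing the app."""
    while True:
        try:
            send_video()
        except Exception as exc:
            print("mail failed, will retry next round:", exc)
        time.sleep(interval)

# daemon=True: the mailer thread exits together with the capture process.
mailer_thread = threading.Thread(target=mailer_loop, daemon=True)
mailer_thread.start()
```

With the mailer on its own thread, the capture loop never blocks on SMTP or DNS, so a slow or failed mail cannot create a gap in the image record.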

Other notable changes are a separate configuration file, as well as parsing the output from fswebcam when a picture is taken, and if an error is detected, re-get the image:

https://github.com/morphex/surveil/commit/8c8b84fc46fd0863e7...
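A minimal sketch of that capture-check-retry idea might look as follows (assumptions: fswebcam reports problems on stderr, and the `command` parameter exists here only to make the sketch testable; the app's real parsing may differ):

```python
import subprocess

def grab_image(path, retries=3, command=("fswebcam", "--no-banner")):
    """Take a picture, inspect the tool's output, and re-take on error."""
    for _ in range(retries):
        result = subprocess.run(
            [*command, path], capture_output=True, text=True
        )
        # Treat a zero exit status with no error marker in the
        # diagnostic output as a good capture.
        if result.returncode == 0 and "Error" not in result.stderr:
            return True
    return False
```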

Finally, I added a script that can be started from Cron, so that the surveil app starts running as soon as the laptop/desktop/demoboard - whatever, boots up.

https://github.com/morphex/surveil/commit/6291603045579228fc...

I added the script because it's easy to forget to set PATH etc. when automating things - which leads to confusion, irritation and so on.

Of all the things I did summarized here, I think the mailer hack is the best; I simply moved some code around and fired up another thread, so that images are taken and videos created and mailed, without delays which could create gaps in time where the room was not surveilled.

October 22, 2018 09:15 PM UTC


Vasudev Ram

Solution for A Python email signature puzzle

By Vasudev Ram



Hi, readers,

Here is the answer to the puzzle in my recent post titled A Python email signature puzzle:

The answer is that the program prints this output:
Truly rural
Truly rural
Truly rural
Truly rural
Truly rural






There are a few blank lines in the output, so I intentionally included them above.

I'll first mention a mistake in the code in that post, along with the fix for it, and then describe how the code works:

In the 5th line of the code, i.e. in the line below the one in which the print keyword appears, the fragment:
1 >> 8
was a typo; it should really be:
1 << 8
i.e. 1 left-shifted by 8, not 1 right-shifted by 8.
Sorry about that.

I've reproduced the full (corrected) code below for your convenience:
for ix, it in enumerate([('\n', 2 >> 1), \
        ("larur ylurt", (8 >> 1) + 1), ('\n', 1 << 1),]):
    for tm in range(it[1]):
        print chr(ord(it[0][::-1][0]) - 2 ** 5) + \
            it[0][::-1][(1 << 8) - (2 ** 8 - 1):] \
            if ix % 2 == 1 else it[0] * int(bin(4) \
            and bin(1), 2) * ix
Now let's dissect the code a piece at a time to understand how it works:

The outer for loop starts with:
for ix, it in enumerate([('\n', 2 >> 1), \
        ("larur ylurt", (8 >> 1) + 1), ('\n', 1 << 1),]):
When evaluating or figuring out what nested expressions mean, we have to evaluate them from the inside out, because the inner expressions' values need to be first computed in order to be used as terms or arguments in outer ones.

So we can start with the argument to enumerate().

It is a list of 3 tuples.

Each tuple has two items.

The first tuple is:

('\n', 2 >> 1)
which evaluates to:
('\n', 1)
because
2 >> 1
equals
1

(Right-shifting a number by 1 is the same as dividing it by 2, BTW, and left-shifting a number by 1 is the same as multiplying it by 2. This is for integers. See Arithmetic shift.)
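These shift rules can be checked directly:

```python
# Right-shifting halves an integer (dropping any remainder);
# left-shifting doubles it.
assert 2 >> 1 == 1               # 2 // 2
assert 8 >> 1 == 4               # 8 // 2
assert 1 << 1 == 2               # 1 * 2
assert 1 << 8 == 2 ** 8 == 256   # the value used in the corrected code
assert 1 >> 8 == 0               # the typo'd right shift gives 0, not 256
```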

The second tuple is:
("larur ylurt", (8 >> 1) + 1)
which evaluates to:
("larur ylurt", 5)
because
(8 >> 1) + 1
equals
4 + 1, i.e. 5
The third tuple is:
('\n', 1 << 1)
which evaluates to:
('\n', 2)
because
1 << 1
equals
2
So the list of tuples evaluates to:
[('\n', 1), ("larur ylurt", 5), ('\n', 2)]
(BTW, in Python, it is legal (although optional) to put a comma after the last item in a list, before the closing square bracket, as I did in the code. It is a convenience, so that when adding another item to the list later, maybe on the next line, it won't matter if you forget to add a comma before that new item. A bit of defensive programming there. Putting two consecutive commas is illegal, though.)
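For example:

```python
# A trailing comma before the closing bracket is legal:
tuples = [
    ('\n', 1),
    ("larur ylurt", 5),
    ('\n', 2),  # adding a fourth tuple later needs no edit on this line
]
assert len(tuples) == 3
```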

Now that we have evaluated the list of tuples, let's see what the outer for loop does with it. This bit:

for ix, it in enumerate(...)
where the ... is a placeholder for the argument to enumerate() (i.e. the list of tuples), causes Python to run the block controlled by the outer loop 3 times (since the list has 3 items), with ix and it set to the successive indices and items of the list, with ix going from 0 to 2.

Within the outer for loop, the first (and only) statement is the inner for loop, which goes like this:
for tm in range(it[1]):
That creates a loop with tm going from 0 to it[1] - 1 (for each time the outer loop executes), where it[1] is the 2nd item of each tuple, i.e. the number that follows the string. But it[1] varies with each iteration of the outer loop.

So the inner for statement effectively runs 3 times (which is not the same as a for loop having 3 iterations). This is because there are 3 tuples in the list looped over by the outer for statement. The index for the inner for loop start at 0 each time, and goes up to the value of it[1] - 1 each time. This is because range(n) gives us 0 to n - 1.
Since it[1] is successively 1, then 5, then 2, the first time, the inner for loop has 1 iteration, then it has 5 iterations, and finally it has 2 iterations.

For the last part that needs to be explained, i.e. the print statement that is controlled by the inner for statement, I'll reverse direction temporarily, i.e. I'll first go outside in, for the main components of the print statement, then for each detailed part, I'll again go inside out.

The print statement prints the evaluated value of a Python conditional expression. See later in this post for background information about conditional expressions in Python. We can describe the statement and its embedded conditional expression like this in English:

Print some item if a condition is True, else print some other item.

That was the outside in part. Now the inside out part, for the conditional expression in the print statement:

This is the "some item" part:

chr(ord(it[0][::-1][0]) - 2 ** 5) +
it[0][::-1][(1 << 8) - (2 ** 8 - 1):]

(For display here, I've removed the backslashes that were there at the ends of some lines, since they are only Python syntax to indicate the continuation of a logical line across physical lines.)

The above "some item" can be evaluated like this:

In each of the 3 iterations of the outer loop, it[0] becomes the string which is the first item of each of the 3 tuples.
So it[0] is successively "\n", "larur ylurt", and "\n".

The chr() function returns the ASCII character for its ASCII code (integer) argument, and the ord() function (the inverse of the chr() function), returns the ASCII code for its ASCII character argument. So chr(65) is 'A' and ord('A') is 65, for example.

Also, in the ASCII code, each uppercase letter A-Z is separated from the corresponding lowercase letter by 32 positions.
That is, ord('t') - ord('T') = 32, and the same goes for any other lowercase letter and its corresponding uppercase one.

2 ** 5 is 32.

For a given string s, the expression s[::-1] uses Python string slicing to reverse the string.
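A quick check of the pieces used here:

```python
s = "larur ylurt"
assert s[::-1] == "truly rural"       # a step of -1 walks the string backwards
assert s[::-1][0] == "t"              # first character of the reversed string
assert s[::-1][1:] == "ruly rural"    # the rest of it
assert chr(ord("t") - 2 ** 5) == "T"  # lowercase to uppercase via ASCII offset
```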

1 << 8 is 256, as is 2 ** 8.

The part ix % 2 == 1 makes one part of the conditional expression get evaluated in one case,
when the condition is True (when ix is 1), and the other part get evaluated in the other two cases,
when the condition is False (when ix is 0 or 2).

bin() is a Python built-in function which converts its argument to binary:
>>> print bin.__doc__
bin(number) -> string

Return the binary representation of an integer or long integer.
The values of bin(4) and bin(1) are as below:
>>> bin(4)
'0b100'
>>> bin(1)
'0b1'
If you need a clue as to why the and operator in the above expression works the way it does, see the below code snippets:
>>> def foo(n):
... print "in foo, n =", n
... return n
...
>>> def bar(m):
... print "in bar, m =", m
... return m
...
>>> foo(1)
in foo, n = 1
1
>>> bar(2)
in bar, m = 2
2
>>> foo(1) and bar(2)
in foo, n = 1
in bar, m = 2
2
>>> foo(1) or bar(2)
in foo, n = 1
1

The last fragment of the expression for the else part (of the conditional expression), uses string repetition, i.e. the * operator used with a string on the left and an integer on the right, to repeat the string that many times.

Given all the above information, a reader should be able to see (some assembly required) that the program prints the output as shown near the top of the post above, i.e. the string "Truly rural" 5 times, with some number of newlines before and after those 5 lines :)
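For readers who want to verify the assembly, here is a plainer Python 3 sketch of the program's effect (not the original code, just the same logic spelled out):

```python
s = "larur ylurt"[::-1]                 # "truly rural"
msg = chr(ord(s[0]) - 2 ** 5) + s[1:]   # uppercase the first letter: "Truly rural"

print()              # ix == 0: '\n' * 1 * 0 is empty; print adds the newline
for _ in range(5):   # ix == 1: the message, five times
    print(msg)
for _ in range(2):   # ix == 2: '\n' * 1 * 2, plus print's own newline
    print("\n" * 2)
```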

Here is some background material on conditional expressions in Python:

Python 2 - conditional expressions

Python 3 - conditional expressions

PEP 308

Some examples of the use of conditional expressions, run in the Python shell:

>>> a = 1
>>> b = 1
>>> print 1 if a == b else 2
1
>>> b = 2
>>> print 1 if a == b else 2
2
>>> # A dummy function that considers even-numbered days as fine weather.
...
>>> def fine_weather(day):
... return day % 2 == 0
...
>>> for i in range(4): print i, fine_weather(i)
...
0 True
1 False
2 True
3 False
>>> for i in range(4): print 'Go out' if fine_weather(i) else 'Stay in'
...
Go out
Stay in
Go out
Stay in

A Python code recipe example for conditional expressions on my ActiveState Code recipes page (over 90 Python recipes there):

Classifying characters using nested conditional expressions

The same code on my blog:

Using nested conditional expressions to classify characters

- Enjoy.


- Vasudev Ram - Online Python training and consulting

I conduct online courses on Python programming, Unix/Linux (commands and shell scripting) and SQL programming and database design, with personal coaching sessions. See the Training page on my blog.

Contact me for details of course content, terms and schedule.

DPD: Digital Publishing for Ebooks and Downloads.

Hit the ground running with my vi quickstart tutorial. I wrote it at the request of two Windows system administrator friends who were given additional charge of some Unix systems. They later told me that it helped them to quickly start using vi to edit text files on Unix.

Check out WP Engine, powerful WordPress hosting.

Get a fast web site with A2 Hosting.

Creating or want to create online products for sale? Check out ConvertKit, email marketing for online creators.

Own a piece of history:
Legendary American Cookware

Teachable: feature-packed course creation platform, with unlimited video, courses and students.

Posts about: Python * DLang * xtopdf

My ActiveState Code recipes



October 22, 2018 04:29 PM UTC


Stack Abuse

Vim for Python Development

What is Vim?

Vim is a powerful text editor that belongs to the default components on every Linux distribution, as well as Mac OSX. Vim follows its own concept of usage, causing the community to divide into strong supporters and vehement opponents who favor other editors like Emacs. (By the way, that's very nice in winter in order to see the two enthusiastic teams having an extensive snowball fight together).

Vim can be individualized and extended using additional plugins in order to adjust the tool to your specific needs. In this article we highlight a selection of extensions and discuss a useful setup to improve software development with Python.

Auto-Completion

Vim is already equipped with an auto-completion feature. This works well but is limited to words that already exist in the current text buffer. In insert mode, using the key combination CTRL+N you get the next word in the current buffer, and CTRL+P the last one. In either way a menu with words pops up from which you choose the word to be pasted in the text at the current cursor position of the document.

A selection of words to choose from

This is already quite cool. Luckily, the same feature exists for entire lines of text. In insert mode press CTRL+X first, followed by CTRL+L. A menu pops up with similar lines from which you choose the line you would like to be pasted in the text at the current cursor position of the document.

A selection of lines to choose from

For effective Python development, Vim contains a standard module named pythoncomplete (Python Omni Completion). In order to activate this plugin add the following two lines to your Vim configuration file .vimrc:

filetype plugin on  
set omnifunc=syntaxcomplete#Complete  

Then, in the Vim editor window the completion works in insert mode based on the key combination CTRL+X followed by CTRL+O. A submenu pops up that offers you Python functions and keywords to be used. The menu entries are based on Python module descriptions ("docstrings"). The example below shows the abs() function with additional help on top of the vim editor screen.

A selection of keywords to choose from

The next plugin I'd like to discuss is named Jedi-Vim. It connects Vim with the Jedi autocompletion library.

After installing the corresponding package on your Debian GNU/Linux system, one additional step is needed to make Jedi-Vim work. The plugin has to be activated using the Vim plugin manager as follows:

$ vim-addons install python-jedi
Info: installing removed addon 'python-jedi' to /home/frank/.vim  
Info: Rebuilding tags since documentation has been modified ...  
Processing /home/frank/.vim/doc/  
Info: done.  

Next, check the status of the plugin:

$ vim-addons status python-jedi
# Name                     User Status  System Status 
python-jedi                 installed     removed  

Now the plugin is activated and you can use it in Vim while programming. As soon as you either type a dot or press CTRL+Space, the menu opens and shows you method and operator names that could fit. The image below shows the corresponding entries from the csv module. As soon as you choose an item from the menu it will be pasted into your source code.

Jedi-Vim in action

A more interactive plugin is youcompleteme. It describes itself as "a fast, as-you-type, fuzzy-search code completion engine for Vim". For Python 2 and 3, the automatic completion is based on Jedi as well. Among other programming languages it also supports C#, Go, Rust, and Java.

Since it is provided in a Git repository, the setup requires additional steps before you can use it. The package on Debian GNU/Linux comes with a compiled version, and after installing the package via apt-get the following steps will make it work. First, enable the package using the Vim Addon Manager (vam) or the command vim-addons:

$ vim-addons install youcompleteme
Info: installing removed addon 'youcompleteme' to /home/frank/.vim  
Info: Rebuilding tags since documentation has been modified ...  
Processing /home/frank/.vim/doc/  
Info: done.  

Next, check the status of the plugin. The output below shows you that the plugin is successfully installed for you as a regular user:

$ vim-addons status youcompleteme
# Name                     User Status  System Status 
youcompleteme              installed    removed  

Third, copy the default ycm_extra_conf.py file from the examples directory to your ~/.vim/ folder as follows:

$ cp -v /usr/share/doc/vim-youcompleteme/examples/ycm_extra_conf.py .ycm_extra_conf.py
"/usr/share/doc/vim-youcompleteme/examples/ycm_extra_conf.py" -> ".ycm_extra_conf.py"

The final step is to add the following two lines to your .vimrc file:

" youcompleteme
let g:ycm_global_ycm_extra_conf = "~/.vim/.ycm_extra_conf.py"  

The first line is a comment that could be omitted, and the second line defines the configuration file for the youcompleteme plugin. Et voila - now Vim accepts automated completion of code. When you see a useful completion string being offered, press the TAB key to accept it. This inserts the completion string at the current position. Repeated presses of the TAB key cycle through the offered completions.

The plugin youcompleteme in action

Syntax Highlighting

Vim already comes with syntax highlighting for a huge number of programming languages that includes Python. There are three plugins that help to improve it - one is called python-syntax, the other one is python-mode, and the third one is python.vim.

Among other things, the python-syntax project site lists a large number of improvements, such as highlighting for exceptions, doctests, errors, and constants. A nice feature is the switch between syntax highlighting for Python 2 and 3 based on an additional Vim command - :Python2Syntax and :Python3Syntax. This helps to identify possible changes that are required to run your script with both versions.

Changing between Python 2 and 3

Combining Vim with the Revision Control System Git

Revision control is quite essential for developers, and Git is probably the best-known system for that. When compiling Python code, the interpreter creates a number of temporary files like __pycache__ and *.pyc. Changes to these files need not be tracked in Git. To ignore them, Git offers the feature of a so-called .gitignore file. Create this file in your Git-managed development branch with the following contents:

*.pyc
__pycache__  

Also, add a README file for your project to document what it is about. No matter how small your project is, the README file helps you (and others) to remember what the code is meant to do. Writing this file in Markdown format is especially helpful if you synchronize your Python code with your repository on GitHub. The README file is rendered automatically to HTML that can then be viewed easily in your web browser.

Vim can collaborate with Git directly using special plugins. Among others, there are vim-fugitive, gv.vim and vimagit. All of them are available from GitHub, and most also as packages for Debian GNU/Linux.

Having installed vim-fugitive via apt-get, activate it in the same way as the other plugins before:

$ vim-addons install fugitive 
Info: installing removed addon 'fugitive' to /home/frank/.vim  
Info: Rebuilding tags since documentation has been modified ...  
Processing /home/frank/.vim/doc/  
Info: done  

This plugin works only with files that are tracked by Git. A large number of additional Vim commands become available, such as :Gedit, :Gdiff, :Gstatus, :Ggrep and :Glog. As stated on the project website, these Vim commands correspond to the following Git commands and actions:

Finding out who changed which line of code using `:Gblame`

Working With Skeletons

Skeleton files (or templates) are a nice feature of Vim that helps improve your productivity by adding default text to a file when a new one is created. For example, many Python files start with a shebang, license, docstring, and author info. It would be a hassle to type or even copy this info into each file. Instead, you can use skeleton files to add this default text for you.

Let's say, for example, you want all new Python files to start with the following text:

#!/usr/bin/env python3
"""
[Add module documentation here]

Author: Frank  
Date: [Add date here]  
"""

You would create a file with this content and call it something like "skeleton.py", and then move it to the directory ~/.vim/skeleton.py. To tell Vim which file should be used as the skeleton file for Python, add the following to your .vimrc file:

au BufNewFile *.py 0r ~/.vim/skeleton.py  

This tells Vim to use the specified skeleton file for all new files matching the filename "*.py".

Notes on Using Plug-ins

Usually, Vim is quite fast, but the more plugins you activate, the longer it takes: the start of Vim is delayed, and takes noticeably longer than before. Also, while Debian/Ubuntu packages commonly work out of the box, with installation scripts that include all the steps to set the plugin up properly, I noticed this is not always the case, and sometimes additional steps are required.

More Resources

There are a number of courses and blog posts that cover various Vim settings for day-to-day use as a Python developer, which I'd highly recommend looking in to.

The following course aims for you to master Vim on any operating system, helping you gain a level of knowledge and comfortability with the editor that's difficult to achieve by reading articles alone:

The rest are some great resources from around the web that we've found to be very helpful as well:

These articles help to extend your knowledge. Enjoy :)

Acknowledgements

The author would like to thank Zoleka Hatitongwe for her help and critical comments while preparing the article.

October 22, 2018 03:18 PM UTC


Real Python

Getting Started With Testing in Python

This tutorial is for anyone who has written a fantastic application in Python but hasn’t yet written any tests.

Testing in Python is a huge topic and can come with a lot of complexity, but it doesn’t need to be hard. You can get started creating simple tests for your application in a few easy steps and then build on it from there.

In this tutorial, you’ll learn how to create a basic test, execute it, and find the bugs before your users do! You’ll learn about the tools available to write and execute tests, check your application’s performance, and even look for security issues.

Free Bonus: 5 Thoughts On Python Mastery, a free course for Python developers that shows you the roadmap and the mindset you'll need to take your Python skills to the next level.

Testing Your Code

There are many ways to test your code. In this tutorial, you’ll learn the techniques from the most basic steps and work towards advanced methods.

Automated vs. Manual Testing

The good news is, you’ve probably already created a test without realizing it. Remember when you ran your application and used it for the first time? Did you check the features and experiment using them? That’s known as exploratory testing and is a form of manual testing.

Exploratory testing is a form of testing that is done without a plan. In an exploratory test, you’re just exploring the application.

To have a complete set of manual tests, all you need to do is make a list of all the features your application has, the different types of input it can accept, and the expected results. Now, every time you make a change to your code, you need to go through every single item on that list and check it.

That doesn’t sound like much fun, does it?

This is where automated testing comes in. Automated testing is the execution of your test plan (the parts of your application you want to test, the order in which you want to test them, and the expected responses) by a script instead of a human. Python already comes with a set of tools and libraries to help you create automated tests for your application. We’ll explore those tools and libraries in this tutorial.

Unit Tests vs. Integration Tests

The world of testing has no shortage of terminology, and now that you know the difference between automated and manual testing, it’s time to go a level deeper.

Think of how you might test the lights on a car. You would turn on the lights (known as the test step) and go outside the car or ask a friend to check that the lights are on (known as the test assertion). Testing multiple components is known as integration testing.

Think of all the things that need to work correctly in order for a simple task to give the right result. These components are like the parts to your application, all of those classes, functions, and modules you’ve written.

A major challenge with integration testing is when an integration test doesn’t give the right result. It’s very hard to diagnose the issue without being able to isolate which part of the system is failing. If the lights didn’t turn on, then maybe the bulbs are broken. Is the battery dead? What about the alternator? Is the car’s computer failing?

If you have a fancy modern car, it will tell you when your light bulbs have gone. It does this using a form of unit test.

A unit test is a smaller test, one that checks that a single component operates in the right way. A unit test helps you to isolate what is broken in your application and fix it faster.

You have just seen two types of tests:

  1. An integration test checks that components in your application operate with each other.
  2. A unit test checks a small component in your application.

You can write both integration tests and unit tests in Python. To write a unit test for the built-in function sum(), you would check the output of sum() against a known output.

For example, here’s how you check that the sum() of the numbers (1, 2, 3) equals 6:

>>>
>>> assert sum([1, 2, 3]) == 6, "Should be 6"

This will not output anything on the REPL because the values are correct.

If the result from sum() is incorrect, this will fail with an AssertionError and the message "Should be 6". Try an assertion statement again with the wrong values to see an AssertionError:

>>>
>>> assert sum([1, 1, 1]) == 6, "Should be 6"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError: Should be 6

In the REPL, you are seeing the raised AssertionError because the result of sum() does not match 6.

Instead of testing on the REPL, you’ll want to put this into a new Python file called test_sum.py and execute it again:

def test_sum():
    assert sum([1, 2, 3]) == 6, "Should be 6"

if __name__ == "__main__":
    test_sum()
    print("Everything passed")

Now you have written a test case, an assertion, and an entry point (the command line). You can now execute this at the command line:

$ python test_sum.py
Everything passed

You can see the successful result, Everything passed.

In Python, sum() accepts any iterable as its first argument. You tested with a list. Now test with a tuple as well. Create a new file called test_sum_2.py with the following code:

def test_sum():
    assert sum([1, 2, 3]) == 6, "Should be 6"

def test_sum_tuple():
    assert sum((1, 2, 2)) == 6, "Should be 6"

if __name__ == "__main__":
    test_sum()
    test_sum_tuple()
    print("Everything passed")

When you execute test_sum_2.py, the script will give an error because the sum() of (1, 2, 2) is 5, not 6. The result of the script gives you the error message, the line of code, and the traceback:

$ python test_sum_2.py
Traceback (most recent call last):
  File "test_sum_2.py", line 9, in <module>
    test_sum_tuple()
  File "test_sum_2.py", line 5, in test_sum_tuple
    assert sum((1, 2, 2)) == 6, "Should be 6"
AssertionError: Should be 6

Here you can see how a mistake in your code gives an error on the console with some information on where the error was and what the expected result was.

Writing tests in this way is okay for a simple check, but what if more than one fails? This is where test runners come in. The test runner is a special application designed for running tests, checking the output, and giving you tools for debugging and diagnosing tests and applications.

Choosing a Test Runner

There are many test runners available for Python. The one built into the Python standard library is called unittest. In this tutorial, you will be using unittest test cases and the unittest test runner. The principles of unittest are easily portable to other frameworks. The three most popular test runners are:

Choosing the best test runner for your requirements and level of experience is important.

unittest

unittest has been built into the Python standard library since version 2.1. You’ll probably see it in commercial Python applications and open-source projects.

unittest contains both a testing framework and a test runner. unittest has some important requirements for writing and executing tests.

unittest requires that:

To convert the earlier example to a unittest test case, you would have to:

  1. Import unittest from the standard library
  2. Create a class called TestSum that inherits from the TestCase class
  3. Convert the test functions into methods by adding self as the first argument
  4. Change the assertions to use the self.assertEqual() method on the TestCase class
  5. Change the command-line entry point to call unittest.main()

Follow those steps by creating a new file test_sum_unittest.py with the following code:

import unittest


class TestSum(unittest.TestCase):

    def test_sum(self):
        self.assertEqual(sum([1, 2, 3]), 6, "Should be 6")

    def test_sum_tuple(self):
        self.assertEqual(sum((1, 2, 2)), 6, "Should be 6")

if __name__ == '__main__':
    unittest.main()

If you execute this at the command line, you’ll see one success (indicated with .) and one failure (indicated with F):

$ python test_sum_unittest.py
.F
======================================================================
FAIL: test_sum_tuple (__main__.TestSum)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_sum_unittest.py", line 9, in test_sum_tuple
    self.assertEqual(sum((1, 2, 2)), 6, "Should be 6")
AssertionError: Should be 6

----------------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (failures=1)

You have just executed two tests using the unittest test runner.

Note: Be careful if you're writing test cases that need to execute in both Python 2 and 3. In Python 2.7 and below, some of the newer unittest features are only available via the unittest2 backport on PyPI. If you simply import from unittest, you will get different versions with different features between Python 2 and 3.

For more information on unittest, you can explore the unittest Documentation.

nose

You may find that over time, as you write hundreds or even thousands of tests for your application, it becomes increasingly hard to understand and use the output from unittest.

nose is compatible with any tests written using the unittest framework and can be used as a drop-in replacement for the unittest test runner. The development of nose as an open-source application fell behind, and a fork called nose2 was created. If you’re starting from scratch, it is recommended that you use nose2 instead of nose.

To get started with nose2, install nose2 from PyPI and execute it on the command line. nose2 will try to discover all test scripts named test*.py and test cases inheriting from unittest.TestCase in your current directory:

$ pip install nose2
$ python -m nose2
.F
======================================================================
FAIL: test_sum_tuple (__main__.TestSum)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_sum_unittest.py", line 9, in test_sum_tuple
    self.assertEqual(sum((1, 2, 2)), 6, "Should be 6")
AssertionError: Should be 6

----------------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (failures=1)

You have just executed the test you created in test_sum_unittest.py from the nose2 test runner. nose2 offers many command-line flags for filtering the tests that you execute. For more information, you can explore the Nose 2 documentation.

pytest

pytest supports execution of unittest test cases. The real advantage of pytest comes by writing pytest test cases. pytest test cases are a series of functions in a Python file starting with the name test_.

pytest has some other great features:

Writing the TestSum test case example for pytest would look like this:

def test_sum():
    assert sum([1, 2, 3]) == 6, "Should be 6"

def test_sum_tuple():
    assert sum((1, 2, 2)) == 6, "Should be 6"

You have dropped the TestCase, any use of classes, and the command-line entry point.

More information can be found at the Pytest Documentation Website.

Writing Your First Test

Let’s bring together what you’ve learned so far and, instead of testing the built-in sum() function, test a simple implementation of the same requirement.

Create a new project folder and, inside that, create a new folder called my_sum. Inside my_sum, create an empty file called __init__.py. Creating the __init__.py file means that the my_sum folder can be imported as a module from the parent directory.

Your project folder should look like this:

project/
│
└── my_sum/
    └── __init__.py

Open up my_sum/__init__.py and create a new function called sum(), which takes an iterable (a list, tuple, or set) and adds the values together:

def sum(arg):
    total = 0
    for val in arg:
        total += val
    return total

This code example creates a variable called total, iterates over all the values in arg, and adds them to total. It then returns the result once the iterable has been exhausted.

Where to Write the Test

To get started writing tests, you can simply create a file called test.py, which will contain your first test case. Because the file will need to be able to import your application to be able to test it, you want to place test.py above the package folder, so your directory tree will look something like this:

project/
│
├── my_sum/
│   └── __init__.py
|
└── test.py

You’ll find that, as you add more and more tests, your single file will become cluttered and hard to maintain, so you can create a folder called tests/ and split the tests into multiple files. It is convention to ensure each file starts with test_ so all test runners will assume that Python file contains tests to be executed. Some very large projects split tests into more subdirectories based on their purpose or usage.

Note: What if your application is a single script?

You can import any attributes of the script, such as classes, functions, and variables by using the built-in __import__() function. Instead of from my_sum import sum, you can write the following:

target = __import__("my_sum")
sum = target.sum

The benefit of using __import__() is that you don’t have to turn your project folder into a package, and you can specify the module by its file name without the .py extension. This is also useful if your filename collides with any standard library packages. For example, math.py would collide with the math module.
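If the script lives at an arbitrary file path, the standard library's importlib can load it directly by file name. This is a minimal, self-contained sketch: the temporary directory and the file it writes exist only so the example runs anywhere, standing in for your real my_sum.py script.

```python
import importlib.util
import pathlib
import tempfile

# Hypothetical single-file "application", written to a temp directory
# purely so this sketch is self-contained.
script = pathlib.Path(tempfile.mkdtemp()) / "my_sum.py"
script.write_text(
    "def sum(arg):\n"
    "    total = 0\n"
    "    for val in arg:\n"
    "        total += val\n"
    "    return total\n"
)

# Load the module from its file path instead of via the import system
spec = importlib.util.spec_from_file_location("my_sum", script)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

result = module.sum([1, 2, 3])
```

After exec_module() runs, the module's attributes are available as usual, so result here is 6.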

How to Structure a Simple Test

Before you dive into writing tests, you’ll want to first make a couple of decisions:

  1. What do you want to test?
  2. Are you writing a unit test or an integration test?

Then the structure of a test should loosely follow this workflow:

  1. Create your inputs
  2. Execute the code being tested, capturing the output
  3. Compare the output with an expected result

For this application, you’re testing sum(). There are many behaviors in sum() you could check, such as whether it can sum a list of integers, a tuple or set, a list of fractions or floats, and how it handles bad input like a string or a single integer.

The simplest test is a list of integers. Create a file called test.py with the following Python code:

import unittest

from my_sum import sum


class TestSum(unittest.TestCase):
    def test_list_int(self):
        """
        Test that it can sum a list of integers
        """
        data = [1, 2, 3]
        result = sum(data)
        self.assertEqual(result, 6)

if __name__ == '__main__':
    unittest.main()

This code example:

  1. Imports sum() from the my_sum package you created

  2. Defines a new test case class called TestSum, which inherits from unittest.TestCase

  3. Defines a test method, .test_list_int(), to test a list of integers. The method .test_list_int() will:

    • Declare a variable data with a list of numbers (1, 2, 3)
    • Assign the result of my_sum.sum(data) to a result variable
    • Assert that the value of result equals 6 by using the .assertEqual() method on the unittest.TestCase class
  4. Defines a command-line entry point, which runs the unittest test-runner .main()

If you’re unsure what self is or how .assertEqual() is defined, you can brush up on your object-oriented programming with Python 3 Object-Oriented Programming.

How to Write Assertions

The last step of writing a test is to validate the output against a known response. This is known as an assertion. Keep your assertions understandable and focused: each test should check one behavior, using the most specific assertion method available so that failures produce clear messages.

unittest comes with lots of methods to assert on the values, types, and existence of variables. Here are some of the most commonly used methods:

Method Equivalent to
.assertEqual(a, b) a == b
.assertTrue(x) bool(x) is True
.assertFalse(x) bool(x) is False
.assertIs(a, b) a is b
.assertIsNone(x) x is None
.assertIn(a, b) a in b
.assertIsInstance(a, b) isinstance(a, b)

.assertIs(), .assertIsNone(), .assertIn(), and .assertIsInstance() all have opposite methods, named .assertIsNot(), and so forth.
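As a quick illustration, the following sketch exercises several of these methods in one test case and executes it with a TextTestRunner (the class and method names here are invented for the example):

```python
import unittest

class TestAssertions(unittest.TestCase):
    def test_equality_and_truth(self):
        self.assertEqual(2 + 2, 4)
        self.assertTrue([1])      # a non-empty list is truthy
        self.assertFalse("")      # an empty string is falsy

    def test_identity_and_membership(self):
        self.assertIsNone(None)
        self.assertIn(3, [1, 2, 3])
        self.assertIsInstance("abc", str)
        self.assertIsNot([], [])  # two distinct list objects

# Build a suite from the test case and run it directly
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAssertions)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Running this file reports two passing tests; each assertion method reads as a plain English description of the check it performs.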

Side Effects

When you’re writing tests, it’s often not as simple as looking at the return value of a function. Often, executing a piece of code will alter other things in the environment, such as the attribute of a class, a file on the filesystem, or a value in a database. These are known as side effects and are an important part of testing. Decide if the side effect is being tested before including it in your list of assertions.

If you find that the unit of code you want to test has lots of side effects, you might be breaking the Single Responsibility Principle. Breaking the Single Responsibility Principle means the piece of code is doing too many things and would be better off being refactored. Following the Single Responsibility Principle is a great way to design code for which it is easy to write repeatable, simple unit tests, and, ultimately, to build reliable applications.

Executing Your First Test

Now that you’ve created the first test, you want to execute it. Sure, you know it’s going to pass, but before you create more complex tests, you should check that you can execute the tests successfully.

Executing Test Runners

The Python application that executes your test code, checks the assertions, and gives you test results in your console is called the test runner.

At the bottom of test.py, you added this small snippet of code:

if __name__ == '__main__':
    unittest.main()

This is a command line entry point. It means that if you execute the script alone by running python test.py at the command line, it will call unittest.main(). This executes the test runner by discovering all classes in this file that inherit from unittest.TestCase.

This is one of many ways to execute the unittest test runner. When you have a single test file named test.py, calling python test.py is a great way to get started.

Another way is using the unittest command line. Try this:

$ python -m unittest test

This will execute the same test module (called test) via the command line.

You can provide additional options to change the output. One of those is -v for verbose. Try that next:

$ python -m unittest -v test
test_list_int (test.TestSum) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.000s

This executed the one test inside test.py and printed the results to the console. Verbose mode listed the names of the tests it executed first, along with the result of each test.

Instead of providing the name of a module containing tests, you can request an auto-discovery using the following:

$ python -m unittest discover

This will search the current directory for any files named test*.py and attempt to test them.

Once you have multiple test files, as long as you follow the test*.py naming pattern, you can provide the name of the directory instead by using the -s flag and the name of the directory:

$ python -m unittest discover -s tests

unittest will run all tests in a single test plan and give you the results.

Lastly, if your source code is not in the directory root and contained in a subdirectory, for example in a folder called src/, you can tell unittest where to execute the tests so that it can import the modules correctly with the -t flag:

$ python -m unittest discover -s tests -t src

unittest will change to the src/ directory, scan for all test*.py files inside the tests directory, and execute them.

Understanding Test Output

That was a very simple example where everything passes, so now you’re going to try a failing test and interpret the output.

sum() should be able to accept other lists of numeric types, like fractions.

At the top of the test.py file, add an import statement to import the Fraction type from the fractions module in the standard library:

from fractions import Fraction

Now add a test with an assertion expecting the incorrect value, in this case expecting the sum of 1/4, 1/4, and 2/5 to be 1:

import unittest

from my_sum import sum


class TestSum(unittest.TestCase):
    def test_list_int(self):
        """
        Test that it can sum a list of integers
        """
        data = [1, 2, 3]
        result = sum(data)
        self.assertEqual(result, 6)

    def test_list_fraction(self):
        """
        Test that it can sum a list of fractions
        """
        data = [Fraction(1, 4), Fraction(1, 4), Fraction(2, 5)]
        result = sum(data)
        self.assertEqual(result, 1)

if __name__ == '__main__':
    unittest.main()

If you execute the tests again with python -m unittest test, you should see the following output:

$ python -m unittest test
F.
======================================================================
FAIL: test_list_fraction (test.TestSum)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test.py", line 21, in test_list_fraction
    self.assertEqual(result, 1)
AssertionError: Fraction(9, 10) != 1

----------------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (failures=1)

In the output, you’ll see the following information:

  1. The first line shows the execution results of all the tests, one failed (F) and one passed (.).

  2. The FAIL entry shows some details about the failed test:

    • The test method name (test_list_fraction)
    • The test module (test) and the test case (TestSum)
    • A traceback to the failing line
    • The details of the assertion with the expected result (1) and the actual result (Fraction(9, 10))

Remember, you can add extra information to the test output by adding the -v flag to the python -m unittest command.

Running Your Tests From PyCharm

If you’re using the PyCharm IDE, you can run unittest or pytest by following these steps:

  1. In the Project tool window, select the tests directory.
  2. On the context menu, choose the run command for unittest. For example, choose Run ‘Unittests in my Tests…’.

This will execute unittest in a test window and give you the results within PyCharm:

PyCharm Testing

More information is available on the PyCharm Website.

Running Your Tests From Visual Studio Code

If you’re using the Microsoft Visual Studio Code IDE, support for unittest, nose, and pytest execution is built into the Python plugin.

If you have the Python plugin installed, you can set up the configuration of your tests by opening the Command Palette with Ctrl+Shift+P and typing “Python test”. You will see a range of options:

Visual Studio Code Step 1

Choose Debug All Unit Tests, and VSCode will then raise a prompt to configure the test framework. Click on the cog to select the test runner (unittest) and the home directory (.).

Once this is set up, you will see the status of your tests at the bottom of the window, and you can quickly access the test logs and run the tests again by clicking on these icons:

Visual Studio Code Step 2

This shows the tests are executing, but some of them are failing.

Testing for Web Frameworks Like Django and Flask

If you’re writing tests for a web application using one of the popular frameworks like Django or Flask, there are some important differences in the way you write and run the tests.

Why They’re Different From Other Applications

Think of all the code you’re going to be testing in a web application. The routes, views, and models all require lots of imports and knowledge about the frameworks being used.

This is similar to the car test at the beginning of the tutorial: you have to start up the car’s computer before you can run a simple test like checking the lights.

Django and Flask both make this easy for you by providing a test framework based on unittest. You can continue writing tests in the way you’ve been learning but execute them slightly differently.

How to Use the Django Test Runner

The Django startapp template will have created a tests.py file inside your application directory. If you don’t have that already, you can create it with the following contents:

from django.test import TestCase

class MyTestCase(TestCase):
    # Your test methods go here, for example:
    def test_example(self):
        self.assertTrue(True)

The major difference with the examples so far is that you need to inherit from the django.test.TestCase instead of unittest.TestCase. These classes have the same API, but the Django TestCase class sets up all the required state to test.

To execute your test suite, instead of using unittest at the command line, you use manage.py test:

$ python manage.py test

If you want multiple test files, replace tests.py with a folder called tests, insert an empty file inside called __init__.py, and create your test_*.py files. Django will discover and execute these.

More information is available at the Django Documentation Website.

How to Use unittest and Flask

Flask requires that the app be imported and then set in test mode. You can instantiate a test client and use the test client to make requests to any routes in your application.

All of the test client instantiation is done in the setUp method of your test case. In the following example, my_app is the name of the application. Don’t worry if you don’t know what setUp does. You’ll learn about that in the More Advanced Testing Scenarios section.

The code within your test file should look like this:

import my_app
import unittest


class MyTestCase(unittest.TestCase):

    def setUp(self):
        my_app.app.testing = True
        self.app = my_app.app.test_client()

    def test_home(self):
        result = self.app.get('/')
        # Make your assertions

You can then execute the test cases using the python -m unittest discover command.

More information is available at the Flask Documentation Website.

More Advanced Testing Scenarios

Before you step into creating tests for your application, remember the three basic steps of every test:

  1. Create your inputs
  2. Execute the code, capturing the output
  3. Compare the output with an expected result

It’s not always as easy as creating a static value for the input like a string or a number. Sometimes, your application will require an instance of a class or a context. What do you do then?

The data that you create as an input is known as a fixture. It’s common practice to create fixtures and reuse them.

If you’re running the same test and passing different values each time and expecting the same result, this is known as parameterization.
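unittest supports a lightweight form of parameterization out of the box with .subTest(): each set of values runs as its own sub-test, and failures are reported individually with the parameters that caused them. This sketch uses Python's built-in sum() rather than the my_sum version:

```python
import unittest

class TestSumParameterized(unittest.TestCase):
    def test_known_sums(self):
        # Each (data, expected) pair runs as a separately reported sub-test
        cases = [
            ([1, 2, 3], 6),
            ((1, 2, 2), 5),
            ([0.5, 0.5, 1.0], 2.0),
        ]
        for data, expected in cases:
            with self.subTest(data=data):
                self.assertEqual(sum(data), expected)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSumParameterized)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If one pair fails, the other sub-tests still execute, which makes it easy to see exactly which inputs break.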

Handling Expected Failures

Earlier, when you made a list of scenarios to test sum(), a question came up: What happens when you provide it with a bad value, such as a single integer or a string?

In this case, you would expect sum() to throw an error. When it does throw an error, that would cause the test to fail.

There’s a special way to handle expected errors. You can use .assertRaises() as a context-manager, then inside the with block execute the test steps:

import unittest

from fractions import Fraction
from my_sum import sum


class TestSum(unittest.TestCase):
    def test_list_int(self):
        """
        Test that it can sum a list of integers
        """
        data = [1, 2, 3]
        result = sum(data)
        self.assertEqual(result, 6)

    def test_list_fraction(self):
        """
        Test that it can sum a list of fractions
        """
        data = [Fraction(1, 4), Fraction(1, 4), Fraction(2, 5)]
        result = sum(data)
        self.assertEqual(result, 1)

    def test_bad_type(self):
        data = "banana"
        with self.assertRaises(TypeError):
            result = sum(data)

if __name__ == '__main__':
    unittest.main()

This test case will now only pass if sum(data) raises a TypeError. You can replace TypeError with any exception type you choose.
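.assertRaises() can also be called without a with block by passing the callable and its arguments directly. A minimal sketch of that form, using Python's built-in sum() so it runs on its own:

```python
import unittest

class TestBadType(unittest.TestCase):
    def test_bad_type_callable_form(self):
        # Equivalent to the context-manager form: sum("banana")
        # tries to add 0 + 'b' and raises TypeError
        self.assertRaises(TypeError, sum, "banana")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestBadType)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The context-manager form is usually preferred because it pinpoints exactly which statement is expected to raise.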

Isolating Behaviors in Your Application

Earlier in the tutorial, you learned what a side effect is. Side effects make unit testing harder since, each time a test is run, it might give a different result, or even worse, one test could impact the state of the application and cause another test to fail!

Testing Side Effects

There are some simple techniques you can use to test parts of your application that have many side effects: refactor the code so it follows the Single Responsibility Principle, mock out any method or function calls to remove the side effect, or move that piece of code into integration tests, where side effects are expected.

If you’re not familiar with mocking, see Python CLI Testing for some great examples.
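As a taste of what mocking looks like, here is a sketch using unittest.mock from the standard library. The fetch_and_count() function and its client argument are hypothetical, standing in for code that would normally hit a real network service:

```python
import unittest
from unittest import mock

def fetch_and_count(client):
    # In production this would call a real API (a side effect
    # we want to keep out of the unit test)
    return len(client.get("/customers"))

class TestFetchAndCount(unittest.TestCase):
    def test_count_with_mocked_client(self):
        # The mock replaces the real client, so no network call happens
        fake_client = mock.Mock()
        fake_client.get.return_value = ["org-a", "org-b"]
        self.assertEqual(fetch_and_count(fake_client), 2)
        # The mock also records how it was called
        fake_client.get.assert_called_once_with("/customers")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFetchAndCount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The test both controls the side effect (by substituting a canned return value) and verifies the interaction (by asserting on how the mock was called).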

Writing Integration Tests

So far, you’ve been learning mainly about unit testing. Unit testing is a great way to build predictable and stable code. But at the end of the day, your application needs to work when it starts!

Integration testing is the testing of multiple components of the application to check that they work together. Integration testing might require acting like a consumer or user of the application by calling an HTTP REST API, calling a Python API, calling a web service, or running a command-line process.

Each of these types of integration tests can be written in the same way as a unit test, following the Input, Execute, and Assert pattern. The most significant difference is that integration tests are checking more components at once and therefore will have more side effects than a unit test. Also, integration tests will require more fixtures to be in place, like a database, a network socket, or a configuration file.

This is why it’s good practice to separate your unit tests and your integration tests. The creation of fixtures required for an integration like a test database and the test cases themselves often take a lot longer to execute than unit tests, so you may only want to run integration tests before you push to production instead of once on every commit.

A simple way to separate unit and integration tests is simply to put them in different folders:

project/
│
├── my_app/
│   └── __init__.py
│
└── tests/
    |
    ├── unit/
    |   ├── __init__.py
    |   └── test_sum.py
    |
    └── integration/
        ├── __init__.py
        └── test_integration.py

There are many ways to execute only a select group of tests. The start-directory flag, -s, can be added to unittest discover with the path containing the tests:

$ python -m unittest discover -s tests/integration

unittest will give you the results of all the tests within the tests/integration directory.

Testing Data-Driven Applications

Many integration tests will require backend data like a database to exist with certain values. For example, you might want to have a test that checks that the application displays correctly with more than 100 customers in the database, or the order page works even if the product names are displayed in Japanese.

These types of integration tests will depend on different test fixtures to make sure they are repeatable and predictable.

A good technique to use is to store the test data in a folder within your integration testing folder called fixtures to indicate that it contains test data. Then, within your tests, you can load the data and run the test.

Here’s an example of that structure if the data consisted of JSON files:

project/
│
├── my_app/
│   └── __init__.py
│
└── tests/
    |
    ├── unit/
    |   ├── __init__.py
    |   └── test_sum.py
    |
    └── integration/
        |
        ├── fixtures/
        |   ├── test_basic.json
        |   └── test_complex.json
        |
        ├── __init__.py
        └── test_integration.py

Within your test case, you can use the .setUp() method to load the test data from a fixture file in a known path and execute many tests against that test data. Remember you can have multiple test cases in a single Python file, and the unittest discovery will execute both. You can have one test case for each set of test data:

import unittest


class TestBasic(unittest.TestCase):
    def setUp(self):
        # Load test data
        self.app = App(database='fixtures/test_basic.json')

    def test_customer_count(self):
        self.assertEqual(len(self.app.customers), 100)

    def test_existence_of_customer(self):
        customer = self.app.get_customer(id=10)
        self.assertEqual(customer.name, "Org XYZ")
        self.assertEqual(customer.address, "10 Red Road, Reading")


class TestComplexData(unittest.TestCase):
    def setUp(self):
        # load test data
        self.app = App(database='fixtures/test_complex.json')

    def test_customer_count(self):
        self.assertEqual(len(self.app.customers), 10000)

    def test_existence_of_customer(self):
        customer = self.app.get_customer(id=9999)
        self.assertEqual(customer.name, u"バナナ")
        self.assertEqual(customer.address, "10 Red Road, Akihabara, Tokyo")

if __name__ == '__main__':
    unittest.main()
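
The App class above is hypothetical. For a concrete, self-contained flavor of the same pattern, this sketch loads a JSON fixture in .setUp() with the standard json module. The fixture is written to a temporary directory here only so the example runs anywhere; in a real suite it would live under tests/integration/fixtures/:

```python
import json
import pathlib
import tempfile
import unittest

class TestBasicFixture(unittest.TestCase):
    def setUp(self):
        # Stand-in for tests/integration/fixtures/test_basic.json
        fixture = pathlib.Path(tempfile.mkdtemp()) / "test_basic.json"
        fixture.write_text(json.dumps(
            {"customers": [{"id": 10, "name": "Org XYZ"}]}
        ))
        # Every test in this case starts from the same known data
        self.data = json.loads(fixture.read_text())

    def test_customer_count(self):
        self.assertEqual(len(self.data["customers"]), 1)

    def test_existence_of_customer(self):
        customer = self.data["customers"][0]
        self.assertEqual(customer["name"], "Org XYZ")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestBasicFixture)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because .setUp() runs before every test method, each test sees a fresh copy of the fixture data, keeping the tests repeatable and independent.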

If your application depends on data from a remote location, like a remote API, you’ll want to ensure your tests are repeatable. Having your tests fail because the API is offline or there is a connectivity issue could slow down development. In these types of situations, it is best practice to store remote fixtures locally so they can be recalled and sent to the application.

The requests library has a complementary package called responses that gives you ways to create response fixtures and save them in your test folders. Find out more on their GitHub Page.

Testing in Multiple Environments

So far, you’ve been testing against a single version of Python using a virtual environment with a specific set of dependencies. You might want to check that your application works on multiple versions of Python, or multiple versions of a package. Tox is an application that automates testing in multiple environments.

Installing Tox

Tox is available on PyPI as a package to install via pip:

$ pip install tox

Now that you have Tox installed, it needs to be configured.

Configuring Tox for Your Dependencies

Tox is configured via a configuration file in your project directory. The Tox configuration file contains the command to run in order to execute tests, any additional packages required before executing, and the target Python versions to test against.

Instead of having to learn the Tox configuration syntax, you can get a head start by running the quickstart application:

$ tox-quickstart

The quickstart tool will ask you a series of questions and create a file similar to the following in tox.ini:

[tox]
envlist = py27, py36

[testenv]
deps =

commands =
    python -m unittest discover

Before you can run Tox, it requires that you have a setup.py file in your application folder containing the steps to install your package. If you don’t have one, you can follow this guide on how to create a setup.py before you continue.

Alternatively, if your project is not for distribution on PyPI, you can skip this requirement by adding the following line in the tox.ini file under the [tox] heading:

[tox]
envlist = py27, py36
skipsdist=True

If you don’t create a setup.py, and your application has some dependencies from PyPI, you’ll need to specify those on a number of lines under the [testenv] section. For example, Django would require the following:

[testenv]
deps = django

Once you have completed that stage, you’re ready to run the tests.

You can now execute Tox, and it will create two virtual environments: one for Python 2.7 and one for Python 3.6. The Tox directory is called .tox/. Within the .tox/ directory, Tox will execute python -m unittest discover against each virtual environment.

You can run this process by calling Tox at the command line:

$ tox

Tox will output the results of your tests against each environment. The first time it runs, Tox takes a little bit of time to create the virtual environments, but once it has, the second execution will be a lot faster.

Executing Tox

The output of Tox is quite straightforward. It creates an environment for each version, installs your dependencies, and then runs the test commands.

There are some additional command line options that are great to remember.

Run only a single environment, such as Python 3.6:

$ tox -e py36

Recreate the virtual environments, in case your dependencies have changed or site-packages is corrupt:

$ tox -r

Run Tox with less verbose output:

$ tox -q

Run Tox with more verbose output:

$ tox -v

More information on Tox can be found at the Tox Documentation Website.

Automating the Execution of Your Tests

So far, you have been executing the tests manually by running a command. There are some tools for executing tests automatically when you make changes and commit them to a source-control repository like Git. Automated testing tools are often known as CI/CD tools, which stands for “Continuous Integration/Continuous Deployment.” They can run your tests, compile and publish any applications, and even deploy them into production.

Travis CI is one of many available CI (Continuous Integration) services available.

Travis CI works nicely with Python, and now that you’ve created all these tests, you can automate the execution of them in the cloud! Travis CI is free for any open-source projects on GitHub and GitLab and is available for a charge for private projects.

To get started, login to the website and authenticate with your GitHub or GitLab credentials. Then create a file called .travis.yml with the following contents:

language: python
python:
  - "2.7"
  - "3.7"
install:
  - pip install -r requirements.txt
script:
  - python -m unittest discover

This configuration instructs Travis CI to:

  1. Test against Python 2.7 and 3.7 (You can replace those versions with any you choose.)
  2. Install all the packages you list in requirements.txt (You should remove this section if you don’t have any dependencies.)
  3. Run python -m unittest discover to run the tests

Once you have committed and pushed this file, Travis CI will run these commands every time you push to your remote Git repository. You can check out the results on their website.

What’s Next

Now that you’ve learned how to create tests, execute them, include them in your project, and even execute them automatically, there are a few advanced techniques you might find handy as your test library grows.

Introducing Linters Into Your Application

Tox and Travis CI have configuration for a test command. The test command you have been using throughout this tutorial is python -m unittest discover.

You can provide one or many commands in all of these tools, and this option is there to enable you to add more tools that improve the quality of your application.

One such type of application is called a linter. A linter will look at your code and comment on it. It could give you tips about mistakes you’ve made, correct trailing spaces, and even predict bugs you may have introduced.

For more information on linters, read the Python Code Quality tutorial.

Passive Linting With flake8

A popular linter that comments on the style of your code in relation to the PEP 8 specification is flake8.

You can install flake8 using pip:

$ pip install flake8

You can then run flake8 over a single file, a folder, or a pattern:

$ flake8 test.py
test.py:6:1: E302 expected 2 blank lines, found 1
test.py:23:1: E305 expected 2 blank lines after class or function definition, found 1
test.py:24:20: W292 no newline at end of file

You will see a list of errors and warnings for your code that flake8 has found.

flake8 is configurable on the command line or inside a configuration file in your project. If you wanted to ignore certain rules, like E305 shown above, you can set them in the configuration. flake8 will inspect a .flake8 file in the project folder or a setup.cfg file. If you decided to use Tox, you can put the flake8 configuration section inside tox.ini.

This example ignores the .git and __pycache__ directories as well as the E305 rule. Also, it sets the max line length to 90 instead of 80 characters. You will likely find that the default constraint of 79 characters for line-width is very limiting for tests, as they contain long method names, string literals with test values, and other pieces of data that can be longer. It is common to set the line length for tests to up to 120 characters:

[flake8]
ignore = E305
exclude = .git,__pycache__
max-line-length = 90

Alternatively, you can provide these options on the command line:

$ flake8 --ignore E305 --exclude .git,__pycache__ --max-line-length=90

A full list of configuration options is available on the Documentation Website.

You can now add flake8 to your CI configuration. For Travis CI, this would look as follows:

matrix:
  include:
    - python: "2.7"
      script: "flake8"

Travis will read the configuration in .flake8 and fail the build if any linting errors occur. Be sure to add the flake8 dependency to your requirements.txt file.

Aggressive Linting With a Code Formatter

flake8 is a passive linter: it recommends changes, but you have to go and change the code. A more aggressive approach is a code formatter. Code formatters will change your code automatically to meet a collection of style and layout practices.

black is a very unforgiving formatter. It doesn’t have any configuration options, and it has a very specific style. This makes it great as a drop-in tool to put in your test pipeline.

Note: black requires Python 3.6+.

You can install black via pip:

$ pip install black

Then to run black at the command line, provide the file or directory you want to format:

$ black test.py

Keeping Your Test Code Clean

When writing tests, you may find that you end up copying and pasting code a lot more than you would in regular applications. Tests can be very repetitive at times, but that is by no means a reason to leave your code sloppy and hard to maintain.

Over time, you will develop a lot of technical debt in your test code, and if you have significant changes to your application that require changes to your tests, it can be a more cumbersome task than necessary because of the way you structured them.

Try to follow the DRY principle when writing tests: Don’t Repeat Yourself.

Test fixtures and functions are a great way to produce test code that is easier to maintain. Also, readability counts. Consider deploying a linting tool like flake8 over your test code:

$ flake8 --max-line-length=120 tests/

Testing for Performance Degradation Between Changes

There are many ways to benchmark code in Python. The standard library provides the timeit module, which can time functions a number of times and give you the distribution. This example will execute test() 100 times and print() the output:

def test():
    # ... your code; a trivial stand-in so the example runs
    return sum(range(100))

if __name__ == '__main__':
    import timeit
    print(timeit.timeit("test()", setup="from __main__ import test", number=100))

Another option, if you decided to use pytest as a test runner, is the pytest-benchmark plugin. This provides a pytest fixture called benchmark. You can pass benchmark() any callable, and it will log the timing of the callable to the results of pytest.

You can install pytest-benchmark from PyPI using pip:

$ pip install pytest-benchmark

Then, you can add a test that uses the fixture and passes the callable to be executed:

def test_my_function(benchmark):
    # benchmark() calls the function repeatedly and records the timings
    result = benchmark(sum, [1, 2, 3])
    assert result == 6

Execution of pytest will now give you benchmark results:

Pytest benchmark screenshot

More information is available at the Documentation Website.

Testing for Security Flaws in Your Application

Another test you will want to run on your application is checking for common security mistakes or vulnerabilities.

You can install bandit from PyPI using pip:

$ pip install bandit

You can then pass the name of your application module with the -r flag, and it will give you a summary:

$ bandit -r my_sum
[main]  INFO    profile include tests: None
[main]  INFO    profile exclude tests: None
[main]  INFO    cli include tests: None
[main]  INFO    cli exclude tests: None
[main]  INFO    running on Python 3.5.2
Run started:2018-10-08 00:35:02.669550

Test results:
        No issues identified.

Code scanned:
        Total lines of code: 5
        Total lines skipped (#nosec): 0

Run metrics:
        Total issues (by severity):
                Undefined: 0.0
                Low: 0.0
                Medium: 0.0
                High: 0.0
        Total issues (by confidence):
                Undefined: 0.0
                Low: 0.0
                Medium: 0.0
                High: 0.0
Files skipped (0):

As with flake8, the rules that bandit flags are configurable, and if there are any you wish to ignore, you can add the following section to your setup.cfg file with the options:

[bandit]
exclude: /test
tests: B101,B102,B301

More details are available at the GitHub Website.

Conclusion

Python has made testing accessible by building in the commands and libraries you need to validate that your applications work as designed. Getting started with testing in Python needn’t be complicated: you can use unittest and write small, maintainable methods to validate your code.

As you learn more about testing and your application grows, you can consider switching to one of the other test frameworks, like pytest, and start to leverage more advanced features.

Thank you for reading. I hope you have a bug-free future with Python!



October 22, 2018 02:00 PM UTC


Graham Dumpleton

Packaging mod_wsgi into a zipapp using shiv.

At the recent DjangoCon US conference, Peter Baumgartner presented a talk titled Containerless Django: Deploying without Docker. In the talk Peter described what a zipapp (executable Python zip archive) is and how these could be created using the shiv tool, the aim being to be able to create a single file executable for a complete Python application, including all its Python package dependencies.
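The archives shiv produces build on the standard library's zipapp format; as a rough sketch of the underlying idea (my illustration, not from the talk, and without shiv's bundling of package dependencies):

```python
import os
import subprocess
import sys
import tempfile
import zipapp

with tempfile.TemporaryDirectory() as tmp:
    # a minimal "application": a directory with a __main__.py
    src = os.path.join(tmp, "myapp")
    os.mkdir(src)
    with open(os.path.join(src, "__main__.py"), "w") as f:
        f.write("print('hello from a zipapp')\n")
    # pack the directory into a single executable archive
    target = os.path.join(tmp, "myapp.pyz")
    zipapp.create_archive(src, target)
    # the archive runs as one file
    out = subprocess.run([sys.executable, target],
                         capture_output=True, text=True)
    print(out.stdout.strip())
```

shiv adds the missing piece for real applications: it installs the project's dependencies into the archive so the single file is self-contained.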

October 22, 2018 10:40 AM UTC


Python Software Foundation

2018 PSF Recurring Giving Campaign

The PSF is launching an end-of-year fundraising drive to build a sustainable community of supporters. Our goal is to raise $30,000! You can help by signing up to give monthly or if you’re already a supporting member (Thank You!!), by checking the box to renew your membership automatically.

The drive begins October 22 and concludes November 21, 2018.

Your donations have IMPACT

Over $118,543 was awarded in financial aid to 143 PyCon attendees in 2018.
$240,000 has been paid in grants from January through September 2018 to recipients in 45 different countries.

Some examples of how your donation dollars are spent:


This work can’t be done without the generous financial support that people like you provide.

It's easy to donate - 

 More details on contributing can be found on the 2018 PSF Recurring Giving Campaign page.

Thank you to everyone who has contributed to our past fundraisers! Your support is what makes the PSF possible and is greatly appreciated by the Python community.

If you would like to share the news about the PSF’s Recurring Giving Campaign, please share a tweet via this tweet button or by copying the text in the following:


Contribute to our Recurring Giving Campaign & help us reach our goal of $30K. The PSF is a non-profit organization entirely supported by its sponsors, members & the public. https://www.python.org/psf/donations/2018-q4-drive/ #idonatedtothepsf #ijoinedthepsf

October 22, 2018 10:20 AM UTC


Mike Driscoll

PyDev of the Week: Philip Guo

This week we welcome Philip Guo (@pgbovine) as our PyDev of the Week! Philip is the creator of the popular Python Tutor website and a professor at UC San Diego in the cognitive science and computer science department. Let’s take some time to get to know Philip!

Can you tell us a little about yourself (hobbies, education, etc):

I am an assistant professor of cognitive science and computer science at UC San Diego. My research and teaching revolve around the topic of how to greatly expand the population of people around the world who can learn programming skills. There’s a lot of great people working on training the next generation of software developers, but I’m also interested in how we can apply programming to many other fields including the physical sciences, social sciences, design, arts, and humanities. Outside of my normal work, I used to write a lot of articles on my website but lately I’ve been trying to stay off the computer in my off-time, so I’ve turned to recording vlogs, podcasts, and other audio/video content that don’t require me to be glued to the computer.

Why did you start using Python?

Wow, that made me think way back … I first started using Python around 2004 (I believe it was Python 2.2 or 2.4 back then) to write small programs to organize my personal photo collection and create static photo galleries for my website. We didn’t have a term for this back then, but now I think people call it a “static site generator” — the idea being that you would write scripts offline to pre-process and organize your content, then generate the appropriate HTML pages. Those HTML pages can then be uploaded (via old-school FTP) to a web server and then visible on the web. The nice thing about static site generators (especially back in those days) was that you could host your website anywhere since it was just a bunch of HTML and image files; there was no need for the server to support any kind of scripting. Python made me fall in love with programming because for the first time I could write code to do something tangible and immediately useful to me. Before that, I saw code mostly as either an academic subject that I learned in classes or a tool that I used for jobs.

What other programming languages do you know and which is your favorite?

Besides Python, I probably know JavaScript the next best since most of my day-to-day programming work is building web apps, which requires JavaScript for the frontend. I’ve been doing JavaScript for longer than Python, starting around 1997 or so, but that was *really* simple copy-and-paste from books sort of stuff. I didn’t actually understand what was going on with the goofy JavaScript animation code that I copied into my very first websites over 20 years ago (wow, time flies!). I’ve been steadily tracking the evolution of the JavaScript ecosystem throughout the past 20 years, and am equal parts inspired and horrified by it.

I also know C pretty well in theory, but in practice I haven’t had to use it much for the past decade. I used to write a ton of low-level source and binary code manipulation tools for my software analysis research back in the day, so I had to wrestle with the innards of the C and Linux world more than I’d want to remember. I think it’s immensely important for people to at least have a decent understanding of C and its compiler toolchain ecosystem (whether on Windows or UNIX-like systems) since many higher-level programming languages are implemented in C.

I’ll say Python is my favorite since I’m doing this interview 🙂 But really I try not to get too dogmatic about particular programming languages. In terms of what I’d turn to for most tasks, I’d pick up Python just because I know it the best.

This article is a retrospective of my history of learning programming: http://pgbovine.net/how-i-learned-programming.htm

What projects are you working on now?

Now that I manage a research group of anywhere from 4 to 6 graduate students at a time, the true answer to this question is: the union of whatever my students are working on, plus whatever my external collaborators work on. I’m mostly herding cats nowadays.

If I had to sum up what we collectively work on, I’d say it’s: studying and building scalable ways to help people learn computer programming and data science. I know that’s an unsatisfying answer, so check out my publications page for details. It shows the outputs of all of our past projects, along with supplemental resources and various summaries: http://pgbovine.net/publications.htm

In addition to advising on research projects, I try hard to carve out time to continue working on Python Tutor, which is probably how most people in the Python community know about me. Very briefly, it’s a web-based tool that helps people overcome a fundamental barrier to learning programming: understanding what happens as the computer runs each line of code. Using Python Tutor, you can write code in your web browser, see it visualized step by step, and get live help from volunteers. So far, over 3.5 million people in over 180 countries have used Python Tutor to visualize over 50 million pieces of code, often as a supplement to textbooks, lectures, and online tutorials. Finally, despite its Pythonic name, it actually supports five other popular languages: Java, JavaScript, Ruby, C, and C++.

Which Python libraries are your favorite (core or 3rd party)?

I haven’t thought too much about this. A meta-answer to this question is to just install Anaconda, which makes tons of popular 3rd-party libraries available on your machine without going through the enormous pain of installing dependencies (especially on Windows). Another meta-answer is that I should just parse all of my old Python code (using the ast module!), then make a chart of all of the modules I’ve imported, and see how frequently I’ve used each one.
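That meta-answer can be sketched in a few lines of ast (my illustration, using an inline sample instead of real files):

```python
import ast
from collections import Counter

def count_imports(source):
    """Tally the top-level module names imported by a piece of Python source."""
    counts = Counter()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                counts[alias.name.split(".")[0]] += 1
        elif isinstance(node, ast.ImportFrom) and node.module:
            # node.module is None for relative imports like "from . import x"
            counts[node.module.split(".")[0]] += 1
    return counts

sample = "import os\nimport sys\nfrom os.path import join\n"
print(count_imports(sample))
```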

Thinking about it a bit more, maybe I’d say the bdb module from the standard library.

I wouldn’t say it’s my “favorite” as in “wow I love bdb so much!” but rather it’s an essential part of Python Tutor since that’s what enables me to hook into every step of code execution to visualize its run-time state. Without bdb, there would be no Python Tutor. So I suppose it’s my favorite since it enabled me to create my most popular project to date. So yeah, long live bdb!
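To make the bdb remark concrete, here is a minimal sketch (my illustration, not Python Tutor's code) of subclassing bdb.Bdb to observe every executed line:

```python
import bdb

class LineTracer(bdb.Bdb):
    """Record the line number of each line executed under the tracer."""
    def __init__(self):
        super().__init__()
        self.lines = []

    def user_line(self, frame):
        # bdb calls this hook before each line of the traced code runs
        self.lines.append(frame.f_lineno)

tracer = LineTracer()
tracer.run("x = 1\ny = x + 1\nz = y * 2\n")
print(tracer.lines)
```

A visualizer like Python Tutor can capture the full run-time state (locals, the heap, the call stack) from the frame object at each of these steps.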

Is there anything else you’d like to say?

Stepping away from the computer can increase your IQ by 50 points (scientifically proven … ok, maybe not). Especially when writing code, it’s tempting to just keep hammering away at a bug or feature stubbornly and twiddling bits because you think that the very next run will be the correct one, and then you can dig yourself deeper and deeper into a hole of fatigue. Seriously, just step away from the computer for a while; take a walk, go get some exercise, watch silly YouTube videos on your phone, go out to run some errands. And I guarantee that when you come back refreshed, you’ll suddenly see the solution to your coding problem so clearly that you couldn’t believe you didn’t see it before.

Also, on a related note, as I mentioned earlier, I’ve been trying to minimize my computer time while I’m not working. I’ve been a lot happier and more relaxed when I associate my computer purely with work, and when I’m not working, I don’t go near it. (I still have my phone where I can browse the web, watch YouTube, etc., so it’s not like I’m totally disconnected.)

October 22, 2018 05:05 AM UTC


Full Stack Python

Fresh Tutorials on Full Stack Python

There are a bunch of new tutorials on Full Stack Python that were written since the last time I sent out an email newsletter. These range from getting started with some popular open source projects to integrating third party APIs to build authentication into Flask applications:

Got questions or comments about  Full Stack Python? Send me an email or  submit an issue ticket on GitHub  to let me know how to improve the site as I continue to fill in the table of contents  with new pages and  new tutorials.

October 22, 2018 04:00 AM UTC


Podcast.__init__

Of Checklists, Ethics, and Data with Emily Miller and Peter Bull

As data science becomes more widespread and has a bigger impact on the lives of people, it is important that those projects and products are built with a conscious consideration of ethics. Keeping ethical principles in mind throughout the lifecycle of a data project helps to reduce the overall effort of preventing negative outcomes from the use of the final product. Emily Miller and Peter Bull of Driven Data have created Deon to improve the communication and conversation around ethics among and between data teams. It is a Python project that generates a checklist of common concerns for data oriented projects at the various stages of the lifecycle where they should be considered. In this episode they discuss their motivation for creating the project, the challenges and benefits of maintaining such a checklist, and how you can start using it today.

Summary


Preface

Interview

Keep In Touch

Picks

Links

The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA

October 22, 2018 02:11 AM UTC

October 21, 2018


"Morphex's Blogologue"

A surveillance app (Python tree here I come!)

So, lately I've been taking it very easy on the activities front, since I've had a court-related matter to get to see the kids.

So I've had some ideas and urges to do technical stuff, artsy stuff etc. - and today I wrote myself a script that will run on a PC or a demo board like the Raspberry Pi, Orange Pi etc., take pictures on a webcam, merge the pictures taken over time into a video, and mail them.

I started with Python (2), but figured that now is the time to take the plunge to Python 3 and don't look back.

I have to say, Python 3 isn't difficult, there are some changes in conventions etc. but nothing big.

So I was able to be productive in Python 3 straight away, and wrote this script:

https://github.com/morphex/surveil

Which will do what I mentioned above. And here's the source code tree as it was when I wrote this post:

https://github.com/morphex/surveil/tree/1cb5ceed7657e2635cca...

This video was generated from this script:

http://blogologue.com/surveil_out.webm

And impressively enough, each image the video was generated from was 55-78 KB, while the entire video of 18 images was ~120 KB. I guess VP9 is effective at compressing; then again, a lot of the objects in the video are static.
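For reference, here is a sketch of the kind of ffmpeg invocation that merges numbered images into a VP9 webm (my guess at a typical command line, not the exact one surveil uses):

```python
def build_merge_command(pattern="image_%03d.jpg", fps=1, out="out.webm"):
    # -framerate sets the input rate; libvpx-vp9 selects the VP9 encoder
    return ["ffmpeg", "-y", "-framerate", str(fps), "-i", pattern,
            "-c:v", "libvpx-vp9", out]

cmd = build_merge_command()
print(" ".join(cmd))
# with ffmpeg installed: subprocess.run(cmd, check=True)
```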

October 21, 2018 08:50 PM UTC


Django Weblog

DjangoCon Europe 2019 Announcement & Call for volunteers

We are happy to announce that the 2019 DjangoCon Europe will be held in Copenhagen, Denmark. An early announcement has been posted on https://2019.djangocon.eu/ and more details will follow. We are a small local group who are eager to engage with more people to make this happen!

There is a lot to do, but it's very much worth it – DjangoCon Europe is an extremely friendly, open, inclusive, and informative (for beginners and advanced users alike) conference.

Here are some themes and examples of activities and responsibilities that we seek help with:

During a November kick-off meeting in Copenhagen (TBA), we will finalize the plan for teams and tentative time schedules for all the preparations, not least our internal communication tools.

Join us regardless of your prior experience: this is also an opportunity to learn! In other words, you don't have to be an expert to join in. Neither are we experts in hosting such a big event... yet!

Your location prior to the event is not significant (we can do all things that need to be done in Copenhagen itself) – the only important thing is that you have the energy and free time to help organize a wonderful DjangoCon Europe. All teams will be coordinating through online channels, even though some may also meet and work in Copenhagen if possible. The official language of all these prior activities will be English, as well as the conference itself.

Drop us a line and say hello: 2019@djangocon.eu

For general updates about the conference, follow @djangoconeurope on Twitter. To keep updated with our preparations in Copenhagen, the kick-off meeting and further physical and virtual organizing meetings, follow @djangocph or reach out to us on Freenode IRC #djangocph.

Emil Kjer, Benjamin Bach, Sarah Braun, Víðir Valberg Guðmundsson, Sean Powell, Thomas Steen Rasmussen

October 21, 2018 06:15 PM UTC


Tim Golden

Teaching with Mu

At the club I help to run I started this week what I hope will be a regular coding session using Python and, of course, the Mu editor. I'm competing directly with the regular football slot, so I'm not expecting to get many lads. Yesterday I had three Year 8s.

They were at different levels of beginnerhood and I had the usual range of problems getting everything up and running, starting from the fact that the room I'd hoped to use was occupied by older boys as an overflow changing room. Anyhow, we had a good session, created a simple fizz-buzz game and spotted a couple of minor bugs in Mu itself. With their parents' permission I hope to get them onto Github so they can follow the progress of "their" bugs.
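For anyone curious, a fizz-buzz along the lines of what we wrote (a sketch from memory, not the lads' exact code):

```python
def fizzbuzz(n):
    # multiples of both 3 and 5 first, then each on its own
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 16):
    print(fizzbuzz(i))
```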

[Photo: /images/coding-1033276_no_faces.thumbnail.jpg]

Really through force of circumstances we ended up doing a sort of paired programming. What you can see in the picture is everyone using the one computer (my laptop). We attached a screen and a keyboard and the lads on my right (your left) are using those, while the other lad is using the laptop itself. I'm glad to say that they quickly got used to the discipline of leaving the keyboard for the pilot only with the various co-pilots just chipping in to spot problems. It ended up being not unlike the Sabotage activity which @teknoteacher advocates.

In spite of the Raspberry Pi on the table, we're not using an RPi. The lad on my left is already a Linux user and the lad on my right was keen to understand how he could use Linux and whether a RPi could replace his somewhat underpowered laptop. Depending on how things look I might start using an RPi next week.

I'd hoped to try out the Kano Pixel Kit which we were given at the recent Mu Moot, but I didn't have enough time to prep up for that, so hopefully next time...

October 21, 2018 03:44 PM UTC


Catalin George Festila

Python 2.7 : Python geocoding without key.

Today I come with a simple example about geocoding.
I used the json and requests Python modules, with Python version 2.7.
For geocoding I use the service provided by the datasciencetoolkit.
You can use this service for free, and you don't need to register to get a key.
Let's see the Python script:

import requests
import json

url = u'http://www.datasciencetoolkit.org/maps/api/geocode/json'
par = {
    u'sensor': False,
    u'address': u'London'
}

my = requests.get(
    url,
    par
)
json_out = json.loads(my.text)

if json_out['status'] == 'OK':
    print([r['geometry']['location'] for r in json_out['results']])
I ran this script and checked the output against Google Maps to see if it works well.
This is the output, which agrees with the geocoding service:

October 21, 2018 02:45 PM UTC

Windows - test Django version 2.1.1.

I used Python version 3.6.4 to test the latest Django framework version.
Add your Python to the PATH environment variable under Windows.
Create your working folder:

C:\Python364>mkdir mywebsite
Go to the folder to install all you need:
C:\Python364>cd mywebsite
Create and activate a virtual environment using the venv module:
C:\Python364\mywebsite>python -m venv myvenv
C:\Python364\mywebsite>myvenv\Scripts\activate
(myvenv) C:\Python364\mywebsite>python -m pip install --upgrade pip
(myvenv) C:\Python364\mywebsite>pip3.6 install django
Collecting django
...
If you run this command again, you will see the installed version of Django:
(myvenv) C:\Python364\mywebsite>pip3.6 install django
Requirement already satisfied: django in c:\python364\mywebsite\myvenv\lib\
site-packages (2.1.1)
Requirement already satisfied: pytz in c:\python364\mywebsite\myvenv\lib\
site-packages (from django) (2018.5)
You need to run the django-admin command:
(myvenv) C:\Python364\mywebsite>cd myvenv
(myvenv) C:\Python364\mywebsite\myvenv>cd Scripts
(myvenv) C:\Python364\mywebsite\myvenv\Scripts>django-admin.exe startproject mysite
(myvenv) C:\Python364\mywebsite\myvenv\Scripts>dir my*
(myvenv) C:\Python364\mywebsite\myvenv\Scripts>cd mysite
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>
Make changes to the settings file:
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>cd mysite
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite\mysite>notepad settings.py
Change UTC timezone:
TIME_ZONE = 'Europe/Paris'
Change host:
ALLOWED_HOSTS = ['192.168.0.185','mysite.com']
The next step is to use these commands:
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite\mysite>cd ..
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>python manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying admin.0003_logentry_add_action_flag_choices... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
Applying sessions.0001_initial... OK
Let's try these steps in the browser:
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>python manage.py runserver
192.168.0.185:8080
Performing system checks...

System check identified no issues (0 silenced).
September 07, 2018 - 16:30:13
Django version 2.1.1, using settings 'mysite.settings'
Starting development server at http://192.168.0.185:8080/
Quit the server with CTRL-BREAK.
[07/Sep/2018 16:30:16] "GET / HTTP/1.1" 200 16348
[07/Sep/2018 16:30:21] "GET / HTTP/1.1" 200 16348
This is the result:

Let's start a Django application named myblog and add it to settings.py:
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>python manage.py startapp
myblog

(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>dir
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>cd mysite
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite\mysite>notepad settings.py
Find this section in settings.py and add 'myblog', like this:
# Application definition

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'myblog',
]
Let's change models.py in the myblog folder:
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite\mysite>cd ..
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>cd myblog
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite\myblog>notepad models.py
Add this source code:
from django.db import models
# Create your models here.
from django.utils import timezone
from django.contrib.auth.models import User

class Post(models.Model):
    author = models.ForeignKey(User, on_delete=models.PROTECT)
    title = models.CharField(max_length=200)
    text = models.TextField()
    create_date = models.DateTimeField(default=timezone.now)
    published_date = models.DateTimeField(blank=True, null=True)

    def publish(self):
        self.published_date = timezone.now()
        self.save()

    def __str__(self):
        return self.title
Run the manage.py commands makemigrations myblog and migrate myblog for the Post model:
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite\myblog>cd ..
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>python manage.py
makemigrations myblog
Migrations for 'myblog':
myblog\migrations\0001_initial.py
- Create model Post
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>python manage.py migrate
myblog
Operations to perform:
Apply all migrations: myblog
Running migrations:
Applying myblog.0001_initial... OK
Add this source code to admin.py in the myblog folder:
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>cd myblog
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite\myblog>notepad admin.py
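The post doesn't show the admin.py contents; a typical registration (an assumption on my part, not the author's exact code) would be:

```python
from django.contrib import admin
# Register your models here.
from .models import Post

admin.site.register(Post)
```

This makes the Post model editable in the admin interface used below.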
Let's test again:
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite\myblog>cd ..
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>python manage.py runserver
192.168.0.185:8080
Performing system checks...

System check identified no issues (0 silenced).
September 07, 2018 - 17:19:00
Django version 2.1.1, using settings 'mysite.settings'
Starting development server at http://192.168.0.185:8080/
Quit the server with CTRL-BREAK.
Check the admin interface by adding the word admin to the link, see: http://192.168.0.185:8080/admin

If you see some errors, these will be fixed later.
Let's make a superuser with this command:
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>python manage.py 
createsuperuser
Username (leave blank to use 'catafest'): catafest
Email address: catafest@yahoo.com
Password:
Password (again):
This password is too short. It must contain at least 8 characters.
Bypass password validation and create user anyway? [y/N]: y
Superuser created successfully.
Run again this command and log in with your user and password:
(myvenv) C:\Python364\mywebsite\myvenv\Scripts\mysite>python manage.py runserver
192.168.0.185:8080
This is the result with users and posts.

Click on Add button to add your post.
The result is this:

I haven't made settings for the URLs and views; those are left for you to change.

October 21, 2018 02:45 PM UTC

Python Qt5 - menu example.

This simple tutorial is about PyQt5 and a menu window example.
I have a similar example with Qt4 on this blog.
The main reason for this tutorial comes from the idea of simplicity and of reusing the source code from PyQt4 in PyQt5.
I do not know if there are significant changes to the Qt5 base UI, so it is good to check the official pages. Let's look at the example, with comments on specific source code lines:

# -*- coding: utf-8 -*-
"""
Created on Thu Apr 26 17:20:02 2018

@author: catafest
"""
import sys
from PyQt5.QtWidgets import QMainWindow, QAction, qApp, QApplication, QDesktopWidget
from PyQt5.QtGui import QIcon

class Example(QMainWindow):
    # init the Example class to draw the window application
    def __init__(self):
        super().__init__()
        self.initUI()

    # create the center def to place the window at the center of the screen
    def center(self):
        # geometry of the main window
        qr = self.frameGeometry()
        # center point of screen
        cp = QDesktopWidget().availableGeometry().center()
        # move rectangle's center point to screen's center point
        qr.moveCenter(cp)
        # top left of rectangle becomes top left of window, centering it
        self.move(qr.topLeft())

    # create the init UI to draw the application
    def initUI(self):
        # create the action for exiting the application, with shortcut and icon
        # you can add new actions for the File menu and any actions you need
        exitAct = QAction(QIcon('exit.png'), '&Exit', self)
        exitAct.setShortcut('Ctrl+Q')
        exitAct.setStatusTip('Exit application')
        exitAct.triggered.connect(qApp.quit)
        # create the status bar for the menu
        self.statusBar()
        # create the menu with the text File, and add the exit action
        # you can add many items to the menu, with actions for each item
        menubar = self.menuBar()
        fileMenu = menubar.addMenu('&File')
        fileMenu.addAction(exitAct)
        # resize the window application
        self.resize(640, 480)
        # draw on the center of the screen
        self.center()
        # add a title to the window application
        self.setWindowTitle('Simple menu')
        # show the application
        self.show()
    # close the UI class

if __name__ == '__main__':
    # create the application
    app = QApplication(sys.argv)
    # use the UI with the new class
    ex = Example()
    # run the UI
    sys.exit(app.exec_())
The result of this code is this:

October 21, 2018 02:36 PM UTC