
Planet Python

Last update: May 28, 2017 07:47 AM

May 27, 2017

Simple is Better Than Complex

How to Configure Mailgun To Send Emails in a Django Project

In this tutorial you will learn how to setup a Django project to send emails using the Mailgun service.

Previously I published a post on this blog about how to configure SendGrid to send emails. It’s a great service, but they no longer offer a free plan; nowadays there is just a 30-day free trial. So I thought I would share the whole email setup with a better option to get started.

Mailgun is great and super easy to set up. The first 10,000 emails you send are always free. The only downside is that if you don’t provide payment information (even if you are only going to use the first 10,000 free emails), there will be some limitations, such as the requirement to configure “Authorized Recipients” for custom domains, which pretty much makes it useless, unless you know beforehand the email addresses you will be sending to.

Anyway, let’s get started.

Initial Setup

Go to and create a free account. Sign in with your Mailgun account, click on Domains and then Add New Domain.

Add New Domain Button Screen Shot

I will setup the Mailgun service for a domain I own, “”. For the setup, it’s advised to use the “mg” subdomain, so you will need to provide the Domain Name like this:

From now on, always change with your domain name.

Add New Domain Screen Shot

Click on Add Domain.

Domain Verification & DNS

To perform the next steps, you will need to access the DNS provider of your domain. Normally it’s managed by the service/website where you registered your domain name. In my case, I registered the “” domain using Namecheap.

The next steps should be more or less the same everywhere. Try to find something that says “manage”, “DNS records”, “Advanced DNS” or something similar.

DNS Records For Sending

In the Mailgun website you will see the following instructions:

DNS Records For Sending Screen Shot

Add the DNS records accordingly in your DNS provider:

Namecheap Advanced DNS TXT Records Screen Shot

Namecheap Advanced DNS MX Records Screen Shot

DNS Records For Tracking

In a similar way, add now a CNAME for tracking opens, clicks etc. You will see those instructions:

DNS Records For Tracking Screen Shot

Follow them accordingly:

Namecheap Advanced DNS CNAME Record Screen Shot

Remember, the records shown in the previous screenshot are the ones you are supposed to add in your DNS provider!

Wait For Your Domain To Verify

Now it’s a matter of patience. We gotta wait for the DNS to propagate. Sometimes it can take an eternity to propagate. But my experience with brand new domains is that it usually happens very quickly. Wait like 5 minutes and give it a shot.

Click on Continue to Domain Overview:

Continue to Domain Overview Button Screen Shot

You will now see something like this:

Domain Overview Screen Shot

Click on Check DNS Records Now and see if Mailgun can verify your domain (remember, this process can take up to 48 hours!).

If the verification was successful, you will see the screen below:

Active Domain Screen Shot

Configuring Django to Send Emails

To configure your Django project, add the following parameters to your settings.py:

EMAIL_HOST_PASSWORD = 'mys3cr3tp4ssw0rd'

Note that we have some sensitive information here, such as the EMAIL_HOST_PASSWORD. You should not put it directly in your settings file or commit it to a public repository. Instead, use environment variables or use the Python library Python Decouple. I have also written a tutorial on how to use Python Decouple.
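
For reference, here is a fuller sketch of the SMTP parameters involved. The user, password and from-address below are placeholders of mine (anything at example.com is made up), while smtp.mailgun.org and port 587 are Mailgun's standard SMTP endpoint:

```python
# settings.py (sketch; credentials below are placeholders)
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.mailgun.org'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_USER = 'postmaster@mg.example.com'  # placeholder; from your Mailgun domain page
EMAIL_HOST_PASSWORD = 'mys3cr3tp4ssw0rd'       # placeholder; load from the environment instead
DEFAULT_FROM_EMAIL = 'noreply@example.com'     # placeholder
```
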

Here is a very simple snippet to send an email:

from django.core.mail import send_mail

send_mail('subject', 'body of the message', '', [''])

And here is how the email will look, displaying your domain properly:

Email Sent

If you need to keep reading about the basic email functions, check my previous article about email: How to Send Email in a Django App.

May 27, 2017 05:45 PM

Gocept Weblog

Move documentation from to

Today we migrated the documentation of zodb.py3migrate from to

This requires a directory – for this example I name it redir – containing a file named index.html with the following content:

 <meta http-equiv="refresh"
       content="0; url=" />
    <a href="">
      Redirect to
    </a>

To upload it to I called:

py27 upload_docs --upload-dir=redir

Now points to Read the Docs.

Credits: The HTML was taken from the Trello board of the avocado-framework.

May 27, 2017 12:38 PM

Catalin George Festila

Using Python for .NET: the clr python module - part 001.

Python for .NET is available as a source release and as a Windows installer for various versions of Python and the common language runtime from the Python for .NET website.
Let's install it under Windows 10.

C:\Python27\Scripts>pip install pythonnet
Collecting pythonnet
Downloading pythonnet-2.3.0-cp27-cp27m-win32.whl (58kB)
100% |################################| 61kB 740kB/s
Installing collected packages: pythonnet
Successfully installed pythonnet-2.3.0
Now I will show you how to use forms and buttons.
First, you need to put the Python code into script files and run them.
The first example is simple:
import clr
clr.AddReference("System.Windows.Forms")  # load the WinForms assembly

from System.Windows.Forms import Application, Form

class IForm(Form):

    def __init__(self):
        self.Text = 'Simple'
        self.Width = 640
        self.Height = 480

Application.Run(IForm())

The next example comes with one button and tooltips for the form and the button:
import clr
clr.AddReference("System.Windows.Forms")
clr.AddReference("System.Drawing")

from System.Windows.Forms import Application, Form
from System.Windows.Forms import Button, ToolTip
from System.Drawing import Point, Size

class IForm(Form):

    def __init__(self):
        self.Text = 'Tooltips'
        self.Size = Size(640, 480)

        tooltip = ToolTip()
        tooltip.SetToolTip(self, "This is a Form")

        button = Button()
        button.Parent = self
        button.Text = "Button"
        button.Location = Point(50, 70)

        tooltip.SetToolTip(button, "This is a Button")

Application.Run(IForm())

This is the result of this python script.

Another example is how to see the interfaces that are part of a .NET assembly:
>>> import System.Collections
>>> interfaces = [entry for entry in dir(System.Collections)
... if entry.startswith('I')]
>>> for entry in interfaces:
... print entry

May 27, 2017 10:35 AM

Talk Python to Me

#113 Dedicated AI chips and running old Python faster at Intel

Where do you run your Python code? No, not Python 3, Python 2, PyPy or the other implementations. I'm thinking waaaaay lower than that. This week we are talking about the actual chips that execute our code.

We catch up with David Stewart and meet Suresh Srinivas and Sergey Maidanov from Intel. We talk about how they are working at the silicon level to make even Python 2 run faster, and touch on dedicated AI chips that go beyond what is possible with GPU computation.

Links from the show: Intel Distribution for Python, Intel Commits To Nervana Roadmap For AI, David Stewart, David on Twitter (@davest), Suresh Srinivas, Sergey Maidanov.

Sponsored links: Hired, Talk Python Courses.

May 27, 2017 08:00 AM

Nigel Babu

Pycon Pune 2017

I haven’t attended a Pycon since 2013. Now that I have started writing this post, I’ve realized it’s been nearly 4 years since then, even though Python is the language I use the most. The last Pycon was a great place to meet people and make friends. Among others, I recall clearly that I met Sankarshan, my current manager, for the first time there. Pycon Pune is also the first time I’m speaking at a single track event. There’s something scary about so many people paying attention to you and making sure they’re not bored.

The venue for the event was gorgeous (as evidenced by the group picture that nearly looks photoshopped!) and the event was well organized, I have to say. My only critical feedback is the lack of a space outside of the main conference area for a hallway track. The auditorium had air conditioning and everyone went in thanks to it. If we had a little bit of space with power and air conditioning that you could use if you wanted to have a conversation, that would be highly beneficial. I like attending large events, but sometimes the introvert in me takes over and I want to spend more time either alone or with less interaction. Linuxcon EU was great about this, going so far as to have a quiet space, which I found useful.

I had trepidations about my talk. It wasn’t exactly about solving a problem with Python. It was about problems I’ve faced throughout my career and how I’ve seen other projects solve them. Occasionally, those problems or solutions were related to Python, sometimes they were related to my work on Gluster, and often to Mozilla. I’m glad it was well received, and I had a lot of conversations with people after the talk about the pains they face at their own organizations. I’ll be the first to admit that I don’t practice what I preach. We’re still working on getting our release management to a better place.

Some of the memorable sessions include - Hanza’s keynote about his open source life, Katie’s talk about accessibility, Dr. Terri’s talk about security, Noufal’s talk about CFFI. All videos should be online on the Pycon Pune channel, including mine.

May 27, 2017 05:20 AM

Weekly Python StackOverflow Report

(lxxv) stackoverflow python report

These are the ten highest-rated questions on Stack Overflow last week.
Between brackets: [question score / answer count]
Build date: 2017-05-27 03:23:02 GMT

  1. Combine 2 pandas dataframes according to boolean Vector - [9/3]
  2. The accessing time of a numpy array is impacted much more by the last index compared to the second last - [8/4]
  3. Alexa request validation in python - [8/2]
  4. Broadcast 1D array against 2D array for lexsort : Permutation for sorting each column independently when considering yet another vector - [8/2]
  5. Is there a Python csv file writer that can match data.table's fwrite speed? - [8/0]
  6. Detecting C types limits ("limits.h") in python? - [7/1]
  7. Python bug: null byte in input prompt - [7/0]
  8. How to I factorize a list of tuples? - [6/5]
  9. How does isinstance work for List? - [6/1]
  10. How do i move the offset of the 'index' method of 'list' - [5/6]

May 27, 2017 03:23 AM

Sandipan Dey

Some Image Processing, Information and Coding Theory with Python

Some of the following problems appeared in the exercises in the coursera course Image Processing (by Northwestern University). The following descriptions of the problems are taken directly from the assignment’s description. 1. Some Information and Coding Theory Computing the Entropy of an Image The next figure shows the problem statement. Although it was originally implemented … Continue reading Some Image Processing, Information and Coding Theory with Python
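
As a flavor of the entropy computation mentioned above, here is a minimal sketch of mine (not taken from the assignment itself) that computes the Shannon entropy of a sequence of pixel values from its histogram:

```python
import math
from collections import Counter

def entropy(pixels):
    """Shannon entropy, in bits per pixel, of a sequence of pixel values."""
    counts = Counter(pixels)
    n = float(len(pixels))
    # H = -sum(p * log2(p)) over the observed pixel-value frequencies
    return -sum((c / n) * math.log(c / n, 2) for c in counts.values())

# A two-level image with equal counts carries exactly 1 bit per pixel.
print(entropy([0, 255, 0, 255]))  # → 1.0
```
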

May 27, 2017 12:09 AM

May 26, 2017


Enthought at National Instruments’ NIWeek 2017: An Inside Look

This week I had the distinct privilege of representing Enthought at National Instruments‘ 23rd annual user conference, NIWeek 2017. National Instruments is a leader in test, measurement, and control solutions, and we share many common customers among our global scientific and engineering user base.

NIWeek kicked off on Monday with Alliance Day, where my colleague Andrew Collette and I went on stage to receive the LabVIEW Tools Network 2017 Product of the Year Award for Enthought’s Python Integration Toolkit, which provides a bridge between Python and LabVIEW, allowing you to create VI’s (virtual instruments) that make Python function and object method calls. Since its release last year, the Python Integration Toolkit has opened up access to a broad range of new capabilities for LabVIEW users,  by combining the best of Python with the best of LabVIEW. It was also inspiring to hear about the advances being made by other National Instruments partners. Congratulations to the award winners in other categories (Wineman Technology, Bloomy, and Moore Good Ideas)!

On Wednesday, Andrew gave a presentation titled “Building and Deploying Python-Powered LabVIEW Applications” to a standing-room only crowd.  He gave some background on the relative strengths of Python and LabVIEW (some of which is covered in our March 2017 webinar “Using Python and LabVIEW to Rapidly Solve Engineering Problems“) and then showcased some of the capabilities provided by the toolkit, such as plotting data acquisition results live to a web server using plotly, which is always a crowd-pleaser (you can learn more about that in the blog post “Using Plotly from LabVIEW via Python”).  Other demos included making use of the Python scikit-learn library for machine learning, (you can see Enthought’s CEO Eric Jones run that demo here, during the 2016 NIWeek keynotes.)

For a mechanical engineer like me, attending NIWeek is a bit like letting a kid loose in a candy shop.  There was much to admire on the expo floor, with all kinds of mechatronic gizmos and gadgets.  I was most interested by the lightning-fast video and image processing possible with NI’s FPGA systems, like the part sorting system shown below.  Really makes me want to play around with nifpga.

Another thing really gaining traction is the implementation of machine learning for a number of applications. I attended one talk titled “Deep Learning With LabVIEW and Acceleration on FPGAs” that demonstrated image classification using a neural network and talked about strategies to reduce the code size to get it to fit on an FPGA.

Finally, of course, I was really excited by all of the activity in the Industrial Internet of Things (IIoT), which is an area of core focus for Enthought.  We have been in the big data analytics game for a long time, and writing software for hard science is in our company DNA. But this year especially, starting with the AIChE 2017 Spring Meeting and now at NIWeek 2017, it has been really energizing to meet with industry leaders and see some of the amazing things that are being implemented in the IIoT.  National Instruments has been a leader in the test and measurement sector for a long time, and they have been pioneers in IIoT.  Now it is easy to download and install an interface to Amazon S3 for LabVIEW, and just like that, your sensor is now a connected sensor … and your data is ready for analysis in Enthought’s Canopy Data platform.

After immersion in NIWeek, I guess you could say, I’ve been “LabVIEWed”:

The post Enthought at National Instruments’ NIWeek 2017: An Inside Look appeared first on Enthought Blog.

May 26, 2017 08:44 PM

Continuum Analytics News

Let’s Talk PyCon 2017 - Thoughts from the Anaconda Team

Friday, May 26, 2017
Peter Wang
Chief Technology Officer & Co-Founder

We’re not even halfway through the year, but 2017 has already been filled to the brim with dynamic presentations and action-packed conferences. This past week, the Anaconda team was lucky enough to attend PyCon 2017 in Portland, OR - the largest annual gathering for the community that uses and develops Python. We came, we saw, we programmed, we networked, we spoke, we ate, we laughed, and we learned. Some of our team members at the conference and I shared details on our experiences - take a look and, if you attended, share your thoughts in the comment section below, or on Twitter @ContinuumIO.

Did anything surprise you at PyCon? 

“I was surprised how many attendees were using Python for data. I missed last year's PyCon, and so comparing against PyCon 2015, there was a huge growth in the last two years. During Katy Huff's keynote, she asked how many people in the audience had degrees in science, and something like 40% of the people raised their hands. In the past, this was not the case - PyCon had a lot more "traditional" software developers.” - Peter Wang, CTO & co-founder, Anaconda

“Yes - how diverse the community is. Looking at the session topics provides an indicator about this, but having had somewhere between 60-80 interactions at the Anaconda booth, there was a huge range of discussions all the way from "Tell me more about data science" to "I've been using Anaconda for years and am a huge fan" or "conda saved my life." I also saw a huge range of roles and backgrounds in attendees from enterprise, government, military, academic, students, and independent consultants. It was great to see a number of large players here: Facebook/Instagram, LinkedIn, Microsoft, Google, and Intel were all highly visible, supporting the community.” - Stephen Kearns, Product Marketing Manager, Anaconda

“What really struck me this year was how heavy the science and data science angles were from speakers, topics, exhibitors, and attendees.  The Thursday and Friday morning keynotes were Science + Python (Jake Vanderplas and Katy Huff), then the Sunday closing keynote was about containers and Kubernetes (Kelsey Hightower).” - Ian Stokes-Rees, Computational Scientist, Anaconda 

What was the most popular topic people were buzzing about? Was this surprising to you? 

“There's definitely a good feeling about the transition to Python 3 really happening, which has been a point of angst in the Python community for several years. To me, the sense of closure around this was palpable, in that people could spend their emotional energy talking about other things and not griping about ‘Python 2 vs. 3.’” - Peter Wang

“The talks! So great to see how fast the videos for the talks were getting posted.” - Stephen Kearns 

Did you attend any talks? Did any of them stand out? 

“Jake Vanderplas presented a well-researched and well-structured talk on the Python visualization landscape. The keynotes were all excellent. I appreciated the Instagram folks sharing their Python 3 migration story with everyone.” - Peter Wang

“There were some at-capacity tutorials by me on “Data Science Apps with Anaconda,” showing off our new Anaconda Project deployment capability and “Accelerating your Python Data Science code with Dask and Numba.” - Ian Stokes-Rees

How was the buzz around Anaconda at PyCon? 

“Awesome - we exhausted our entire supply of Anaconda Crew T-Shirts by the end of the second day. A conference first!” - Ian Stokes-Rees 

“It was great, and very positive. Lots of people were very interested in our various open source projects, but we also got a lot of interest from attendees in our enterprise offerings: commercially-supported Anaconda, our premium training, and the Anaconda Enterprise Data Science platform. In previous years, there were not as many people who I would characterize as "potential customers,” and this was a very positive change for us. I also think that it is a sign that the PyCon attendee audience is also changing, to include more people from the data science and machine learning ecosystem.” - Peter Wang

“Anaconda had lots of partnership engagement opportunities at the show, specifically with Intel, Microsoft and ESRI. It was exciting to hear Intel talk about how they’re using Anaconda as the channel for delivering optimized high performance Python, and great to see Microsoft giving SQL Server demonstrations of server-side Python using Anaconda. Lastly, great to hear that ESRI is increasing its Python interfaces to ArcGIS and have started to make the ArcGIS Python package available as a conda package from Anaconda Cloud.” - Ian Stokes-Rees


May 26, 2017 04:58 PM


Nikola v7.8.6 is out!

On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v7.8.6. It fixes some bugs and adds new features.

What is Nikola?

Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and input in many popular markup formats, such as reStructuredText and Markdown — and can even turn Jupyter (IPython) Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which rebuilds only what has changed).

Find out more at the website:


Install using pip install Nikola or download tarballs on GitHub and PyPI.

Or if you prefer, Snapcraft packages are now built automatically, and Nikola v7.8.6 will be available in the stable channel.



Features

  • Guess file format from file name on new_post (Issue #2798)
  • Use BaguetteBox as lightbox in base theme (Issue #2777)
  • New deduplicate_ids filter, for preventing duplication of HTML id attributes (Issue #2570)
  • Ported gallery image layout to base theme (Issue #2775)
  • Better error handling when posts can't be parsed (Issue #2771)
  • Use .theme files to store theme metadata (Issue #2758)
  • New add_header_permalinks filter, for Sphinx-style header links (Issue #2636)
  • Added alternate links for gallery translations (Issue #993)


Bugfixes

  • Use locale.getdefaultlocale() for better locale guessing (credit: @madduck)
  • Save dependencies for template hooks properly (using .__doc__ or .template_registry_identifier for callables)
  • Enable larger panorama thumbnails (Issue #2780)
  • Disable archive_rss link handler, which was useless because no such RSS was ever generated (Issue #2783)
  • Ignore files ending with "bak" (Issue #2740)
  • Use page.tmpl by default, which is inherited from story.tmpl (Issue #1891)


  • Limit Jupyter support to notebook >= 4.0.0 (it already was in requirements-extras.txt; Issue #2733)

May 26, 2017 01:49 PM

EuroPython Society

EuroPython 2017: Full session list online

After the final review round, we are now happy to announce the complete list of more than 200 accepted sessions.


EuroPython 2017 Session List

Here’s what we have on offer:

for a total of 203 sessions, arranged in 5 tracks from Monday, July 10, through Friday, July 14, in addition to the Beginners’ Day and Django Girls workshops on Sunday, July 9, and the Sprints on the weekend of July 15-16.

Please see the session list for details and abstracts. In case you wonder what poster, interactive and help desk sessions are, please check the call for proposals.

Additional help desk slots available

We have 5 additional help desk slots available. If you are interested in arranging one, please see our Call for Proposals for details and contact to submit your proposal. Organizers of help desks are eligible for a 25% ticket discount.

Schedule to be announced next week

Our program work group is now working hard on scheduling all these sessions. We expect to announce the final schedule by the end of next week.

We will use the same conference schedule layout as in previous years:

A typical conference day will open the venue at 08:30, have the first session around 09:00 and end at 18:30. Lunch breaks are scheduled for around 13:15. Please note that we don’t serve breakfast.

Aside: If you haven’t done so yet, please get your EuroPython 2017 ticket soon. We will switch to on-desk rates in June, which will cost around 30% more than the regular rates.


EuroPython 2017 Team
EuroPython Society
EuroPython 2017 Conference

May 26, 2017 11:26 AM



Django Weekly

DjangoWeekly Issue 41 - Django Admin Customisation Video, Deployment, Pros and Cons of Django

Worthy Read

Django's admin is a great tool but it isn't always the easiest or friendliest to set up and customize. The ModelAdmin class has a lot of attributes and methods to understand and come to grips with. On top of these attributes, the admin's inlines, custom actions, custom media, and more mean that, really, you can do anything you need with the admin...if you can figure out how. The docs are good but leave a lot to experimentation and the code is notoriously dense. In this tutorial, you'll learn the basics of setting up the admin so you can get your job done. Then we'll dive deeper and see how advanced features like autocomplete, Markdown editors, image editors, and others would be added to make the admin really shine.

We help companies like Airbnb, Pfizer, and Artsy find great developers. Let us find your next great hire. Get started today.

In this tutorial, you will learn how to deploy a Django application with PostgreSQL, Nginx, and Gunicorn on Red Hat Enterprise Linux (RHEL) version 7.3. For testing purposes I’m using an Amazon EC2 instance running RHEL 7.3.

It helps to have an understanding of why upgrading the backend should be considered a necessary part of any website upgrade project. We offer 3 reasons, focusing on our specialty of Django-based websites: it increases security, reduces development and maintenance costs, and ensures support for future growth.

Know when and why code breaks: Users finding bugs? Searching logs for errors? Find + fix broken code fast!


The most commonly suggested solution for long running processes is to use Celery. I suspect that if you need scalability or high volume, etc., Celery is the best solution. That said, I have been down the Celery rabbit hole more than once. It has never been pleasant. Since my needs are more modest, maybe there is a better alternative?
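
For genuinely modest needs, the standard library alone can cover a surprising amount of ground. A minimal sketch of mine (not from the linked article) of a background worker fed by a queue:

```python
import threading
import queue

tasks = queue.Queue()
results = []

def worker():
    # Pull jobs until the None sentinel arrives.
    while True:
        job = tasks.get()
        if job is None:
            break
        results.append(job * 2)  # stand-in for the real long-running work
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(5):
    tasks.put(i)
tasks.join()      # block until every queued job has been processed
tasks.put(None)   # tell the worker to exit
t.join()
print(results)  # → [0, 2, 4, 6, 8]
```

This trades Celery's durability and distribution for simplicity: if the process dies, queued work is lost, which is often an acceptable deal for small in-process jobs.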

If you are using rate limiting with Django Rest Framework you probably already know that it provides some pretty simple methods for setting global rate limits using DEFAULT_THROTTLE_RATES. You can also set rate limits for specific views using the throttle_classes property on class-based views or the @throttle_classes decorator for function based views.

Django’s postgres extensions support data types like DateRange which is super useful when you want to query your database against dates, however they have no form field to expose this into HTML. Handily Django 1.11 has made it super easy to write custom widgets with complex HTML.



drf-swagger-customization - 4 Stars, 0 Fork
This is a Django app with which you can modify and improve the autogenerated Swagger documentation of your DRF API.

Django-REST-Boilerplate - 0 Stars, 0 Fork
Boilerplate for Django projects using Django REST Framework.

May 26, 2017 09:00 AM

Import Python

ImportPython Issue 126 - Pycon US Videos, PYPI, SQLAlchemy, Debugging, Mocking and more

Worthy Read

Videos of the just concluded Pycon US 2017.

This is a short post on how to get download statistics about any package from PyPI. Though there have been efforts in that direction from sites like pypi ranking, this post finds a better solution. Google has been generous enough to donate its Big Query capacity to the Python Software Foundation. You can access the pypi downloads table through the Big Query console. I ran a sample query to find out how my personal package arachne has been doing on PyPI.

The breadth of SQLAlchemy’s SQL rendering engine, DBAPI integration, transaction integration, and schema description services are documented here. In contrast to the ORM’s domain-centric mode of usage, the SQL Expression Language provides a schema-centric usage paradigm.

Users finding bugs? Searching logs for errors? Find + fix broken code fast!

The various meanings and naming conventions around single and double underscores (“dunder”) in Python, how name mangling works and how it affects your own Python classes.
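
A tiny illustration of the name mangling the article covers (example mine, not from the article):

```python
class Account(object):
    def __init__(self):
        self._internal = 1    # single underscore: "internal use" by convention only
        self.__private = 2    # double underscore: mangled to _Account__private

a = Account()
print(a._internal)          # → 1 (nothing stops access; it is just a convention)
print(a._Account__private)  # → 2 (the mangled name is still reachable)
```
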

In Python, all object types inherit from one master object, declared as PyObject. This master object has all of the information Python needs to process a pointer to an object as an actual object.
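
That single-root design is visible from Python itself; a quick check of mine:

```python
# Every Python value, including functions, types, and instances,
# ultimately inherits from the single base type `object` (PyObject at the C level).
samples = [1, "text", [1, 2], len, type, object()]
print(all(isinstance(value, object) for value in samples))  # → True
print(int.__mro__)  # the method resolution order of every type ends in <class 'object'>
```
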

So we had a production case, for months together, where the Python process was stuck for an indefinitely long time (even days) with absolutely zero activity, but the process was listed as active and running by Linux. A restart would fix the problem (as always) and the job would be live and kicking. Finally, after some time, I found the root cause, so I thought I would share it. For the purpose of the blog I’m going to simulate the behavior of my application in a sample Python script.

We help companies like Airbnb, Pfizer, and Artsy find great developers. Let us find your next great hire. Get started today.

Back in April, Google announced that it will be shipping Headless Chrome in Chrome 59. Since the respective flags are already available on Chrome Canary, the Duo Labs team thought it would be fun to test things out and also provide a brief introduction to driving Chrome using Selenium and Python.

Elizabeth is a Python library, which helps generate mock data for various purposes. The library was written with the use of tools from the standard Python library, and therefore, it doesn’t have any side dependencies. Currently the library supports 30 languages and 19 class providers, supplying various data.
mock is a collection of Python boilerplates for getting started quickly and right-footed.



speech recognition

PyCon JP 2017 is Now Accepting Poster-Session Proposals! PyCon JP 2017 is a perfect opportunity to connect with a wide range of people. Poster sessions allow you to make the most of that opportunity.



Using Atom IDE.

Python uses global to refer to module-global variables. There are no program-global variables in Python.
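
As a small illustration of that point (example mine, not from the linked article):

```python
# A module-level ("module-global") name, rebound from inside a function
# with the `global` statement.
counter = 0

def increment():
    global counter  # without this, `counter += 1` would raise UnboundLocalError
    counter += 1

increment()
increment()
print(counter)  # → 2
```
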


Python's (pip's) requirements.txt file is the equivalent of package.json in the JavaScript / Node.js world.  This requirements.txt file isn't as pretty as package.json, but it not only defines a version; it goes a step further, providing a sha hash to compare against to ensure package integrity:


Bangalore, Karnataka, India


baselines - 241 Stars, 28 Fork
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms

semilive - 92 Stars, 3 Fork
A Sublime Text plugin for "Live" coding

IPpy - 41 Stars, 3 Fork
Parallel testing of IP addresses and domains in python

content-downloader - 30 Stars, 10 Fork
Python package to download files on any topic in bulk.

v2ex-terminal - 27 Stars, 1 Fork
browse v2ex by a terminal

logging-spinner - 17 Stars, 0 Fork
Display spinners (in CLI) through Python standard logging.

aws-batch-genomics - 14 Stars, 4 Fork
Software sets up and runs an genome sequencing analysis workflow using AWS Batch and AWS Step Functions.

twitter-bot - 5 Stars, 0 Fork
Python Bot that Tweets quote and like Tweets.

jsonfeedvalidator - 4 Stars, 0 Fork
JSON Feed Validator

handcart - 3 Stars, 1 Fork
Command-line tools for project-oriented, human-sized Wikidata import

slacky - 0 Stars, 0 Fork
Slack client on the terminal with a GUI. This is a weekend project that started for me as a way to learn how to write old style command line interfaces. Slack is a tool a lot of programmers use today, so I thought a lot of you would be interested in contributing to this effort.

May 26, 2017 08:55 AM

Catalin George Festila

OpenGL and OpenCV with python 2.7 - part 005.

In this tutorial I will show you how to install OpenCV on the Windows 10 operating system; the example below uses Python 2.7, but you can use the same steps for other versions of Python.
Get the wheel binary package opencv_python- from here.


C:\Python27>cd Scripts

C:\Python27\Scripts>pip install opencv_python-
Processing c:\python27\scripts\opencv_python-
Requirement already satisfied: numpy>=1.11.1 in c:\python27\lib\site-packages (from opencv-python==
Installing collected packages: opencv-python
Successfully installed opencv-python-

Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 20:42:59) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Let's test it with default source code:

>>> import cv2
>>> dir(cv2)
Now we can test a Python script example that uses the PyQt4 module together with the cv2.resize function very easily.
The example loads an image with PyQt4, resizes it with OpenCV and displays it:

from PyQt4.QtGui import (QApplication, QWidget, QVBoxLayout, QImage,
                         QPixmap, QLabel, QPushButton, QFileDialog)
import cv2
import sys

app = QApplication([])
window = QWidget()
layout = QVBoxLayout(window)
display = QLabel()
width = 600
height = 400
display.setMinimumSize(width, height)
layout.addWidget(display)
button = QPushButton('Load', window)
layout.addWidget(button)

def read_image():
    # ask the user for an image file, resize it with OpenCV and show it
    path = QFileDialog.getOpenFileName(window)
    if path:
        print str(path)
        picture = cv2.imread(str(path))
        if picture is not None:
            print width, height
            picture = cv2.resize(picture, (width, height))
            # OpenCV loads images as BGR; convert so Qt shows correct colors
            picture = cv2.cvtColor(picture, cv2.COLOR_BGR2RGB)
            image = QImage(picture.tobytes(),    # the content of the image
                           picture.shape[1],     # the width (number of columns)
                           picture.shape[0],     # the height (number of rows)
                           QImage.Format_RGB888) # the image is stored in 3*8-bit format
            display.setPixmap(QPixmap.fromImage(image))

button.clicked.connect(read_image)
window.show()
sys.exit(app.exec_())


See the result for this python script:

May 26, 2017 08:46 AM

Robin Parmar

Arduino IDE: Best practices and gotchas

Programming for the Arduino is designed to be easy for beginners. The Integrated Development Environment (IDE) provides a safe place to write code, and handles the make and compiler steps that are required to create processor instructions from your C++ code.

This is fine for trivial applications and school exercises. But as soon as you try to use structured code (including classes and custom libraries) on a larger project, mysterious errors and roadblocks become the order of the day.

This article will consider best practices for working within the IDE. I will document a number of common errors and their workarounds. My perspective is of an experienced Python coder who finds C++ full of needless obfuscation. But we can make it work!

Why not switch?

On encountering limitations with the Arduino IDE, the natural thing to do is switch to a mature development environment. For example, you could use Microsoft Visual Studio by way of Visual Micro, a plugin that enables Arduino coding. Or, use Eclipse with one of several available plugins: Sloeber, PlatformIO, or AVR-eclipse.

But there are cases when it is advantageous to stick with the Arduino IDE. For example, I might be working on a team with other less-experienced developers. While I might wish to carry the cognitive burden of Eclipse plus plugins plus project management, they might not.

Or I could be in a teaching environment where my code must be developed with the same tools my students will be using.

Language features... and what's missing

The Arduino IDE gives you many basic C++ language features plus hardware-specific functions. Control structures, values, and data types are documented in the Reference.

But you don't get modern features such as the Standard Template Library (STL). If you want to use stacks, queues, lists, vectors, etc. you must install a library. Start with those by Marc Jacobi (last updated 2 years ago) and Andy Brown (updated 1 year ago). I am sure there are plenty of articles discussing the relative merits of these or other solutions.

You also don't get new and delete operators, and there's good reason. Dynamic memory management is discouraged on microprocessor boards, since RAM and other resources are limited. There are libraries that add these to your toolkit, but the IDE encourages us to use C++ as though it was little more than plain vanilla C. It can be frustrating, but my advice is to adapt.

Code structure

As you know, when using the Arduino IDE you start coding with a sketch that is your application's entry point. As an example, I'll use project.ino.

Inside this file are always two functions, setup() and loop(). These take no parameters and return no values. There's not much you can do with them... except populate them with your code. These functions are part of an implicit code structure that could be written as follows:

void main() {

    // declaration section

    setup();    // initialisation (runs once)

    while (true) {
        loop(); // process-oriented code (runs forever)
    }
}

In the IDE you never see the main() function and neither can you manipulate it.

Declaration section

The declaration section comes at the top of your project.ino. It is effectively outside any code block. Yes, even though it is in an implicit main() function. This means that only declarations and initializations are valid here. You cannot call methods of a class, nor access properties. This is our first rule:

Rule 1. The declaration section should contain only includes, initialisations of variables, and instantiations of classes.

This restriction can result in subtle errors when using classes. The declaration section is naturally where you will be instantiating classes you wish to use throughout the sketch. This means that the same restrictions just stated must apply to each and every class constructor. For this reason, you cannot use instances or methods of other classes in a constructor. No, not even for built-in libraries like Serial or Wire, because the order of instantiation of classes is non-deterministic: all instances must be constructed before any instance is used.

Rule 2. A class constructor should have no arguments and do nothing but set default values for any properties.

Follow the example of the library classes for your own custom classes. Provide a begin() method that does take needed parameters and performs any initialization tasks. In other words, begin() should do everything you might otherwise expect the constructor to do. Call this method in the setup() block.

By the way, this solves another problem. A class that might be passed to another class requires a constructor that takes no parameters. Normally you would provide this in addition to another constructor template. But if you follow rule two, this condition is already met.

Care with instantiation

The next discussion will prevent a syntax error. When instantiating a class with a constructor, you would normally do something like the following, assuming class Foo is defined elsewhere.

const byte pin = 10;
Foo bar(pin);

void setup() {}

void loop() {
    int result =;
}
But following our previous rule, constructors will never have arguments. You might quite naturally write this instead:

const byte pin = 10;
Foo bar();

void setup() {}

void loop() {
    int result =;
}
This generates the error "request for member 'read' in 'bar', which is of non-class type 'Foo'". That appears nonsensical, because Foo is most definitely a class. Spot the syntax error?

To the compiler, Foo bar(); looks like a function declaration: a prototype for a function named bar that takes no arguments and returns a Foo (C++'s "most vexing parse"). You need to rewrite that line as:

Foo bar;

Abandoning the sketch

Before you even get to this point of sophistication in your code, you will be seeing all sorts of mystifying compiler output. "Error: 'foo' has not been declared" for a foo that most certainly has been declared. "Error: 'foo' does not name a type" for a foo that is definitively a type. And so on.

These errors occur because the compiler is generating function prototypes for you, automatically, even if you don't need them. These prototypes will even over-ride your own perfectly good code. The only thing to do is abandon the sketch! Move to the lifeboats! Compiler error! Danger, Will Robinson!


Do the following:

1. Create a new .cpp file, ensuring it is not named the same as your sketch, and also not named main.cpp. These are both name conflicts. As an example, let's call it primary.cpp.

2. Copy all the code from project.ino to primary.cpp.

3. Add #include <Arduino.h> to the top of primary.cpp, before your other includes. This ensures that your code can access the standard prototypes.

4. In project.ino leave only your #include statements. Delete everything else.

This will solve all those mysterious issues. You can now prototype your own functions and classes without the IDE getting in your way. You will, however, need to remember that every time you add an #include to primary.cpp, you need to also add it to project.ino. But it's a small price to pay.

Rule 3. Use a top-level C++ file instead of a sketch file.

Simple includes

It's easy to get confused about include files. But all an include represents is a copy and paste operation. The referenced code is inserted at the point where the include directive is located.

Here are the rules.

1. You need a .h header file for each .cpp code file.

2. The .cpp should have only one include, that being its corresponding header (a file with the same name but different extension).

3. The header file must have all the includes necessary for the correct running of the .h and .cpp code. And, in the correct order, if there are dependencies.

4. A header guard is required for each .h file. This prevents the header from being included in your project multiple times. It doesn't matter what macro name you choose for the guard, so long as it is unique.
#ifndef LIB_H
#define LIB_H

// everything else

#endif // LIB_H

5. If you have any sort of complex chaining, with circular pointer referencing, you may have to use forward referencing. But you should be avoiding this complexity in the sort of projects likely to run on an Arduino. So I won't count this rule in our next meta-rule.

Rule 4. Follow the four rules of correct header use.

Using libraries

The IDE limits how you use libraries to the very simplest case. Libraries get installed in one standard location across all your projects. You can put a library nowhere else. Why might you want to?

I am currently developing three modules as part of a single project. The code for each module is in its own folder. They have shared library code that I would like to put in a parallel folder, so I would have a folder hierarchy something like this:

Then I could easily archive "myproject" into a ZIP file to share with the rest of the team.

Can I do this? No. It is not possible, since relative paths cannot be used in the IDE. And absolute paths are evil.

Rule 5. There is no rule to help manage libraries. Sorry.

Final Words

I have personally wasted dozens of hours before discovering these tips and working methods. It has been an enormous process of trial-and-error. If you are lucky enough to read this article first, you will never know the pain.

I have a donate button in the sidebar, in case you wish to thank me with a coffee.

In turn I'd like to thank Nick Gammon for an article I wish I'd read a bit sooner.

If there's interest, I might follow up with some words about general C++ syntax and issues that are not so Arduino-centric.

May 26, 2017 04:17 AM

Full Stack Python

Responsive Bar Charts with Bokeh, Flask and Python 3

Bokeh is a powerful open source Python library that allows developers to generate JavaScript data visualizations for their web applications without writing any JavaScript. While learning a JavaScript-based data visualization library like d3.js can be useful, it's often far easier to knock out a few lines of Python code to get the job done.

With Bokeh, we can create incredibly detailed interactive visualizations, or just traditional ones like the following bar chart.

Responsive Bokeh bar chart with 64 bars.

Let's use the Flask web framework with Bokeh to create custom bar charts in a Python web app.

Our Tools

This tutorial works with either Python 2 or 3, but Python 3 is strongly recommended for new applications. I used Python 3.6.1 while writing this post. In addition to Python, we will also use the following application dependencies throughout this tutorial:

If you need help getting your development environment configured before running this code, take a look at this guide for setting up Python 3 and Flask on Ubuntu 16.04 LTS.

All code in this blog post is available open source under the MIT license on GitHub under the bar-charts-bokeh-flask-python-3 directory of the blog-code-examples repository. Use and abuse the source code as you like for your own applications.

Installing Bokeh and Flask

Create a fresh virtual environment for this project to isolate our dependencies using the following command in the terminal. I typically run this command within a separate venvs directory where all my virtualenvs are stored.

python3 -m venv barchart

Activate the virtualenv.

source barchart/bin/activate

The command prompt will change after activating the virtualenv:

Activating our Python virtual environment on the command line.

Keep in mind that you need to activate the virtualenv in every new terminal window where you want to use the virtualenv to run the project.

Bokeh and Flask are installable into the now-activated virtualenv using pip. Run this command to get the appropriate Bokeh and Flask versions.

pip install bokeh==0.12.5 flask==0.12.2 pandas==0.20.1

After a brief download and installation period our required dependencies should be installed within our virtualenv. Look for output like the following to confirm everything worked.

Installing collected packages: six, requests, PyYAML, python-dateutil, MarkupSafe, Jinja2, numpy, tornado, bokeh, Werkzeug, itsdangerous, click, flask, pytz, pandas
  Running install for PyYAML ... done
  Running install for MarkupSafe ... done
  Running install for tornado ... done
  Running install for bokeh ... done
  Running install for itsdangerous ... done
Successfully installed Jinja2-2.9.6 MarkupSafe-1.0 PyYAML-3.12 Werkzeug-0.12.2 bokeh-0.12.5 click-6.7 flask-0.12.2 itsdangerous-0.24 numpy-1.12.1 pandas-0.20.1 python-dateutil-2.6.0 pytz-2017.2 requests-2.14.2 six-1.10.0 tornado-4.5.1

Now we can start building our web application.

Starting Our Flask App

We are going to first code a basic Flask application then add our bar chart to the rendered page.

Create a folder for your project, then within it create a file named with the following initial contents:

from flask import Flask, render_template

app = Flask(__name__)

@app.route("/<int:bars_count>/")
def chart(bars_count):
    if bars_count <= 0:
        bars_count = 1
    return render_template("chart.html", bars_count=bars_count)

if __name__ == "__main__":
The above code is a short one-route Flask application that defines the chart function. chart takes in an arbitrary integer as input which will later be used to define how much data we want in our bar chart. The render_template function within chart will use a template from Flask's default template engine named Jinja2 to output HTML.

The last two lines in allow us to run the Flask application from the command line on port 5000 in debug mode. Never use debug mode in production; that's what WSGI servers like Gunicorn are built for.

Create a subdirectory within your project folder named templates. Within templates create a file named chart.html. chart.html was referenced in the chart function of our file, so we need to create it before our app will run properly. Populate chart.html with the following Jinja2 markup.

<!DOCTYPE html>
<html>
  <head>
    <title>Bar charts with Bokeh!</title>
  </head>
  <body>
    <h1>Bugs found over the past {{ bars_count }} days</h1>
  </body>
</html>

chart.html's boilerplate displays the number of bars passed into the chart function via the URL.
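Under the hood, render_template hands that bars_count value to Jinja2 for substitution. The step can be seen in isolation with Jinja2 directly (a standalone sketch, assuming the jinja2 package that Flask itself depends on is installed):

```python
from jinja2 import Template

# the same placeholder syntax used in chart.html
template = Template("Bugs found over the past {{ bars_count }} days")
print(template.render(bars_count=16))  # Bugs found over the past 16 days
```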

The <h1> tag's message on the number of bugs found goes along with our sample app's theme. We will pretend to be charting the number of bugs found by automated tests run each day.

We can test our application out now.

Make sure your virtualenv is still activated and that you are in the base directory of your project where is located. Run with the python command.

(barchart) $ python

Go to localhost:5000/16/ in your web browser. You should see a large message that changes when you modify the URL.

Simple Flask app without bar chart

Our simple Flask route is in place but that's not very exciting. Time to add our bar chart.

Generating the Bar Chart

We can build on the basic Flask app foundation that we just wrote with some new Python code that uses Bokeh.

Open back up and change the top of the file to include the following imports.

import random
from bokeh.models import (HoverTool, FactorRange, Plot, LinearAxis, Grid,
                          Range1d)
from bokeh.models.glyphs import VBar
from bokeh.plotting import figure
from bokeh.charts import Bar
from bokeh.embed import components
from bokeh.models.sources import ColumnDataSource
from flask import Flask, render_template

Throughout the rest of the file we will need these Bokeh imports along with the random module to generate data and our bar chart.

Our bar chart will use "software bugs found" as a theme. The data will be randomly generated each time the page is refreshed. In a real application you'd have a more stable and useful data source!

Continue modifying so that the section after the imports looks like the following code.

app = Flask(__name__)

@app.route("/<int:bars_count>/")
def chart(bars_count):
    if bars_count <= 0:
        bars_count = 1

    data = {"days": [], "bugs": [], "costs": []}
    for i in range(1, bars_count + 1):
        data['days'].append(i)
        data['bugs'].append(random.randint(1, 100))
        data['costs'].append(random.uniform(1.00, 1000.00))

    hover = create_hover_tool()
    plot = create_bar_chart(data, "Bugs found per day", "days",
                            "bugs", hover)
    script, div = components(plot)

    return render_template("chart.html", bars_count=bars_count,
                           the_div=div, the_script=script)

The chart function gains three new lists that are randomly generated by Python 3's super-handy random module.
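If you want to see the kind of values those lists end up holding, the generation step can be run on its own (a standalone sketch, seeded only so the output is repeatable):

```python
import random

random.seed(42)  # fixed seed just for this illustration
data = {"days": [], "bugs": [], "costs": []}
for i in range(1, 5):
    data["days"].append(i)
    data["bugs"].append(random.randint(1, 100))
    data["costs"].append(random.uniform(1.00, 1000.00))

print(data["days"])  # [1, 2, 3, 4]
print(len(data["bugs"]), len(data["costs"]))  # 4 4
```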

chart calls two functions, create_hover_tool and create_bar_chart. We haven't written those functions yet so continue adding code below chart:

def create_hover_tool():
    # we'll code this function in a moment
    return None

def create_bar_chart(data, title, x_name, y_name, hover_tool=None,
                     width=1200, height=300):
    """Creates a bar chart plot with the exact styling for the centcom
       dashboard. Pass in data as a dictionary, desired plot title,
       name of x axis, y axis and the hover tool HTML.
    """
    source = ColumnDataSource(data)
    xdr = FactorRange(factors=data[x_name])
    ydr = Range1d(start=0,end=max(data[y_name])*1.5)

    tools = []
    if hover_tool:
        tools = [hover_tool,]

    plot = figure(title=title, x_range=xdr, y_range=ydr, plot_width=width,
                  plot_height=height, h_symmetry=False, v_symmetry=False,
                  min_border=0, toolbar_location="above", tools=tools,
                  responsive=True, outline_line_color="#666666")

    glyph = VBar(x=x_name, top=y_name, bottom=0, width=.8)
    plot.add_glyph(source, glyph)

    xaxis = LinearAxis()
    yaxis = LinearAxis()

    plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))
    plot.add_layout(Grid(dimension=1, ticker=yaxis.ticker))
    plot.toolbar.logo = None
    plot.min_border_top = 0
    plot.xgrid.grid_line_color = None
    plot.ygrid.grid_line_color = "#999999"
    plot.yaxis.axis_label = "Bugs found"
    plot.ygrid.grid_line_alpha = 0.1
    plot.xaxis.axis_label = "Days after app deployment"
    plot.xaxis.major_label_orientation = 1
    return plot

There is a whole lot of new code above so let's break it down. The create_hover_tool function does not do anything yet, it simply returns None, which we can use if we do not want a hover tool. The hover tool is an overlay that appears when we move our mouse cursor over one of the bars or touch a bar on a touchscreen so we can see more data about the bar.

Within the create_bar_chart function we take in our generated data source and convert it into a ColumnDataSource object that is one type of input object we can pass to Bokeh functions. We specify two ranges for the chart's x and y axes.

Since we do not yet have a hover tool the tools list will remain empty. The line where we create plot using the figure function is where a lot of the magic happens. We specify all the parameters we want our graph to have such as the size, toolbar, borders and whether or not the graph should be responsive upon changing the web browser size.

We create vertical bars with the VBar object and add them to the plot using the add_glyph function that combines our source data with the VBar specification.

The last lines of the function modify the look and feel of the graph. For example, I took away the Bokeh logo by specifying plot.toolbar.logo = None and added labels to both axes. I recommend keeping the bokeh.plotting documentation open to know what your options are for customizing your visualizations.

We just need a few updates to our templates/chart.html file to display the visualization. Open the file and add the following 6 lines. Two of these lines are for the required CSS, two are the JavaScript Bokeh files and the remaining two render the generated chart.

<!DOCTYPE html>
<html>
  <head>
    <title>Bar charts with Bokeh!</title>
    <link href="" rel="stylesheet">
    <link href="" rel="stylesheet">
  </head>
  <body>
    <h1>Bugs found over the past {{ bars_count }} days</h1>
    {{ the_div|safe }}
    <script src=""></script>
    <script src=""></script>
    {{ the_script|safe }}
  </body>
</html>

Alright, let's give our app a try with a simple chart of 4 bars. The Flask app should automatically reload when you save the new code, but if you shut down the development server, fire it back up with the python command.

Open your browser to localhost:5000/4/.

Responsive Bokeh bar chart with 4 bars.

That one looks a bit sparse, so we can crank it up by 4x to 16 bars by going to localhost:5000/16/.

Responsive Bokeh bar chart with 16 bars.

Now crank it up further to 128 bars with localhost:5000/128/...

Responsive Bokeh bar chart with 128 bars.

Looking good so far. But what about that hover tool to drill down into each bar for more data? We can add the hover with just a few lines of code in the create_hover_tool function.

Adding a Hover Tool

Within, modify create_hover_tool to match the following code.

def create_hover_tool():
    """Generates the HTML for the Bokeh's hover data tool on our graph."""
    hover_html = """
        <span class="hover-tooltip">$x</span>
        <span class="hover-tooltip">@bugs bugs</span>
        <span class="hover-tooltip">$@costs{0.00}</span>
    """
    return HoverTool(tooltips=hover_html)

It may look really odd to have HTML embedded within your Python application, but that's how we specify what the hover tool should display. We use $x to show the bar's x axis value, @bugs to show the "bugs" field from our data source, and $@costs{0.00} to show the "costs" field formatted as a dollar amount with exactly 2 decimal places.

Make sure you changed return None to return HoverTool(tooltips=hover_html) so we can see the results of our new function in the graph.

Head back to the browser and reload the localhost:5000/128/ page.

Responsive Bokeh bar chart with 128 bars and showing the hover tool.

Nice work! Try playing around with the number of bars in the URL and the window size to see what the graph looks like under different conditions.

The chart gets crowded with more than 100 or so bars, but you can give it a try with whatever number of bars you want. Here is what an impractical 50,000 bars looks like, just for the heck of it:

Responsive Bokeh bar chart with 50000 bars.

Yea, we may need to do some additional work to display more than a few hundred bars at a time.

What's next?

You just created a nifty configurable bar chart in Bokeh. Next you can modify the color scheme, change the input data source, try to create other types of charts or solve how to display very large numbers of bars.

There is a lot more that Bokeh can do, so be sure to check out the official project documentation, the GitHub repository, the Full Stack Python Bokeh page, or take a look at other topics on Full Stack Python.

Questions? Let me know via a GitHub issue ticket on the Full Stack Python repository, on Twitter @fullstackpython or @mattmakai.

See something wrong in this blog post? Fork this page's source on GitHub and submit a pull request.

May 26, 2017 04:00 AM

May 25, 2017

Programming Ideas With Jake

Lots of Programming Videos!

A whole bunch of videos have recently dropped from programmer conferences. Like, a LOT!

May 25, 2017 09:53 PM


How to Write a Python Class

In this post I cover learning Python classes by walking through one of our 100 days of code submissions.

May 25, 2017 01:44 PM

Python Bytes

#27 The PyCon 2017 recap and functional Python

All videos are available.

Lessons learned:
- Pick up swag on day one; vendors run out.
- Take business cards with you and keep them on you. Not your actual business cards unless you are representing your company: cards that have your social media, GitHub account, blog, or podcast on them.
- 3x3 stickers are too big; 2x2 is plenty big enough.
- Lightning talks are awesome, because they cover a wide range of speaking experience. Will definitely do that again.
- Try to go to the talks that are important to you, but don't over-stress about it, since they are taped. However, it would be lame if all the rooms were empty, so don't everybody ditch.
- Lastly: everyone knows Michael.

Michael #2: How to Create Your First Python 3.6 AWS Lambda Function
- Tutorial from Full Stack Python.
- Walks you through creating an account.
- Select your Python version (3.6, yes!).
- def lambda_handler(event, context): ... # write this function, done!
- Set and read environment variables (could be connection strings and API keys).

Brian #3: How to Publish Your Package on PyPI
- A JetBrains article covering the structure of the package.
- Oops: doesn't include src (see link).
- Decent discussion of the contents of the file (but an example file is interestingly absent).
- Good discussion of the .pypirc file, with links to the test and production PyPI.
- An example of using twine to push to PyPI.
- Overall: a good discussion, but you'll still need a decent example.

Michael #4: Coconut: Simple, elegant, Pythonic functional programming
- Coconut is a functional programming language that compiles to Python.
- Since all valid Python is valid Coconut, using Coconut will only extend and enhance what you're already capable of in Python.
- pip install coconut
- Some of Coconut's major features include built-in, syntactic support for: pattern-matching, algebraic data types, tail call optimization, partial application, better lambdas, parallelization primitives, and a whole lot more, all of which can be found in Coconut's detailed documentation.
- Talk Python episode coming in a week.

Brian #5: Choose a licence
- MIT: simple and permissive.
- Apache 2.0: something extra about patents.
- GPL v3: the contagious one that requires derivative work to also be GPL v3.
- A nice list with overviews of what they all mean, with color-coded bullet points.

Michael #6: Python for Scientists and Engineers
Table of contents:
- Beginners Start Here: Create a Word Counter in Python; An Introduction to Numpy and Matplotlib; Introduction to Pandas with Practical Examples (New).
- Main Book: Image and Video Processing in Python; Data Analysis with Pandas; Audio and Digital Signal Processing (DSP); Control Your Raspberry Pi From Your Phone / Tablet.
- Machine Learning Section: Machine Learning with an Amazon-like Recommendation Engine; Machine Learning for Complete Beginners (learn how to predict Titanic survivors using machine learning, with no previous knowledge needed); Cross Validation and Model Selection (in which we look at cross validation and how to choose between different machine learning algorithms, working with the Iris flower dataset and the Pima diabetes dataset).
- Natural Language Processing: Introduction to NLP and Sentiment Analysis; Natural Language Processing with NLTK; Intro to NLTK, Part 2; Build a Sentiment Analysis Program; Sentiment Analysis with Twitter; Analysing the Enron Email Corpus (the Enron email corpus has half a million files spread over 2.5 GB; when looking at data this size, the question is, where do you even start?); Build a Spam Filter Using the Enron Corpus.

In other news:
- Python Testing with pytest is in beta release, and initial feedback is going very well.

May 25, 2017 08:00 AM

Experienced Django

Return of pylint

Until last fall I was working in Python 2 (due to some limitations at work) and was very happy to have the Syntastic module in my Vim configuration to flag errors each time I saved a Python file. This was great, especially after writing in C/C++ for years, where there is no official standard format and really poor tools to enforce coding standards.

Then, last fall when I started on Django, I made the decision to move to Python 3.  I quickly discovered that pylint is very version-dependent and running the python2.7 version of pylint against Python3 code was not going to work.

I wasn't particularly familiar with virtualenv at the time, so I gave up and moved on to other things. I finally got back to fixing this, and thus to getting pylint and flake8 running again on my code.


I won't cover the details of how to install Syntastic, as it depends on how you manage your plugins in Vim and is well documented. I will only point out here that Syntastic isn't a checker by itself; it's merely a plugin that runs various checkers for you directly in Vim. It runs checkers for many languages, but I'm only using it for Python currently, as the C code I use for work is so ugly that it will never pass.

Switching versions

The key to getting pylint to run against different versions of python is to not install pylint on a global level, but rather to install it in each virtualenv.  This seems obvious now that I’m more familiar with virtualenv, but I’ll admit it wasn’t at the time I first ran into the problem.

The other key to getting this to work is to only launch Vim from inside the virtualenv. This hampers my overall workflow a bit, as I tend to have gVim up and running long-term and just add files in new tabs as I go. To get pylint to work properly, I'll need to restart Vim when I switch Python versions (at a minimum). This shouldn't be too much of a problem, however, as I'm doing less and less Python 2.x coding these days.

Coding Style Thoughts

As I beat my head against horrible C code on a daily basis at work, I find myself appreciating more and more the idea of PEP 8 and having good tools for coding style enforcement.  While I frequently find some of the rules odd (two spaces here, but only one space there?), I really find it comforting to have a tool which runs, and runs quickly, to keep the code looking consistent.  Now if I could only get that kind of tool for C…


May 25, 2017 01:22 AM

Daniel Bader

In Love, War, and Open-Source: Never Give Up

In Love, War, and Open-Source: Never Give Up

I’ll never forget launching my first open-source project and sharing it publicly on Reddit…

I had spent a couple of days at my parents’ place over Christmas that year and decided to use some of my spare time to work on a Python library I christened schedule.

The idea behind schedule was very simple and had a narrow focus (I find that that’s always a good idea for libraries, by the way):

Developers would use it like a timer to periodically call a function inside their Python programs.

The kicker was that schedule used a funky “natural sounding” syntax to specify the timer interval. For example, if you wanted to run a function every 10 minutes you’d do this:


Or, if you wanted to run a particular task every day at 10:30 in the morning, you’d do this:


Because I was so frustrated with Cron’s syntax I thought this approach was really cool. And so I decided this would be the first Python module I’d release as open-source.

I cleaned up the code and spent some time coming up with a nice README file—because that’s really the first thing that your potential users will see when they check out your library.

Once I had my module available on PyPI and the source code on GitHub I decided to call some attention to the project. The same night I posted a link to the repository to Reddit and a couple of other sites.

I still remember that I had shaky hands when I clicked the “submit” button…

It’s scary to put your work out there for the whole world to judge! Also, I didn’t know what to expect.

Would people call me stupid for writing a “simple” library like that?

Would they think my code wasn’t good enough?

Would they find all kinds of bugs and publicly shame me for them? I felt almost a physical sense of dread about pushing the “submit” button on Reddit that night!

The next morning I woke up and immediately checked my email. Were there any comments? Yes, about twenty or so!

I started reading through all of them, faster and faster—

And of course my still frightful mind immediately zoomed in on the negative ones, like

“Cool idea, but not particularly useful”,


“The documentation is not enough”,


“Not a big fan of the pseudo-english syntax. Way too clever and gimmicky.”

At this point I was starting to feel a little discouraged… I’d never really shared my code publicly before, and to be honest my skin was paper thin when it came to receiving criticism on it. After all, this was just something I wrote in a couple of hours and gave away for free.

The comment that really made my stomach churn was one from a well known member of the Python community:

“And another library with global state :-( … Such an API should not even exist. It sets a bad example.”

Ouch, that stung. I really looked up to that person and had used some of their libraries in other projects… It was almost like my worst fears were confirmed and were now playing out in front of me!

I’d never be able to get another job as a Python developer after this…

At the time I didn’t see the positive and supportive comments in that discussion thread. I didn’t see the almost 70 upvotes. I didn’t see the valuable lessons hidden in the seemingly rude comments. I dwelled on the negative and felt terrible and depressed that whole day.

So how do you think this story ends?

Did I delete the schedule repo, switch careers, and never look at Reddit again?


schedule now has almost 3,000 stars on GitHub and is among the top 70 Python repositories (out of more than 215,000). When PyPI’s download statistics were still working I saw that it got several thousand downloads per month. I get emails every week from people asking questions about it or thanking me for writing it…

Isn’t that crazy!? How’s that possible after all of these disheartening comments?

My answer is “I don’t know”—and I also don’t think that schedule is a particularly great library that deserves all this attention, by the way.

But, it seems to solve a problem for some people. It also seems to have a polarizing effect on developers who see it—some love it, some hate it.

Today I’m glad I shipped schedule that night.

Glad because it was helpful to so many people over the years and glad because it helped me develop a thicker skin when it comes to sharing and launching things publicly.

I’m partly writing this meandering post because not very long ago I found this comment buried in my Reddit message history:

As someone who has posted a number of projects and blog posts in r/Python, just wanted to drop you a line and encourage that you don’t let the comments in your thread get you down. You see all those upvotes?

Those are people that like your library, but don’t really have a comment to make in the thread proper. My biggest issue with /r/Python is that it tends towards cynicism and sometimes cruelty rather than encouragement and constructive criticism.

Keep up the great work,


Wow! What a positive and encouraging comment!

Back when I felt discouraged by all of these negative comments I must’ve missed it. But reading it a few years later made me re-live that whole situation and it showed me how much I’d grown as a developer and as a person in the meantime.

If you find yourself in a similar situation, maybe feeling bogged down by the developer community who can be unfiltered and pretty rude sometimes, don’t get discouraged.

Even if some people don’t like what you did there can be thousands who love your work.

It’s a big pond, and sometimes the best ideas are polarizing.

The only way to find out is to ship, ship, ship.

May 25, 2017 12:00 AM

May 24, 2017

Filipe Saraiva

LaKademy 2017

LaKademy 2017 group photo

Some weeks ago we had the fifth edition of the KDE Latin-America summit, LaKademy. Since the first edition, the KDE community in Latin America has grown, and we now have several developers, translators, artists, promoters, and other people from the region involved in KDE activities.

This time LaKademy was held in Belo Horizonte, a nice city known for its amazing cachaça, cheese, homemade beers, cheese, hills, and, of course, cheese. The city is very cosmopolitan, with plenty of activities and gastronomy on offer, and the people are friendly. I would like to go back to Belo Horizonte, maybe on my next vacation.

LaKademy activities were held at CEFET, a technological education institute. During the days of LaKademy there were political demonstrations and a general strike in the country, a consequence of the current political crisis here in Brazil. Although I support the demonstrations, I was in Belo Horizonte for the event, so I focused on my tasks while, in my mind, I was side by side with the workers on the streets.

As in past editions I worked a lot on Cantor, the mathematical software of which I am the maintainer. This time the main work was an extensive set of reviews: revisions of pending patches, of the bug management system in order to close very old (and invalid) reports, and of the task management workboard, especially to ping developers with old tasks without any comment in the last year.

There was some work to implement new features as well. I finished a refactoring of the backends in order to provide a recommended version of the programming language for each backend in Cantor. Since each programming language has its own planning and scheduling, it is common for some versions of a language not to be correctly supported by a Cantor backend (Sage, I am thinking of you). This feature presents a “recommended” version of the programming language supported by each Cantor backend, meaning that version was tested and will work correctly with Cantor. It is more of a workaround to maintain the developer’s sanity while trying to support 11 different programming languages.

Another feature I worked on, though it is not finished yet, is an option to select different LaTeX processors in Cantor. Currently there are several LaTeX processors available (like pdflatex, pdftex, luatex, xetex, …), some of them with additional features. This option will increase the versatility of Cantor and will allow the use of modern processors and their features in the software.

In addition to these tasks I fixed some bugs and helped Fernando Telles, my past SoK student, with some tasks in Cantor.

(As in past editions)², at LaKademy 2017 I also worked on a set of tasks related to the management and promotion of KDE Brazil. I investigated how to bring back our unified feed of Brazilian blog posts, as in the old Planet KDE Português, used to send updates about KDE in Brazil to our social networks, and Fred implemented the solution. I then updated this feed in the social networks, updated the e-mail contact used in these networks, and started a Bootstrap version of the LaKademy website (but the team is migrating to WordPress, so I think it will not be used). I also did a large revision of the tasks on the KDE Brazil workboard, migrated last year from the TODO website. Besides all this, we had the promo meeting to discuss our actions in Latin America – all the tasks were documented in the workboard.

Of course, just as we worked intensely during those days, we also had a lot of fun between one push and another. LaKademy is also an opportunity to see old friends and make new ones. It is amazing to see the KDE fellows again, and I invite newcomers to join us and come to the next LaKademy editions!

This year we had a problem that we must address in the next edition – all the participants were Brazilians. We need to think about how to integrate people from other Latin American countries into LaKademy. It would be bad if the event became just an Akademy-BR.

Filipe and Chicão

So, I send my greetings to the community and take up the mission of continuing to work to grow Latin America into an important player in the development and future of KDE.

May 24, 2017 08:37 PM


EuroPython 2017: Social event tickets available

After trainings and talks, EuroPython is going (Coco)nuts! Join us for the EuroPython social event in Rimini, which will be held in the Coconuts Club on Thursday, July 13th. 

Tickets for the social event are not included in the conference ticket. They are now available in our ticket store (listed under ‘Goodies’) for the price of 25 €. The social event ticket includes an aperitivo buffet of Italian specialties, a choice of two drinks and a reserved area in the club from 19:00 to 22:00. The club will open to the general public after that. 


Leave the conference ticket fields blank if you only want to purchase social event tickets.

Take this opportunity to network and socialize with other Python attendees and buy your social event ticket now on the registration page.



EuroPython 2017 Team
EuroPython Society
EuroPython 2017 Conference

May 24, 2017 02:43 PM


Enthought Receives 2017 Product of the Year Award From National Instruments LabVIEW Tools Network

Python Integration Toolkit for LabVIEW recognized for extending LabVIEW connectivity and bringing the power of Python to applications in Test, Measurement and the Industrial Internet of Things (IIoT)

AUSTIN, TX – May 24, 2017 Enthought, a global leader in scientific and analytic computing solutions, was honored this week by National Instruments with the LabVIEW Tools Network Platform Connectivity 2017 Product of the Year Award for its Python Integration Toolkit for LabVIEW.

First released at NIWeek 2016, the Python Integration Toolkit enables fast, two-way communication between LabVIEW and Python. With seamless access to the Python ecosystem of tools, LabVIEW users are able to do more with their data than ever before. For example, using the Toolkit, a user can acquire data from test and measurement tools with LabVIEW, perform signal processing or apply machine learning algorithms in Python, display the results in LabVIEW, then share them using a Python-enabled web dashboard.


Click to see the webinar “Using Python and LabVIEW to Rapidly Solve Engineering Problems” to learn more about adding capabilities such as machine learning by extending LabVIEW applications with Python.

“Python is ideally suited for scientists and engineers due to its simple, yet powerful syntax and the availability of an extensive array of open source tools contributed by a user community from industry and R&D,” said Dr. Tim Diller, Director, IIoT Solutions Group at Enthought. “The Python Integration Toolkit for LabVIEW unites the best elements of two major tools in the science and engineering world and we are honored to receive this award.”

Key benefits of the Python Integration Toolkit for LabVIEW from Enthought:

“Add-on software from our third-party developers is an integral part of the NI ecosystem, and we’re excited to recognize Enthought for its achievement with the Python Integration Toolkit for LabVIEW,” said Matthew Friedman, senior group manager of the LabVIEW Tools Network at NI.

The Python Integration Toolkit is available for download via the LabVIEW Tools Network, and also includes the Enthought Canopy analysis environment and Python distribution. Enthought’s training, support and consulting resources are also available to help LabVIEW users maximize their value in leveraging Python.

For more information on Enthought’s Python Integration Toolkit for LabVIEW, visit


Additional Resources

Product Information

Python Integration Toolkit for LabVIEW product page

Download a free trial of the Python Integration Toolkit for LabVIEW


Webinar: Using Python and LabVIEW to Rapidly Solve Engineering Problems | Enthought
April 2017

Webinar: Introducing the New Python Integration Toolkit for LabVIEW from Enthought
September 2016

About Enthought

Enthought is a global leader in scientific and analytic software, consulting, and training solutions serving a customer base comprised of some of the most respected names in the oil and gas, manufacturing, financial services, aerospace, military, government, biotechnology, consumer products and technology industries. The company was founded in 2001 and is headquartered in Austin, Texas, with additional offices in Cambridge, United Kingdom and Pune, India. For more information visit and connect with Enthought on Twitter, LinkedIn, Google+, Facebook and YouTube.

About NI

Since 1976, NI has made it possible for engineers and scientists to solve the world’s greatest engineering challenges with powerful platform-based systems that accelerate productivity and drive rapid innovation. Customers from a wide variety of industries – from healthcare to automotive and from consumer electronics to particle physics – use NI’s integrated hardware and software platform to improve the world we live in.

About the LabVIEW Tools Network

The LabVIEW Tools Network is the NI app store equipping engineers and scientists with certified, third-party add-ons and apps to complete their systems. Developed by industry experts, these cutting-edge technologies expand the power of NI software and modular hardware. Each third-party product is reviewed to meet specific guidelines and ensure compatibility. With hundreds of products available, the LabVIEW Tools Network is part of a rich ecosystem extending the NI Platform to help customers positively impact our world. Learn more about the LabVIEW Tools Network at

LabVIEW, National Instruments, NI and NIWeek are trademarks of National Instruments. Enthought, Canopy and Python Integration Toolkit for LabVIEW are trademarks of Enthought, Inc.

Media Contact

Courtenay Godshall, VP, Marketing, +1.512.536.1057,

The post Enthought Receives 2017 Product of the Year Award From National Instruments LabVIEW Tools Network appeared first on Enthought Blog.

May 24, 2017 01:42 PM