
Planet Python

Last update: May 29, 2017 10:47 AM

May 29, 2017


PyBites

Code Challenge 21 - Electricity Cost Calculation App

Hi Pythonistas, a new week, a new 'bite' of Python coding! This week we will get you to create a simple app to calculate the monetary cost of using an electrical device. Enjoy!

May 29, 2017 09:00 AM

Code Challenge 20 - Object Oriented Programming Fun - Review

It's review time again. Wow: challenge #20 already! We can't believe we have worked through so many. We also keep receiving amazing PRs, awesome!

May 29, 2017 07:00 AM

May 28, 2017


PyBites

Twitter digest 2017 week 21

Every weekend we share a curated list of 15 cool things (mostly Python) that we found / tweeted throughout the week.

May 28, 2017 06:01 PM


Polyglot.Ninja()

Django REST Framework: JSON Web Tokens (JWT)

(This post is a part of a tutorial series on Building REST APIs in Django)

Our last post was about Authentication and Permissions and we covered the available methods of authentication in Django REST Framework. In that post, we learned how to use the built-in token-based authentication in DRF. In this post, we will learn more about JSON Web Tokens, aka JWT, and we will see whether JWT can be a better authentication mechanism for securing our REST APIs.

Understanding JSON Web Tokens (JWTs)

We have actually written a detailed blog post about JSON Web Tokens earlier. In case you have missed it, you probably should read it first. We have also described how to use JWT with Flask – reading that one might also help you better understand how things work. And of course, we will briefly cover the idea of JWT in this post as well.

If we want to put it simply – you take some data in JSON format, you sign it with a secret and you get a string that you use as a token. You (your web app, actually) pass this token to the user when s/he logs in. The user takes the token and, on subsequent requests, passes it back in the “Authorization” header. The web app takes this token back and “decodes” it to the original JSON payload. It can now read the stored data (identity of the user, token expiry and other data which was embedded in the JSON). The same secret is used to verify the token, so third party attackers can’t just forge a JWT. Note that the payload itself is only base64-encoded and can be read by anyone; the signature only guarantees that it has not been tampered with. We would want our token to be small in size, so the JSON payload is usually intentionally kept small. And of course, it should not contain any sensitive information like the user’s password.
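For illustration only, here is roughly what that flow looks like with the standalone PyJWT library (not the DRF package we use below; the secret and payload are made-up placeholders):

import datetime
import jwt  # pip install pyjwt

SECRET = 'keep-this-out-of-version-control'  # placeholder secret

# Encode: a small JSON payload, signed with the secret
payload = {
    'user_id': 42,
    'exp': datetime.datetime.utcnow() + datetime.timedelta(hours=1),
}
token = jwt.encode(payload, SECRET, algorithm='HS256')

# Decode: the signature is verified with the same secret, so a
# tampered or forged token raises an exception here
data = jwt.decode(token, SECRET, algorithms=['HS256'])
print(data['user_id'])  # 42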

JWT vs DRF’s Token Based Authentication

So in our last blog post, we saw that Django REST Framework includes a token based authentication system which can generate a token for the user. That works fine, right? Why would we want to switch to JSON Web Tokens instead?

Let’s first see how DRF generates the tokens:

    def generate_key(self):
        return binascii.hexlify(os.urandom(20)).decode()

It’s just random. The generated token cannot in any way be derived from the user it belongs to. So how does it associate a token with a user? It stores the token and a reference to the user in a database table. Here comes the first point – while using DRF’s token based auth, we need to query the database on every request (unless, of course, we have cached that token). But what if we have multiple application servers? Now we need all our application servers to connect to the same database or the same cache server. How will that scale when the project gets really, really big? What if we want to provide single sign-on across multiple services? We would need to maintain a central auth service that the other services ask to verify a token. Can JWT simplify these things for us?

JWT is just signed (and base64-encoded) JSON data. As long as a web service has access to the secret used to sign the data, it can verify the token and read the embedded data. It doesn’t need any database calls. You can generate the token from one service and other services can read and verify it just fine. It’s more efficient and simply scales better.

JWT in Django REST Framework

DRF does not directly support JWTs out of the box. But there’s an excellent package that adds support for it. Let’s see how easily we can integrate JWT in our REST APIs.

Install and Configure

Let’s first install the package using pip –

pip install djangorestframework-jwt

That should install the package. Now we need to add rest_framework_jwt.authentication.JSONWebTokenAuthentication to the default authentication classes in REST Framework settings.

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework_jwt.authentication.JSONWebTokenAuthentication',
        'rest_framework.authentication.BasicAuthentication',
        'rest_framework.authentication.SessionAuthentication',
        'rest_framework.authentication.TokenAuthentication',
    )
}

We added it to the top of the list. Next, we just have to add its built-in view to our urlpatterns.

from rest_framework_jwt.views import obtain_jwt_token

urlpatterns = router.urls + [
    url(r'^jwt-auth/', obtain_jwt_token),
]

Obtain a Token

The obtain_jwt_token view will check the user credentials and provide a JWT if everything goes alright. Let’s try it.

$ curl --request POST \
  --url http://localhost:8000/api/jwt-auth/ \
  --header 'content-type: application/json' \
  --data '{"username": "test_user", "password": "awesomepwd"}'

{"token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyX2lkIjoyLCJlbWFpbCI6IiIsInVzZXJuYW1lIjoidGVzdF91c2VyIiwiZXhwIjoxNDk1OTkyOTg2fQ.sWSzdiBNNcXDqhcdcjWKjwpPsVV7tCIie-uit_Yz7W0"}

Awesome, everything worked just fine. We have got our token too. What do we do next? We use this token to access a secured resource.

Using the obtained JWT

We need to pass the token in the form of JWT <token> as the value of the Authorization header. Here’s a sample curl request:

$ curl -H "Content-Type: application/json" -H "Authorization: JWT eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyX2lkIjoyLCJlbWFpbCI6IiIsInVzZXJuYW1lIjoidGVzdF91c2VyIiwiZXhwIjoxNDk1OTkyOTg2fQ.sWSzdiBNNcXDqhcdcjWKjwpPsVV7tCIie-uit_Yz7W0" -X GET  http://localhost:8000/api/subscribers/

[{"id":1,"name":"Abu Ashraf Masnun","age":29,"email":"masnun@polyglot.ninja"},{"id":2,"name":"Abu Ashraf Masnun","age":29,"email":"masnun@polyglot.ninja"},{"id":3,"name":"Abu Ashraf Masnun","age":29,"email":"masnun@polyglot.ninja"},{"id":4,"name":"Abu Ashraf Masnun","age":29,"email":"masnun@polyglot.ninja"}]

So our token worked fine! Cool!
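If you prefer Python over curl, a rough equivalent of those two steps with the requests library (same placeholder URL and credentials as above) looks like this:

import requests

BASE = 'http://localhost:8000/api'

# Step 1: obtain a token from the obtain_jwt_token view
resp = requests.post(BASE + '/jwt-auth/',
                     json={'username': 'test_user', 'password': 'awesomepwd'})
token = resp.json()['token']

# Step 2: pass the token back in the Authorization header
headers = {'Authorization': 'JWT ' + token}
subscribers = requests.get(BASE + '/subscribers/', headers=headers)
print(subscribers.json())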

Where to go next?

Now that you have seen how simple and easy it is to add JSON Web Token based authentication to Django REST Framework, you should probably dive deeper into the package documentation and its more advanced configuration options.

In the future, we shall try to cover more about Django, Django REST Framework and Python in general. If you liked the content, please subscribe to the mailing list so we can notify you when we post new content.

The post Django REST Framework: JSON Web Tokens (JWT) appeared first on Polyglot.Ninja().

May 28, 2017 05:52 PM


Sandipan Dey

Some more Image Processing: Ostu’s Method, Hough Transform and Motion-based Segmentation with Python and OpenCV

Some of the following problems appeared in the lectures and the exercises in the coursera course Image Processing (by Northwestern University). Some of the following descriptions of the problems are taken from the exercise's description. 1. Ostu's method for automatic thresholding to get binary images We need to find a threshold to binarize an image, … Continue reading Some more Image Processing: Ostu's Method, Hough Transform and Motion-based Segmentation with Python and OpenCV

May 28, 2017 09:03 AM

May 27, 2017


Simple is Better Than Complex

How to Configure Mailgun To Send Emails in a Django Project

In this tutorial you will learn how to setup a Django project to send emails using the Mailgun service.

Previously I published a post on the blog about how to configure SendGrid to send emails. It’s a great service, but they don’t offer free plans anymore; nowadays it’s just a 30-day free trial. So, I thought about sharing the whole email setup with a better option to get started.

Mailgun is great and super easy to set up. The first 10,000 emails you send are always free. The only downside is that if you don’t provide payment information (even though you are only going to use the first 10,000 free emails), there will be some limitations, such as having to configure “Authorized Recipients” for custom domains, which pretty much makes it useless unless you already know the email addresses you will be sending to.

Anyway, let’s get started.


Initial Setup

Go to www.mailgun.com and create a free account. Sign in with your Mailgun account, click on Domains and then Add New Domain.

Add New Domain Button Screen Shot

I will set up the Mailgun service for a domain I own, “www.bottlenose.co”. For the setup, it’s advised to use the “mg” subdomain, so you will need to provide the Domain Name like this:

mg.bottlenose.co

From now on, always replace bottlenose.co with your own domain name.

Add New Domain Screen Shot

Click on Add Domain.


Domain Verification & DNS

To perform the next steps, you will need to access the DNS provider of your domain. Normally it’s managed by the service/website where you registered your domain name. In my case, I registered the “www.bottlenose.co” domain using Namecheap.

The next steps should be more or less the same. Try to find something that says “manage”, “DNS records”, “Advanced DNS” or something similar.

DNS Records For Sending

In the Mailgun website you will see the following instructions:

DNS Records For Sending Screen Shot

Add the DNS records accordingly in your DNS provider:

Namecheap Advanced DNS TXT Records Screen Shot

Namecheap Advanced DNS MX Records Screen Shot

DNS Records For Tracking

In a similar way, add now a CNAME for tracking opens, clicks etc. You will see those instructions:

DNS Records For Tracking Screen Shot

Follow them accordingly:

Namecheap Advanced DNS CNAME Record Screen Shot

Remember, the steps in the previous screenshot are supposed to be done in your DNS provider!


Wait For Your Domain To Verify

Now it’s a matter of patience. We gotta wait for the DNS to propagate. Sometimes it can take an eternity to propagate. But my experience with brand new domains is that it usually happens very quickly. Wait like 5 minutes and give it a shot.
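If you are impatient, you can also check propagation yourself from the command line with dig, replacing the subdomain with your own; once the TXT and MX records show up, the verification below should succeed:

dig TXT mg.bottlenose.co +short
dig MX mg.bottlenose.co +short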

Click on Continue to Domain Overview:

Continue to Domain Overview Button Screen Shot

You will now see something like this:

Domain Overview Screen Shot

Click on Check DNS Records Now and see if Mailgun can verify your domain (remember, this process can take up to 48 hours!).

If the verification was successful, you will see the screen below:

Active Domain Screen Shot


Configuring Django to Send Emails

To configure your Django project, add the following parameters to your settings.py:

EMAIL_HOST = 'smtp.mailgun.org'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'postmaster@mg.bottlenose.co'
EMAIL_HOST_PASSWORD = 'mys3cr3tp4ssw0rd'
EMAIL_USE_TLS = True

Note that we have some sensitive information here, such as the EMAIL_HOST_PASSWORD. You should not put it directly in your settings.py file or commit it to a public repository. Instead, use environment variables or the Python library Python Decouple. I have also written a tutorial on how to use Python Decouple.
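As a rough sketch of that approach with Python Decouple (assuming the values live in a .env file that is kept out of version control), the settings above would become:

from decouple import config

EMAIL_HOST = 'smtp.mailgun.org'
EMAIL_PORT = 587
EMAIL_HOST_USER = config('EMAIL_HOST_USER')
EMAIL_HOST_PASSWORD = config('EMAIL_HOST_PASSWORD')
EMAIL_USE_TLS = True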

Here is a very simple snippet to send an email:

from django.core.mail import send_mail

send_mail('subject', 'body of the message', 'noreply@bottlenose.co', ['vitor@freitas.com'])

And here is how the email will look, properly displaying your domain:

Email Sent

If you need to keep reading about the basic email functions, check my previous article about email: How to Send Email in a Django App.

May 27, 2017 05:45 PM


Gocept Weblog

Move documentation from pythonhosted.org to readthedocs.io

Today we migrated the documentation of zodb.py3migrate from pythonhosted.org to zodbpy3migrate.readthedocs.io.

To redirect the old pythonhosted.org page to the new location, we uploaded a small HTML page. This requires a directory – for this example I name it redir – containing a file named index.html with the following content:

<html>
<head>
 <title>zodb.py3migrate</title>
 <meta http-equiv="refresh"
       content="0; url=http://zodbpy3migrate.rtfd.io" />
</head>
<body>
  <p>
    <a href="http://zodbpy3migrate.rtfd.io">
      Redirect to zodbpy3migrate.rtfd.io
    </a>
  </p>
</body>
</html>

To upload it to pythonhosted.org I called:

py27 setup.py upload_docs --upload-dir=redir

Now pythonhosted.org/zodb.py3migrate points to read the docs.

Credits: The HTML was taken from the Trello board of the avocado-framework.


May 27, 2017 12:38 PM


Catalin George Festila

Using Python for .NET the clr python module - part 001 .

Python for .NET is available as a source release and as a Windows installer for various versions of Python and the common language runtime from the Python for .NET website.
Let's install it under Windows 10.

C:\Python27\Scripts>pip install pythonnet
Collecting pythonnet
Downloading pythonnet-2.3.0-cp27-cp27m-win32.whl (58kB)
100% |################################| 61kB 740kB/s
Installing collected packages: pythonnet
Successfully installed pythonnet-2.3.0
Now I will show you how to use forms and buttons.
First, you need to put the Python code into script files and run them.
The first example is simple:
import clr

clr.AddReference("System.Windows.Forms")

from System.Windows.Forms import Application, Form

class IForm(Form):

    def __init__(self):
        self.Text = 'Simple'
        self.Width = 640
        self.Height = 480
        self.CenterToScreen()

Application.Run(IForm())
The next example comes with one button and tooltips for the form and the button:
import clr

clr.AddReference("System.Windows.Forms")
clr.AddReference("System.Drawing")

from System.Windows.Forms import Application, Form
from System.Windows.Forms import Button, ToolTip
from System.Drawing import Point, Size

class IForm(Form):

    def __init__(self):
        self.Text = 'Tooltips'
        self.CenterToScreen()
        self.Size = Size(640, 480)

        tooltip = ToolTip()
        tooltip.SetToolTip(self, "This is a Form")

        button = Button()
        button.Parent = self
        button.Text = "Button"
        button.Location = Point(50, 70)

        tooltip.SetToolTip(button, "This is a Button")

Application.Run(IForm())
This is the result of this python script.

Another example is how to see the interfaces that are part of a .NET assembly:
>>> import System.Collections
>>> interfaces = [entry for entry in dir(System.Collections)
... if entry.startswith('I')]
>>> for entry in interfaces:
... print entry
...
ICollection
IComparer
IDictionary
IDictionaryEnumerator
IEnumerable
IEnumerator
IEqualityComparer
IHashCodeProvider
IList
IStructuralComparable
IStructuralEquatable

May 27, 2017 10:35 AM


Talk Python to Me

#113 Dedicated AI chips and running old Python faster at Intel

Where do you run your Python code? No, not Python 3, Python 2, PyPy or the other implementations. I'm thinking waaaaay lower than that. This week we are talking about the actual chips that execute our code.

We catch up with David Stewart and meet Suresh Srinivas and Sergey Maidanov from Intel. We talk about how they are working at the silicon level to make even Python 2 run faster and touch on dedicated AI chips that go beyond what is possible with GPU computation.

Links from the show:

Intel Distribution for Python: software.intel.com/en-us/python-distribution
Intel Commits To Nervana Roadmap For AI: forbes.com
David Stewart: evangelists.intel.com/bio/David_Stewart
David on Twitter: @davest
Suresh Srinivas: linkedin.com
Sergey Maidanov: linkedin.com

Sponsored Links
Hired: hired.com/talkpythontome
Talk Python Courses: training.talkpython.fm

May 27, 2017 08:00 AM


Nigel Babu

Pycon Pune 2017

I haven’t attended a Pycon since 2013. Now that I started writing this post, I’ve realized it’s been nearly 4 years since and Python is the language I use the most. The last Pycon was a great place to meet people and make friends. Among others, I recall clearly that I met Sankarshan, my current manager, for the first time there. Pycon Pune is also the first time I’m speaking at a single track event. There’s something scary about so many people paying attention to you and making sure they’re not bored.

The venue for the event was gorgeous (as evidenced by the group picture that nearly looks photoshopped!) and the event was well organized, I have to say. My only critical feedback is the lack of a space outside of the main conference area for a hallway track. The auditorium had air conditioning and everyone went in thanks to it. If we had a little bit of space with power and air conditioning that you could use when you wanted to have a conversation, that would be highly beneficial. I like attending large events, but sometimes the introvert in me takes over and I want to spend more time either alone or with less interaction. LinuxCon EU was great about this, going so far as to have a quiet space, which I found useful.

I had trepidations about my talk. It wasn't exactly about solving a problem with Python. It was about problems I've faced throughout my career and how I've seen other projects solve them. Occasionally, those problems or solutions were related to Python, sometimes they were related to my work on Gluster, and often to Mozilla. I'm glad it was well received and I had a lot of conversations with people after the talk about the pains they face at their own organizations. I'll be the first to admit that I don't practice what I preach. We're still working on getting our release management to a better place.

Some of the memorable sessions include Hanza's keynote about his open source life, Katie's talk about accessibility, Dr. Terri's talk about security, and Noufal's talk about CFFI. All videos should be online on the Pycon Pune channel, including mine.

May 27, 2017 05:20 AM


Weekly Python StackOverflow Report

(lxxv) stackoverflow python report

These are the ten most rated questions at Stack Overflow last week.
Between brackets: [question score / answers count]
Build date: 2017-05-27 03:23:02 GMT


  1. Combine 2 pandas dataframes according to boolean Vector - [9/3]
  2. The accessing time of a numpy array is impacted much more by the last index compared to the second last - [8/4]
  3. Alexa request validation in python - [8/2]
  4. Broadcast 1D array against 2D array for lexsort : Permutation for sorting each column independently when considering yet another vector - [8/2]
  5. Is there a Python csv file writer that can match data.table's fwrite speed? - [8/0]
  6. Detecting C types limits ("limits.h") in python? - [7/1]
  7. Python bug: null byte in input prompt - [7/0]
  8. How to I factorize a list of tuples? - [6/5]
  9. How does isinstance work for List? - [6/1]
  10. How do i move the offset of the 'index' method of 'list' - [5/6]

May 27, 2017 03:23 AM

May 26, 2017


Enthought

Enthought at National Instruments’ NIWeek 2017: An Inside Look

This week I had the distinct privilege of representing Enthought at National Instruments‘ 23rd annual user conference, NIWeek 2017. National Instruments is a leader in test, measurement, and control solutions, and we share many common customers among our global scientific and engineering user base.

NIWeek kicked off on Monday with Alliance Day, where my colleague Andrew Collette and I went on stage to receive the LabVIEW Tools Network 2017 Product of the Year Award for Enthought’s Python Integration Toolkit, which provides a bridge between Python and LabVIEW, allowing you to create VI’s (virtual instruments) that make Python function and object method calls. Since its release last year, the Python Integration Toolkit has opened up access to a broad range of new capabilities for LabVIEW users,  by combining the best of Python with the best of LabVIEW. It was also inspiring to hear about the advances being made by other National Instruments partners. Congratulations to the award winners in other categories (Wineman Technology, Bloomy, and Moore Good Ideas)!

On Wednesday, Andrew gave a presentation titled “Building and Deploying Python-Powered LabVIEW Applications” to a standing-room only crowd.  He gave some background on the relative strengths of Python and LabVIEW (some of which is covered in our March 2017 webinar “Using Python and LabVIEW to Rapidly Solve Engineering Problems“) and then showcased some of the capabilities provided by the toolkit, such as plotting data acquisition results live to a web server using plotly, which is always a crowd-pleaser (you can learn more about that in the blog post “Using Plotly from LabVIEW via Python”).  Other demos included making use of the Python scikit-learn library for machine learning, (you can see Enthought’s CEO Eric Jones run that demo here, during the 2016 NIWeek keynotes.)

For a mechanical engineer like me, attending NIWeek is a bit like giving a kid a holiday in a candy shop.  There was much to admire on the expo floor, with all kinds of mechatronic gizmos and gadgets.  I was most interested by the lightning-fast video and image processing possible with NI’s FPGA systems, like the part sorting system shown below.  Really makes me want to play around with nifpga.

Another thing really gaining traction is the implementation of machine learning for a number of applications. I attended one talk titled “Deep Learning With LabVIEW and Acceleration on FPGAs” that demonstrated image classification using a neural network and talked about strategies to reduce the code size to get it to fit on an FPGA.

Finally, of course, I was really excited by all of the activity in the Industrial Internet of Things (IIoT), which is an area of core focus for Enthought.  We have been in the big data analytics game for a long time, and writing software for hard science is in our company DNA. But this year especially, starting with the AIChE 2017 Spring Meeting and now at NIWeek 2017, it has been really energizing to meet with industry leaders and see some of the amazing things that are being implemented in the IIoT.  National Instruments has been a leader in the test and measurement sector for a long time, and they have been pioneers in IIoT.  Now it is easy to download and install an interface to Amazon S3 for LabVIEW, and just like that, your sensor is now a connected sensor … and your data is ready for analysis in Enthought’s Canopy Data platform.

After immersion in NIWeek, I guess you could say, I’ve been “LabVIEWed”:

The post Enthought at National Instruments’ NIWeek 2017: An Inside Look appeared first on Enthought Blog.

May 26, 2017 08:44 PM


Continuum Analytics News

Let’s Talk PyCon 2017 - Thoughts from the Anaconda Team

Friday, May 26, 2017
Peter Wang
Chief Technology Officer & Co-Founder

We’re not even halfway through the year, but 2017 has already been filled to the brim with dynamic presentations and action-packed conferences. This past week, the Anaconda team was lucky enough to attend PyCon 2017 in Portland, OR - the largest annual gathering for the community that uses and develops Python. We came, we saw, we programmed, we networked, we spoke, we ate, we laughed, and we learned. Some of the team members who attended shared details of their experiences - take a look and, if you attended, share your thoughts in the comment section below or on Twitter @ContinuumIO.

Did anything surprise you at PyCon? 

“I was surprised how many attendees were using Python for data. I missed last year's PyCon, and so comparing against PyCon 2015, there was a huge growth in the last two years. During Katy Huff's keynote, she asked how many people in the audience had degrees in science, and something like 40% of the people raised their hands. In the past, this was not the case - PyCon had a lot more "traditional" software developers.” - Peter Wang, CTO & co-founder, Anaconda

“Yes - how diverse the community is. Looking at the session topics provides an indicator of this, but having had somewhere between 60-80 interactions at the Anaconda booth, there was a huge range of discussions, all the way from “Tell me more about data science” to “I've been using Anaconda for years and am a huge fan” or “conda saved my life.” I also saw a huge range of roles and backgrounds in attendees from enterprise, government, military, academic, students, and independent consultants. It was great to see a number of large players here: Facebook/Instagram, LinkedIn, Microsoft, Google, and Intel were all highly visible, supporting the community.” - Stephen Kearns, Product Marketing Manager, Anaconda

“What really struck me this year was how heavy the science and data science angles were from speakers, topics, exhibitors, and attendees.  The Thursday and Friday morning keynotes were Science + Python (Jake Vanderplas and Katy Huff), then the Sunday closing keynote was about containers and Kubernetes (Kelsey Hightower).” - Ian Stokes-Rees, Computational Scientist, Anaconda 

What was the most popular topic people were buzzing about? Was this surprising to you? 

“There's definitely a good feeling about the transition to Python 3 really happening, which has been a point of angst in the Python community for several years. To me, the sense of closure around this was palpable, in that people could spend their emotional energy talking about other things and not griping about ‘Python 2 vs. 3.’” - Peter Wang

“The talks! So great to see how fast the videos for the talks were getting posted.” - Stephen Kearns 

Did you attend any talks? Did any of them stand out? 

“Jake Vanderplas presented a well-researched and well-structured talk on the Python visualization landscape. The keynotes were all excellent. I appreciated the Instagram folks sharing their Python 3 migration story with everyone.” - Peter Wang

“I gave some at-capacity tutorials on “Data Science Apps with Anaconda,” showing off our new Anaconda Project deployment capability, and on “Accelerating your Python Data Science code with Dask and Numba.” - Ian Stokes-Rees

How was the buzz around Anaconda at PyCon? 

“Awesome - we exhausted our entire supply of Anaconda Crew T-Shirts by the end of the second day. A conference first!” - Ian Stokes-Rees 

“It was great, and very positive. Lots of people were very interested in our various open source projects, but we also got a lot of interest from attendees in our enterprise offerings: commercially-supported Anaconda, our premium training, and the Anaconda Enterprise Data Science platform. In previous years, there were not as many people who I would characterize as "potential customers,” and this was a very positive change for us. I also think that it is a sign that the PyCon attendee audience is also changing, to include more people from the data science and machine learning ecosystem.” - Peter Wang

“Anaconda had lots of partnership engagement opportunities at the show, specifically with Intel, Microsoft and ESRI. It was exciting to hear Intel talk about how they’re using Anaconda as the channel for delivering optimized high performance Python, and great to see Microsoft giving SQL Server demonstrations of server-side Python using Anaconda. Lastly, great to hear that ESRI is increasing its Python interfaces to ArcGIS and have started to make the ArcGIS Python package available as a conda package from Anaconda Cloud.” - Ian Stokes-Rees

 

May 26, 2017 04:58 PM


Nikola

Nikola v7.8.6 is out!

On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v7.8.6. It fixes some bugs and adds new features.

What is Nikola?

Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and input in many popular markup formats, such as reStructuredText and Markdown — and can even turn Jupyter (IPython) Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which is rebuilding only what has been changed).

Find out more at the website: https://getnikola.com/

Downloads

Install using pip install Nikola or download tarballs on GitHub and PyPI.

Or if you prefer, Snapcraft packages are now built automatically, and Nikola v7.8.6 will be available in the stable channel.

Changes

Features

  • Guess file format from file name on new_post (Issue #2798)
  • Use BaguetteBox as lightbox in base theme (Issue #2777)
  • New deduplicate_ids filter, for preventing duplication of HTML id attributes (Issue #2570)
  • Ported gallery image layout to base theme (Issue #2775)
  • Better error handling when posts can't be parsed (Issue #2771)
  • Use .theme files to store theme metadata (Issue #2758)
  • New add_header_permalinks filter, for Sphinx-style header links (Issue #2636)
  • Added alternate links for gallery translations (Issue #993)

Bugfixes

  • Use locale.getdefaultlocale() for better locale guessing (credit: @madduck)
  • Save dependencies for template hooks properly (using .__doc__ or .template_registry_identifier for callables)
  • Enable larger panorama thumbnails (Issue #2780)
  • Disable archive_rss link handler, which was useless because no such RSS was ever generated (Issue #2783)
  • Ignore files ending with "bak" (Issue #2740)
  • Use page.tmpl by default, which is inherited from story.tmpl (Issue #1891)

Other

  • Limit Jupyter support to notebook >= 4.0.0 (it already was in requirements-extras.txt; Issue #2733)

May 26, 2017 01:49 PM


EuroPython Society

EuroPython 2017: Full session list online

After the final review round, we are now happy to announce the complete list of more than 200 accepted sessions.


EuroPython 2017 Session List

Here’s what we have on offer:

for a total of 203 sessions, arranged in 5 tracks from Monday, July 10, thru Friday, July 14, in addition to the Beginners’ Day and Django Girls workshops on Sunday, July 9, and the Sprints on the weekend July 15-16.

Please see the session list for details and abstracts. In case you wonder what poster, interactive and help desk sessions are, please check the call for proposals.

Additional help desk slots available

We have 5 additional help desk slots available. If you are interested in arranging one, please see our Call for Proposals for details and contact program@europython.eu to submit your proposal. Organizers of help desks are eligible for a 25% ticket discount.

Schedule to be announced next week

Our program work group is now working hard on scheduling all these sessions. We expect to announce the final schedule by the end of next week.

We will use the same conference schedule layout as in previous years:

A typical conference day will open the venue at 08:30, have the first session around 09:00 and end at 18:30. Lunch breaks are scheduled for around 13:15. Please note that we don’t serve breakfast.

Aside: If you haven’t done so yet, please get your EuroPython 2017 ticket soon. We will switch to on-desk rates in June, which will cost around 30% more than the regular rates.

Enjoy,

EuroPython 2017 Team
EuroPython Society
EuroPython 2017 Conference

May 26, 2017 11:26 AM


EuroPython

EuroPython 2017: Full session list online

After the final review round, we are now happy to announce the complete list of more than 200 accepted sessions.


EuroPython 2017 Session List

Here’s what we have on offer:

for a total of 203 sessions, arranged in 5 tracks from Monday, July 10, thru Friday, July 14, in addition to the Beginners’ Day and Django Girls workshops on Sunday, July 9, and the Sprints on the weekend July 15-16.

Please see the session list for details and abstracts. In case you wonder what poster, interactive and help desk sessions are, please check the call for proposals.

Additional help desk slots available

We have 5 additional help desk slots available. If you are interested in arranging one, please see our Call for Proposals for details and contact program@europython.eu to submit your proposal. Organizers of help desks are eligible for a 25% ticket discount.

Schedule to be announced next week

Our program work group is now working hard on scheduling all these sessions. We expect to announce the final schedule by the end of next week.

We will use the same conference schedule layout as in previous years:

A typical conference day will open the venue at 08:30, have the first session around 09:00 and end at 18:30. Lunch breaks are scheduled for around 13:15. Please note that we don’t serve breakfast.

Aside: If you haven’t done so yet, please get your EuroPython 2017 ticket soon. We will switch to on-desk rates in June, which will cost around 30% more than the regular rates.

Enjoy,

EuroPython 2017 Team
EuroPython Society
EuroPython 2017 Conference

May 26, 2017 11:18 AM


Django Weekly

DjangoWeekly Issue 41 - Django Admin Customisation Video, Deployment, Pros and Cons of Django

Worthy Read

Django's admin is a great tool but it isn't always the easiest or friendliest to set up and customize. The ModelAdmin class has a lot of attributes and methods to understand and come to grips with. On top of these attributes, the admin's inlines, custom actions, custom media, and more mean that, really, you can do anything you need with the admin...if you can figure out how. The docs are good but leave a lot to experimentation and the code is notoriously dense. In this tutorial, you'll learn the basics of setting up the admin so you can get your job done. Then we'll dive deeper and see how advanced features like autocomplete, Markdown editors, image editors, and others would be added to make the admin really shine.
admin

We help companies like Airbnb, Pfizer, and Artsy find great developers. Let us find your next great hire. Get started today.
sponsor

In this tutorial, you will learn how to deploy a Django application with PostgreSQL, Nginx, Gunicorn on a Red Hat Enterprise Linux (RHEL) version 7.3. For testing purpose I’m using an Amazon EC2 instance running RHEL 7.3.
installation

It helps to have an understanding of why upgrading the backend should be considered a necessary part of any website upgrade project. We offer 3 reasons, focusing on our specialty of Django-based websites: it increases security, reduces development and maintenance costs, and ensures support for future growth.
core-django

Know when and why code breaks: Users finding bugs? Searching logs for errors? Find + fix broken code fast!
sponsor

core-django

The most commonly suggested solution for long running processes is to use Celery. I suspect that if you need scalability or high volume, etc., Celery is the best solution. That said, I have been down the Celery rabbit hole more than once. It has never been pleasant. Since my needs are more modest, maybe there is a better alternative?
redis

If you are using rate limiting with Django Rest Framework you probably already know that it provides some pretty simple methods for setting global rate limits using DEFAULT_THROTTLE_RATES. You can also set rate limits for specific views using the throttle_classes property on class-based views or the @throttle_classes decorator for function based views.
DRF

Django’s postgres extensions support data types like DateRange, which is super useful when you want to query your database against dates; however, they have no form field to expose this in HTML. Handily, Django 1.11 has made it super easy to write custom widgets with complex HTML.
postgres
,
DateRange

DRF


Projects

drf-swagger-customization - 4 Stars, 0 Fork
This is a Django app with which you can modify and improve the autogenerated Swagger documentation from your DRF API.

Django-REST-Boilerplate - 0 Stars, 0 Fork
Boilerplate for Django projects using Django REST Framework.

May 26, 2017 09:00 AM


Import Python

ImportPython Issue 126 - Pycon US Videos, PYPI, SQLAlchemy, Debugging, Mocking and more

Worthy Read

Videos of the just concluded Pycon US 2017.
pyconus
,
pycon

This is a short post on how to get download statistics about any package from PyPI. Though there have been efforts in that direction from sites like PyPI Ranking, this post finds a better solution. Google has been generous enough to donate its BigQuery capacity to the Python Software Foundation. You can access the pypi downloads table through the BigQuery console. I ran a sample query to find out how my personal package arachne has been doing on PyPI.
bigquery
,
pypi

The breadth of SQLAlchemy’s SQL rendering engine, DBAPI integration, transaction integration, and schema description services are documented here. In contrast to the ORM’s domain-centric mode of usage, the SQL Expression Language provides a schema-centric usage paradigm.
SQLAlchemy

Users finding bugs? Searching logs for errors? Find + fix broken code fast!
sponsor

The various meanings and naming conventions around single and double underscores (“dunder”) in Python, how name mangling works and how it affects your own Python classes.
core-python

In Python, all object types inherit from one master object, declared as PyObject. This master object has all of the information Python needs to process a pointer to an object as an actual object.
PyObject

So we had a production case for months where the Python process was stuck for an indefinitely long time (even days) with absolutely zero activity, but the process was listed as active and running by Linux. A restart would fix the problem (as always) and the job would be live and kicking. Finally, after some time, I found the root cause, so I thought I would share it. For the purpose of the blog I’m going to simulate the behavior of my application in a sample Python script.
debugging

We help companies like Airbnb, Pfizer, and Artsy find great developers. Let us find your next great hire. Get started today.
sponsor

Back in April, Google announced that it will be shipping Headless Chrome in Chrome 59. Since the respective flags are already available on Chrome Canary, the Duo Labs team thought it would be fun to test things out and also provide a brief introduction to driving Chrome using Selenium and Python.
chromium
,
headless

Elizabeth is a Python library which helps generate mock data for various purposes. The library was written using only tools from the standard Python library, and therefore it doesn’t have any external dependencies. Currently the library supports 30 languages and 19 class providers, supplying various data.
mock

Python-boilerplate.com is a collection of Python boilerplates for getting started quickly and right-footed.
boilerplate

gensim

scraping

speech recognition

PyCon JP 2017 is Now Accepting Poster-Session Proposals! PyCon JP 2017 is a perfect opportunity to connect with a wide range of people. Poster sessions allow you to make the most of that opportunity.
pyconjp

pycon

idioms

Using Atom IDE.
atom

Python uses global to refer to module-global variables. There are no program-global variables in Python.
core-python
,
global

interview

Python's (pip's) requirements.txt file is the equivalent of package.json in the JavaScript / Node.js world. This requirements.txt file isn't as pretty as package.json, but it not only defines a version, it goes a step further, providing a sha hash to compare against to ensure package integrity:
pip
,
nodejs
,
requirement.txt


Jobs

Bangalore, Karnataka, India



Projects

baselines - 241 Stars, 28 Fork
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms

semilive - 92 Stars, 3 Fork
A Sublime Text plugin for "Live" coding

IPpy - 41 Stars, 3 Fork
Parallel testing of IP addresses and domains in python

content-downloader - 30 Stars, 10 Fork
Python package to download files on any topic in bulk.

v2ex-terminal - 27 Stars, 1 Fork
browse v2ex by a terminal

logging-spinner - 17 Stars, 0 Fork
Display spinners (in CLI) through Python standard logging.

aws-batch-genomics - 14 Stars, 4 Fork
Software that sets up and runs a genome sequencing analysis workflow using AWS Batch and AWS Step Functions.

twitter-bot - 5 Stars, 0 Fork
Python bot that tweets quotes and likes tweets.

jsonfeedvalidator - 4 Stars, 0 Fork
JSON Feed Validator

handcart - 3 Stars, 1 Fork
Command-line tools for project-oriented, human-sized Wikidata import

slacky - 0 Stars, 0 Fork
Slack client on the terminal with a GUI. This is a weekend project that started for me as a way to learn how to write old-style command line interfaces. Slack is a tool a lot of programmers use today, so I thought a lot of you would be interested in contributing to this effort.

May 26, 2017 08:55 AM


Catalin George Festila

OpenGL and OpenCV with python 2.7 - part 005.

In this tutorial I will show you how to install OpenCV on the Windows 10 operating system with any Python version.
You can use the same steps for other versions of python.
Get the wheel binary package opencv_python-3.2.0.7-cp27-cp27m-win32.whl from here.

C:\Python27>

C:\Python27>cd Scripts

C:\Python27\Scripts>pip install opencv_python-3.2.0.7-cp27-cp27m-win32.whl
Processing c:\python27\scripts\opencv_python-3.2.0.7-cp27-cp27m-win32.whl
Requirement already satisfied: numpy>=1.11.1 in c:\python27\lib\site-packages (from opencv-python==3.2.0.7)
Installing collected packages: opencv-python
Successfully installed opencv-python-3.2.0.7

C:\Python27\Scripts>python
Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 20:42:59) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Let's test it with default source code:

>>> import cv2
>>> dir(cv2)
['', 'ACCESS_FAST', 'ACCESS_MASK', 'ACCESS_READ', 'ACCESS_RW', 'ACCESS_WRITE',
'ADAPTIVE_THRESH_GAUSSIAN_C', 'ADAPTIVE_THRESH_MEAN_C', 'AGAST_FEATURE_DETECTOR_AGAST_5_8',
'AGAST_FEATURE_DETECTOR_AGAST_7_12D', 'AGAST_FEATURE_DETECTOR_AGAST_7_12S',
'AGAST_FEATURE_DETECTOR_NONMAX_SUPPRESSION', 'AGAST_FEATURE_DETECTOR_OAST_9_16',
...
Now we can very easily test this Python script example with the PyQt4 module and the cv2.resize function.
The example loads an image with the PyQt4 module.
from PyQt4.QtGui import QApplication, QWidget, QVBoxLayout, QImage, QPixmap, QLabel, QPushButton, QFileDialog
import cv2
import sys

app = QApplication([])
window = QWidget()
layout = QVBoxLayout(window)
window.setLayout(layout)
display = QLabel()
width = 600
height = 400
display.setMinimumSize(width, height)
layout.addWidget(display)
button = QPushButton('Load', window)
layout.addWidget(button)

def read_image():
    path = QFileDialog.getOpenFileName(window)
    if path:
        print str(path)
        picture = cv2.imread(str(path))
        if picture is not None:
            print width, height
            picture = cv2.resize(picture, (width, height))
            image = QImage(picture.tobytes(),   # The content of the image
                           picture.shape[1],    # The width (number of columns)
                           picture.shape[0],    # The height (number of rows)
                           QImage.Format_RGB888)  # The image is stored in 3*8-bit format
            display.setPixmap(QPixmap.fromImage(image.rgbSwapped()))
        else:
            display.setPixmap(QPixmap())

button.clicked.connect(read_image)
window.show()

app.exec_()
See the result for this python script:

May 26, 2017 08:46 AM


Robin Parmar

Arduino IDE: Best practices and gotchas

Programming for the Arduino is designed to be easy for beginners. The Integrated Development Environment (IDE) provides a safe place to write code, and handles the make and compiler steps that are required to create processor instructions from your C++ code.

This is fine for trivial applications and school exercises. But as soon as you try to use structured code (including classes and custom libraries) on a larger project, mysterious errors and roadblocks become the order of the day.

This article will consider best practices for working within the IDE. I will document a number of common errors and their workarounds. My perspective is of an experienced Python coder who finds C++ full of needless obfuscation. But we can make it work!

Why not switch?

On encountering limitations with the Arduino IDE, the natural thing to do is switch to a mature development environment. For example, you could use Microsoft Visual Studio by way of Visual Micro, a plugin that enables Arduino coding. Or, use Eclipse with one of several available plugins: Sloeber, PlatformIO, or AVR-eclipse.

But there are cases when it is advantageous to stick with the Arduino IDE. For example, I might be working on a team with other less-experienced developers. While I might wish to carry the cognitive burden of Eclipse plus plugins plus project management, they might not.

Or I could be in a teaching environment where my code must be developed with the same tools my students will be using.

Language features... and what's missing

The Arduino IDE gives you many basic C++ language features plus hardware-specific functions. Control structures, values, and data types are documented in the Reference.

But you don't get modern features such as the Standard Template Library (STL). If you want to use stacks, queues, lists, vectors, etc. you must install a library. Start with those by Marc Jacobi (last updated 2 years ago) and Andy Brown (updated 1 year ago). I am sure there are plenty of articles discussing the relative merits of these or other solutions.

You also don't get new and delete operators, and there's good reason. Dynamic memory management is discouraged on microprocessor boards, since RAM and other resources are limited. There are libraries that add these to your toolkit, but the IDE encourages us to use C++ as though it was little more than plain vanilla C. It can be frustrating, but my advice is to adapt.

Code structure

As you know, when using the Arduino IDE you start coding with a sketch that is your application's entry point. As an example, I'll use project.ino.

Inside this file are always two functions, setup() and loop(). These take no parameters and return no values. There's not much you can do with them... except populate them with your code. These functions are part of an implicit code structure that could be written as follows:

void main() {

    // declaration section

    setup();  // initialisation (runs once)

    while (true) {
        loop();  // process-oriented code (runs forever)
    }
}

In the IDE you never see the main() function and neither can you manipulate it.

Declaration section

The declaration section comes at the top of your project.ino. It is effectively outside any code block. Yes, even though it is in an implicit main() function. This means that only declarations and initializations are valid here. You cannot call methods of a class, nor access properties. This is our first rule:

Rule 1. The declaration section should contain only includes, initialisations of variables, and instantiations of classes.

This restriction can result in subtle errors when using classes. The declaration section is naturally where you will be instantiating classes you wish to use throughout the sketch. This means that the same restrictions just stated must apply to each and every class constructor. For this reason, you cannot use instances or methods of other classes in a constructor. No, not even for built-in libraries like Serial or Wire, because the order of instantiation of classes is non-deterministic: all instances must be constructed before any instance is used.

Rule 2. A class constructor should have no arguments and do nothing but set default values for any properties.

Follow the example of the library classes for your own custom classes. Provide a begin() method that does take needed parameters and performs any initialization tasks. In other words, begin() should do everything you might otherwise expect the constructor to do. Call this method in the setup() block.

By the way, this solves another problem. A class that might be passed to another class requires a constructor that takes no parameters. Normally you would provide this in addition to another constructor template. But if you follow rule two, this condition is already met.
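Putting rules 1 and 2 together, a minimal sketch looks something like this. Blinker is a made-up class, purely for illustration:

#include <Arduino.h>

class Blinker {
  public:
    Blinker() : pin(0), interval(0), last(0) {}  // rule 2: no arguments, defaults only

    void begin(byte pin_, unsigned long interval_) {
      pin = pin_;
      interval = interval_;
      pinMode(pin, OUTPUT);  // hardware setup happens here, not in the constructor
    }

    void update() {
      if (millis() - last >= interval) {
        last = millis();
        digitalWrite(pin, !digitalRead(pin));  // toggle the pin
      }
    }

  private:
    byte pin;
    unsigned long interval;
    unsigned long last;
};

Blinker blinker;  // rule 1: instantiation only, no method calls here

void setup() {
  blinker.begin(13, 500);  // real initialisation happens in setup()
}

void loop() {
  blinker.update();
}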

Care with instantiation

The next discussion will prevent a syntax error. When instantiating a class with a constructor, you would normally do something like the following, assuming class Foo is defined elsewhere.

const byte pin = 10;
Foo bar(pin);

void setup() {}

void loop() {
    int result = bar.read();
}

But following our previous rule, constructors will never have arguments. You might quite naturally write this instead:

const byte pin = 10;
Foo bar();

void setup() {
    bar.begin(pin);
}

void loop() {
    int result = bar.read();
}

This generates the error "request for member 'read' in 'bar' which is of non-class type 'Foo'". That appears nonsensical, because Foo is most definitely a class. Spot the syntax error?

To the compiler, Foo bar(); looks like the declaration of a function named bar that returns a Foo, not the definition of an object. You need to rewrite that line as:

Foo bar;

Abandoning the sketch

Before you even get to this point of sophistication in your code, you will be seeing all sorts of mystifying compiler output. "Error: 'foo' has not been declared" for a foo that most certainly has been declared. "Error: 'foo' does not name a type" for a foo that is definitively a type. And so on.

These errors occur because the compiler is generating function prototypes for you, automatically, even if you don't need them. These prototypes will even override your own perfectly good code. The only thing to do is abandon the sketch! Move to the lifeboats! Compiler error! Danger, Will Robinson!

Ahem.

Do the following:

1. Create a new .cpp file, ensuring it is not named the same as your sketch, and also not named main.cpp. These are both name conflicts. As an example, let's call it primary.cpp.

2. Copy all the code from project.ino to primary.cpp.

3. Add #include <Arduino.h> to the top of primary.cpp, before your other includes. This ensures that your code can access the standard prototypes.

4. In project.ino leave only your #include statements. Delete everything else.

This will solve all those mysterious issues. You can now prototype your own functions and classes without the IDE getting in your way. You will, however, need to remember that every time you add an #include to primary.cpp, you need to also add it to project.ino. But it's a small price to pay.

Rule 3. Use a top-level C++ file instead of a sketch file.
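For illustration, the resulting pair of files might look something like this (Wire is just an example library):

// project.ino -- nothing but includes
#include <Wire.h>

// primary.cpp -- all of the real code lives here
#include <Arduino.h>
#include <Wire.h>

void setup() {
    Wire.begin();
}

void loop() {
    // process-oriented code, as before
}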

Simple includes

It's easy to get confused about include files. But all an include represents is a copy and paste operation. The referenced code is inserted at the point where the include directive is located.

Here are the rules.

1. You need a .h header file for each .cpp code file.

2. The .cpp should have only one include, that being its corresponding header (a file with the same name but different extension).

3. The header file must have all the includes necessary for the correct running of the .h and .cpp code. And, in the correct order, if there are dependencies.

4. A header guard is required for each .h file. This prevents the header from being included in your project multiple times. It doesn't matter what variable name you choose for the test, so long as it is unique.
#ifndef LIB_H
#define LIB_H

// everything else

#endif
5. If you have any sort of complex chaining, with circular pointer referencing, you may have to use forward referencing. But you should be avoiding this complexity in the sort of projects likely to run on an Arduino. So I won't count this rule in our next meta-rule.

Rule 4. Follow the four rules of correct header use.

Using libraries

The IDE limits how you use libraries to the very simplest case. Libraries get installed in one standard location across all your projects. You can put a library nowhere else. Why might you want to?

I am currently developing three modules as part of a single project. The code for each module is in its own folder. They have shared library code that I would like to put in a parallel folder, so I would have a folder hierarchy something like this:

/myproject
    /module-1
    /module-2
    /module-3
    /common

Then I could easily archive "myproject" into a ZIP file to share with the rest of the team.

Can I do this? No. It is not possible, since relative paths cannot be used in the IDE. And absolute paths are evil.

Rule 5. There is no rule to help manage libraries. Sorry.

Final Words

I have personally wasted dozens of hours before discovering these tips and working methods. It has been an enormous process of trial-and-error. If you are lucky enough to read this article first, you will never know the pain.

I have a donate button in the sidebar, in case you wish to thank me with a coffee.

In turn I'd like to thank Nick Gammon for an article I wish I'd read a bit sooner.

If there's interest, I might follow up with some words about general C++ syntax and issues that are not so Arduino-centric.

May 26, 2017 04:17 AM


Full Stack Python

Responsive Bar Charts with Bokeh, Flask and Python 3

Bokeh is a powerful open source Python library that allows developers to generate JavaScript data visualizations for their web applications without writing any JavaScript. While learning a JavaScript-based data visualization library like d3.js can be useful, it's often far easier to knock out a few lines of Python code to get the job done.

With Bokeh, we can create incredibly detailed interactive visualizations, or just traditional ones like the following bar chart.

Responsive Bokeh bar chart with 64 bars.

Let's use the Flask web framework with Bokeh to create custom bar charts in a Python web app.

Our Tools

This tutorial works with either Python 2 or 3, but Python 3 is strongly recommended for new applications. I used Python 3.6.1 while writing this post. In addition to Python throughout this tutorial we will also use the following application dependencies:

If you need help getting your development environment configured before running this code, take a look at this guide for setting up Python 3 and Flask on Ubuntu 16.04 LTS

All code in this blog post is available open source under the MIT license on GitHub under the bar-charts-bokeh-flask-python-3 directory of the blog-code-examples repository. Use and abuse the source code as you like for your own applications.

Installing Bokeh and Flask

Create a fresh virtual environment for this project to isolate our dependencies using the following command in the terminal. I typically run this command within a separate venvs directory where all my virtualenvs are stored.

python3 -m venv barchart

Activate the virtualenv.

source barchart/bin/activate

The command prompt will change after activating the virtualenv:

Activating our Python virtual environment on the command line.

Keep in mind that you need to activate the virtualenv in every new terminal window where you want to use the virtualenv to run the project.

Bokeh and Flask are installable into the now-activated virtualenv using pip. Run this command to get the appropriate Bokeh and Flask versions.

pip install bokeh==0.12.5 flask==0.12.2 pandas==0.20.1

After a brief download and installation period our required dependencies should be installed within our virtualenv. Look for output like the following to confirm everything worked.

Installing collected packages: six, requests, PyYAML, python-dateutil, MarkupSafe, Jinja2, numpy, tornado, bokeh, Werkzeug, itsdangerous, click, flask, pytz, pandas
  Running setup.py install for PyYAML ... done
  Running setup.py install for MarkupSafe ... done
  Running setup.py install for tornado ... done
  Running setup.py install for bokeh ... done
  Running setup.py install for itsdangerous ... done
Successfully installed Jinja2-2.9.6 MarkupSafe-1.0 PyYAML-3.12 Werkzeug-0.12.2 bokeh-0.12.5 click-6.7 flask-0.12.2 itsdangerous-0.24 numpy-1.12.1 pandas-0.20.1 python-dateutil-2.6.0 pytz-2017.2 requests-2.14.2 six-1.10.0 tornado-4.5.1

Now we can start building our web application.

Starting Our Flask App

We are going to first code a basic Flask application then add our bar chart to the rendered page.

Create a folder for your project then within it create a file named app.py with the following initial contents:

from flask import Flask, render_template


app = Flask(__name__)


@app.route("/<int:bars_count>/")
def chart(bars_count):
    if bars_count <= 0:
        bars_count = 1
    return render_template("chart.html", bars_count=bars_count)


if __name__ == "__main__":
    app.run(debug=True)

The above code is a short one-route Flask application that defines the chart function. chart takes in an arbitrary integer as input which will later be used to define how much data we want in our bar chart. The render_template function within chart will use a template from Flask's default template engine named Jinja2 to output HTML.

The last two lines in the file allow us to run the Flask application from the command line on port 5000 in debug mode. Never use debug mode for production; that's what WSGI servers like Gunicorn are built for.
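If you do later want to serve the app with a production WSGI server, a minimal sketch of that invocation could look like the following, assuming Gunicorn is installed into the same virtualenv and you run it from the directory containing app.py:

pip install gunicorn
gunicorn app:app    # "module:Flask object" - serves on 127.0.0.1:8000 by default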

Create a subdirectory within your project folder named templates. Within templates create a file named chart.html. chart.html was referenced in the chart function of our app.py file so we need to create it before our app will run properly. Populate chart.html with the following Jinja2 markup.

<!DOCTYPE html>
<html>
  <head>
    <title>Bar charts with Bokeh!</title>
  </head>
  <body>
    <h1>Bugs found over the past {{ bars_count }} days</h1>
  </body>
</html>

chart.html's boilerplate displays the number of bars passed into the chart function via the URL.

The <h1> tag's message on the number of bugs found goes along with our sample app's theme. We will pretend to be charting the number of bugs found by automated tests run each day.

We can test our application out now.

Make sure your virtualenv is still activated and that you are in the base directory of your project where app.py is located. Run app.py using the python command.

(barchart) $ python app.py

Go to localhost:5000/16/ in your web browser. You should see a large message that changes when you modify the URL.

Simple Flask app without bar chart

Our simple Flask route is in place but that's not very exciting. Time to add our bar chart.

Generating the Bar Chart

We can build on the basic Flask app foundation that we just wrote with some new Python code that uses Bokeh.

Open app.py back up and change the top of the file to include the following imports.

import random
from bokeh.models import (HoverTool, FactorRange, Plot, LinearAxis, Grid,
                          Range1d)
from bokeh.models.glyphs import VBar
from bokeh.plotting import figure
from bokeh.charts import Bar
from bokeh.embed import components
from bokeh.models.sources import ColumnDataSource
from flask import Flask, render_template

Throughout the rest of the file we will need these Bokeh imports along with the random module to generate data and our bar chart.

Our bar chart will use "software bugs found" as a theme. The data will be randomly generated each time the page is refreshed. In a real application you'd have a more stable and useful data source!

Continue modifying app.py so the section after the imports looks like the following code.

app = Flask(__name__)


@app.route("/<int:bars_count>/")
def chart(bars_count):
    if bars_count <= 0:
        bars_count = 1

    data = {"days": [], "bugs": [], "costs": []}
    for i in range(1, bars_count + 1):
        data['days'].append(i)
        data['bugs'].append(random.randint(1,100))
        data['costs'].append(random.uniform(1.00, 1000.00))

    hover = create_hover_tool()
    plot = create_bar_chart(data, "Bugs found per day", "days",
                            "bugs", hover)
    script, div = components(plot)

    return render_template("chart.html", bars_count=bars_count,
                           the_div=div, the_script=script)

The chart function gains three new lists that are randomly generated by Python 3's super-handy random module.
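To make the shape of that dictionary concrete, here is an illustrative example of what data might hold for bars_count = 4 (the numbers are random on every request, so yours will differ):

data = {
    "days": [1, 2, 3, 4],
    "bugs": [37, 4, 81, 22],                    # random.randint(1, 100)
    "costs": [512.27, 33.10, 908.44, 129.95],   # random.uniform(1.00, 1000.00)
}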

chart calls two functions, create_hover_tool and create_bar_chart. We haven't written those functions yet so continue adding code below chart:

def create_hover_tool():
    # we'll code this function in a moment
    return None


def create_bar_chart(data, title, x_name, y_name, hover_tool=None,
                     width=1200, height=300):
    """Creates a bar chart plot with the exact styling for the centcom
       dashboard. Pass in data as a dictionary, desired plot title,
       name of x axis, y axis and the hover tool HTML.
    """
    source = ColumnDataSource(data)
    xdr = FactorRange(factors=data[x_name])
    ydr = Range1d(start=0,end=max(data[y_name])*1.5)

    tools = []
    if hover_tool:
        tools = [hover_tool,]

    plot = figure(title=title, x_range=xdr, y_range=ydr, plot_width=width,
                  plot_height=height, h_symmetry=False, v_symmetry=False,
                  min_border=0, toolbar_location="above", tools=tools,
                  responsive=True, outline_line_color="#666666")

    glyph = VBar(x=x_name, top=y_name, bottom=0, width=.8,
                 fill_color="#e12127")
    plot.add_glyph(source, glyph)

    xaxis = LinearAxis()
    yaxis = LinearAxis()

    plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))
    plot.add_layout(Grid(dimension=1, ticker=yaxis.ticker))
    plot.toolbar.logo = None
    plot.min_border_top = 0
    plot.xgrid.grid_line_color = None
    plot.ygrid.grid_line_color = "#999999"
    plot.yaxis.axis_label = "Bugs found"
    plot.ygrid.grid_line_alpha = 0.1
    plot.xaxis.axis_label = "Days after app deployment"
    plot.xaxis.major_label_orientation = 1
    return plot

There is a whole lot of new code above so let's break it down. The create_hover_tool function does not do anything yet; it simply returns None, which we can use if we do not want a hover tool. The hover tool is an overlay that appears when we move our mouse cursor over one of the bars or touch a bar on a touchscreen so we can see more data about the bar.

Within the create_bar_chart function we take in our generated data source and convert it into a ColumnDataSource object that is one type of input object we can pass to Bokeh functions. We specify two ranges for the chart's x and y axes.

Since we do not yet have a hover tool the tools list will remain empty. The line where we create plot using the figure function is where a lot of the magic happens. We specify all the parameters we want our graph to have such as the size, toolbar, borders and whether or not the graph should be responsive upon changing the web browser size.

We create vertical bars with the VBar object and add them to the plot using the add_glyph function that combines our source data with the VBar specification.

The last lines of the function modify the look and feel of the graph. For example I took away the Bokeh logo by specifying plot.toolbar.logo = None and added labels to both axes. I recommend keeping the bokeh.plotting documentation open to know what your options are for customizing your visualizations.
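If you want to experiment further, a couple of optional tweaks could look like the lines below. The attribute names come from the Bokeh 0.12.x model documentation, so double-check them against the exact version you installed. They would go just before return plot, indented to match the rest of create_bar_chart:

plot.title.text_font_size = "14pt"        # larger chart title
plot.background_fill_color = "#f5f5f5"    # light grey plot background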

We just need a few updates to our templates/chart.html file to display the visualization. Open the file and add the following 6 lines to the file. Two of these lines are for the required CSS, two are JavaScript Bokeh files and the remaining two are the generated chart.

<!DOCTYPE html>
<html>
  <head>
    <title>Bar charts with Bokeh!</title>
    <link href="http://cdn.pydata.org/bokeh/release/bokeh-0.12.5.min.css" rel="stylesheet">
    <link href="http://cdn.pydata.org/bokeh/release/bokeh-widgets-0.12.5.min.css" rel="stylesheet">
  </head>
  <body>
    <h1>Bugs found over the past {{ bars_count }} days</h1>
    {{ the_div|safe }}
    <script src="http://cdn.pydata.org/bokeh/release/bokeh-0.12.5.min.js"></script>
    <script src="http://cdn.pydata.org/bokeh/release/bokeh-widgets-0.12.5.min.js"></script>
    {{ the_script|safe }}
  </body>
</html>

Alright, let's give our app a try with a simple chart of 4 bars. The Flask app should automatically reload when you save app.py with the new code, but if you shut down the development server, fire it back up with the python app.py command.

Open your browser to localhost:5000/4/.

Responsive Bokeh bar chart with 4 bars.

That one looks a bit sparse, so we can crank it up by 4x to 16 bars by going to localhost:5000/16/.

Responsive Bokeh bar chart with 16 bars.

Now let's jump up to 128 bars with localhost:5000/128/...

Responsive Bokeh bar chart with 128 bars.

Looking good so far. But what about that hover tool to drill down into each bar for more data? We can add the hover with just a few lines of code in the create_hover_tool function.

Adding a Hover Tool

Within app.py modify the create_hover_tool to match the following code.

def create_hover_tool():
    """Generates the HTML for the Bokeh's hover data tool on our graph."""
    hover_html = """
      <div>
        <span class="hover-tooltip">$x</span>
      </div>
      <div>
        <span class="hover-tooltip">@bugs bugs</span>
      </div>
      <div>
        <span class="hover-tooltip">$@costs{0.00}</span>
      </div>
    """
    return HoverTool(tooltips=hover_html)

It may look really odd to have HTML embedded within your Python application, but that's how we specify what the hover tool should display. We use $x to show the bar's x axis, @bugs to show the "bugs" field from our data source, and $@costs{0.00} to show the "costs" field formatted as a dollar amount with exactly 2 decimal places.

Make sure you changed return None to return HoverTool(tooltips=hover_html) so we can see the results of our new function in the graph.

Head back to the browser and reload the localhost:5000/128/ page.

Responsive Bokeh bar chart with 128 bars and showing the hover tool.

Nice work! Try playing around with the number of bars in the URL and the window size to see what the graph looks like under different conditions.

The chart gets crowded with more than 100 or so bars, but you can give it a try with whatever number of bars you want. Here is what an impractical 50,000 bars looks like, just for the heck of it:

Responsive Bokeh bar chart with 50000 bars.

Yea, we may need to do some additional work to display more than a few hundred bars at a time.
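One rough idea, sketched below as a hypothetical helper that is not part of the tutorial code, is to aggregate the daily counts into larger buckets before building the chart so the number of bars stays manageable (the costs list is left out for brevity; the hover tool would need it too):

def aggregate_bugs(data, chunk_size=7):
    # Sum daily bug counts into chunks (for example, weeks) so the chart
    # stays readable when bars_count gets very large.
    days, bugs = [], []
    for start in range(0, len(data["days"]), chunk_size):
        days.append(data["days"][start])  # label each bucket by its first day
        bugs.append(sum(data["bugs"][start:start + chunk_size]))
    return {"days": days, "bugs": bugs}

You would call something like this on the generated data inside chart before passing it to create_bar_chart.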

What's next?

You just created a nifty configurable bar chart in Bokeh. Next you can modify the color scheme, change the input data source, try to create other types of charts or figure out how to display very large numbers of bars.
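If you go the change-the-data-source route, one option, since pandas is already installed, is to read the numbers from a CSV file. Here is a minimal sketch assuming a hypothetical bugs.csv with day, bugs and costs columns:

import pandas as pd

def load_bug_data(csv_path="bugs.csv"):
    # Read real measurements instead of random numbers. The column names
    # here are assumptions - adjust them to whatever your file actually uses.
    df = pd.read_csv(csv_path)
    return {
        "days": df["day"].tolist(),
        "bugs": df["bugs"].tolist(),
        "costs": df["costs"].tolist(),
    }

The returned dictionary has the same shape as the randomly generated data, so the rest of chart and create_bar_chart can stay unchanged.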

There is a lot more that Bokeh can do, so be sure to check out the official project documentation, GitHub repository, the Full Stack Python Bokeh page or take a look at other topics on Full Stack Python.

Questions? Let me know via a GitHub issue ticket on the Full Stack Python repository, on Twitter @fullstackpython or @mattmakai.

See something wrong in this blog post? Fork this page's source on GitHub and submit a pull request.

May 26, 2017 04:00 AM

May 25, 2017


Programming Ideas With Jake

Lots of Programming Videos!

A whole bunch of videos have recently dropped from programmer conferences. Like, a LOT!

May 25, 2017 09:53 PM


PyBites

How to Write a Python Class

In this post I cover learning Python classes by walking through one of our 100 days of code submissions.

May 25, 2017 06:37 PM


Python Bytes

#27 The PyCon 2017 recap and functional Python

PyCon 2017 recap
- All videos available: https://www.youtube.com/channel/UCrJhliKNQ8g0qoE_zvL8eVg
- Lessons learned:
  - pick up swag on day one, vendors run out
  - take business cards with you and keep them on you
  - not your actual business cards unless you are representing your company; cards that have your social media, GitHub account, blog, podcast or whatever on them
  - 3x3 stickers are too big, 2x2 is plenty big enough
  - lightning talks are awesome because they cover a wide range of speaking experience; will definitely do that again
  - try to go to the talks that are important to you, but don't over-stress about it since they are taped; however, it would be lame if all the rooms were empty, so don't everybody ditch
  - lastly: everyone knows Michael

Michael #2: How to Create Your First Python 3.6 AWS Lambda Function (https://www.fullstackpython.com/blog/aws-lambda-python-3-6.html)
- Tutorial from Full Stack Python
- Walks you through creating an account
- Select your Python version (3.6, yes!)
- def lambda_handler(event, context): ... # write this function, done!
- Set and read environment variables (could be connection strings and API keys)

Brian #3: How to Publish Your Package on PyPI (https://blog.jetbrains.com/pycharm/2017/05/how-to-publish-your-package-on-pypi/)
- JetBrains article covering the structure of the package (oops, it doesn't include src, see https://pythonbytes.fm/22), a decent discussion of the contents of the setup.py file (but interestingly absent is an example setup.py file), a good discussion of the .pypirc file with links to the test and production PyPI, and an example of using twine to push to PyPI
- Overall: good discussion, but you'll still need a decent example

Michael #4: Coconut: Simple, elegant, Pythonic functional programming (http://coconut-lang.org/)
- Coconut is a functional programming language that compiles to Python
- Since all valid Python is valid Coconut, using Coconut will only extend and enhance what you're already capable of in Python
- pip install coconut
- Some of Coconut's major features include built-in, syntactic support for pattern-matching, algebraic data types, tail call optimization, partial application, better lambdas, parallelization primitives, and a whole lot more, all of which can be found in Coconut's detailed documentation: http://coconut.readthedocs.io/en/master/DOCS.html
- Talk Python episode coming in a week

Brian #5: Choose a license (https://choosealicense.com/)
- MIT: simple and permissive
- Apache 2.0: something extra about patents
- GPL v3: the contagious one that requires derivative work to also be GPL v3
- Nice list with overviews of what they all mean, with color-coded bullet points: https://choosealicense.com/licenses/

Michael #6: Python for Scientists and Engineers (http://pythonforengineers.com/python-for-scientists-and-engineers/)
- Beginners Start Here: Create a Word Counter in Python; An Introduction to Numpy and Matplotlib; Introduction to Pandas with Practical Examples (New)
- Main Book: Image and Video Processing in Python; Data Analysis with Pandas; Audio and Digital Signal Processing (DSP); Control Your Raspberry Pi From Your Phone / Tablet
- Machine Learning Section: Machine Learning with an Amazon-like Recommendation Engine; Machine Learning For Complete Beginners (learn how to predict Titanic survivors using machine learning, no previous knowledge needed); Cross Validation and Model Selection (how to choose between different machine learning algorithms, working with the Iris flower dataset and the Pima diabetes dataset)
- Natural Language Processing: Introduction to NLP and Sentiment Analysis; Natural Language Processing with NLTK; Intro to NLTK, Part 2; Build a sentiment analysis program; Sentiment Analysis with Twitter; Analysing the Enron Email Corpus (half a million files spread over 2.5 GB; when looking at data this size, the question is, where do you even start?); Build a Spam Filter using the Enron Corpus

In other news:
- Python Testing with pytest (https://pragprog.com/book/bopytest/python-testing-with-pytest): beta release and initial feedback is going very well

May 25, 2017 08:00 AM


Experienced Django

Return of pylint

Until last fall I was working in Python 2 (due to some limitations at work) and was very happy to have the Syntastic module in my Vim configuration to flag errors each time I save a Python file.  This was great, especially after writing in C/C++ for years where there is no official standard format and really poor tools to enforce coding standards.

Then, last fall when I started on Django, I made the decision to move to Python 3.  I quickly discovered that pylint is very version-dependent and running the Python 2.7 version of pylint against Python 3 code was not going to work.

I wasn’t particularly familiar with virtualenv at the time, so I gave up and moved on to other things.  I finally got back to fixing this and thus getting pylint and flake8 running again on my code.

Syntastic

I won’t cover the details of how to install Syntastic as it depends on how you manage your plugins in Vim and is well documented.  I will only point out here that Syntastic isn’t a checker by itself; it’s merely a plugin that runs various checkers for you directly in Vim.  It runs checkers for many languages, but I’m only using it for Python currently as the C code I use for work is so ugly that it will never pass.
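For reference, a minimal sketch of the relevant ~/.vimrc settings (the option names are from the Syntastic docs; adjust to your own setup):

" Which Python checkers Syntastic should run; they must be installed in the
" environment of the shell that launched Vim (i.e. the active virtualenv).
let g:syntastic_python_checkers = ['pylint', 'flake8']
" Also run the checkers when a file is first opened, not just on save.
let g:syntastic_check_on_open = 1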

Switching versions

The key to getting pylint to run against different versions of python is to not install pylint on a global level, but rather to install it in each virtualenv.  This seems obvious now that I’m more familiar with virtualenv, but I’ll admit it wasn’t at the time I first ran into the problem.

The other key to getting this to work is to only launch Vim from inside the virtualenv.  This hampers my overall workflow a bit, as I tend to have gVim up and running for the long term and just add files in new tabs as I go.  To get pylint to work properly, I’ll need to restart Vim when I switch Python versions (at a minimum).  This shouldn’t be too much of a problem, however, as I’m doing less and less Python 2.x coding these days.
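The workflow ends up looking roughly like this (a sketch, with a made-up project name):

python3 -m venv myproject-env          # one virtualenv per project / Python version
source myproject-env/bin/activate
pip install pylint flake8              # installed inside the virtualenv, not globally
vim manage.py                          # start Vim from within the activated virtualenv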

Coding Style Thoughts

As I beat my head against horrible C code on a daily basis at work, I find myself appreciating more and more the idea of PEP 8 and having good tools for coding style enforcement.  While I frequently find some of the rules odd (two spaces here, but only one space there?), I really find it comforting to have a tool which runs, and runs quickly, to keep the code looking consistent.  Now if I could only get that kind of tool for C…

 

May 25, 2017 01:22 AM