
Planet Python

Last update: June 28, 2016 10:49 AM

June 28, 2016


Python Insider

Python 2.7.12 released

The Python 2.7.x series has a new bugfix release, Python 2.7.12, available for download.

June 28, 2016 12:26 AM

June 27, 2016


Erik Marsja

Best Python libraries for Psychology researchers


Python is gaining popularity in many fields of science. This means that there are also many applications and libraries specifically for use in psychological research. For instance, there are packages for collecting data and for analysing brain imaging data. In this post, I have collected some useful Python packages for researchers within the fields of Psychology and Neuroscience. I have used and tested some of them, but others I have yet to try.

Experiment building applications/libraries

Importing expyriment

Expyriment is a Python library that makes programming Psychology experiments a lot easier than using plain Python. It contains classes and methods for creating fixation crosses, presenting visual stimuli, collecting responses, etc.
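For a flavour of the API, here is a minimal sketch (my own example based on Expyriment's documented classes, not code from this post) that shows a fixation cross for one second:

from expyriment import design, control, stimuli

# set up a minimal experiment
exp = design.Experiment(name="Minimal demo")
control.initialize(exp)

fixcross = stimuli.FixCross()

control.start()
fixcross.present()    # draw the fixation cross
exp.clock.wait(1000)  # keep it on screen for 1000 ms
control.end()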

Modular Psychophysics  is a collection of tools that aims to implement a modular approach to Psychophysics. It enables us to write experiments in different languages. As far as I understand, you can use both MATLAB and R to control your experiments. That is, the timeline of the experiment can be carried out in another language (e.g., MATLAB).

However, it seems like the experiments themselves are created using Python. Your experiments can be run both locally and over networks. I have yet to test this out.

OpenSesame is a Python application for creating Psychology experiments. It has a graphical user interface (GUI) that allows the user to drag and drop objects on a timeline. More advanced experimental designs can be implemented using inline Python scripts.

PsychoPy is also a Python application for creating Psychology experiments. It comes packed with a GUI, but the API can also be used for writing Python scripts. I have written a bit more thoroughly about it in my post on PsychoPy.

PsychoPy GUI for drag-and-drop creation of experiments.
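To give a taste of the scripting API mentioned above, here is a minimal sketch (my own, not from this post) that presents a fixation point for one second:

from psychopy import visual, core

win = visual.Window(size=(800, 600))
fixation = visual.TextStim(win, text='+')  # a simple text fixation point

fixation.draw()
win.flip()      # show the drawn stimulus
core.wait(1.0)  # keep it up for one second

win.close()
core.quit()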

I have written more extensively on Expyriment, PsychoPy, OpenSesame, and some other libraries for creating experiments in my post Python apps and libraries for creating experiments.

Data analysis

Psychology and Neuroscience

PsyUtils: “The psyutils package is a collection of utility functions useful for generating visual stimuli and analysing the results of psychophysical experiments. It is a work in progress, and changes as I work. It includes various helper functions for dealing with images, including creating and applying various filters in the frequency domain.”

Psignifit is a toolbox that allows you to fit psychometric functions. Further, hypotheses about psychometric data can be tested. Psignifit allows for full Bayesian analysis of psychometric functions, including Bayesian model selection and goodness-of-fit evaluation, among other great things.

Pygaze is a Python library for eye-tracking data. Pygaze can, through plugins, be used from within OpenSesame.

General Recognition Theory (GRT) is a fork of a MATLAB toolbox. GRT is “a multi-dimensional version of signal detection theory” (see link for more information).

MNE is a library designed for processing electroencephalography (EEG) and magnetoencephalography (MEG) data. Collected data can be preprocessed and denoised, and time-frequency analysis and statistical testing can be carried out. MNE can also be used to apply some machine learning algorithms. Although mainly focused on EEG and MEG data, some of the statistical tests in this library can probably be used to analyse behavioural data (e.g., ANOVA).
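As a rough idea of the workflow, a minimal sketch (mine; the file name is hypothetical):

import mne

# load raw M/EEG data from a FIF file (hypothetical file name)
raw = mne.io.read_raw_fif('sample_raw.fif', preload=True)

# band-pass filter the signal between 1 and 40 Hz
raw.filter(l_freq=1., h_freq=40.)
print(raw.info)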

Kabuki is a Python library for effortless creation of hierarchical Bayesian models. It uses the library PyMC. Using Kabuki you get formatted summary statistics, posterior plots, and much more. There is, further, a function to generate data from a formulated model. It seems that there is an intention to add more commonly used statistical tests (e.g., Bayesian ANOVA) in the future!

NIPY: “Welcome to NIPY. We are a community of practice devoted to the use of the Python programming language in the analysis of neuroimaging data”. Here different packages for brain imaging data can be found.

General

Although many of the above libraries can probably be used within other research fields, there are also libraries for pure statistics and visualisation.

Descriptive and parametric statistics

PyMVPA is a Python library for MultiVariate Pattern Analysis. It enables statistical learning analyses of big data.

Pandas is a Python library offering fast, flexible and expressive data structures. Researchers and analysts with an R background will find Pandas data frame objects very similar to R's. Data can be manipulated and summarised, and some descriptive analysis can be carried out (e.g., see Descriptive Statistics Using Python for some examples using Pandas).
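For instance, a small sketch (my own, with made-up response times) of per-condition descriptive statistics:

import pandas as pd

# made-up response time data
df = pd.DataFrame({'subject': [1, 1, 2, 2],
                   'condition': ['congruent', 'incongruent'] * 2,
                   'rt': [455.0, 512.0, 432.0, 489.0]})

# descriptive statistics per condition
print(df.groupby('condition')['rt'].describe())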

Statsmodels is a Python library for data exploration, estimation of statistical models, and statistical tests. An extensive list of descriptive statistics, statistical tests, plotting functions, and result statistics is available for different types of data and each estimator. Among many other methods, regression, generalized linear models, and non-parametric tests can be carried out using statsmodels.
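As a small illustration (my own, with made-up numbers), an ordinary least squares regression of response time on condition:

import numpy as np
import statsmodels.api as sm

# made-up data: does condition (0/1) predict response time?
rt = np.array([512., 530., 488., 605., 621., 598.])
condition = np.array([0., 0., 0., 1., 1., 1.])

X = sm.add_constant(condition)  # add an intercept term
model = sm.OLS(rt, X).fit()
print(model.summary())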

 

Pyvttbl is a library for creating pivot tables. One can further process data and carry out statistical computations using Pyvttbl. Sadly, it seems like it is no longer updated and is not compatible with other packages (e.g., Pandas). If you are interested in how to carry out repeated measures ANOVAs in Python, this is a package that enables that kind of analysis (e.g., see Repeated Measures ANOVA using Python and Two-way ANOVA for Repeated Measures using Python).

Visualisation

There are many Python libraries for visualisation of data. Below are the ones I have worked with. Note that Pandas and statsmodels also provide methods for plotting data. All three libraries below are compatible with Pandas, which makes data manipulation and visualisation very easy.

Boxplot made using Seaborn.

Matplotlib is a package for creating two-dimensional plots.
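The classic minimal example (mine, not the post's):

import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 4], [1, 4, 9, 16])  # x values, y values
plt.show()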

Seaborn is a library based on Matplotlib. Using Seaborn you can create ready-to-publish graphics (e.g., see the figure above for a boxplot of some response time data).
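Something along these lines (my own sketch, not the post's exact code) produces such a boxplot:

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# made-up response time data
df = pd.DataFrame({'condition': ['congruent'] * 3 + ['incongruent'] * 3,
                   'rt': [455.0, 470.0, 432.0, 512.0, 540.0, 489.0]})

sns.boxplot(x='condition', y='rt', data=df)
plt.show()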

Ggplot is a visualisation library based on the R package ggplot2. That is, if you are familiar with R and ggplot2, transitioning to Python and the package Ggplot will be easy.

Many of the libraries for analysis and visualisation can be installed separately and, more or less, individually. I do, however, recommend that you install a scientific Python distribution. This way you will get all you need (and much more) in one go (e.g., Pandas, Matplotlib, NumPy, Statsmodels, Seaborn). I suggest you have a look at the distributions Anaconda or Python(x, y). Note that installing Python(x, y) will also give you the Python IDE Spyder.

The last Python package for Psychology I am going to list is PsychoPy_ext. Although PsychoPy_ext may be considered a library for building experiments, the general aim behind the package seems to be to make reproducible research easier. That is, analysis and plotting of the experiments can be carried out as well. I also find it interesting that there seems to be a way to autorun experiments (a neat feature I know that E-Prime has, for instance).

That was it! If you happen to know any other Python libraries or applications with a focus on Psychology (e.g., psychophysics) or on statistical methods, please let me know.

The post Best Python libraries for Psychology researchers appeared first on Erik Marsja.

June 27, 2016 09:23 PM


Continuum Analytics News

Anaconda Fusion: A Portal to Open Data Science for Excel

Posted Monday, June 27, 2016

Excel has been business analysts’ go-to program for years. It works well, and its familiarity makes it the currency of the realm for many applications.

But, in a bold new world of predictive analytics and Big Data, Excel feels cut off from the latest technologies and limited in the scope of what it can actually take on.

Fortunately for analysts across the business world, a new tool has arrived to change the game — Anaconda Fusion.

A New Dimension of Analytics

The interdimensional portal has been a staple of classic science fiction for decades. Characters step into a hole in space and emerge instantly in an entirely different setting — one with exciting new opportunities and challenges.

Now, Data Science has a portal of its own. The latest version of Anaconda Fusion, an Open Data Science (ODS) integration for Microsoft Excel, links the familiar world of spreadsheets (and the business analysts that thrive there) to the “alternate dimension” of Open Data Science that is reinventing analytics.

With Anaconda Fusion and other tools from Anaconda, business analysts and data scientists can easily share work — like charts, tables, formulas and insights — across Excel and ODS languages such as Python, erasing the partition that once divided them.

Jupyter (formerly IPython) is a popular approach to sharing across the scientific computing community, with notebooks combining code, visualizations and comments all in one document. With Anaconda Enterprise Notebooks, this is now available under a governed environment, providing the collaborative locking, version control, notebook differencing and searching needed to operate in the enterprise. Since Anaconda Fusion, like the entire Anaconda ecosystem, integrates seamlessly with Anaconda Enterprise Notebooks, businesses can finally empower Excel gurus to collaborate effectively with the entire Data Science team.

Now, business analysts can exploit the ease and brilliance of Python libraries without having to write any code. Packages such as scikit-learn and pandas drive machine learning initiatives, enabling predictive analytics and data transformations, while plotting libraries, like Bokeh, provide rich interactive visualizations.

With Anaconda Fusion, these tools are available within the familiar Excel environment—without the need to know Python. Contextually-relevant visualizations generated from Python functions are easily embedded into spreadsheets, giving business analysts the ability to make sense of, manipulate and easily interpret data scientists’ work. 

A Meeting of Two Cultures

Anaconda Fusion is connecting two cultures from across the business spectrum, and the end result creates enormous benefits for everyone.

Business analysts can leverage the power, flexibility and transparency of Python for data science using the Excel they are already comfortable with. This enables functionality far beyond Excel, but also can teach business analysts to use Python in the most natural way: gradually, on the job, as needed and in a manner that is relevant to their context. Given that the world is moving more and more toward using Python as a lingua franca for analytics, this benefit is key.

On the other side of the spectrum, Python-using data scientists can now expose data models or interactive graphics in a well-managed way, sharing them effectively with Excel users. Previously, sharing meant sending static images or files, but with Anaconda Fusion, Excel workbooks can now include a user interface to models and interactive graphics, eliminating the clunky overhead of creating and sending files.

It’s hard to overstate how powerful this unification can be. When two cultures learn to communicate more effectively, it results in a cross-pollination of ideas. New insights are generated, and synergistic effects occur.

The Right Tools

The days of overloaded workarounds are over. With Anaconda Fusion, complex and opaque Excel macros can now be replaced with the transparent and powerful functions that Python users already know and love.

The Python programming community places a high premium on readability and clarity. Maybe that’s part of why it has emerged as the fourth most popular programming language used today. Those traits are now available within the familiar framework of Excel.

Because Python plays so well with web technologies, it’s also simple to transform pools of data into shareable interactive graphics — in fact, it's almost trivially easy. Simply email a web link to anyone, and they will have a beautiful graphics interface powered by live data. This is true even for the most computationally intense cases — Big Data, image recognition, automatic translation and other domains. This is transformative for the enterprise.

Jump Into the Portal

The glowing interdimensional portal of Anaconda Fusion has arrived, and enterprises can jump in right away. It’s a great time to unite the experience and astuteness of business analysts with the power and flexibility of Python-powered analytics.

To learn more, you can watch our Anaconda Fusion webinar on-demand, or join our Anaconda Fusion Innovators Program to get early access to exclusive features -- free and open to anyone. You can also contact us with any questions about how Anaconda Fusion can help improve the way your business teams share data. 

June 27, 2016 03:36 PM


Doug Hellmann

grp — UNIX Group Database — PyMOTW 3

The grp module can be used to read information about UNIX groups from the group database (usually /etc/group ). The read-only interface returns tuple-like objects with named attributes for the standard fields of a group record. Read more… This post is part of the Python Module of the Week series for Python 3. See PyMOTW.com … Continue reading grp — UNIX Group Database — PyMOTW 3
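A minimal sketch of that interface (my example; the group name is system-dependent, so substitute one from your own /etc/group):

import grp

# look up a single group record by name
g = grp.getgrnam('wheel')
print(g.gr_name, g.gr_gid, g.gr_mem)

# iterate over every group in the database
for entry in grp.getgrall():
    print(entry.gr_name, entry.gr_gid)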

June 27, 2016 01:00 PM


Mike Driscoll

PyDev of the Week: Barry Warsaw

This week we welcome Barry Warsaw (@pumpichank) as our PyDev of the Week! Barry works on the Ubuntu operating system for Canonical and he’s the project leader for GNU Mailman. He also used to be the lead maintainer for the Jython project. If you have the time, you should check out his website. Barry’s Github page is also worth a look to see what projects he finds interesting. Let’s take a few minutes to find out more about our fellow Pythonista!

Can you tell us a little about yourself (hobbies, education, etc):

I work for Canonical on the Ubuntu Foundations team, so we take care of the plumbing layer of Ubuntu. Roughly, it’s stuff like toolchains (interpreters, compilers, language runtimes, etc.), image building, installers, and general archive goodness. I try to keep the Python stack happy and over the last few Ubuntu releases have really been concentrating on switching Ubuntu to Python 3. I’m also a Debian Developer, so a lot of the more general work on Python happens there first, and then gets imported automatically into Ubuntu. We get especially busy when new Python versions are released, ensuring that transitions go smoothly. At work I’m also responsible for the “system image updater client” which is the piece of the Ubuntu Touch flavor that performs the atomic system upgrades on Ubuntu phones and tablets.

On the side, I’m a semi-professional musician, playing bass in several local bands, doing studio work for local artists, and writing in my own home studio. I’ve also been a tai chi practitioner for the last 15 years or so, studying Yang style short form, sword, push hands, and more recently qi gong.

Why did you start using Python?

In 1994 I had just started working at the Corporation for National Research Initiatives in Reston, Virginia. We were doing work with mobile software agents called Knowbots. We started out using Objective-C on NeXT machines. Earlier in my career I had worked at the National Institute of Standards and Technology, and in late 1994 some former colleagues of mine from there invited me to attend a small workshop hosted at NIST. A Dutch developer was visiting, and the topic of the workshop was this relatively new object-oriented language he’d invented.

The language of course was Python and the developer was Guido van Rossum. The workshop was fantastic and we came away with a plan to work on an Objective-C to Python bridge to support our project. One of our CNRI colleagues suggested we try to hire Guido, and as it turned out, Guido was interested in coming to work and live in the States. Once at CNRI, Guido quickly convinced us that we could do the entire project in Python, without the need for Objective-C! I had the honor of working closely with Guido for the next 8 or so years, through three companies, until he moved out to California to work for Google.

I fell in love with the Python language early on, and over those years, our little team moved a lot of the Python project infrastructure from the Netherlands to the USA. We worked on Python, and used Python in lots of projects, and I served as the release manager several times. Looking back, the Python project and community have grown unbelievably over these last 20 years, probably more than any of us would have dared to imagine, but not more than we’d hoped!

What other programming languages do you know and which is your favorite?

I’d always enjoyed learning new programming languages, and was (am?) pretty proficient at many of the old school languages like C, C++, Lisp, FORTH, Perl, and so on. But I recognized pretty quickly that Python was special. For me, it’s the most direct translation from what’s going on in my head to working code. I still enjoy occasionally dropping down into C, tweaking my Emacs Lisp, mucking about with JavaScript, or learning the new goodness such as Go. But if I’m starting a new project or looking for something fun to work on, Python will be my first choice.

What projects are you working on now?

Outside of work, I am the project leader for GNU Mailman, a mailing list management system. This is a project with a long history, and it’s fueled the Python mailing lists since the late ’90s. Although I didn’t invent it originally, I’ve been leading the project for a long time, concentrating these days on improving version 3, which modernizes everything including really nice new web interfaces for list and subscription management (Postorius) and web-based archives (HyperKitty). Postorius and HyperKitty are Django applications which are really led by other fantastic developers on our team. They are better web developers than me, and I enjoy working on the core management engine, which of course is written in Python 3.

Which Python libraries are your favorite (core or 3rd party)?

In Mailman, we use the email library from the stdlib, which I originally wrote back in the Python 2 days. These days, it’s been very much improved and extended in Python 3 by R. David Murray and others. I’m also quite a fan of the stdlib contextlib library, and relatively recently I got a chance to dig into the new asyncio library. That’s pretty braintwisty but really cool once you get the hang of it.

For third party libraries, I really love Falcon, which is a REST API library. Mailman 3 has a distributed architecture, with components like Postorius and HyperKitty talking to the core over REST, and Falcon is what powers this for us. It’s an amazingly cool library.

In Mailman we also make heavy use of zope.interface and zope.component, two mature libraries for providing component based composition of complex programs. SQLAlchemy for the ORM layer and nose2 for testing are also favorites. I have my own little suite of flufl libraries that provide some nice utilities as well.

Where do you see Python going as a programming language?

I think Python’s future is bright. The size of Pycon certainly attests to its continued popularity, and I think there are a lot of interesting ongoing developments. We have a number of implementations of Python, and ones like PyPy show constant improvement in speed, compatibility, and innovation. I’m excited to hear about some of the work regarding JITs, optional static typing, and other performance and multi-core work going on in the community.

I’m very happy that Python 3 adoption is strong and growing. It’s no longer the case that we have to convince people to support it, but it’s more like a known bug if you *don’t*. The notable exceptions are dwindling and we’re really in the long-tail of porting to Python 3. I’ve been using it almost exclusively for many years now, and don’t miss Python 2 at all.

Is there anything else you’d like to say?

At the time of this writing, Pycon 2016 is exactly one month away, and my colleague Larry Hastings and I are once again chairing the Language Summit. Every year around this time I get so fired up about not just the amazing things happening *in* Python, and not just the incredible things people are doing *with* Python. More than anything, I can’t wait to once again hobnob with the most incredible open source community in the world. We had 20 people at the first Python workshop back in 1994, and I’ve heard that Pycon this year will host about 3000 Python enthusiasts. There’s nothing in my technical career that can compare with being in a conference center with so many amazing people, attending inspiring and mind blowing talks, capped off with 4 days of sprints with friends and co-developers. I can’t wait!

June 27, 2016 12:30 PM


Python Insider

Python 3.5.2 and Python 3.4.5 are now available

Python 3.5.2 and Python 3.4.5 are now available for download.

You can download Python 3.5.2 here, and you can download Python 3.4.5 here.

June 27, 2016 04:33 AM

June 26, 2016


Weekly Python Chat

List Comprehensions Workshop

When should you use list comprehensions?

When should you not use list comprehensions?

How do you make your list comprehensions readable?

How can you identify whether a code section might be translatable to a comprehension?
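For a taste of that last question (an illustration of mine, not from the event description): a loop whose only job is to build up a new list is the classic candidate.

# loop version: create, loop, append
doubles = []
for n in range(10):
    doubles.append(n * 2)

# the same thing as a list comprehension
doubles = [n * 2 for n in range(10)]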

This event will be approximately 75 minutes long and will consist of:

June 26, 2016 04:00 PM


PyCon Australia

The 2016 Programme is here!

We are proud to unveil the 2016 Programme to you all. While it is still officially a draft and exact timings are subject to change, the bulk of the Programme will remain as it is.

Spread across four rooms at the Melbourne Convention and Exhibition Centre, running from 9am to 6pm each day, there is a wealth of knowledge and support for you, whatever your background.

For your convenience, each day's details can be found below.

In related news, tickets have been selling steadily leading up to this point, after the initial burst. In previous years, many people have left registering until too late, and some unfortunately missed out. Don't leave it too late this time around!

  • Friday 12 August (Internet of Things, Education Summit, DjangoCon AU, and Science and Data)
  • Saturday 13 August
  • Sunday 14 August

Should you have submitted a proposal and are currently waitlisted, we will contact you shortly, as the final programme slots are locked in.

    June 26, 2016 12:23 PM


    Vasudev Ram

    A Pythonic ratio for pi (or, Py for pi :)

    By Vasudev Ram

    Py


    Pi image attribution

    A Python built-in method can be used to find a ratio (of two integers) that equals the mathematical constant pi. [1]

    This is how. First, import print_function so that print() works the same in Python 2 and 3:

    from __future__ import print_function

    Doing:

    dir(0.0) # or dir(float)

    gives (some lines truncated):
    '__str__', '__sub__', '__subclasshook__', '__truediv__', '__trunc__', 
    'as_integer_ratio', 'conjugate', 'fromhex', 'hex', 'imag', 'is_integer', 'real']
    >>>
    from which we see that as_integer_ratio is a method of float objects. (Floats are objects, so they can have methods.) So:
    >>> import math

    >>> tup = math.pi.as_integer_ratio()
    >>> tup
    (884279719003555, 281474976710656)

    >>> tup[0] / tup[1]
    3.141592653589793

    >>> import sys
    >>> print(sys.version)
    3.6.0a2 (v3.6.0a2:378893423552, Jun 14 2016, 01:21:40) [MSC v.1900 64 bit (AMD64
    )]
    >>>
    I was using Python 3.6 above. If you do this in Python 2.7, the "/" causes integer division (when used with integers). So you have to multiply by a float to cause float division to happen:
    >>> print(sys.version)
    2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 20:40:30) [MSC v.1500 64 bit (AMD64)]

    >>> tup[0] / tup[1]
    3L
    >>> 1.0 * tup[0] / tup[1]
    3.141592653589793
    >>>
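    As an aside (my addition, not part of the original post), the standard library's fractions module gives the same ratio, since Fraction can be built directly from a float:

    >>> import math
    >>> from fractions import Fraction
    >>> Fraction(math.pi)
    Fraction(884279719003555, 281474976710656)
    >>> Fraction(math.pi) == Fraction(*math.pi.as_integer_ratio())
    True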
    [1] There are many Wikipedia topics related to pi.
    Also check out a few of my earlier math-related posts (including the one titled "Bhaskaracharya and the man who found zero" :)

    The second post in the series on the uses of randomness will be posted in a couple of days - sorry for the delay.

    - Vasudev Ram - Online Python training and consulting

    Signup to hear about my new courses and products.

    My Python posts

    Subscribe to my blog by email

    My ActiveState recipes



    June 26, 2016 10:07 AM


    Doing Math with Python

    O'Reilly Webcast: Doing Math with Python

    I am very excited to share that I am doing a webcast this coming week with O'Reilly titled "Doing Math with Python". You can register for it on the event page.

    Here are the date and time of the webcast:

    Wed, June 29th at 7pm - San Francisco

    Wed, June 29th at 10pm - New York

    Thu, June 30th at 3am - London

    Thu, June 30th at 7:30am - Mumbai

    Thu, June 30th at 10am - Beijing

    Thu, June 30th at 11am - Tokyo

    Thu, June 30th at 12pm - Sydney

    I have created a GitHub repository which will have the rough transcript, final slides and the code examples as Jupyter Notebooks.

    June 26, 2016 09:00 AM


    Full Stack Python

    Setting Up Python 3, Django & Gunicorn on Linux Mint 17.3

    Linux Mint 17.3 "Rosa" is the December 2015 release of the polished and widely-used Linux distribution. This Mint release includes both Python 2.7 and 3.4 by default, but in this tutorial we will download and install the latest Python 3.5.1 version to run our Django application.

    If you want to use a different Linux distribution such as Ubuntu instead of Mint, check out the tutorial for Ubuntu 16.04 "Xenial Xerus". If Mint is your desired development environment though, let's get started!

    Tools We Need

    Our setup will use several system packages and code libraries to get up and running. Do not worry about installing these dependencies just yet; we will get to them as we progress through the tutorial. The tools and their current versions as of June 2016 are:

    • Python version 3.5.1
    • Django web framework version 1.9.7
    • Green Unicorn (Gunicorn) WSGI server version 19.6

    If you are on Mac OS X or Windows, my recommendation is to use virtualization software such as Parallels or VirtualBox with the Linux Mint Cinnamon desktop .iso.

    We should see a desktop screen like this one when we boot up the operating system for the first time.

    Linux Mint default desktop

    Open up terminal to proceed with the configuration.

    System Packages

    We can see the Python version Linux Mint comes with, as well as where its executable is stored.

    python3 --version
    which python3
    

    The output of those two commands should be (these are not commands to run):

    Python 3.4.3
    /usr/bin/python3
    

    Output of 'python3 --version' and 'which python3' commands.

    We really want to use the latest Python release instead of the default 3.4 when starting a new Python project, so let's download and install 3.5.1 now.

    Run these commands in the terminal to download Python 3.5.1 source code:

    cd ~/Downloads
    wget https://www.python.org/ftp/python/3.5.1/Python-3.5.1.tgz
    

    wget Python source code output.

    Extract the Python source code:

    tar -xvf Python-3.5.1.tgz
    

    Linux Mint is not configured by default to build the Python source code. We need to update our system package lists and install several packages to make building the Python source code possible. If you have a password on your user account, enter it when prompted to allow the installation to proceed.

    sudo apt update
    sudo apt install build-essential checkinstall
    sudo apt install libreadline-gplv2-dev libncursesw5-dev libssl-dev 
    sudo apt install libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev
    sudo apt install python3-dev
    

    Once the packages are installed, we can configure and install Python from source.

    cd Python-3.5.1
    ./configure
    sudo make install
    

    Test that the installation worked properly by starting up the Python REPL:

    python3.5
    

    If the REPL starts up properly with Python 3.5.1 in the output then we're good to go.

    Python 3.5.1 REPL output.

    The basic system packages we need are now installed so we can proceed to our Python-specific dependencies.

    Virtual environment and pip

    Python 3.5 comes with the virtual environment and pip applications so we can use them to handle our application dependencies.

    Create a directory to store virtual environments then create a virtualenv for our Django project.

    # the tilde "~" specifies the user's home directory, like /home/matt
    cd ~
    mkdir venvs
    # specify the system python3 installation
    python3.5 -m venv venvs/djangoproj
    

    Activate the virtualenv.

    source ~/venvs/djangoproj/bin/activate
    

    We should see our prompt change so that we know the virtualenv is properly activated.

    Output from the virtualenv environment activation.

    Our virtualenv with Python 3.5.1 is activated so we can install whatever dependencies we want, such as Django and Gunicorn. Our default python command is also set to use the Python 3.5.1 installation instead of the Python 2.7 version that comes with Linux Mint.

    Django and Gunicorn

    Now we can install Django and Green Unicorn into our virtual environment.

    pip install django==1.9.7 gunicorn==19.6
    

    If there are no errors in the pip output then that is a good sign we can proceed.

    Django and Gunicorn properly install via the pip command.

    Create a new Django project named djangoproj, or whatever you want to name your project. Change into the directory for the new project.

    cd ~
    django-admin startproject djangoproj
    cd djangoproj
    

    We can run Django using the development server with the python manage.py runserver command. However, let's start Django up with Gunicorn instead.

    gunicorn djangoproj.wsgi
    

    Result of running gunicorn djangoproj.wsgi on the command line.

    Awesome, we can bring up our shell project in the web browser at the http://localhost:8000 or http://127.0.0.1:8000 address.

    Django project running in the Firefox web browser.

    Now you're ready for Django development!

    Ready for Development

    Those are the first few steps for beginning development with Django and Gunicorn on Linux Mint 17.3 "Rosa". If you need an even more in-depth walkthrough for deploying your Python web application to a production environment, check out the Full Stack Python Guide to Deployments book.

    To figure out what to do next for your Python project, read the topics found on the table of contents page.

    Questions? Contact me via Twitter @fullstackpython or @mattmakai. I'm also on GitHub with the username makaimc.

    See something wrong in this post? Fork this page's source on GitHub and submit a pull request.

    June 26, 2016 04:00 AM


    Podcast.__init__

    Episode 63 - Armin Ronacher

    Summary

    Armin Ronacher is a prolific contributor to the Python software ecosystem, creating such widely used projects as Flask and Jinja2. This week we got the opportunity to talk to him about how he got his start with Python and what has inspired him to create the various tools that have made our lives easier. We also discussed his experiences working in Rust and how it can interface with Python.

    Brief Introduction

    Linode Sponsor Banner

    Use the promo code podcastinit20 to get a $20 credit when you sign up!

    sentry-horizontal-black.png

    Stop hoping your users will report bugs. Sentry’s real-time tracking gives you insight into production deployments and information to reproduce and fix crashes. Use the code podcastinit at signup to get a $50 credit!

    Interview with Armin Ronacher

    Your hosts as usual are Tobias Macey and Chris Patti, and today we're interviewing Armin Ronacher about his contributions to the Python community.

    • Introductions
    • How did you get introduced to Python? - Chris
    • What was the first open source project that you created in Python? - Tobias
    • What is your view of the responsibility for open source project maintainers, and how do you manage a smooth handoff for projects that you no longer wish to be involved in? - Tobias
    • You have created a large number of successful open source libraries and tools during your career. What are some of the projects that may be less well known that you think people might find interesting? - Tobias (e.g. logbook)
    • I notice that you recently worked on the pipsi project. Please tell us about it! - Chris
    • Following on from the last question, where would you like to see the Python packaging infrastructure go in the future? - Chris
    • You have had some strong opinions of Python 2 vs Python 3. How has your position on that subject changed over time? - Tobias
    • Let's talk about Lektor - what differentiates it from the pack, and what keeps you coming back to CMS projects? - Chris
    • How has your blogging contributed to the work that you do and the success you have achieved? - Tobias
    • Lately you have been doing a fair amount of work with Rust. What was your reasoning for learning that language and how has it influenced your work with Python? - Tobias
    • In addition to the code you have written, you also helped to form the Pocoo organization. Can you explain what Pocoo is and what it does? What has inspired the rebranding to the Pallets project? - Tobias

    Keep In Touch

    • Twitter

    Picks

    • Tobias: Radical Candor
    • Chris: Loverbeer BeerBrugna, The Human Resource Machine
    • Armin: Biermanufaktur Loncium, Matakustix - Hai Hai Haibodn

    Links

    • PHPbb
    • Pocoo
    • Pallets Project

    The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA

    June 26, 2016 12:48 AM

    June 25, 2016


    Damián Avila

    How to pin Conda

    One interesting advanced feature in Conda is the capacity to pin packages in your environments so they cannot be updated at all. If you depend on a specific version of some package, and not the next one, because the newer one breaks your work completely or for some other reason, you are probably pinning that package. If you are adding the specific version in every command you run instead of pinning the package, you are doing it wrong and you should keep reading ;-)

    But, is it possible to pin Conda itself so it does not get updated every time you try to install/update something else?

    Read more… (2 min remaining to read)

    June 25, 2016 03:54 PM


    Catalin George Festila

    OpenGL and OpenCV with python 2.7 - part 002.

    Today I deal with OpenCV and fix some of my errors.
    One error I got with cv2.VideoCapture when I tried to use it to load a video and use createBackgroundSubtractorMOG2():

    cv2.error: C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\highgui\src\window.cpp:281: error: (-215) size.width>0 && size.height>0 in function cv::imshow
    You also need to have opencv_ffmpeg310.dll and opencv_ffmpeg310_64.dll in C:\Windows\System32; this is what let me play videos.
    Also make sure you have OpenCV version 3.1.0, because OpenCV comes with some changes on the Python side.
    C:\Python27\python
    Python 2.7.8 (default, Jun 30 2014, 16:08:48) [MSC v.1500 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>>import cv2
    >>>print cv2.__version__
    3.1.0

    You can get some info about the OpenCV Python module (cv2) with:

    >>>cv2.getBuildInformation()
    ...
    >>>cv2.getCPUTickCount()
    ...
    >>>print cv2.getNumberOfCPUs()
    ...
    >>>print cv2.ocl.haveOpenCL()
    True

    You can also avoid some errors by disabling OpenCL:

    >>>cv2.ocl.setUseOpenCL(False)
    >>>print cv2.ocl.useOpenCL()
    False

    Now I will show you how to use the webcam in color and in grayscale, and how to play a video:
    webcam color

    import numpy as np
    import cv2
    cap = cv2.VideoCapture(0)
    while(True):
        ret, frame = cap.read()
        cv2.imshow('frame',frame)
        if 0xFF & cv2.waitKey(5) == 27:
            break
    cap.release()
    cv2.destroyAllWindows()

    webcam gray

    import numpy as np
    import cv2
    cap = cv2.VideoCapture(0)
    while(True):
        ret, frame = cap.read()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imshow('frame',gray)
        if 0xFF & cv2.waitKey(5) == 27:
            break
    cap.release()
    cv2.destroyAllWindows()

    play video

    import cv2
    capture = cv2.VideoCapture("avi_test_001.avi")
    while True:
        ret, img = capture.read()
        if not ret:
            # stop when the video ends, otherwise imshow gets an empty frame
            break
        cv2.imshow('some', img)
        if 0xFF & cv2.waitKey(5) == 27:
            break
    capture.release()
    cv2.destroyAllWindows()


    June 25, 2016 11:20 AM


    Weekly Python StackOverflow Report

    (xxv) stackoverflow python report

    These are the ten most rated questions at Stack Overflow last week.
    Between brackets: [question score / answers count]
    Build date: 2016-06-25 09:53:40 GMT


    1. Is there a Python constant for Unicode whitespace? - [13/1]
    2. Cut within a pattern using Python regex - [10/4]
    3. Meaning of '>>' in Python byte code - [10/1]
    4. Empty class size in python - [9/2]
    5. How to write a complete Python wrapper around a C Struct using Cython? - [8/1]
    6. How to write unittests for an optional dependency in a python package? - [7/2]
    7. Why does Python's float raise ValueError for some very long inputs? - [7/2]
    8. Fast algorithm to find indices where multiple arrays have the same value - [6/4]
    9. Identifying consecutive occurrences of a value - [6/3]
    10. How to obtain the right alpha value to perfectly blend two images? - [6/2]

    June 25, 2016 09:55 AM


    Catalin George Festila

    OpenGL and OpenCV with python 2.7 - part 001.

    First you need to know what version of python you use.
    C:\Python27>python
    Python 2.7.8 (default, Jun 30 2014, 16:08:48) [MSC v.1500 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>>

    You need also to download the OpenCV version 3.0 from here.
    Then run the executable into your folder and get cv2.pyd file from \opencv\build\python\2.7\x64 and paste to \Python27\Lib\site-packages.
    If you use the 32 bit Python version, then use this path: \opencv\build\python\2.7\x86.
    Use pip to install next python modules:
    C:\Python27\Scripts>pip install PyOpenGL
    ...
    C:\Python27\Scripts>pip install numpy
    ...
    C:\Python27\Scripts>pip install matplotlib
    ...

    Let's see how OpenGL is working:
    C:\Python27>python
    Python 2.7.8 (default, Jun 30 2014, 16:08:48) [MSC v.1500 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import OpenGL
    >>> import numpy
    >>> import matplotlib
    >>> import cv2
    >>> from OpenGL import *
    >>> from numpy import *
    >>> from matplotlib import *
    >>> from cv2 import *

    You can also use dir(module) to see more. You can import all from GL, GLU and GLUT.
    >>> dir(OpenGL)
    ['ALLOW_NUMPY_SCALARS', 'ARRAY_SIZE_CHECKING', 'CONTEXT_CHECKING', 'ERROR_CHECKING', 'ERROR_LOGGING', 'ERROR_ON_COPY', 'FORWARD_COMPATIBLE_ONLY', 'FULL_LOGGING', 'FormatHandler', 'MODULE_ANNOTATIONS', 'PlatformPlugin', 'SIZE_1_ARRAY_UNPACK', 'STORE_POINTERS', 'UNSIGNED_BYTE_IMAGES_AS_STRING', 'USE_ACCELERATE', 'WARN_ON_FORMAT_UNAVAILABLE', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', '__version__', '_bi', 'environ_key', 'os', 'plugins', 'sys', 'version']
    >>> from OpenGL.GL import *
    >>> from OpenGL.GLU import *
    >>> from OpenGL.GLUT import *
    >>> from OpenGL.WGL import *

    If you are very familiar with the Python OpenGL module, then you can import just what you need, like in this example:
    >>> from OpenGL.arrays import ArrayDatatype
    >>> from OpenGL.GL import (GL_ARRAY_BUFFER, GL_COLOR_BUFFER_BIT,
    ... GL_COMPILE_STATUS, GL_FALSE, GL_FLOAT, GL_FRAGMENT_SHADER,
    ... GL_LINK_STATUS, GL_RENDERER, GL_SHADING_LANGUAGE_VERSION,
    ... GL_STATIC_DRAW, GL_TRIANGLES, GL_TRUE, GL_VENDOR, GL_VERSION,
    ... GL_VERTEX_SHADER, glAttachShader, glBindBuffer, glBindVertexArray,
    ... glBufferData, glClear, glClearColor, glCompileShader,
    ... glCreateProgram, glCreateShader, glDeleteProgram,
    ... glDeleteShader, glDrawArrays, glEnableVertexAttribArray,
    ... glGenBuffers, glGenVertexArrays, glGetAttribLocation,
    ... glGetProgramInfoLog, glGetProgramiv, glGetShaderInfoLog,
    ... glGetShaderiv, glGetString, glGetUniformLocation, glLinkProgram,
    ... glShaderSource, glUseProgram, glVertexAttribPointer)

    Most of these OpenGL calls need a valid OpenGL rendering context.
    For example, you can test this with WGL (WGL, or Wiggle, is an API between OpenGL and the windowing system interface of Microsoft Windows):
    >>> import OpenGL
    >>> from OpenGL import *
    >>> from OpenGL import WGL
    >>> print WGL.wglGetCurrentDC()
    None

    Now, let's see the OpenCV Python module with one simple webcam Python script:

    import numpy as np
    import cv2
    cap = cv2.VideoCapture(0)
    while(True):
        ret, frame = cap.read()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imshow('frame',gray)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

    This is the result from my webcam:



    June 25, 2016 07:51 AM


    Jamal Moir

    Creating Attractive and Informative Map Visualisations in Python with Basemap


    When doing a bit of good old data exploration and analysis you'll often want ways of visually representing this data. It helps you understand what you're working with, and allows you to present your exciting new discoveries to other people in easy-to-digest formats that even the data-illiterate can understand (if those people even exist nowadays).

    Some data has a location associated with it. Things like per-city or per-country populations or general election votes can be tied to a location somewhere. If you have data like this, then being able to plot or represent it somehow on a map can be of massive help. It allows you to spot trends in areas and visually recognise groups in your data, as well as being a fantastic method of communicating your results to people.

    If you are doing your data analysis in Python, then lucky you; representing your data on a map is a fairly simple task. In this post we will go into just how we do this and by the end will have a pretty little map to communicate our data to the masses.

    To follow along with this post, you should have basic knowledge of Pandas and Matplotlib, but if you don't, don't worry because you can go and check out my Pandas and Matplotlib tutorials to learn everything you need for this post and more.


    THE TOOL OF THE TRADE - BASEMAP

    The tool that we will be using to create our map visualisation is Matplotlib Basemap. Now this tool is actually not part of the Matplotlib package, so you'll have to install it separately.

    I hope for your sake that you are using Anaconda; if you are you can simply run the command conda install basemap. If you are not using Anaconda, then you are going to have to spend a bit of time manually installing it. You can find information on how to manually install Basemap here.


    DRAWING A BASIC MAP

    OK! Now we have Basemap installed and ready to use, we can get going and draw our first map. To draw a map in Basemap you first need to know a few things:
    • Where you want your map to be centred
    • The latitude and longitude of the lower left corner of the bounding box around the area you want to map.
    • The latitude and longitude of the upper right corner of the bounding box around the area you want to map.
    • Instead of the corners of the bounding box you can also use the width and height of the area you want to map in metres.
    Now you may be wondering 'how the hell am I supposed to know all that stuff?'. Well, I've got you covered there.

    First go to this very useful website and you will be presented with a world map. In the top left corner of the map there is a button with a cursor icon on, click that and draw a box around the area you want to map.



    At the bottom there is a box with some longitudes and latitudes in. To the left of that there is a drop-down menu; click that and select DublinCore; this is the easiest format to understand in my opinion and it's in a form that can be directly used in Basemap. The first two numbers, labelled 'westlimit' and 'southlimit', are the longitude and latitude of your lower left corner. The other two, labelled 'eastlimit' and 'northlimit', are the longitude and latitude of your upper right corner.


    Now that we have the information we need to draw our map, we can get to writing some code and actually producing a basic map. Note that in this post the data I will be using to plot points and such on the map is 2015 England and Wales property price data, so I will be drawing the UK. I will provide links to all the data I used in this visualisation, but by all means use your own and make a completely different map. In fact, I recommend you do!

    First we will import the packages that we will be using.

    import matplotlib.pyplot as plt
    import matplotlib.cm

    # numpy and pandas are used further down when we build a DataFrame of shapes
    import numpy as np
    import pandas as pd

    from mpl_toolkits.basemap import Basemap
    from matplotlib.patches import Polygon
    from matplotlib.collections import PatchCollection
    from matplotlib.colors import Normalize

    Next we will create a figure to draw our map on and set its size.

    fig, ax = plt.subplots(figsize=(10,20))

    We can create our map with the below code.

    m = Basemap(resolution='c', # c, l, i, h, f or None
                projection='merc',
                lat_0=54.5, lon_0=-4.36,
                llcrnrlon=-6., llcrnrlat=49.5, urcrnrlon=2., urcrnrlat=55.2)

    Now, there are a fair amount of arguments here, but they are all pretty easy to understand. The 'resolution' argument is the quality of the map you are creating. The options are crude, low, intermediate, high or full. The higher the resolution the longer it takes to render the map, and it can take a very long time so I recommend that while you are working on your map, set it to crude, and then if and when you want to publish it set it to full.

    The 'projection' is the type of map that you want to draw. There are lots of types that you can use that all have different use cases so I recommend you take a look at the available ones here.

    The 'lat_0' and 'lon_0' are the latitude and longitude of the centre point of your map. The other arguments are the latitudes and longitudes of your bounding box corners. 'llcrnr' stands for 'lower left corner' and 'urcrnr' stands for upper right corner. Fill these in with the latitudes and longitudes that you got earlier.

    Now we just need to define how the map is to be displayed and we have our basic map.
    m.drawmapboundary(fill_color='#46bcec')
    m.fillcontinents(color='#f2f2f2',lake_color='#46bcec')
    m.drawcoastlines()

    With the drawmapboundary() function we can set the colour of the seas and oceans on our map. Here I have set it to a light blue colour. The fillcontinents() function does just as it suggests: it sets the colour of land masses. I have set them to a light grey colour and have set lakes to the same colour as the sea. Finally, the drawcoastlines() function draws lines around the land masses.

    You should now have a map looking a bit like this. Obviously the area will be different if you chose a different place and the colours will vary too if you changed those. Also note that this map has been drawn using the crude setting.



    PLOTTING DATA POINTS ONTO A MAP

    We now have our map, but what we really want to do is to use it to communicate our data, so let's plot some points on it.

    Now as mentioned before I will be using England and Wales property price data. You can download this data here. I also have done a bit of data analysis and manipulation on this that you will also need to do if you want to produce the same map as me. I'm not going to go into what I did here as it doesn't fit the scope of this post, but the notebook that I did this all in can be found here. I will be plotting newly built houses.

    Plotting points onto a Basemap map is very easy. A few things to note about the below code though, are that my data is stored in a Pandas DataFrame called new_areas, the location of these areas are in new_areas.pos and the number of newly built houses in that area is in new_areas.count.

    def plot_area(pos):
        count = new_areas.loc[new_areas.pos == pos]['count']
        x, y = m(pos[1], pos[0])  # convert lon/lat to map coordinates
        size = (count/1000) ** 2 + 3
        m.plot(x, y, 'o', markersize=size, color='#444444', alpha=0.8)

    new_areas.pos.apply(plot_area)

    What we are doing here is making a function that takes a position and then plots the number of new houses associated with that position onto our map represented by the size of the point. Then using apply() on our Pandas DataFrame's pos column we go through every position in our DataFrame and plot them onto our map.

    You should end up with something along the lines of this:


    Don't worry about the points in the sea, that's just because on a crude map the shape is not perfect. When we finish up and render our map with full resolution they will be safely on land.

    USING SHAPEFILES TO DRAW AREAS AND REGIONS

    Now we have a map that can transmit information, but what if we want to represent regions or specific areas on our map? For example, in the UK we have counties, and in the USA, states. We can do this using shapefiles. I will be drawing in England and Wales postcode boundaries using the shapefile which can be found here.

    This is actually just a one-liner; nice and simple.

    m.readshapefile('data/uk_postcode_bounds/Areas', 'areas')

    The first argument is the path to your shapefile. The second is the name that will be used to access your shapefile. Here I will be able to access the data from the shapefile using m.areas.

    You should now have a map like this:


    Again, don't worry about the shapefile not matching up with the map, it's because we have the map's resolution set to crude.


    USING DATA TO COLOUR IN AREAS

    Now we have areas drawn onto our map, wouldn't it be nice to be able to use our data to colour them in? For example, in my case, the higher the number of new houses in an area, the darker the colour of the area. We'll also add a colour bar to give people looking at the map an idea of what kind of number a colour represents.

    First we are going to create a new DataFrame for convenience that will hold all the information we need.

    df_poly = pd.DataFrame({
        'shapes': [Polygon(np.array(shape), True) for shape in m.areas],
        'area': [area['name'] for area in m.areas_info]
    })
    df_poly = df_poly.merge(new_areas, on='area', how='left')

    Here we are getting the polygons from the shapefile that we imported earlier. My shapefile also contained the names of each area, which we add to the new DataFrame too. We then merge the two DataFrames on the area column, which adds the other information about the areas that we need.

    Next we need to use this information to colour in the areas.

    cmap = plt.get_cmap('Oranges')   
    pc = PatchCollection(df_poly.shapes, zorder=2)
    norm = Normalize()

    pc.set_facecolor(cmap(norm(df_poly['count'].fillna(0).values)))
    ax.add_collection(pc)

    First we create a colormap to use with our map and data. I like orange so that's what I'm going to go with, you can find other colormaps here.

    We then create a PatchCollection using the shapes from our shapefile which are now stored in the DataFrame that we previously made. The 'zorder' argument just makes sure that the patches that we are creating end up on top of the map, not underneath it.

    Next for convenience we create a variable for the function Normalize() which we then use when setting the PatchCollections facecolor. We colour the patches with our colormap that we created before and pass it our normalised new houses count data. This makes it so that now patches with high new property counts are a darker colour than those with low new property counts.

    Finally we add the PatchCollection to our map.

    That's it, we now have a map that uses our data to colour in areas. There is one more thing that we should do, however: add a colorbar. This makes it a lot easier to interpret the colours of the map and relate them to a number.

    mapper = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)

    mapper.set_array(df_poly['count'])
    plt.colorbar(mapper, shrink=0.4)

    First we create a ScalarMappable object and use the set_array() function to add our counts to it. We then pass it to Matplotlib's colorbar() function and set the shrink argument to 0.4 in order to make the colorbar smaller than the map and we are done.

    Change the map's resolution to 'f' for full and you should now have an attractive and informative map visualisation written in Python with Matplotlib and Basemap that will look something like this:



    To see all this code together in action, you can go here.

    Remember to share this post so that other people can read it too, and to subscribe to this blog's mailing list, follow me on Twitter and add me on Google+ so you don't miss any useful posts!

    Also, if you make or have made a map please by all means comment on this post with a link to wherever we can find it, I'd love to see what other people come up with.

    June 25, 2016 06:01 AM

    June 24, 2016


    Control F'd

    In fact you could've just used curl

    I wanted to get some text data from the discussion forums on Project Euler about the problems I had solved (context: Once you solve a problem on Project Euler, you get access to a discussion forum where people share how they solved the problem. I wanted to see what other people had done on the same problems).

    June 24, 2016 06:21 PM


    Mike Driscoll

    Book Review: Python Projects for Kids

    I get asked by publishers to review books from time to time. Last month, Packt asked me if I’d be willing to review their book, Python Projects for Kids by Jessica Ingrassellino. Frankly, I tend to avoid beginner Python books now because they tend to be very similar, but I thought this one might be interesting.


    Quick Review

    • Why I picked it up: In this case, because Packt Publishing asked me to
    • Why I finished it: Mostly because Packt personnel badgered me to do so
    • I’d give it to: Not really sure. There are much better, more in-depth beginner books for Python out there


    Book Formats

    You can get an eBook (PDF, EPUB or MOBI) version or a softcover.


    Book Contents

    This book only has 10 chapters and is 193 pages long.


    Full Review

    First off, I didn’t actually read every single word in this book. I call this the skimming review method. Personally, I prefer to read a book at my own pace and review it accordingly; however, I have been asked repeatedly to finish this review, so this is what you get. My first impression was that this book would teach youngsters how to program in Python by creating mini-games. However, we don’t really get into games until chapter 5, and we don’t learn anything about pygame until chapter 8. Let’s go over each chapter and see what they’re about before I really dig in, though.

    Chapter one is your basic intro to what Python is and how to install it. The author chose to use Python 2.7 for this book over Python 3. The rest of the chapter is about creating a “Hello World” application and a work folder.

    Chapter two is about variables and functions. This chapter is pretty brief, but I thought it covered the topics well enough. The biggest thing it does is explain to the reader how to create a function and save it to a file.

    For chapter three, we actually get to create a calculator of sorts. It’s text-based only and doesn’t do anything to handle bad input from the user. In fact, one big knock against this book is that it doesn’t talk about exception handling at all. I also have a couple of problems with this chapter. I believe that page 34 shows the wrong screenshot, as the accompanying text talks about casting from one type to another while the screenshot doesn’t show any kind of casting whatsoever. The other issue is that on page 41, the text states that you can run the script as written in the book. However, I don’t see anything in the code that actually calls any of the functions, so if you run this code, nothing will be output to the terminal.

    Chapter four is all about conditional statements and loops. The purpose of this chapter is to enhance the calculator application you wrote in the previous chapter such that it keeps running until the user asks it to quit.

    In chapter five, we learn how to create easy and hard levels for our game. The game is the “Higher or Lower” game. You will learn about what a Boolean is, how to import a library, and global variables.

    Chapter six dives into some of Python’s more interesting data types: the list and the dictionary. The premise of this chapter is to teach the reader how to store data. While I agree that lists and dictionaries are a good fit, I wonder whether pickle, json or yaml might have been worth introducing here too. Admittedly, I don’t think this book covers file I/O, so those topics are probably considered out of scope.

    For chapter seven, the reader learns how to create a two-player game that the author dubs “What’s in Your Backpack?” This chapter helps the reader lay out a game that can keep score, restart, or stop. You will also learn how to create a player profile, which is formatted as a dict. This seems like a good place to use a class to me, especially since we’ll be using pygame in the next chapter, but I realize the target audience is supposed to be kids. Anyway, you also get to add items to a virtual backpack, and it’s fun to see how the author implemented it.

    We finally reach pygame in chapter eight where you learn how to install pygame. You will also learn how to set up the screen size and color as well as create stationary and moving objects.

    Chapter nine builds on chapter eight by teaching the reader how to create a tennis game (i.e. pong). It introduces the reader to the concepts of game programming and how to outline your project before coding it. This chapter is actually split into four sections after this point. The first section basically creates all the pieces of the game that you will need. Section two will teach you how to move the paddles and section three will teach you how to move the ball. Section four is about how to run the game and keep score.

    The final chapter encourages the readers to keep coding! The text tells its readers where to go from here. For example, it talks about how they will need to learn about classes and objects to promote code reuse. It also mentions that you can add music and graphics with pygame. Then it talks about redesigning your game or trying to create your own versions of classic games. Finally, it talks about other uses and libraries for Python, such as SciPy, IPython, and Matplotlib.

    When I first heard about this book, I immediately thought of Jason Briggs’ book, Python for Kids, and the Sandes’ book, Hello World!: Computer Programming for Kids and Other Beginners. Both of those books are longer and contain a lot more information than “Python Projects for Kids” does. I personally think that of the three, the Sande book is the easiest for kids to get into. Briggs covers a lot more interesting topics, but he may go just a tad too fast depending on the child. As for “Python Projects for Kids”, I feel like there are too many items that aren’t covered (classes, exceptions, many Python data structures, etc.). It also feels like pygame itself isn’t really covered: there was a big build-up to get to pygame, and then there just wasn’t much content when we finally got there.

    If I were to lay out a strategy for learning Python for children, I would start with the Sandes and then, if the child wanted to learn about games, move on to Sweigart’s books on creating games with Python (Invent Your Own Computer Games with Python and Making Games with Python & Pygame). Then I might move on to something else, like some of the Python-for-Minecraft books.

    As for this book, I just don’t know where it would fit. I believe it was written well, but it needed some additional polish to push it to the top of the heap.


    Python Projects for Kids

    by Jessica Ingrassellino

    Amazon


    Other Book Reviews

    June 24, 2016 05:30 PM


    Diego Garcia

    The Python groupby trap

    itertools is a fantastic module in the Python standard library for working with iterators and complex data structures. However, a minimum knowledge of generators is recommended in order to avoid possible traps. Yes, I fell into yet another Python trap; this time it was groupby, from the itertools module.

    What is groupby?

    groupby is a function that, given an iterable, returns a grouping structure with a key value and the group of values related to that key. The groupby function has the following signature:

    def groupby(iterable, key=None)
    

    Where:

    • iterable: the sequence (or any iterable) whose items will be grouped
    • key: a function that computes the grouping key for each item; when None, each item is its own key

    The result of the groupby function is a generator where each iteration returns the key value and another generator with the values grouped under that key, for example:

    >>> from itertools import groupby
    >>> items = [('animal', 'dog'), ('animal', 'cat'), ('person', 'john')]
    >>> for thing, values in groupby(items, key=lambda x: x[0]):
    ...     print('{}: {}'.format(thing, list(values)))
    ...
    animal: [('animal', 'dog'), ('animal', 'cat')]
    person: [('person', 'john')]
    

    I used list() on values to consume the generator and show the values in the print (rather than the generator instance).

    The trap

    As you can see, groupby really is very useful and powerful. But what would happen if the iterable were not previously sorted by the same criterion used for the grouping? Let's adapt the previous example to run this test:

    >>> from itertools import groupby
    >>> items = [('animal', 'dog'), ('person', 'john'), ('animal', 'cat')]
    >>> for thing, values in groupby(items, key=lambda x: x[0]):
    ...     print('{}: {}'.format(thing, list(values)))
    ...
    animal: [('animal', 'dog')]
    person: [('person', 'john')]
    animal: [('animal', 'cat')]
    

    As you can see, the grouping fails, returning the same key more than once, each time with a different group of values.

    Why does this happen?

    This happens because, internally, groupby starts a new group every time a new key value appears in the iterable. Even if a key repeats later, groupby cannot "look back" and check the groups it has already produced.
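
    To make this concrete, here is a simplified pure-Python sketch of the idea (my own illustration, not CPython's actual implementation): it only remembers the previous key, so a key that reappears later simply starts a brand-new group.

    def naive_groupby(iterable, key=lambda x: x):
        """Yield (key, group) pairs, comparing each item only to the previous key."""
        current_key = object()  # sentinel that compares unequal to everything
        group = []
        for item in iterable:
            k = key(item)
            if k != current_key:
                if group:
                    yield current_key, group  # emit the finished group
                current_key, group = k, [item]
            else:
                group.append(item)
        if group:
            yield current_key, group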

    How to fix it?

    Simple: before grouping, just sort the iterable by the same key that groupby will use for the grouping, for example:

    >>> from itertools import groupby
    >>> items = [('animal', 'dog'), ('person', 'john'), ('animal', 'cat')]
    >>> ordered_items = sorted(items, key=lambda x: x[0])
    >>> for thing, values in groupby(ordered_items, key=lambda x: x[0]):
    ...     print('{}: {}'.format(thing, list(values)))
    ...
    animal: [('animal', 'dog'), ('animal', 'cat')]
    person: [('person', 'john')]
    

    How to avoid it?

    Simple: read the documentation!!! Yes, my slip-up was even worse because the official Python documentation warns about exactly this risk:

    The operation of groupby() is similar to the uniq filter in Unix. It generates a break or new group every time the value of the key function changes (which is why it is usually necessary to have sorted the data using the same key function). That behavior differs from SQL’s GROUP BY which aggregates common elements regardless of their input order.

    Granted, this warning could be more prominent, or even come with an example; still, there's no use complaining that it isn't documented =).

    References
    Official documentation

    June 24, 2016 01:00 PM


    Fabio Zadrozny

    PyDev 5.1.2: pytest integration

    PyDev 5.1.2 is now out. The major change is in the pytest integration.

    For those that don't know about it, pytest (http://pytest.org) is a Python test framework which requires less scaffolding for your tests: you don't need a class hierarchy as in PyUnit; plain test functions in a module, with asserts for checks, suffice, and pytest takes care of providing a reasonable error message. It also has an interesting fixture concept, which allows structuring the test environment in a way that's more natural than an inheritance hierarchy with unittest.TestCase.setUp/tearDown.
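
    A minimal sketch of what that looks like in practice (the file, fixture, and test names here are my own illustration):

    # test_numbers.py -- pytest collects files and functions named test_* automatically
    import pytest

    @pytest.fixture
    def numbers():
        # Fixtures replace setUp/tearDown: a test receives one by naming it as a parameter.
        return [1, 2, 3]

    def test_sum(numbers):
        # A plain assert suffices; on failure pytest reports the compared values.
        assert sum(numbers) == 6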

    If you want to use pytest in PyDev, the test runner under Preferences > PyDev > PyUnit has to be set to pytest.

    It's also worth noting that PyDev makes it trivial to run a single test: select the test using the method navigation (Ctrl+Shift+Up and Ctrl+Shift+Down) and press Ctrl+F9. This opens the outline for selecting which tests to run, with only that test selected by default; then press Enter to run the test (or Shift+Enter to debug it). Note that this works with the regular unittest too, not only with pytest.

    June 24, 2016 10:39 AM


    Python Anywhere

    Latest deploy: new stylings, editor fixes, and our API beta

    Morning all! A lovely day for leave-ing an old server image behind and welcoming in a new, independent, codebase. #brupgrade #brelease #breployment.

    Style tweaks, improvements to responsiveness

    On a bit of a whim we decided to upgrade to Bootstrap 3, so you'll notice slightly different stylings. Flat buttons! Oh-so-3-years-ago. But there are also some improvements to the way the site displays on mobile and smaller screens, which is nice.

    We also upgraded to the latest version of the Ace editor, which should bring a few little improvements too, like better vim keybindings and better support for iPads (you can now scroll, yay!).

    API beta

    It's not ready for prime time yet, but we've started work on a PythonAnywhere API. It may end up not being something we publish for general use, just something for us to use behind the scenes, but if you're keen to take a look, get in touch and we'll switch it on for you. Currently the API allows you to do stuff to your web apps, namely:

    You should bear in mind that anything you build using that API will probably break when we next do a release; we're making no guarantees about backward compatibility, or even that the API will keep working as it is. So really it's just for playing around, or for the curious, for now. Still, email us if you're interested, we'd love to hear from you.

    Other changes

    Your comments and suggestions are always welcome. Enjoy!

    June 24, 2016 08:47 AM


    Talk Python to Me

    #64 Inside the Python Package Index

    What is the most powerful part of the Python ecosystem? Well, the ability to say "pip install magic_library" has to be right near the top. But do you know what powers the Python Package Index, and the people behind it? Did you know it serves over 300 TB of traffic each month these days?

    Join me as we chat with Donald Stufft to look inside Python's package infrastructure.

    Links from the show:

    Donald on Twitter: https://twitter.com/dstufft
    Donald on the web: https://caremad.io
    Powering the Python Package Index: https://caremad.io/2016/05/powering-pypi/
    A Year of PyPI Downloads: https://caremad.io/2015/04/a-year-of-pypi-downloads/
    Donate to PyPI: http://donate.pypi.io/
    PyPI (Legacy): https://pypi.python.org/pypi
    Warehouse (new PyPI): https://pypi.io/
    BigQuery Data Source: https://mail.python.org/pipermail/distutils-sig/2016-May/028986.html

    June 24, 2016 08:00 AM


    Nigel Babu

    Scraping the Indian Judicial System

    This blog post has been sitting in my drafts folder for a long time; it's time I finished it. A while ago, I did some work for Vidhi, scraping the Supreme Court of India website. Later on, I started on parts of the work to scrape a couple of High Courts. Here are a few quick lessons from my experience:

    June 24, 2016 04:20 AM

    June 23, 2016


    A. Jesse Jiryu Davis

    72% Of The People I Follow On Twitter Are Men

    Description: Black and white photo. A boy stands behind a very large abacus that fills the image. He looks up at the ball he is moving on one of the abacus's wires, above his eye-level. Behind him are two schoolchildren and a chalkboard with indistinct writing and diagrams.

    At least, that's my estimate. Twitter does not ask users their gender, so I have written a program that guesses based on their names. Among those who follow me, the ratio is even worse: 83% are men. None are gender-nonbinary as far as I can tell.

    The way to fix the first ratio is not mysterious: I should notice and seek more women experts tweeting about my interests, and follow them.

    The second ratio, on the other hand, I can merely influence, but I intend to improve it as well. My network on Twitter should represent the software industry's diverse future, not its unfair present.


    How Did I Measure It?

    I set out to estimate the gender ratio of the people I follow—my "friends" in Twitter's jargon—and found it surprisingly hard. Twitter analytics readily shows me the converse, an estimate of my followers' gender ratio:

    Description: Chart from Twitter analytics estimating the gender split of my followers.

    So, Twitter analytics divides my followers' accounts among male, female, and unknown, and tells me the ratio of the first two groups. (Gender-nonbinary folk are absent here—they're lumped in with the Twitter accounts of organizations, and those whose gender is simply unknown.) But Twitter doesn't tell me the ratio of my friends. That which is measured improves, so I searched for a service that would measure this number for me, and found FollowerWonk.

    FollowerWonk guesses my friends are 71% men. Is this a good guess? For the sake of validation, I compare FollowerWonk's estimate of my followers to Twitter's estimate:

                          men    women
    Twitter analytics
      Followers           83%    17%
    FollowerWonk
      Followers           81%    19%
      Friends I follow    72%    28%

    My followers show up 81% male here, close to the Twitter analytics number. So far so good. If FollowerWonk and Twitter agree on the gender ratio of my followers, that suggests FollowerWonk's estimate of the people I follow (which Twitter doesn't analyze) is reasonably good. With it, I can make a habit of measuring my ratio, and improve it.

    At $30 a month, however, checking my ratio with FollowerWonk is a pricey habit. I don't need all its features anyhow. Can I solve only the gender-ratio problem economically?

    Since FollowerWonk's numbers seem reasonable, I tried to reproduce them. Using Python and some nice Philadelphians' Twitter API wrapper, I began downloading the profiles of all my friends and followers. I immediately found that Twitter's rate limits are miserly, so I randomly sampled only a subset of users instead.

    I wrote a rudimentary program that searches for a pronoun announcement in each of my friends' profiles. For example, a profile description that includes "she/her" probably belongs to a woman, and a description with "they/them" probably belongs to someone nonbinary. But most don't state their pronouns; for those, the best gender-correlated information is the "name" field: for example, @gvanrossum's name field is "Guido van Rossum", and the first name "Guido" suggests that @gvanrossum is male. Where pronouns were not announced, I decided to use first names to estimate my ratio.

    My script passes parts of each name to the SexMachine library to guess gender. SexMachine has predictable downfalls, like mistaking "Brooklyn Zen Center" for a woman named "Brooklyn", but its estimates are as good as FollowerWonk's and Twitter's:
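
    For reference, here is a condensed sketch of that two-step guess (the function name and the pronoun list are my own illustration; the actual script differs):

    import re
    from sexmachine.detector import Detector  # pip install SexMachine

    detector = Detector()

    # Announced pronouns beat name-based guessing.
    PRONOUNS = [
        (r'\bshe\s*/\s*her\b', 'female'),
        (r'\bhe\s*/\s*him\b', 'male'),
        (r'\bthey\s*/\s*them\b', 'nonbinary'),
    ]

    def guess_gender(name, description):
        """Prefer an announced pronoun; otherwise guess from the first name."""
        for pattern, gender in PRONOUNS:
            if re.search(pattern, description.lower()):
                return gender
        parts = name.split()
        guess = detector.get_gender(parts[0]) if parts else 'unknown'
        return {'male': 'male', 'mostly_male': 'male',
                'female': 'female', 'mostly_female': 'female'}.get(guess, 'unknown')

    print(guess_gender('Guido van Rossum', ''))  # -> 'male'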

                         nonbinary   men    women   no gender, unknown
    Friends I follow         1       168     66            173
                             0%      72%     28%
    Followers                0       459    108            433
                             0%      81%     19%

    (Based on all 408 friends and a sample of 1000 followers.)

    Know Your Number

    I want you to check your ratios, too, so I've deployed "My Ratio" to PythonAnywhere's handy service for $10 a month:

    www.myrat.io

    The application may rate-limit you or otherwise fail, so use it gently. The code is on GitHub. It includes a command-line tool, as well.

    Who is represented in your network on Twitter? Are you speaking and listening to the same unfairly distributed group who have been talking about software for the last few decades, or does your network look like the software industry of the future? Let's know our numbers and improve them.


    Image: Cyclopedia of Photography 1975.

    June 23, 2016 10:44 PM