
Planet Python

Last update: March 07, 2015 01:49 AM

March 07, 2015

Vasudev Ram

PDFCrowd and its HTML to PDF API (for Python and other languages)

By Vasudev Ram

PDFcrowd is a web service that I came across recently. It allows users to convert HTML content to PDF. This can be done both via the PDFcrowd site - by entering either the content or the URL of an HTML page to be converted to PDF - or via the PDFcrowd API, which has support for multiple programming languages, including Python. I tried multiple approaches, and all worked fairly well.

A slightly modified version of a simple PDFcrowd API example from their site is shown below.
# Demo program to show how to use the PDFcrowd API
# to convert HTML content to PDF.
# Author: Vasudev Ram

import pdfcrowd

try:
    # Create an API client instance.
    # Dummy credentials used; to actually run the program, enter your own.
    client = pdfcrowd.Client("user_name", "api_key")

    # Convert a web page and store the generated PDF in a file.
    # (The source URLs were lost from this copy of the post; fill in your own.)
    with open('dancingbison.pdf', 'wb') as output_file:
        client.convertURI('', output_file)

    # Convert another web page and store the generated PDF in a file.
    with open('jugad2-about-vasudevram.pdf', 'wb') as output_file:
        client.convertURI('', output_file)

    # Convert an HTML string and save the result to a file.
    with open('html.pdf', 'wb') as output_file:
        html = "<html><body>My Small HTML File</body></html>"
        client.convertHtml(html, output_file)
except pdfcrowd.Error as why:
    print 'Failed:', why
I used three calls to the API. For the first two calls, the inputs were: 1) my web site, 2) the about page of my blog.

Screenshots of the results of those two calls are below. You can see that they correspond closely to the originals.

Screenshot of generated PDF of site

Screenshot of generated PDF of About Vasudev Ram page on blog

- Vasudev Ram - Online Python training and programming

Dancing Bison Enterprises


March 07, 2015 01:42 AM

March 06, 2015

Glyph Lefkowitz

Deploying Python Applications with Docker - A Suggestion

Deploying python applications is much trickier than it should be.

Docker can simplify this, but even with Docker, there are a lot of nuances around how you package your python application, how you build it, how you pull in your python and non-python dependencies, and how you structure your images.

I would like to share with you a strategy that I have developed for deploying Python apps that deals with a number of these issues. I don’t want to claim that this is the only way to deploy Python apps, or even a particularly right way; in the rapidly evolving containerization ecosystem, new techniques pop up every day, and everyone’s application is different. However, I humbly submit that this process is a good default.

Rather than equivocate further about its abstract goodness, here are some properties of the following container construction idiom:

  1. It reduces build times from a naive “sudo pip install” by using Python wheels to cache repeatably built binary artifacts.
  2. It reduces container size by separating build containers from run containers.
  3. It is independent of other tooling, and should work fine with whatever configuration management or container orchestration system you want to use.
  4. It uses existing Python tooling of pip and virtualenv, and therefore doesn’t depend heavily on Docker. A lot of the same concepts apply if you have to build or deploy the same Python code into a non-containerized environment. You can also incrementally migrate towards containerization: if your deploy environment is not containerized, you can still build and test your wheels within a container and get the advantages of containerization there, as long as your base image matches the non-containerized environment you’re deploying to. This means you can quickly upgrade your build and test environments without having to upgrade the host environment on finicky continuous integration hosts, such as Jenkins or Buildbot.

To test these instructions, I used Docker 1.5.0 (via boot2docker, but hopefully that is an irrelevant detail). I also used an Ubuntu 14.04 base image (as you can see in the docker files) but hopefully the concepts should translate to other base images as well.

In order to show how to deploy a sample application, we’ll need a sample application to deploy; to keep it simple, here’s some “hello world” sample code using Klein:

# deployme/__init__.py
from klein import run, route

@route('/')
def home(request):
    request.setHeader("content-type", "text/plain")
    return 'Hello, world!'

def main():
    run("", 8081)

And an accompanying setup.py:

from setuptools import setup, find_packages

setup(
    name             = "DeployMe",
    version          = "0.1",
    description      = "Example application to be deployed.",
    packages         = find_packages(),
    # klein and service_identity restored here; the surrounding text
    # names them as dependencies of this example.
    install_requires = ["twisted>=15.0.0",
                        "klein",
                        "service_identity"],
    entry_points     = {'console_scripts':
                        ['run-the-app = deployme:main']}
)
Generating certificates is a bit tedious for a simple example like this one, but in a real-life application we are likely to face the deployment issue of native dependencies, so to demonstrate how to deal with that issue, this example depends on the service_identity module, which pulls in cryptography (which depends on OpenSSL) and its dependency cffi (which depends on libffi).

To get started telling Docker what to do, we’ll need a base image that we can use for both build and run images, to ensure that certain things match up; particularly the native libraries that are used to build against. This also speeds up subsequent builds, by giving a nice common point for caching.

In this base image, we’ll set up:

  1. a Python runtime (PyPy)
  2. the C libraries we need (the libffi6 and openssl ubuntu packages)
  3. a virtual environment in which to do our building and packaging
# base.docker
FROM ubuntu:trusty

# PyPy PPA; the repository URL and keyserver were lost from this copy of
# the post and are restored here based on the key fingerprint below.
RUN echo "deb http://ppa.launchpad.net/pypy/ppa/ubuntu trusty main" > \
    /etc/apt/sources.list.d/pypy-ppa.list

RUN apt-key adv --keyserver keyserver.ubuntu.com \
                --recv-keys 2862D0785AFACD8C65B23DB0251104D968854915
RUN apt-get update

RUN apt-get install -qyy \
    -o APT::Install-Recommends=false -o APT::Install-Suggests=false \
    python-virtualenv pypy libffi6 openssl

RUN virtualenv -p /usr/bin/pypy /appenv
RUN . /appenv/bin/activate; pip install pip==6.0.8

The apt options APT::Install-Recommends and APT::Install-Suggests are just there to prevent python-virtualenv from pulling in a whole C development toolchain with it; we’ll get to that stuff in the build container. In the run container, which is also based on this base container, we will just use virtualenv and pip for putting the already-built artifacts into the right place. Ubuntu expects that these are purely development tools, which is why it recommends installing Python development tools along with them.

You might wonder “why bother with a virtualenv if I’m already in a container”? This is belt-and-suspenders isolation, but you can never have too much isolation.

It’s true that in many cases, perhaps even most, simply installing stuff into the system Python with Pip works fine; however, for more elaborate applications, you may end up wanting to invoke a tool provided by your base container that is implemented in Python, but which requires dependencies managed by the host. By putting things into a virtualenv regardless, we keep the things set up by the base image’s package system tidily separated from the things our application is building, which means that there should be no unforeseen interactions, regardless of how complex the application’s usage of Python might be.

Next we need to build the base image, which is accomplished easily enough with a docker command like:

$ docker build -t deployme-base -f base.docker .;

Next, we need a container for building our application and its Python dependencies. The dockerfile for that is as follows:

# build.docker
FROM deployme-base

RUN apt-get install -qy libffi-dev libssl-dev pypy-dev
RUN . /appenv/bin/activate; \
    pip install wheel

ENV WHEELHOUSE=/wheelhouse
ENV PIP_WHEEL_DIR=/wheelhouse
ENV PIP_FIND_LINKS=/wheelhouse

VOLUME /wheelhouse
VOLUME /application

ENTRYPOINT . /appenv/bin/activate; \
           cd /application; \
           pip wheel .

Breaking this down, we first have it pulling from the base image we just built. Then, we install the development libraries and headers for each of the C-level dependencies we have to work with, as well as PyPy’s development toolchain itself. Then, to get ready to build some wheels, we install the wheel package into the virtualenv we set up in the base image. Note that the wheel package is only necessary for building wheels; the functionality to install them is built in to pip.

Note that we then have two volumes: /wheelhouse, where the wheel output should go, and /application, where the application’s distribution (i.e. the directory containing setup.py) should go.

The entrypoint for this image is simply running “pip wheel” with the appropriate virtualenv activated. It runs against whatever is in the /application volume, so we could potentially build wheels for multiple different applications. In this example, I’m using pip wheel . which builds the current directory, but you may have a requirements.txt which pins all your dependencies, in which case you might want to use pip wheel -r requirements.txt instead.

At this point, we need to build the builder image, which can be accomplished with:

$ docker build -t deployme-builder -f build.docker .;

This builds a deployme-builder that we can use to build the wheels for the application. Since this is a prerequisite step for building the application container itself, you can go ahead and do that now. In order to do so, we must tell the builder to use the current directory as the application being built (the volume at /application) and to put the wheels into a wheelhouse directory (one called wheelhouse will do):

$ mkdir -p wheelhouse;
$ docker run --rm \
         -v "$(pwd)":/application \
         -v "$(pwd)"/wheelhouse:/wheelhouse \
         deployme-builder

After running this, if you look in the wheelhouse directory, you should see a bunch of wheels built there, including one for the application being built:

$ ls wheelhouse
# ...

At last, time to build the application container itself. The setup for that is very short, since most of the work has already been done for us in the production of the wheels:

# run.docker
FROM deployme-base

ADD wheelhouse /wheelhouse
RUN . /appenv/bin/activate; \
    pip install --no-index -f wheelhouse DeployMe


ENTRYPOINT . /appenv/bin/activate; \
           run-the-app
During build, this dockerfile pulls from our shared base image, then adds the wheelhouse we just produced as a directory at /wheelhouse. The only shell command that needs to run in order to get the wheels installed is pip install TheApplicationYouJustBuilt, with two options: --no-index to tell pip “don’t bother downloading anything from PyPI, everything you need should be right here”, and -f wheelhouse, which tells it where “here” is.

The entrypoint for this one activates the virtualenv and invokes run-the-app, the setuptools entrypoint defined above in setup.py, which should be on the $PATH once that virtualenv is activated.

The application build is very simple, just

$ docker build -t deployme-run -f run.docker .;

to build the docker file.

Similarly, running the application is just like any other docker container:

$ docker run --rm -it -p 8081:8081 deployme-run

You can then hit port 8081 on your docker host to load the application.

The command-line for docker run here is just an example; I’m passing --rm so that running this example won’t clutter up your container list. Your environment will have its own way to call docker run, to get your VOLUMEs and EXPOSEd ports mapped, and discussing how to orchestrate your containers is out of scope for this post; you can pretty much run it however you like. Everything the image needs is built in at this point.

To review:

  1. have a common base container that contains all your non-Python (C libraries and utilities) dependencies. Avoid installing development tools here.
  2. use a virtualenv even though you’re in a container to avoid any surprises from the host Python environment
  3. have a “build” container that just makes the virtualenv and puts wheel and pip into it, and runs pip wheel
  4. run the build container with your application code in a volume as input and a wheelhouse volume as output
  5. create an application container by starting from the same base image and, once again not installing any dev tools, pip install all the wheels that you just built, turning off access to PyPI for that installation so it goes quickly and deterministically based on the wheels you’ve built.

While this sample application uses Twisted, it’s quite possible to apply this same process to just about any Python application you want to run inside Docker.

I’ve put a sample project up on Github which contains all the files referenced here, as well as “build” and “run” shell scripts that combine the necessary docker command lines to go through the full process to build and run this sample app. While it defaults to the PyPy runtime (as most networked Python apps generally should these days, since its performance is so much better than CPython’s), if you have an application with a hard CPython dependency, I’ve also made a branch and pull request on that project for CPython, and you can look at the relatively minor patch required to get it working there as well.

Now that you have a container with an application in it that you might want to deploy, my previous write-up on a quick way to securely push stuff to a production service might be of interest.

(Once again, thanks to my employer, Rackspace, for sponsoring the time for me to write this post. Thanks also to Shawn Ashlee and Jesse Spears for helping me refine these ideas and listening to me rant about them. However, that expression of gratitude should not be taken as any kind of endorsement from any of these parties as to my technical suggestions or opinions here, as they are entirely my own.)

March 06, 2015 10:58 PM

Al-Ahmadgaid Asaad

Probability Theory: Convergence in Distribution Problem

Let's solve a theoretical problem in probability, specifically on convergence. The problem below is originally from Exercise 5.42 of Casella and Berger (2001), and I just want to share my solution to it. If there is an incorrect argument below, I would be happy if you could point it out to me.


Let $X_1, X_2,\cdots$ be iid (independent and identically distributed) and $X_{(n)}=\max_{1\leq i\leq n}X_i$.
  1. If $X_i\sim$ beta(1,$\beta$), find a value of $\nu$ so that $n^{\nu}(1-X_{(n)})$ converges in distribution;
  2. If $X_i\sim$ exponential(1), find a sequence $a_n$ so that $X_{(n)}-a_n$ converges in distribution.


  1. Let $Y_n=n^{\nu}(1-X_{(n)})$. We say that $Y_n\rightarrow Y$ in distribution if $$\lim_{n\rightarrow \infty}F_{Y_n}(y)=F_Y(y).$$ Then, $$ \begin{aligned} \lim_{n\rightarrow\infty}F_{Y_n}(y)&=\lim_{n\rightarrow\infty}P(Y_n\leq y)=\lim_{n\rightarrow\infty}P(n^{\nu}(1-X_{(n)})\leq y)\\ &=\lim_{n\rightarrow\infty}P\left(1-X_{(n)}\leq \frac{y}{n^{\nu}}\right)\\ &=\lim_{n\rightarrow\infty}P\left(-X_{(n)}\leq \frac{y}{n^{\nu}}-1\right)=\lim_{n\rightarrow\infty}\left[1-P\left(-X_{(n)}> \frac{y}{n^{\nu}}-1\right)\right]\\ &=\lim_{n\rightarrow\infty}\left[1-P\left(\max\{X_1,X_2,\cdots,X_n\}< 1-\frac{y}{n^{\nu}}\right)\right]\\ &=\lim_{n\rightarrow\infty}\left[1-P\left(X_1< 1-\frac{y}{n^{\nu}},X_2< 1-\frac{y}{n^{\nu}},\cdots,X_n< 1-\frac{y}{n^{\nu}}\right)\right]\\ &=\lim_{n\rightarrow\infty}\left[1-P\left(X_1< 1-\frac{y}{n^{\nu}}\right)^n\right],\;\text{since the}\;X_i\text{'s are iid.} \end{aligned} $$ And because $X_i\sim$ beta(1,$\beta$), the density is $$ f_{X_1}(x)=\begin{cases} \beta(1-x)^{\beta - 1},&\beta>0, 0\leq x\leq 1\\ 0,&\mathrm{otherwise} \end{cases} $$ This implies $$ \begin{aligned} \lim_{n\to \infty}P(Y_n\leq y)&=\lim_{n\to \infty}\left\{1-\left[\int_0^{1-\frac{y}{n^{\nu}}}\beta(1-t)^{\beta-1}\,\mathrm{d}t\right]^n\right\}\\ &=\lim_{n\to \infty}\left\{1-\left[-\int_1^{\frac{y}{n^{\nu}}}\beta u^{\beta-1}\,\mathrm{d}u\right]^{n}\right\}\\ &=\lim_{n\to \infty}\left\{1-\left[-\beta\frac{u^{\beta}}{\beta}\bigg|_{u=1}^{u=\frac{y}{n^{\nu}}}\right]^{n}\right\}\\ &=1-\lim_{n\to \infty}\left[1-\left(\frac{y}{n^{\nu}}\right)^{\beta}\right]^{n} \end{aligned} $$ We can simplify the limit if $\nu=\frac{1}{\beta}$, that is, $$ \lim_{n\to\infty}P(Y_n\leq y)=1-\lim_{n\to\infty}\left[1-\frac{y^{\beta}}{n}\right]^{n}=1-e^{-y^{\beta}} $$ To confirm this in Python, run the following code using the sympy module.
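The sympy snippet itself did not survive in this copy of the post; a minimal sketch of the check (my own reconstruction, with invented variable names) could be:

```python
from sympy import symbols, limit, exp, oo

n, y, beta = symbols('n y beta', positive=True)

# limit of P(Y_n <= y) = 1 - (1 - y**beta/n)**n as n -> oo
expr = 1 - (1 - y**beta / n)**n
print(limit(expr, n, oo))  # should agree with 1 - exp(-y**beta)
```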

    Therefore, if $1-e^{-y^{\beta}}$ is a distribution function of $Y$, then $Y_n=n^{\nu}(1-X_{(n)})$ converges in distribution to $Y$ for $\nu=\frac{1}{\beta}$.
  2. $$ \begin{aligned} P(X_{(n)}-a_{n}\leq y) &= P(X_{(n)}\leq y + a_n)=P(\max\{X_1,X_2,\cdots,X_n\}\leq y+a_n)\\ &=P(X_1\leq y+a_n,X_2\leq y+a_n,\cdots,X_n\leq y+a_n)\\ &=P(X_1\leq y+a_n)^n,\;\text{since the}\;X_i\text{'s are iid}\\ &=\left[\int_{-\infty}^{y+a_n}f_{X_1}(t)\,\mathrm{d}t\right]^n \end{aligned} $$ Since $X_i\sim$ exponential(1), the density is $$ f_{X_1}(x)=\begin{cases} e^{-x},&0\leq x<\infty\\ 0,&\mathrm{otherwise} \end{cases} $$ So that $$ \begin{aligned} P(X_{(n)}-a_{n}\leq y)&=\left[\int_{0}^{y+a_n}e^{-t}\,\mathrm{d}t\right]^n=\left\{-\left[e^{-(y+a_n)}-1\right]\right\}^n\\ &=\left[1-e^{-(y+a_n)}\right]^n \end{aligned} $$ If we let $Y_n=X_{(n)}-a_n$, then we say that $Y_n\rightarrow Y$ in distribution if $$ \lim_{n\to\infty}P(Y_n\leq y)=P(Y\leq y) $$ Therefore, $$ \begin{aligned} \lim_{n\to\infty}P(Y_n\leq y) &= \lim_{n\to\infty}P(X_{(n)}-a_n\leq y)=\lim_{n\to \infty}\left[1-e^{-y-a_n}\right]^n\\ &=\lim_{n\to\infty}\left[1-\frac{e^{-y}}{e^{a_n}}\right]^n \end{aligned} $$ We can simplify the limit if $a_n=\log n$, that is, $$ \lim_{n\to\infty}\left[1-\frac{e^{-y}}{e^{\log n}}\right]^n=\lim_{n\to\infty}\left[1-\frac{e^{-y}}{n}\right]^n=e^{-e^{-y}} $$ Check this in Python by running the following code.
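As above, the original snippet is missing here; a minimal sympy sketch of the check (my own reconstruction) might be:

```python
from sympy import symbols, limit, exp, oo

n, y = symbols('n', positive=True), symbols('y', real=True)

# with a_n = log(n): P(Y_n <= y) = (1 - exp(-y)/n)**n
expr = (1 - exp(-y) / n)**n
print(limit(expr, n, oo))  # should agree with exp(-exp(-y))
```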

    In conclusion, if $e^{-e^{-y}}$ is a distribution function of Y, then $Y_n=X_{(n)}-a_n$ converges in distribution to $Y$ for sequence $a_n=\log n$.


  1. Casella, G. and Berger, R.L. (2001). Statistical Inference. Thomson Learning, Inc.

March 06, 2015 07:27 PM

Brett Cannon

Going all-in on the mobile web

In this world of Android vs. iOS – with a smattering of Windows Mobile and Blackberry – I find native apps somewhat annoying. In the beginning, iOS was actually not going to have any third-party apps on the phone and everything would run through Safari on your iPhone. Developer backlash due to worries about performance, though, helped lead to Apple changing its position on native apps on iOS.

I think this is a shame. I appreciate that web apps embody SLICE (Secure, Linkable, Indexable, Composable, Ephemeral, and I have heard it suggested that Updatable get tossed in). I also appreciate that the web embodies a common denominator platform that is cross-platform (I’m not naïve enough to think the entire web platform is cross-platform, given the differences in what APIs are implemented at any given time by the various browsers). Why do developers need to choose whether to launch on iOS or Android first or exclusively on one platform? Why can’t developers simply launch simultaneously and instantaneously on all platforms through the web?

The answer is that many can, but they choose not to for various reasons (some legitimate, some not).

What leads to requiring a native app?

What has traditionally set native apps apart from web apps on mobile phones? I would argue it’s the following:

  1. Performance (varies between browser releases and phone generations, but low-level stuff like controlling socket connections outside of WebSockets or SIMD for GPU calculations isn’t possible)
  2. Offline access (being solved thanks to Service Workers as AppCache is not pleasant to work with; actual storage space can also be an issue)
  3. Periodic background processing (Service Workers could do it if the task scheduler API gets accepted)
  4. Notifications (w3c spec for text-based notifications, plus there is a new Push API for Service Workers to work in the background to push notifications to the user)
  5. Sensors (stuff like geolocation are available, some other things are not)
  6. OS-specific features (e.g., intents on Android)

As you can see, a good amount of features have either just landed in browsers – Service Workers arrived in Chrome 40 – or are coming very soon – the Push API for Service Workers is coming in Chrome 42. Unfortunately not everything is actively scheduled to land in a browser – scheduling a Service Worker to run isn’t planned for any browser yet – and OS-specific features like intents are typically off the table since they are not OS-agnostic and thus won’t work in a browser no matter what OS it’s running on. In terms of raw performance, it’s a constantly fluctuating thing that’s always being discussed, e.g. Mozilla, Google, and Intel looking into SIMD in JavaScript. In other words, claiming the browser is “slow” doesn’t hold a priori.

Making myself a guinea pig

How many apps do people use that lack a mobile web version but which actually could get away with having one (either full-featured or with some degraded UX)? Taking stock of what I have on the homescreen of my Android phone, I have the following list of applications grouped by category which I use almost daily and looked at whether they could have a web app experience of some form that was still useful. Anything in italic means that a web experience of some sort is possible today, and if something is bold then someone actually implemented a mobile-friendly browser experience.

Out of 20 categories, 19 could have a useful web experience but only 11 do (and 3 of them would require changing who provided me the service to get a web experience). I would be really curious to see a study done that evaluated if doing a mobile web app for Android and iOS – or even just one of the platforms – led to more or less work than doing a native app. But my point remains that just because someone provides a native app doesn’t mean a web app wouldn’t also work for the same use-case.

March 06, 2015 06:44 PM

Fabio Zadrozny

Navigating through your code when in PyDev

One of the main advantages of using an IDE is being able to navigate through your code in a really fast way, so, I'll explain below how to find what you want when coding in PyDev.

1. Any file in your workspace

The simplest way is searching for any resource for any project you have available (this is actually provided by Eclipse itself). So, if you know the file name (or at least part of it), use Ctrl+Shift+R and filter using the Open Resource dialog:

2. Any Python Token

Another way of getting to what you want is using the PyDev Globals Browser (using Ctrl+Shift+T).

This is actually what I use most as a single place can be used to filter for package names, classes, methods and attributes inside your own projects or any other package in Python itself. Also, it allows for advanced filtering, so, you can search for tokens only inside a package (i.e.: filters for any 'tz' token inside django in the example below).

3. Quick Outline (current editor)

Ctrl+O will show a Quick Outline which shows the structure of your current file. And pressing Ctrl+O one more time in this dialog will also show the structure of superclasses in the hierarchy (you can see that in the example below __setitem__ appears twice, once for the method in this class and another one for the superclass).

4. Selection History

Go back and forward in your selection: Alt+Left goes to the place you were before and Alt+Right to the place you just came from... this allows you to easily navigate through your recent places.

5. Open files

To filter through the open files you can use Ctrl+E: a dropdown will appear and from there you can filter through its name and you can close existing editors using Del from that dropdown too.

6. Go to previous next token (class or method)

Ctrl+Shift+Up and Ctrl+Shift+Down allows you to quickly navigate from your current position to the previous or next method (selecting the full method/class name).

7. Navigate through occurrences and errors in the file

Ctrl+Dot allows navigating through occurrences and errors found in the file, so, in the case below we'll navigate through the occurrences of 'func'.

8. Go to some view/menu/action/preference

This is an Eclipse standard mechanism: using Ctrl+3 allows you to navigate to any part of the IDE.

9. References

Ctrl+Shift+G will make a search showing all the references to the token under the cursor (and the search view where results are shown can be navigated with Ctrl+Dot).

10. Go to definition

Just press F3 or Ctrl+Click some available token and go directly to the selected place.

11. Hierarchy View

Using F4 shows a hierarchy view where you can see the structure of your classes.

12. Show In

Alt+Shift+W allows you to see the current file in a given place (such as the PyDev Package Explorer or the System Explorer from your OS) or your current class/method in the Outline View.


The ones below are available in standard Eclipse and you should also definitely know about them :)

Ctrl+L allows you to navigate to a given line in your current editor.
Ctrl+Q goes to the place where the last edit was made.
Ctrl+F6 navigates through the opened editors. In LiClipse, Ctrl+Tab is also bound to this by default, and I suggest you add this binding too if you aren't using LiClipse :)
Ctrl+F7 navigates through opened views (i.e.: Package Explorer, Outline, etc.)
Ctrl+F8 navigates through opened perspectives (i.e.: PyDev perspective, Debug perspective, etc).
Ctrl+F10 opens the menu for the current view (so you can select filters in the Package Explorer, etc.)
F12 focuses the editor (so, you can go from any view to the editor)
Ctrl+H Opens the search dialog so you can do text searches
Ctrl+Shift+L twice goes to the keybindings preferences

Now you can enjoy going really fast to any place you wish inside PyDev!

March 06, 2015 04:50 PM


A report on the Salt Sprint 2015 in Paris

On Wednesday the 4th of March 2015, Logilab hosted a sprint on salt on the same day as the sprint at SaltConf15. 7 people joined in and hacked on salt for a few hours. We collaboratively chose some subjects on a pad, which is still available.

We started off by familiarising those who had never used them with salt's tests. Some of us tried to run the tests via tox, which didn't work any more; a fix was found and will be submitted to the project.

We organised in teams.

Boris & Julien looked at the authorisation code and wrote a few issues (minion enumeration, acl documentation). On saltpad (client side) they modified the targeting to adapt to the permissions that the salt-api sends back.

We discussed the salt permission model (external_auth): where should the filter happen? On the master? Should the minion receive information about authorisation and not execute what is being asked for? Boris will summarise some of the discussion about authorisations in a new issue.

Sofian worked on some unification on execution modules (refresh_db which will be ignored for the modules that don't understand that). He will submit a pull request in the next few days.

Georges & Paul added some tests to hg_pillar, the test creates a mercurial repository, adds a top.sls and a file and checks that they are visible. Here is the diff. They had some problems while debugging the tests.

David & Arthur implemented the execution module for managing postgresql clusters (create, list, exists, remove) in debian. A pull request was submitted by the end of the day. A state module should follow shortly. On the way we removed some dead code in the postgres module.

All in all, we had some interesting discussions about salt and its architecture, shared tips about developing and using it, and managed to get some code done. Thanks to all for participating and hopefully we'll sprint again soon...

March 06, 2015 04:33 PM


Feature Spotlight: Reformatting Python code with PyCharm’s intentions

Happy Friday everyone!

Did you have a chance to read one of my previous posts on how PyCharm helps you write clean and maintainable Python code? As a quick recap: PyCharm highlights code style violations with both PEP8 and custom inspections, and it also allows you to apply automatic quick-fixes to keep your code in a consistent format.
Today I’m going to cover another feature that you may find handy for writing professional and quality code. It’s called Code intentions.

The main difference between PyCharm’s code inspections and intentions is that while inspections provide quick-fixes for code that has potential problems, intentions help you apply automatic changes to code that is most likely correct.

Here’s how it works:

When editing absolutely correct code, the yellow bulb sometimes appears in the editor:


That signals that the automatic action is available to be applied in place. To get a list of intentions applicable to the code at the caret, just press Alt + Enter:


In this particular situation PyCharm offers to convert the lambda to a normal function. Just hit Enter and it will do the job for you:
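As a hypothetical illustration (the names here are invented, and this shows the kind of rewrite the intention performs rather than PyCharm's literal output):

```python
# Before: a lambda assigned to a name -- the pattern this intention targets.
adder = lambda a, b: a + b

# After applying "Convert lambda to function", the equivalent def statement:
def adder(a, b):
    return a + b

print(adder(2, 3))  # 5
```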


It also highlights the new function name and when you change it, it automatically changes the function call:


There is a huge number of different code intentions available for Python and all other supported languages, including JavaScript, SQL, CSS, and others. You can see them all, as well as enable or disable some of them, in Settings (Preferences for Mac OS) | Editor | Intentions:


Another way to get to the intentions settings, or simply to disable an unwanted intention, is to press the right arrow key on the intention in the editor:


I bet you’ll like this feature if you haven’t tried it before.

Have a great weekend and see you next week!


March 06, 2015 03:40 PM

William Thompson

Python: unittest setUp and tearDown with a ContextManager

Python unittest follows the jUnit structure, but is extremely awkward. One of the more awkward portions is the use of the setUp and tearDown methods. Python has an elegant way of handling setup and teardown: it's called a ContextManager. So let's add it.

import unittest
from functools import wraps
from contextlib import contextmanager

def addContextHandler(fn, ctx):
    @wraps(fn)
    def helper(self, *a, **kw):
        if not hasattr(self, ctx):
            return fn(self, *a, **kw)

        with getattr(self, ctx)():
            return fn(self, *a, **kw)

    return helper

# The target of this assignment was lost from this copy of the post;
# wrapping unittest.TestCase.run is one plausible reading, so that a
# context manager named 'contextTest' surrounds each test run:
unittest.TestCase.run = addContextHandler(unittest.TestCase.run, 'contextTest')


March 06, 2015 02:09 PM

Kay Hayen

Nuitka Release 0.5.10

This release has a focus on code generation optimization. With major changes away from "C++-ish" code to "C-ish" code, many constructs are now faster or were at least examined and optimized.

Bug Fixes

  • Compatibility: The variable name in locals for the iterator provided to the generator expression should be .0, now it is.
  • Generators could leak frames until program exit, these are now properly freed immediately.

Optimization

  • Faster exception save and restore functions that might be in-lined by the backend C compiler.

  • Faster error checks for many operations, where these errors are expected, e.g. instance attribute lookups.

  • Do not create traceback and locals dictionary for frame when StopIteration or GeneratorExit are raised. These tracebacks were wasted, as they were immediately released afterwards.

  • Closure variables to functions and parameters of generator functions are now attached to the function and generator objects.

  • The creation of functions with closure taking was accelerated.

  • The creation and destruction of generator objects was accelerated.

  • The re-formulation for in-place assignments was simplified and became faster in the process.

  • In-place operations of str were always copying the string, even if it was not necessary. This corrects Issue#124.

    a += b # Was not re-using the storage of "a" in case of strings
  • Python2: Additions of int for Python2 are now even faster.

  • Access to local variable values got slightly accelerated at the expense of closure variables.

  • Added support for optimizing the complex built-in.

  • Unused temporary and local variables are now removed as a result of optimization; these previously still allocated storage.

Cleanups

  • The use of C++ classes for variable objects was removed. Closure variables are now attached as PyCellObject to the function objects owning them.
  • The use of C++ context classes for closure taking and generator parameters has been replaced with attaching values directly to functions and generator objects.
  • The indentation of code template instantiations spanning multiple lines was not in all cases proper. We were mixing emission objects that handle new lines in code with mere list objects that don't, which broke down in mixed forms. Now only the emission objects are used.
  • Some templates with C++ helper functions that had no variables got changed to be properly formatted templates.
  • The internal API for handling of exceptions is now more consistent and used more efficiently.
  • The printing helpers got cleaned up and moved to static code, removing any need for forward declaration.
  • The use of INCREASE_REFCOUNT_X was removed, it got replaced with proper Py_XINCREF usages. The function was once required before "C-ish" lifted the need to do everything in one function call.
  • The use of INCREASE_REFCOUNT got reduced. See above for why that is any good. The idea is that Py_INCREF must be good enough, and that we want to avoid the C function it was, even if in-lined.
  • The assertObject function that checks if an object is not NULL and has positive reference count, i.e. is sane, got turned into a preprocessor macro.
  • Deep hashes of constant values are created in --debug mode; these also cover mutable values and depend on the actual content. They are checked at program exit for corruption, which may help uncover bugs.

Organizational

  • Speedcenter has been enhanced with better graphing and has more benchmarks now. More work will be needed to make it useful.
  • Updates to the Developer Manual, reflecting the current near finished state of "C-ish" code generation.


Tests

  • New reference count tests to cover generator expressions and their usage got added.
  • Many new construct based tests got added, these will be used for performance graphing, and serve as micro benchmarks now.
  • Again, more basic tests are directly executable with Python3.

Summary

This is the next evolution of "C-ish" coming to pass. The use of C++ has for all practical purposes vanished. It will remain an ongoing activity to clean that up and become real C. The C++ classes were a huge road block to many things that now will become simpler. One example of these is in-place operations, which can now be dealt with easily.

Also, lots of polishing and tweaking was done while adding construct benchmarks that were made to check the impact of these changes. Here, generators probably stand out the most, as some of the missed optimization got revealed and then addressed.

Their speed increases will be visible to some programs that depend a lot on generators.

This release is clearly major in that the most important issues got addressed, future releases will provide more tuning and completeness, but structurally the "C-ish" migration has succeeded, and now we can reap the benefits in the coming releases. More work will be needed for all in-place operations to be accelerated.

More work will be needed to complete this, but it's good that this is coming to an end, so we can focus on SSA based optimization for the major gains to be had.

March 06, 2015 06:07 AM

Ilian Iliev

Working with intervals in Python

Brief: Working with intervals in Python is really easy, fast and simple. If you want to learn more, just keep reading.

Task description: Let's say the case is the following: you have multiple users, and each of them has achieved a different number of points on your website. You want to know how many users haven't got any points, how many made between 1 and 50 points, how many between 51 and 100, etc. In addition, at 1000 the intervals start increasing by 100 instead of 50.

Preparing the intervals: Working with lists in Python is so awesome, so creating the intervals is quite a simple task.

intervals = ([0] +                                # the zero interval
             [x * 50 for x in range(1, 20)] +     # the 50 intervals
             [x * 100 for x in range(10, 100)] +  # the 100 intervals
             [x * 1000 for x in range(10, 102)])  # the 1000 intervals

So after running the code above we will have a list with the maximum number of points for each interval. Now it is time to prepare the different buckets that will store the user counts. To ease this we are going to use a defaultdict.

from collections import defaultdict

buckets = defaultdict(lambda: 0)

This way, we can increase the count for each bucket without checking if it exists. Now let's get to counting:

import bisect

for user in users:
    try:
        bucket = intervals[bisect.bisect_left(intervals, user.points)]
    except IndexError:
        # we are over the last bucket, so we put it in it
        bucket = intervals[-1]
    buckets[bucket] += 1

How it works: Well, it is quite simple. bisect.bisect_left uses a binary search to find the position where an item should be inserted to keep the list (in our case intervals) sorted. Using that position we take the value from intervals that represents the bucket where the specified number should go. And we are ready. The result will look like:

{1: 10, 10: 5, 30: 8, 1100: 2}

Final words: As you see, when the defaultdict is used it does not have values for the empty buckets. This can be good or bad depending on the requirements for how to present the data, but it can be easily fixed by using the items from the intervals as keys for the buckets.
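Putting the pieces together, here is a self-contained sketch, with plain integers standing in for the user objects of the post:

```python
import bisect
from collections import defaultdict

intervals = ([0] +                                # the zero interval
             [x * 50 for x in range(1, 20)] +     # the 50 intervals
             [x * 100 for x in range(10, 100)] +  # the 100 intervals
             [x * 1000 for x in range(10, 102)])  # the 1000 intervals

# Stand-ins for user.points values.
points = [0, 10, 10, 70, 70, 70, 999, 5000, 200000]

buckets = defaultdict(lambda: 0)
for p in points:
    try:
        bucket = intervals[bisect.bisect_left(intervals, p)]
    except IndexError:
        # past the last bucket, so count it there
        bucket = intervals[-1]
    buckets[bucket] += 1

print(dict(buckets))
# {0: 1, 50: 2, 100: 3, 1000: 1, 5000: 1, 101000: 1}
```

Note how 999 lands in the 1000 bucket (the first boundary not below it) and 200000 overflows into the last bucket, 101000.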

P.S. Comments and ideas for improvement are always welcome.

March 06, 2015 01:39 AM

March 05, 2015

Will McGugan

Sublime Text like fuzzy matching in Javascript

I recently implemented a Sublime Text like fuzzy matching for my encrypted notes app. Fuzzy matching is a really nice feature that I haven't seen used outside of code editors.

If you haven't used Sublime Text, the fuzzy matching is used to quickly open files. Rather than navigate directories in the UI – which can be laborious – the open file dialogue uses the characters you type to filter a list of paths. Each character you type must match a character in the file path exactly once, and in the same order as they appear in the path. For instance the search “abgvi” would match “/application/blog/views”, as would “blgview”. The basic idea should work with any text, not just paths.

I fully expect a real Javascript programmer to do this in two lines (I'm a Python guy that has been faking Javascript proficiency for years).

My first thought in implementing this was regular expressions, but as well as matching I also wanted to highlight the matched characters in the list. That proved harder to do with a regular expression. Probably not impossible, but I'll be honest with you; I gave up.

Turns out a non-regex solution is simple enough, and plenty fast. Here it is:

function fuzzy_match(text, search) {
    // Parameter text is a title, search is the user's search
    // remove spaces, lower case the search so the search
    // is case insensitive
    var search = search.replace(/\ /g, '').toLowerCase();
    var tokens = [];
    var search_position = 0;

    // Go through each character in the text
    for (var n = 0; n < text.length; n++) {
        var text_char = text[n];
        // if we match a character in the search, highlight it
        if (search_position < search.length &&
            text_char.toLowerCase() == search[search_position]) {
            text_char = '<b>' + text_char + '</b>';
            search_position += 1;
        }
        tokens.push(text_char);
    }
    // If any characters remain in the search text,
    // return an empty string to indicate no match
    if (search_position != search.length) {
        return '';
    }
    return tokens.join('');
}

The above function matches a title / path etc. with the search query. If it matches, it will return the text with the matched characters wrapped in <b> tags, otherwise it returns an empty string.
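For Python readers, the same algorithm translates almost line for line; this is my own sketch, not code from the post:

```python
def fuzzy_match(text, search):
    """Return text with matched characters wrapped in <b> tags,
    or '' if search does not fuzzy-match text."""
    # Remove spaces and lower-case so the match is case insensitive.
    search = search.replace(' ', '').lower()
    tokens = []
    pos = 0
    for ch in text:
        # Consume one search character per matching text character,
        # in order, highlighting each match.
        if pos < len(search) and ch.lower() == search[pos]:
            tokens.append('<b>' + ch + '</b>')
            pos += 1
        else:
            tokens.append(ch)
    # Any search characters left over means no match.
    if pos != len(search):
        return ''
    return ''.join(tokens)
```

For example, fuzzy_match('/application/blog/views', 'abgvi') returns the path with those five characters bolded, while a query that can't be consumed in order returns ''.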

I put together a demo that gets a list of links from Reddit and gives you a text box to do the fuzzy matching:

View the source if you want to know more, there are some helpful comments.

March 05, 2015 08:27 PM

Python Software Foundation

Hyperion Development Awarded Fourth PSF grant

Today’s blog is about another African educational project that the PSF has recently funded. 
Hyperion Development, a South African based company, has been providing online training in web development and Python programming, as well as in-person training and workshops on specific IT topics, to people around the world. Hyperion offers free courses to students, many of whom are unable to take formal computer science courses and others who wish or need to supplement their formal Computer Science studies. Hyperion also provides workshops and courses for businesses and professionals. They are currently the largest non-university trainer of Python in South Africa. 
According to their founder and director Riaz Moola,
Over 3500 full-time university and high school students have completed free training courses in C++/Python/Java programming and Computer Science topics with Hyperion. Students from over 80% of all tertiary institutions in South Africa take our courses, with approximately 54% of these students studying for full-time Computer Science degrees.
Hyperion’s courses are run on the Python-powered Virtual Learning Environment, which was developed with the help of a PSF grant awarded in 2013. The Hyperion Portal, a platform built entirely by South Africans, is used to deliver their Massive Open Online Courses, which are 100% free to full-time students. In addition, Hyperion helps students and IT professionals find jobs through their Referrals Program.
Currently, Hyperion’s Cape Town team is attempting to expand further by offering free Python training to students at the University of Cape Town. They have also conducted teacher training events, most recently in Cape Town at the 2014 Department of Education Western Cape Teachers Conference.
Their excellent work has earned them three previous grants from the PSF since 2013. How Hyperion advances the PSF’s mission is evident from a recent remark made by PSF Director and Co-Chair of the Outreach & Education Committee, David Mertz: 
The reality of the world is that not everyone can gain admission to, nor afford to attend, elite universities. It is my belief, and the belief of the PSF, that computer literacy today has a status increasingly similar to natural language literacy, and should be a skill and capability that all people obtain and have access to. More advanced research in these areas in universities has an essential role, but a basic capability is something we should strive to universalize, not to gate with accreditation, admission procedures, strict academic prerequisites and other requirements, etc.
The current grant will sponsor free 5-month Python training at the beginner, intermediate, and advanced levels.

March 05, 2015 07:52 PM

Python Piedmont Triad User Group

PYPTUG Meeting - March 30th (Ansible, Devops, Security) -

PYthon Piedmont Triad User Group meeting

Come join PYPTUG at our next monthly meeting (March 30th 2015) to learn more about the Python programming language, modules and tools. Python is the perfect language to learn if you've never programmed before, and at the other end, it is also the perfect tool that no expert would do without. Monthly meetings are in addition to our project nights.


Meeting will start at 5:30pm.

We will open with an intro to PYPTUG and how to get started with Python, PYPTUG activities and member projects, then move on to news from the community, including some details on a security issue (how a public opinion vote was derailed), how it was discovered, and how Python and data science tied it all together.

Then on to the main talk.

Main Talk: Automate with Ansible
Greg DeKoenigsberg is the Vice President of Community for Ansible,
where he leads the company's relationship with the broader open source community. Greg has over a decade of experience in open source advocacy and community leadership, building and leading communities for Eucalyptus and open source leader Red Hat. Greg has served as leader of the Fedora project, chair of the first Fedora Project Board, and Red Hat community liaison with the One Laptop Per Child project.
Ansible is an open-source software platform for configuring and managing computers. It combines multi-node software deployment, ad hoc task execution, and configuration management.  It is also the third most forked Python project on all of Github.

In this talk, Greg DeKoenigsberg of Ansible will walk through the basic usage of Ansible, the history of why it was created, why it has become so popular so quickly, and how to get started using (and contributing). There will also be an open Q+A at the end.

Lightning talks! 

We will have some time for extemporaneous "lightning talks" of 5-10 minute duration. If you'd like to do one, some suggestions of talks were provided here, if you are looking for inspiration. Or talk about a project you are working on.


Monday, March 30th 2015
Meeting starts at 5:30PM


Wake Forest University,
close to Polo Rd and University Parkway:

Wake Forest University, Winston-Salem, NC 27109

 Map this

See also this campus map (PDF) and also the Parking Map (PDF) (Manchester hall is #20A on the parking map)

And speaking of parking:  Parking after 5pm is on a first-come, first-served basis.  The official parking policy is:
"Visitors can park in any general parking lot on campus. Visitors should avoid reserved spaces, faculty/staff lots, fire lanes or other restricted area on campus. Frequent visitors should contact Parking and Transportation to register for a parking permit."

Mailing List

Don't forget to sign up to our user group mailing list:

It is the only step required to become a PYPTUG member.

Meetup Group

In order to get a feel for how much food we'll need, we ask that you register your attendance to this meeting on meetup:

HAB Project

We will conclude this meeting with some work on the High Altitude Balloon Project (Team Near Space Circus).

March 05, 2015 06:35 PM


Signup for Sponsor Tutorials!

Our Sponsor Tutorial schedule has come together and we've opened registration on Eventbrite! Running Wednesday and Thursday April 8-9, these free tutorials are offered by several of our generous sponsors. While registration for these tutorials is not required, it helps us plan for food and room size.

Check out the schedule at Each tutorial is 1.5 hours, and free!

We kick off Wednesday with David Gouldin of Heroku walking through building and deploying applications on Heroku. After lunch, Eric Feng of Dropbox introduces the Dropbox API and will take attendees through authentication to reading and writing files. There are two other open slots on the Wednesday schedule, and we'll update this post once those are known.

Thursday's schedule begins with Steve Downer and Chris Wilcox showing off how to build a Django app on the Microsoft Azure cloud. The folks at CodeClimate are going to be talking about a number of important development topics, including how to provide quality code review and build an effective pull request based workflow. Kyle Kelly of Rackspace will be discussing cloudpipe and showing attendees how to contribute to it. Wrapping up the Thursday schedule is Google, who will be hosting a trio of yet-to-be-announced talks during their time slot.

If you're interested in our instructor-led tutorials, spaces are still open in many of them, but keep in mind that those are likely to sell out. The tutorial schedule is available here, and you can register for $150 per tutorial here.

March 05, 2015 04:47 PM


PyCharm 4.0.5 RC2 is available

Having announced the PyCharm 4.0.5 RC build a week ago, today we’ve published the PyCharm 4.0.5 RC2 build 139.1525, which is already available for download and evaluation from the EAP page.

This build has only two new fixes that can be found in the release notes: a fix for deprecation warning when using Behave and an important fix for debugging multi-process Pyramid and Google App Engine projects.

We encourage you to download PyCharm 4.0.5 RC2 build 139.1525 for your platform and test the latest fixes. Please report any bugs and feature requests to our Issue Tracker. It will also be available shortly as a patch update from within the IDE.

Develop with pleasure!
-PyCharm Team

March 05, 2015 04:40 PM

Tomasz Ducin

python open interactive console

In this article I'll show a small code snippet that simulates a breakpoint without using any IDE (Integrated Development Environment). This is similar to firebug's / chrome developer tools' javascript console, where you may run your custom commands (typed in realtime) while being enclosed in the breakpoint's scope. This is very useful when dealing with big/undocumented/legacy code and you want to check the state of variables.

All this code does is copy local/global variables, set up console autocompletion and start the interactive shell, where you, the developer, can look at the Python runtime environment. The following code presents the module with the copen method and a file which demonstrates the console usage:

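A copen helper along these lines does the job (a rough sketch under my own naming; the original gist may differ in details):

```python
import code
import readline
import rlcompleter

def copen(local_vars):
    """Open an interactive console enclosed in the caller's scope."""
    # Merge globals with the caller's locals so both are visible.
    context = globals().copy()
    context.update(local_vars)
    # Wire up tab completion over that combined namespace.
    readline.set_completer(rlcompleter.Completer(context).complete)
    readline.parse_and_bind('tab: complete')
    # Start the interactive shell; the program resumes when it exits.
    code.interact(local=context)
```

A script would call copen(locals()) at the point of interest, exactly like hitting a breakpoint.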

Fetch the repository and run the file:

git clone py_console && cd py_console && python
Type dir() to check the current scope content and see example_list and example_tuple. After closing the console, the script will continue where it stopped (see the print statement):

remote: Counting objects: 14, done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 14 (delta 3), reused 8 (delta 2)
Receiving objects: 100% (14/14), done.
Resolving deltas: 100% (3/3), done.
Python 2.7.2+ (default, Jul 20 2012, 22:12:53)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> dir()
['__builtins__', '__doc__', '__file__', '__name__', '__package__', 'console', 'example_list', 'example_tuple']
>>> example_list
[1, 2, 3]
>>> example_tuple
('abc', 'def')
>>> # hit ctrl+D to quit
this will be continued

March 05, 2015 09:28 AM

Continuum Analytics Blog

Continuum Analytics - March Tech Events

This month, the Continuum team will be at a variety of meetups and conferences talking about the power of Python in analytics. Take a look at where you can find us, and reach out at if you’re interested in meeting up, or if you would like us to participate in your event.

March 05, 2015 12:00 AM

March 04, 2015

Python Software Foundation

Python Namibia

The second PSF sponsored African conference I want to tell you about is Python Namibia (only a mere 3500 kilometers or 2175 miles south of Cameroon). The conference, the first ever held in Namibia, was held Feb 2 – 5, 2015 at the University of Namibia in the city of Windhoek. The PSF provided funds at the level of "Gold Sponsorship" that were used to subsidize travel for international attendees and to purchase a banner. 
Photo credit to

According to an email to the PSF from organizer Daniele Procida, “. . . the event was a success, with 65 attendees for the four days, and was met with huge enthusiasm by our Namibian hosts. I hope to be back in Namibia next year for an even bigger event, organised by the newly-established Python community there.”
The official website Python Namibia provides additional information and thanks to the conference's additional sponsors: Cardiff University in Wales (through its Phoenix Project), The University of Namibia, and the Django/Python web agency, Divio AG in Zürich. 
One of the attendees was the PSF's good friend, the geologist Carl Trachte, who sums up his reasons for attending PyCons all around the world as:
The neat thing about country/regional conferences is that you more frequently get to talk to developers or tech professionals from that place who don’t always frequent conferences outside their area. Seeing how Python (and digital technology in general) is being used in Sub-Saharan Africa (for the establishment of a wireless network, for example), learning what the average work day is like for a Pythonista in these parts of the world - those are things you really can’t get without being there.
The four days of talks, workshops, coding, collaboration and interaction engendered such enthusiasm and interest that on the last day a group of the participants self-organized to form “PyNam, the Python Namibia Association”.

Photo Credit to

We certainly look forward to more exciting projects and events coming out of this group.

March 04, 2015 10:05 PM

Amit Saha

Doing Math with Python: Two more chapters in Early Access

I am excited to share that the third and fourth chapters are available as part of the early access of my book Doing Math with Python.


Chapter 3: Describing Data with Statistics

As the title suggests, this chapter is all about the statistical measures one would first learn in high school – mean, median, mode, frequency table, range, variance, standard deviation and linear correlation are discussed.

Chapter 4: Algebra and Symbolic Math with SymPy

The first three chapters are all about number crunching. The fourth chapter introduces the reader to the basics of manipulating symbolic expressions using SymPy. Factorizing algebraic expressions, solving equations, plotting from symbolic expressions are some of the topics discussed in this chapter.

Trying out the programs

Using the Anaconda distribution (Python 3) should be the easiest way to try out all the programs in the book.

March 04, 2015 08:36 PM

Python Software Foundation

Pycon Cameroon

The PSF was delighted to hear recently from the organizers of two Python conferences held in Africa that we had helped sponsor. 
The first of them, Pycon Cameroon, is the subject of this blog. It was held in December at The Blue Pearl Hotel in the North West Region city of Bamenda. 
According to organizer Ngangsia Akumbo, the main purpose of this event was to generate awareness among young people, "especially to young girls in Nkwen – Bamenda – Cameroon on the importance of writing code using the python programming language."
Photo credit Ngangsia Akumbo
Most of the attendees were brand-new to programming and had never heard of Python. Many of them did not have their own laptops. Although the event lasted only one day, its importance and impact as an early response to great need is huge. While Cameroon provides state-run public education to children, and the literacy rate is a fairly admirable 71%, these achievements are undercut by substantial child labor (over 50% of children work), poverty, and lack of access to health care. In addition, teachers are disproportionately located in the south, leaving northern schools understaffed and those students at an educational disadvantage. See Wikipedia
The PSF is proud to have been a part of this early outreach effort and hopes to see a great many more in the future. Thanks so much to the organizers and presenters. 
We urge our readers to check out these websites to learn more: 
PyCon Cameroon and for Conference videos, including one of Ngangsia's talk, see You Tube Cameroon and Cameroon video.

March 04, 2015 07:45 PM


PyCon 2015: Call for On-Site Volunteers

Got a couple of hours to give? PyCon is organized and run by volunteers from the Python community. This year we're looking for over 300 on-site volunteer hours to help make sure everything runs smoothly. Everyone who is attending PyCon is welcome to volunteer, but you must be registered to volunteer. All help is very much appreciated. Thank you!
Pro Tip: Sign-up to be a Session Chair or Session Runner – it's a great opportunity to meet the speakers!

Session Staff       Session Staff sign-up              Fri - Sun
Registration Desk   Registration sign-up               Tues - Sun
Handout Swag Bags   Swag handout sign-up               Fri - Sat
Swag Bag Stuffing   Just Show Up! Stuff 10 bags!       Thur (3 - 6pm)
Tutorial Support    Tutorial support / hosts sign-up   Wed - Thurs
Miscellaneous Help  Miscellaneous help sign-up         Tues - Sun

Session Staff

Volunteer: Please read and understand the duties before you sign up to be a session chair/runner. Follow the links below for complete descriptions.

Registration Desk

Volunteer: Sign-up for an hour slot at the registration desk: registration desk sign-up

Swag Bag Handout

Volunteer: Sign-up for an hour slot at the swag/t-shirt desk: swag handout sign-up

Tutorial Support

Volunteer: Sign-up for an hour slot helping with a tutorial: tutorial support / hosts sign-up

Miscellaneous Help

Volunteer: Sign-up for miscellaneous tasks: odd jobs sign-up

Stuff Swag Bags

Volunteer: Just show up! Thursday April 9th, 3pm - 6pm (or until we're done)


March 04, 2015 07:11 PM


EuroPython 2015: We've sold half of the available early-bird tickets already

We have 350 early-bird tickets available. Half of those have been sold by now in an amazing rush to our registration page:


We would like to thank everyone who bought a ticket and put trust in us to make the conference an interesting and inspiring event - even without knowing the talks and topics which will be covered in the conference.

We’d also like to apologize for the PayPal payment system not working yesterday. This is fixed, so you can use PayPal to pay for your tickets if you don’t want to use a credit card.


EuroPython 2015 Team

March 04, 2015 01:44 PM

Alexey Evseev

Debug SQL in django test


In django tests we can measure number of sql queries:

def test_home(self):
    with self.assertNumQueries(1):
        response = self.client.get('/')
    self.assertEqual(response.status_code, 200)

If the code inside the assertNumQueries context makes a different number of DB queries than expected (here 1), the test will raise an error. But when a test fails it is sometimes hard to understand which unexpected query was made. To debug such cases it is very useful to log the SQL statements to the console. Below ...

March 04, 2015 12:34 PM

Python Piedmont Triad User Group

PYPTUG Project night - Team Near Space Circus

To Space And Back

Another project night where we will focus on our HAB project: Sending a technical payload into space, and back, as part of the 2015 Global Space Balloon Challenge (http:// The project will include a payload that will pay homage to the first NASA balloon flights in 1969 designed to take large area photographs of the earth from a very high altitude.

The payload will include a computer with operating system, many python scripts and various hardware including sensors, transmitters and other tech gear.

The monthly project nights until April will focus on building a high altitude balloon to send into near space. There is something to do for everyone, from art, to programming, to mechanical and electrical engineering, to finding stuff, reading regulations, making recovery plans, buying stuff, coming up with a team name, what experiments should be included in the payload etc. Don't wait for a direct invitation, sign up on our meetup group:

This meeting will be on Wednesday, Mar. 18 at 6pm in the Dash room at Inmar:


635 Vine St,
Room 1130H "Dash"
Winston-Salem, NC

This will be at the Inmar building in downtown Winston-Salem.

Some preliminary work has already started and discussion is ongoing on the PYPTUG mailing list:!forum/pyptug

And look for the Near Space Technical Payload Official Thread (should be at the top)

Keep an eye on this site for progress reports. At launch, you will be able to track the actual balloon through a web page.

March 04, 2015 07:36 AM

Montreal Python User Group

Call for Speakers - Montréal-Python 52: Quadruped Revolutionist

We are looking for speakers for short and longer presentations (5-45 mins), especially people who would like to present a lightning talk at PyCon. We want to give you this opportunity to practice your talk. For more information, please have a look at the PyCon website at:

If you are willing to take this opportunity and come show us what you are doing, send us a blurb and give a small introduction to what you are doing at the following email address:

In the mean time, we are lucky to have 2 speakers from Montreal who will present at PyCon. They will be on stage at this event, and it is your opportunity to have a preview of their talk:

Julia Evans: Systems programming as a swiss army knife

You might think of the Linux kernel as something that only kernel developers need to know about. Not so! It turns out that understanding some basics about kernels and systems programming makes you a better developer, and you can use this knowledge when debugging your normal everyday Python programs.

Greg Ward: How to Write Reusable Code

Learning to write high-quality, reusable code takes years of dedicated work. Or you can take a shortcut: attend this talk and learn some of the tricks I've figured out over a couple of decades of programming.


Monday, March 16th 2015


We’d like to thank our sponsors for their continuous support:

March 04, 2015 05:00 AM