Cython is essentially a Python-to-C translator: it lets you write code in a syntax very close to Python while achieving speeds near that of C.
This post describes how to use Cython to speed up a single Python function involving ‘tight loops’. I’ll leave more complicated applications - with many functions and classes - for a later post.
If you’re using Python and need performance, there are a variety of options; see quantecon for a detailed comparison. And of course you could always choose a different language like Julia, or be brave and learn C itself.
While the static compilation approach of Cython may not be cutting edge, Cython is mature, well documented and capable of handling large complicated projects. Cython code lies behind many of the big Python scientific libraries including scikit-learn and pandas.
Our example function evaluates a Radial Basis Function (RBF) approximation scheme. We assume each data point is a ‘center’ and use Gaussian-type RBFs
$$Y_{i}=\sum_{j=1}^{N} \beta_{j}\, e^{-(\theta\, \|X_{i}-X_{j}\|)^{2}}$$
so our function takes an input data array X of shape (N, D), a parameter array $\beta$ of length N and a ‘bandwidth’ parameter $\theta$, and returns an array of values Y of length N.
%load_ext cython
import sys
import Cython
print("Python %d.%d.%d %s %s" % sys.version_info)
print("Cython %s" % Cython.__version__)
Here’s the naive Python implementation
from math import exp
import numpy as np
def rbf_network(X, beta, theta):
    N = X.shape[0]
    D = X.shape[1]
    Y = np.zeros(N)
    for i in range(N):
        for j in range(N):
            r = 0
            for d in range(D):
                r += (X[j, d] - X[i, d]) ** 2
            r = r**0.5
            Y[i] += beta[j] * exp(-(r * theta)**2)
    return Y
Let’s make up some data
import numpy as np
D = 5
N = 1000
X = np.array([np.random.rand(N) for d in range(D)]).T
beta = np.random.rand(N)
theta = 10
Timing this in IPython we get
%timeit rbf_network(X, beta, theta)
Damn, those Python loops are slow!
So in this case we’re lucky: there’s an existing scipy implementation we can use
from scipy.interpolate import Rbf
rbf = Rbf(X[:, 0], X[:, 1], X[:, 2], X[:, 3], X[:, 4], np.random.rand(N))
Xtuple = tuple([X[:, i] for i in range(D)])
%timeit rbf(*Xtuple)
Much better. But what if we want to go faster, or we don’t have a library we can use?
To compile a cell with Cython we just start it with the %%cython cell magic (the %load_ext cython line above makes this magic available).
With Cython there are a few ‘tricks’ involved in achieving good performance. Here’s the first one: if we add the option -a to the magic line, %%cython -a, we get an annotated view of the code in which lines highlighted in yellow are still using Python and are slowing our code down. Our goal is to get rid of the yellow lines, especially any inside of loops.
%%cython -a
from math import exp
import numpy as np
def rbf_network_0(X, beta, theta):
    N = X.shape[0]
    D = X.shape[1]
    Y = np.zeros(N)
    for i in range(N):
        for j in range(N):
            r = 0
            for d in range(D):
                r += (X[j, d] - X[i, d]) ** 2
            r = r**0.5
            Y[i] += beta[j] * exp(-(r * theta)**2)
    return Y
%timeit rbf_network_0(X, beta, theta)
Simply compiling the unchanged code gains us little. We only get the speedups when we start optimizing: typing the variables, avoiding the Python math library, and omitting some bounds checking. This allows the function to run in pure C without touching the Python interpreter. Let’s start with the type declarations.
%%cython -a
from math import exp
import numpy as np
def rbf_network_1(double[:, :] X, double[:] beta, double theta):
    cdef int N = X.shape[0]
    cdef int D = X.shape[1]
    cdef double[:] Y = np.zeros(N)
    cdef int i, j, d
    cdef double r = 0
    for i in range(N):
        for j in range(N):
            r = 0
            for d in range(D):
                r += (X[j, d] - X[i, d]) ** 2
            r = r**0.5
            Y[i] += beta[j] * exp(-(r * theta)**2)
    return Y
%timeit rbf_network_1(X, beta, theta)
That’s already a fair improvement, given that so far all we’ve done is add some type declarations. For local variables we use the cdef keyword; for arrays we use typed ‘memoryviews’, which accept numpy arrays as input.
Our next problem is that we’re still using the Python exponential function. We need to replace it with the C version. The main functions from math.h are included in Cython’s libc library, so we just replace from math import exp with from libc.math cimport exp:
%%cython -a
#from math import exp
from libc.math cimport exp
import numpy as np
def rbf_network_2(double[:, :] X, double[:] beta, double theta):
    cdef int N = X.shape[0]
    cdef int D = X.shape[1]
    cdef double[:] Y = np.zeros(N)
    cdef int i, j, d
    cdef double r = 0
    for i in range(N):
        for j in range(N):
            r = 0
            for d in range(D):
                r += (X[j, d] - X[i, d]) ** 2
            r = r**0.5
            Y[i] += beta[j] * exp(-(r * theta)**2)
    return Y
%timeit rbf_network_2(X, beta, theta)
Next we need to add some compiler directives to turn off Python’s safety checks. In a notebook the easiest way is to apply them as decorators on the function. We also pass the -ffast-math optimisation flag through to the C compiler.
%%cython --compile-args=-ffast-math --link-args=-ffast-math -a
#from math import exp
from libc.math cimport exp
import numpy as np
import cython
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
def rbf_network_3(double[:, :] X, double[:] beta, double theta):
    cdef int N = X.shape[0]
    cdef int D = X.shape[1]
    cdef double[:] Y = np.zeros(N)
    cdef int i, j, d
    cdef double r = 0
    for i in range(N):
        for j in range(N):
            r = 0
            for d in range(D):
                r += (X[j, d] - X[i, d]) ** 2
            r = r**0.5
            Y[i] += beta[j] * exp(-(r * theta)**2)
    return Y
%timeit rbf_network_3(X, beta, theta)
And that's pretty good, but we're still only using a single CPU because of Python's Global Interpreter Lock (GIL). Let's use all those cores...
Cython supports parallel processing via threads using the OpenMP backend. What does that look like?
First of all, we have to add some extra compile flags (-fopenmp) to enable OpenMP. Then the parallel loop has to release the GIL, something we can only safely do when our code is pure C with no interaction with Python objects.
To create a multi-threaded version of rbf_network we just replace range() in the outer loop with prange() (‘parallel range’) from cython.parallel. This sets up the OpenMP threading and tells the compiler to split the iterations of that loop across multiple CPU cores. In this case we have no concurrency problems: each iteration of the outer loop writes to a different Y[i], so the order in which the loop is executed doesn’t matter.
Notice the nogil=True argument in prange(N, nogil=True). In order to run multi-threaded code we need to release the GIL. This means that you can’t have Python code inside your multi-threaded loop or compilation will fail. It also means that any functions called inside the loop need to be declared nogil, as in the sketch below.
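For example, if we wanted to factor the distance computation out into a helper called from inside the parallel loop (a hypothetical sq_dist helper, just to illustrate the declaration), it would need a nogil signature:
cdef double sq_dist(double[:, :] X, int i, int j, int D) nogil:
    # squared Euclidean distance between rows i and j of X
    cdef int d
    cdef double r = 0
    for d in range(D):
        r += (X[j, d] - X[i, d]) ** 2
    return r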
%%cython --compile-args=-fopenmp --link-args=-fopenmp --compile-args=-ffast-math --link-args=-ffast-math --force -a
import cython
from cython.parallel import prange, parallel
import numpy as np
#from math import exp
from libc.math cimport exp
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
def rbf_network_multithread(double[:, :] X, double[:] beta, double theta):
    cdef int N = X.shape[0]
    cdef int D = X.shape[1]
    cdef double[:] Y = np.zeros(N)
    cdef int i, j, d
    cdef double r = 0
    for i in prange(N, nogil=True, num_threads=4):
        for j in range(N):
            r = 0
            for d in range(D):
                r += (X[j, d] - X[i, d]) ** 2
            r = r**0.5
            Y[i] += beta[j] * exp(-(r * theta)**2)
    return Y
D = 5
N = 1000
X = np.array([np.random.rand(N) for d in range(D)]).T
beta = np.random.rand(N)
theta = 10
%timeit rbf_network_multithread(X, beta, theta)
| Method | Time (ms) | Speedup factor |
| --- | --- | --- |
| Pure Python | 4560 | - |
| Scipy | 194 | 23 |
| Numpy | 52.9 | 86 |
| Numba | 32.1 | 142 |
| Cython | 31 | 147 |
| Cython with parallelisation | 12.5 | 365 |
I was able to make this particular array operation 365x faster, using code that looks remarkably similar to the original naive Python implementation. It’s also important to note that the function API did not change at all: the speed benefits and the multithreading are completely transparent to the user of the function.
If you are processing an array using loops, you should definitely look at Cython, particularly Cython with OpenMP threading, to speed up your operations. It may not be quite as easy as plain Python, but it’s certainly much easier than learning C.
Note that using the power of numpy alone you can already speed up your code by a factor of 86, and for our example the whole computation fits in one line (see the vectorised solution at the end of this post)!
This tutorial is a mix of examples drawn from the references listed at the end of this post.
To use Cython with your own code outside of a notebook, you write a .pyx file and a setup.py, as explained below.
A Cython version - implemented in the file fastloop.pyx - looks something like this:
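# fastloop.pyx -- a sketch mirroring the typed notebook version (rbf_network_1) above
from math import exp
import numpy as np

def rbf_network(double[:, :] X, double[:] beta, double theta):
    cdef int N = X.shape[0]
    cdef int D = X.shape[1]
    cdef double[:] Y = np.zeros(N)
    cdef int i, j, d
    cdef double r = 0
    for i in range(N):
        for j in range(N):
            r = 0
            for d in range(D):
                r += (X[j, d] - X[i, d]) ** 2
            r = r**0.5
            Y[i] += beta[j] * exp(-(r * theta)**2)
    return Y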
Note that you don’t have to add type declarations in a *.pyx file. Any lines which use untyped variables will just remain in Python rather than being translated to C.
To compile it we need a setup.py script that looks something like this:
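# setup.py -- a minimal sketch (on older setups, distutils.core works the same way)
from setuptools import setup
from Cython.Build import cythonize

setup(name='fastloop', ext_modules=cythonize('fastloop.pyx'))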
Then we compile from the terminal with:
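python setup.py build_ext --inplace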
which generates a C code file fastloop.c and a compiled Python extension fastloop.so.
With Cython there are a few ‘tricks’ involved in achieving good performance. Here’s the first one: if we type this in the terminal
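cython fastloop.pyx -a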
we generate a fastloop.html file which we can open in a browser
Lines highlighted yellow are still using Python and are slowing our code down. Our goal is to get rid of the yellow lines, especially any inside of loops.
Our first problem is that we’re still using the Python exponential function. We need to replace this with the C version. The main functions from math.h are included in Cython’s libc library, so we just replace from math import exp with:
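from libc.math cimport exp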
Next we need to add some compiler directives; the easiest way is to add this line to the top of the file (it is equivalent to the three decorators used in the notebook cells above):
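#cython: boundscheck=False, wraparound=False, nonecheck=False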
Note that with these checks turned off you can get segmentation faults rather than nice error messages, so its best to debug your code before putting this line in.
Next we can consider playing with compiler flags (these are C tricks rather than Cython tricks as such). When using gcc the most important option seems to be -ffast-math. From my limited experience, this can improve speeds a lot, with no noticeable loss of reliability. To implement these changes we need to modify the setup.py file:
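# setup.py -- sketch passing the -ffast-math flag through to gcc
from setuptools import setup
from setuptools.extension import Extension
from Cython.Build import cythonize

ext_modules = [Extension('fastloop', ['fastloop.pyx'],
                         extra_compile_args=['-ffast-math'])]

setup(name='fastloop', ext_modules=cythonize(ext_modules))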
Now if we run cython fastloop.pyx -a again we will see the loops are now free of Python
The yellow outside the loops is irrelevant here (but would matter if we needed to call this function many times within another loop).
Now you can recompile and test it out:
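python setup.py build_ext --inplace
Then, back in IPython:
from fastloop import rbf_network
%timeit rbf_network(X, beta, theta)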
OK, now we’re getting there.
So what else can we do? Well, it turns out the exponential function is a bit of a bottleneck here, even the C version. One option is to use a fast approximation to the exponential function:
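One well-known possibility is Schraudolph’s approximation, which writes directly into the exponent bits of an IEEE-754 double. The exact contents of vfastexp.h are an assumption here; a sketch might look like:
/* vfastexp.h -- sketch of Schraudolph's fast exp approximation */
static union
{
    double d;
    struct
    {
#ifdef LITTLE_ENDIAN
        int j, i;  /* low word first on little-endian machines */
#else
        int i, j;
#endif
    } n;
} _eco;

#define EXP_A (1048576 / 0.693147180559945)  /* 2^20 / ln(2) */
#define EXP_C 60801                          /* accuracy tuning constant */
#define EXP(y) (_eco.n.i = EXP_A * (y) + (1072693248 - EXP_C), _eco.d)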
From Cython it’s easy to call C code. Put the above code in vfastexp.h, then just add the following to your fastloop.pyx file:
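cdef extern from "vfastexp.h":
    double exp_approx "EXP" (double)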
So now we can just use exp_approx() in place of exp().
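The inner update then becomes:
Y[i] += beta[j] * exp_approx(-(r * theta)**2)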
Numba is an LLVM-based JIT compiler for Python code, which allows code written in Python to be converted to highly efficient compiled code at run time. Due to its dependencies, building it from source can be a challenge. To experiment with Numba, I recommend using a local installation of Anaconda, the free cross-platform Python distribution, which includes Numba and all its prerequisites within a single easy-to-install package.
Numba is extremely simple to use. We just wrap our Python function with jit (JIT stands for "just in time" compilation) to automatically create an efficient, compiled version of the function:
from numba import jit
rbf_network_numba1 = jit(rbf_network)
%timeit rbf_network_numba1(X, beta, theta)
from math import exp
import numpy as np
from numba import jit  # note: autojit has been removed from recent numba; plain jit compiles lazily

@jit
def rbf_network_numba2(X, beta, theta):
    N = X.shape[0]
    D = X.shape[1]
    Y = np.zeros(N)
    for i in range(N):
        for j in range(N):
            r = 0
            for d in range(D):
                r += (X[j, d] - X[i, d]) ** 2
            r = r**0.5
            Y[i] += beta[j] * exp(-(r * theta)**2)
    return Y
%timeit rbf_network_numba2(X, beta, theta)
Comparisons between Numba and Cython:
https://jakevdp.github.io/blog/2012/08/24/numba-vs-cython/
https://jakevdp.github.io/blog/2013/06/15/numba-vs-cython-take-2/
How to choose: https://eng.climate.com/2015/04/09/numba-vs-cython-how-to-choose/
Overview of different optimisation methods: quantecon
Before reaching for these optimisation methods, you should use the power of numpy as much as possible! (Solution provided by Didier Vibert.)
D = 5
N = 1000
X = np.array([np.random.rand(N) for d in range(D)]).T
beta = np.random.rand(N)
theta = 10
from numpy.linalg import norm

def rbf_network_vec(X, beta, theta):
    # pairwise distances r[i, j] = ||X[i] - X[j]|| computed via broadcasting
    r = norm(X[np.newaxis, :, :] - X[:, np.newaxis, :], axis=-1)
    Y = np.sum(beta * np.exp(-(r * theta) ** 2), axis=1)
    return Y
%timeit rbf_network_vec(X, beta, theta)