Performance of scipy.optimize.minimize

This notebook measures the performance of a simple scipy.optimize use case: minimizing the 2-D Rosenbrock function with the Nelder-Mead method.

In [24]:
import numpy as np
from scipy.optimize import minimize
In [25]:
def rosen(x):
    """The Rosenbrock function"""
    return 100.0 * (x[1]-x[0]**2.0)**2.0 + (1-x[0])**2.0
In [26]:
%%timeit 
minimize(rosen, [0.0, 0.0], method="nelder-mead",
               options={'xtol': 1e-8, 'disp': False})
3.85 ms ± 232 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [27]:
import datetime
print(datetime.datetime.now())
for i in range(1000):
    minimize(rosen, [0.0, 0.0], method="nelder-mead",
               options={'xtol': 1e-8, 'disp': False})
print(datetime.datetime.now())
2018-07-22 14:33:29.011554
2018-07-22 14:33:32.803683
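As a sanity check (not part of the original run), the minimizer should land near the known optimum of the Rosenbrock function at (1, 1). Note that newer SciPy versions spell the Nelder-Mead tolerance option `xatol` rather than the older `xtol` used above:

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    """The 2-D Rosenbrock function; global minimum 0 at (1, 1)."""
    return 100.0 * (x[1] - x[0]**2.0)**2.0 + (1 - x[0])**2.0

# xatol is the current name of the Nelder-Mead absolute x-tolerance
res = minimize(rosen, [0.0, 0.0], method="nelder-mead",
               options={'xatol': 1e-8, 'disp': False})
print(res.x)  # should be very close to [1, 1]
```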

Let's try numba!

In [28]:
from numba import jit
In [29]:
@jit
def rosen2(x):
    """The Rosenbrock function"""
    return 100.0 * (x[1]-x[0]**2.0)**2.0 + (1-x[0])**2.0
In [30]:
%%timeit 
minimize(rosen2, [0.0, 0.0], method="nelder-mead",
               options={'xtol': 1e-8, 'disp': False})
3.55 ms ± 224 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [31]:
## Hmm... it didn't seem to help. The objective is tiny, so per-call
## overhead and Nelder-Mead's Python-side bookkeeping dominate the runtime.
In [32]:
import numba as nb
In [33]:
nb.__version__
Out[33]:
'0.36.2'
In [34]:
import sys
In [36]:
sys.version
Out[36]:
'3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 12:04:33) \n[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]'
In [ ]: