You can read an overview of this Numerical Linear Algebra course in this blog post. The course was originally taught in the University of San Francisco MS in Analytics graduate program. Course lecture videos are available on YouTube (note that the notebook numbers and video numbers do not line up, since some notebooks took longer than 1 video to cover).
You can ask questions about the course on our fast.ai forums.
from sklearn import datasets, linear_model, metrics
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
import math, scipy, numpy as np
from scipy import linalg
We will use a dataset from patients with diabetes. The data consists of 442 samples and 10 variables (all are physiological characteristics), so it is tall and skinny. The dependent variable is a quantitative measure of disease progression one year after baseline.
This is a classic dataset, famously used by Efron, Hastie, Johnstone, and Tibshirani in their Least Angle Regression paper, and one of the many datasets included with scikit-learn.
data = datasets.load_diabetes()
feature_names=['age', 'sex', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
trn,test,y_trn,y_test = train_test_split(data.data, data.target, test_size=0.2)
trn.shape, test.shape
Consider a system $X\beta = y$, where $X$ has more rows than columns. This occurs when you have more data samples than variables. We want to find $\hat{\beta}$ that minimizes: $$ \big\vert\big\vert X\beta - y \big\vert\big\vert_2$$
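This is the classic linear least squares problem, and before reaching for scikit-learn we could solve it directly. Here is a minimal sketch (not part of the original notebook flow) using scipy.linalg.lstsq, which is already imported above; note that it fits no intercept, so its numbers will differ from LinearRegression's:

# Sketch: solve min ||X beta - y||_2 directly with a general least squares solver
beta, residues, rank, singvals = linalg.lstsq(trn, y_trn)
beta.shape   # one coefficient per column of X (no intercept term here)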
Let's start by using the sklearn implementation:
regr = linear_model.LinearRegression()
%timeit regr.fit(trn, y_trn)
pred = regr.predict(test)
It will be helpful to have some metrics on how good our prediction is. We will look at the root mean squared error (L2) and the mean absolute error (L1).
def regr_metrics(act, pred):
    return (math.sqrt(metrics.mean_squared_error(act, pred)),
            metrics.mean_absolute_error(act, pred))
regr_metrics(y_test, regr.predict(test))
Linear regression finds the best coefficients $\beta_i$ for:
$$ x_0\beta_0 + x_1\beta_1 + x_2\beta_2 = y $$

Adding polynomial features is still a linear regression problem, just with more terms:

$$ x_0\beta_0 + x_1\beta_1 + x_2\beta_2 + x_0^2\beta_3 + x_0 x_1\beta_4 + x_0 x_2\beta_5 + x_1^2\beta_6 + x_1 x_2\beta_7 + x_2^2\beta_8 = y $$

We need to use our original data $X$ to calculate the additional polynomial features.
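To make the expansion concrete, here is a tiny illustration (the two-variable toy array is made up for this example) of what PolynomialFeatures produces with include_bias=False:

# Toy illustration of degree-2 polynomial features (the values here are made up)
toy = np.array([[2., 3.]])
PolynomialFeatures(include_bias=False).fit_transform(toy)
# array([[2., 3., 4., 6., 9.]])  i.e. x0, x1, x0^2, x0*x1, x1^2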
trn.shape
Now, we want to try improving our model's performance by adding some more features. Currently, our model is linear in each variable, but we can add polynomial features to change this.
poly = PolynomialFeatures(include_bias=False)
trn_feat = poly.fit_transform(trn)
', '.join(poly.get_feature_names(feature_names))
trn_feat.shape
regr.fit(trn_feat, y_trn)
regr_metrics(y_test, regr.predict(poly.fit_transform(test)))
Time is quadratic in the number of features and linear in the number of points, so this will get very slow!
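Concretely, a degree-2 expansion without bias of $n$ variables produces the $n$ originals plus every product $x_i x_j$ with $i \leq j$, i.e. $n + \frac{n(n+1)}{2}$ columns. For our $n = 10$ diabetes variables that is $10 + 55 = 65$ features, and doubling $n$ to 20 would give $20 + 210 = 230$, so the work grows quadratically with the number of input variables.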
%timeit poly.fit_transform(trn)
We would like to speed this up, so we will use Numba.
Numba is a compiler: it translates numeric Python code directly into fast machine code.
This tutorial from Jake VanderPlas is a nice introduction. Here Jake implements a non-trivial algorithm (non-uniform fast Fourier transform) with Numba.
Cython is another alternative. I've found Cython to require more knowledge to use than Numba (it's closer to C), but to provide similar speed-ups to Numba.
Here is a thorough answer on the differences between an Ahead Of Time (AOT) compiler, a Just In Time (JIT) compiler, and an interpreter.
Let's first get acquainted with Numba, and then we will return to our problem of polynomial features for regression on the diabetes data set.
%matplotlib inline
import math, numpy as np, matplotlib.pyplot as plt
from pandas_summary import DataFrameSummary
from scipy import ndimage
from numba import jit, vectorize, guvectorize, cuda, float32, void, float64
We will show the impact of:

- avoiding memory allocations and copies
- better locality
- vectorization

If we use NumPy on whole arrays at a time, it creates lots of temporaries and can't use the cache. If we use Numba to loop through an array one item at a time, then we don't have to allocate large temporary arrays, and we can reuse cached data since we're doing multiple calculations on each array item.
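As a rough illustration of the temporaries point (this snippet is a sketch, not from the course): a whole-array expression is evaluated one operation at a time, each allocating a full-size intermediate array.

# Sketch: NumPy evaluates a*2 - b*55 in stages, allocating a temporary per operation
a = np.random.randn(10_000).astype('float32')
b = np.random.randn(10_000).astype('float32')
tmp1 = a * 2          # full-size temporary array
tmp2 = b * 55         # another full-size temporary array
out = tmp1 - tmp2     # writing a*2 - b*55 directly goes through the same intermediates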
# Untyped and unvectorized
def proc_python(xx, yy):
    zz = np.zeros(nobs, dtype='float32')
    for j in range(nobs):
        x, y = xx[j], yy[j]
        x = x*2 - ( y * 55 )
        y = x + y*2
        z = x + y + 99
        z = z * ( z - .88 )
        zz[j] = z
    return zz
nobs = 10000
x = np.random.randn(nobs).astype('float32')
y = np.random.randn(nobs).astype('float32')
%timeit proc_python(x,y) # Untyped and unvectorized
Numpy lets us vectorize this:
# Typed and Vectorized
def proc_numpy(x, y):
    z = np.zeros(nobs, dtype='float32')
    x = x*2 - ( y * 55 )
    y = x + y*2
    z = x + y + 99
    z = z * ( z - .88 )
    return z
np.allclose( proc_numpy(x,y), proc_python(x,y), atol=1e-4 )
%timeit proc_numpy(x,y) # Typed and vectorized
Numba offers several different decorators. We will try two different ones:

- @jit: very general
- @vectorize: you don't need to write a for loop; useful when operating on vectors of the same size

First, we will use Numba's jit (just-in-time) compiler decorator, without explicitly vectorizing. This avoids large memory allocations, so we have better locality:
@jit()
def proc_numba(xx, yy, zz):
    for j in range(nobs):
        x, y = xx[j], yy[j]
        x = x*2 - ( y * 55 )
        y = x + y*2
        z = x + y + 99
        z = z * ( z - .88 )
        zz[j] = z
    return zz
z = np.zeros(nobs).astype('float32')
np.allclose( proc_numpy(x,y), proc_numba(x,y,z), atol=1e-4 )
%timeit proc_numba(x,y,z)
Now we will use Numba's vectorize decorator. Numba's compiler optimizes this in a smarter way than what is possible with plain Python and NumPy.
@vectorize
def vec_numba(x, y):
    x = x*2 - ( y * 55 )
    y = x + y*2
    z = x + y + 99
    return z * ( z - .88 )
np.allclose(vec_numba(x,y), proc_numba(x,y,z), atol=1e-4 )
%timeit vec_numba(x,y)
Numba is amazing. Look how fast this is!
@jit(nopython=True)
def vec_poly(x, res):
    # Degree-2 polynomial features without bias: copy each original column,
    # then append its products with itself and every later column.
    m, n = x.shape
    feat_idx = 0
    for i in range(n):
        v1 = x[:,i]
        for k in range(m): res[k,feat_idx] = v1[k]
        feat_idx += 1
        for j in range(i, n):
            for k in range(m): res[k,feat_idx] = v1[k]*x[k,j]
            feat_idx += 1
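As a quick sanity check (not in the original notebook), we can run vec_poly on a tiny made-up array and read off the columns it produces; for two variables they come out in the order x0, x0^2, x0*x1, x1, x1^2:

# Sanity-check sketch on a made-up 2-column array
small = np.array([[1., 2.], [3., 4.]])
small_feat = np.zeros((2, 2 + 2*(2+1)//2))   # n originals + n(n+1)/2 product terms
vec_poly(small, small_feat)
small_feat   # expected: [[1., 1., 2., 2., 4.], [3., 9., 12., 4., 16.]]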
From this blog post by Eli Bendersky:
"The row-major layout of a matrix puts the first row in contiguous memory, then the second row right after it, then the third, and so on. Column-major layout puts the first column in contiguous memory, then the second, etc.... While knowing which layout a particular data set is using is critical for good performance, there's no single answer to the question which layout 'is better' in general.
"It turns out that matching the way your algorithm works with the data layout can make or break the performance of an application.
"The short takeaway is: always traverse the data in the order it was laid out."
- Column-major layout: Fortran, Matlab, R, and Julia
- Row-major layout: C, C++, Python (NumPy's default), Pascal, Mathematica
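A small sketch (not from the course) of how NumPy exposes this choice:

# NumPy arrays are row-major (C order) by default; asfortranarray makes a column-major copy
a_c = np.zeros((2, 3))
a_f = np.asfortranarray(a_c)
a_c.flags['C_CONTIGUOUS'], a_f.flags['F_CONTIGUOUS']   # (True, True)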
trn = np.asfortranarray(trn)
test = np.asfortranarray(test)
m,n=trn.shape
n_feat = n*(n+1)//2 + n   # n(n+1)/2 products x_i*x_j with i <= j, plus the n original features
trn_feat = np.zeros((m,n_feat), order='F')
test_feat = np.zeros((len(y_test), n_feat), order='F')
vec_poly(trn, trn_feat)
vec_poly(test, test_feat)
regr.fit(trn_feat, y_trn)
regr_metrics(y_test, regr.predict(test_feat))
%timeit vec_poly(trn, trn_feat)
Recall, this was the time for scikit-learn's PolynomialFeatures implementation, which was written by experts:
%timeit poly.fit_transform(trn)
605/7.7
This is a big deal! Numba is amazing! With a single line of code, we are getting a roughly 78x speed-up over scikit-learn (which was optimized by experts).
Regularization is a way to reduce over-fitting and create models that better generalize to new data.
Lasso regression uses an L1 penalty, which pushes towards sparse coefficients. The parameter $\alpha$ is used to weight the penalty term. Scikit Learn's LassoCV performs cross validation with a number of different values for $\alpha$.
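For reference, the objective that scikit-learn's Lasso and LassoCV minimize (as given in the sklearn documentation), with $m$ the number of samples, is: $$ \frac{1}{2m} \big\vert\big\vert X\beta - y \big\vert\big\vert_2^2 + \alpha \big\vert\big\vert \beta \big\vert\big\vert_1 $$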
Watch this Coursera video on Lasso regression for more info.
reg_regr = linear_model.LassoCV(n_alphas=10)
reg_regr.fit(trn_feat, y_trn)
reg_regr.alpha_
regr_metrics(y_test, reg_regr.predict(test_feat))
Now we will add some noise to the data:
idxs = np.random.randint(0, len(trn), 10)
y_trn2 = np.copy(y_trn)
y_trn2[idxs] *= 10 # label noise
regr = linear_model.LinearRegression()
regr.fit(trn, y_trn)
regr_metrics(y_test, regr.predict(test))
regr.fit(trn, y_trn2)
regr_metrics(y_test, regr.predict(test))
Huber loss is a loss function that is less sensitive to outliers than squared error loss. It is quadratic for small error values, and linear for large values.
$$L(x)= \begin{cases} \frac{1}{2}x^2, & \text{for } \lvert x\rvert\leq \delta \\ \delta(\lvert x \rvert - \frac{1}{2}\delta), & \text{otherwise} \end{cases}$$
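A minimal sketch of this formula in code (the threshold $\delta$ and the sample values below are arbitrary choices for illustration); scikit-learn's HuberRegressor exposes the threshold as its epsilon parameter:

# Huber loss from the formula above; delta and the test values are arbitrary
def huber(x, delta=1.0):
    return np.where(np.abs(x) <= delta,
                    0.5 * x**2,
                    delta * (np.abs(x) - 0.5 * delta))

huber(np.array([0.5, 3.0]))   # array([0.125, 2.5]): quadratic inside delta, linear outside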
hregr = linear_model.HuberRegressor()
hregr.fit(trn, y_trn2)
regr_metrics(y_test, hregr.predict(test))