### Programming Session Week 3

In this session we will continue to work on regression and we will extend our toolbox to include an additional set of classification methods.

### Exercise 1

#### Exercise 1.a

The data below were generated from a degree-2 polynomial. Study the evolution of the MSE for degrees 1 to 5, generating your training and test sets as noisy samples from the true quadratic function. Then use $K$-fold cross-validation to recover the correct model complexity from the candidate maximum degrees.

In [15]:
import numpy as np
import matplotlib.pyplot as plt

x_true = np.linspace(0, 1, 100)
x_sample = np.linspace(0, 1, 10)
t_true = 0.1 + 0.1*x_true + x_true**2
t_sample = 0.1 + 0.1*x_sample + x_sample**2
t_sample = t_sample + np.random.normal(0, .1, len(x_sample))  # add Gaussian noise

plt.plot(x_true, t_true)
plt.scatter(x_sample, t_sample, c='r')
plt.show()
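A minimal sketch of the cross-validation step, assuming scikit-learn is available (the sample size of 50 and the 5 folds are illustrative choices, not given in the exercise):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
t = 0.1 + 0.1 * x + x**2 + rng.normal(0, .1, len(x))  # noisy quadratic samples

# Average the MSE over K folds for each candidate degree
kf = KFold(n_splits=5, shuffle=True, random_state=0)
mses = {}
for degree in range(1, 6):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores = cross_val_score(model, x.reshape(-1, 1), t, cv=kf,
                             scoring='neg_mean_squared_error')
    mses[degree] = -scores.mean()

best_degree = min(mses, key=mses.get)
print("CV MSE per degree:", mses, "-> best:", best_degree)
```

The CV estimate should flatten out (or rise again) once the degree reaches the true complexity, since higher degrees only fit the noise.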


### Exercise 2

#### Exercise 2.a

Using the OLS loss, try to learn a classifier for the dataset given below.

In [2]:
import scipy.io
import matplotlib.pyplot as plt

# data_class1 and data_class2 are assumed to have been loaded beforehand
# (e.g. from a provided .mat file via scipy.io.loadmat)
plt.scatter(data_class1[:,0], data_class1[:,1])
plt.scatter(data_class2[:,0], data_class2[:,1])
plt.show()
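A minimal sketch of the OLS classifier, with two synthetic Gaussian blobs standing in for `data_class1` and `data_class2` (an assumption; the exercise's actual dataset is not shown here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Stand-ins for data_class1 / data_class2: two well-separated Gaussian blobs
data_class1 = rng.normal([0, 0], 0.5, (50, 2))
data_class2 = rng.normal([3, 3], 0.5, (50, 2))

X = np.vstack([data_class1, data_class2])
t = np.hstack([-np.ones(50), np.ones(50)])  # encode the two classes as -1 / +1

# Fit OLS on the +/-1 targets and classify by the sign of the output
ols = LinearRegression().fit(X, t)
accuracy = (np.sign(ols.predict(X)) == t).mean()
print("training accuracy:", accuracy)
```

The decision boundary is the set of points where the fitted linear function crosses zero.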


#### Exercise 2.b

How could you extend your classifier to the dataset shown below?

In [3]:
import scipy.io
import matplotlib.pyplot as plt

plt.scatter(data_class1[:,0], data_class1[:,1])
plt.scatter(data_class2[:,0], data_class2[:,1])
plt.show()
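One possible extension is to map the inputs through nonlinear basis functions before applying OLS. The sketch below assumes a dataset in which one class surrounds the other (synthetic stand-in data; the actual dataset is not shown here), where degree-2 polynomial features make the circular boundary linear in feature space:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
# Stand-in data: an inner blob surrounded by a ring (not linearly separable)
angles = rng.uniform(0, 2*np.pi, 100)
inner = rng.normal(0, 0.3, (100, 2))
outer = np.c_[2*np.cos(angles), 2*np.sin(angles)] + rng.normal(0, 0.2, (100, 2))

X = np.vstack([inner, outer])
t = np.hstack([-np.ones(100), np.ones(100)])

# Degree-2 features (x1^2, x1*x2, x2^2, ...) turn a circular boundary
# into a hyperplane in the expanded feature space
Phi = PolynomialFeatures(degree=2).fit_transform(X)
ols = LinearRegression().fit(Phi, t)
accuracy = (np.sign(ols.predict(Phi)) == t).mean()
print("training accuracy:", accuracy)
```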


#### Exercise 2.c

We now want to use OLS to learn a multi-class classifier for the dataset below. Start by coding the one-vs-one and one-vs-rest classifiers. Then use a single discriminant function with a one-hot encoding of the classes.

In [5]:
import scipy.io
import matplotlib.pyplot as plt

plt.scatter(data_class1[:,0], data_class1[:,1])
plt.scatter(data_class2[:,0], data_class2[:,1])
plt.scatter(data_class3[:,0], data_class3[:,1])
plt.scatter(data_class4[:,0], data_class4[:,1])
plt.scatter(data_class5[:,0], data_class5[:,1])
plt.show()
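A minimal sketch of the one-hot variant, with five synthetic blobs standing in for the exercise's five classes (an assumption). Note that fitting one OLS output per one-hot column is equivalent to the one-vs-rest scheme with 0/1 targets:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Stand-in data: 5 well-separated blobs placed on a circle, 30 points each
angles = 2 * np.pi * np.arange(5) / 5
centers = 3 * np.c_[np.cos(angles), np.sin(angles)]
X = np.vstack([rng.normal(c, 0.4, (30, 2)) for c in centers])
y = np.repeat(np.arange(5), 30)

# One-hot targets: one column per class, fitted jointly by a multi-output OLS
T = np.eye(5)[y]
ols = LinearRegression().fit(X, T)
pred = ols.predict(X).argmax(axis=1)  # predict the class with the largest output
accuracy = (pred == y).mean()
print("training accuracy:", accuracy)
```

One-vs-one would instead train a binary OLS classifier per pair of classes and take a majority vote.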


### Exercise 3

#### Exercise 3.a

Use the OLS classifier from scikit-learn to classify the flowers of the iris dataset into the 3 species. Don't forget to split your dataset into a training and a test part so that you can evaluate the model properly once it has been trained (you can rely on scikit-learn's `train_test_split` function).

In [ ]:
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
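A minimal sketch, combining the imports above with a one-hot encoding of the 3 species (the 70/30 split and random seed are illustrative choices):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# One-hot encode the 3 species and fit a multi-output OLS
T_train = np.eye(3)[y_train]
ols = LinearRegression().fit(X_train, T_train)

# Predict the species with the largest output, evaluated on the held-out set
y_pred = ols.predict(X_test).argmax(axis=1)
test_accuracy = (y_pred == y_test).mean()
print("test accuracy:", test_accuracy)
```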


#### Exercise 3.b

Do the same with the Titanic dataset (https://www.kaggle.com/c/titanic) and try to learn a model that can efficiently predict which passengers survive the wreck.

In [ ]:
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
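A sketch of the pipeline under a heavy assumption: the Kaggle `train.csv` is not bundled here, so a small synthetic DataFrame mimicking a few of its columns (`Pclass`, `Sex`, `Age`, `Fare`, `Survived`) stands in for `pd.read_csv('train.csv')`. The survival probabilities used to generate the stand-in labels are fabricated for illustration only:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Stand-in for pd.read_csv('train.csv'): synthetic rows with Kaggle-like columns
df = pd.DataFrame({
    'Pclass': rng.integers(1, 4, n),
    'Sex': rng.choice(['male', 'female'], n),
    'Age': rng.uniform(1, 70, n),
    'Fare': rng.uniform(5, 100, n),
})
# Fabricated labels: women and first-class passengers survive more often
p = 0.1 + 0.5 * (df['Sex'] == 'female') + 0.1 * (df['Pclass'] == 1)
df['Survived'] = (rng.uniform(size=n) < p).astype(int)

# Minimal preprocessing: encode the categorical Sex column numerically
X = df[['Pclass', 'Age', 'Fare']].assign(Sex=(df['Sex'] == 'female').astype(int))
y = df['Survived']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# OLS on the 0/1 labels, thresholded at 0.5 to produce class predictions
ols = LinearRegression().fit(X_train, y_train)
pred = (ols.predict(X_test) > 0.5).astype(int)
accuracy = (pred == y_test.to_numpy()).mean()
print("test accuracy:", accuracy)
```

On the real data you would also need to handle missing values (e.g. `Age`) and drop non-predictive columns such as passenger names.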


### Exercise 4

#### Exercise 4.a

In this fourth exercise, we will study the robustness of the OLS approach to classification. Consider the dataset below.

• Start by learning a simple binary OLS classifier on that dataset (you can use the `LinearRegression` model from scikit-learn).
• Then try to force a misclassification by adding a blue point on the far left of the dataset.
• Once your updated dataset highlights the misclassification made by OLS, replace the OLS classifier with the logistic regression classifier from scikit-learn (on the same dataset). What do you notice?
In [16]:
import scipy.io
import matplotlib.pyplot as plt
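A minimal sketch of the comparison, with synthetic stand-in data (the blob positions and the outlier location are illustrative assumptions, not the exercise's dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
# Stand-in data: two nearby blobs plus one far-away point in the blue class
blue = np.vstack([rng.normal([0, 0], 0.5, (30, 2)), [[-50, 0]]])
red = rng.normal([2, 0], 0.5, (30, 2))
X = np.vstack([blue, red])
t = np.hstack([-np.ones(len(blue)), np.ones(len(red))])

# OLS: the squared loss heavily penalizes the far-away (yet correctly
# labeled) point, dragging the decision boundary toward the blue class
ols_pred = np.sign(LinearRegression().fit(X, t).predict(X))
ols_acc = (ols_pred == t).mean()

# Logistic regression: correctly classified points far from the boundary
# contribute almost nothing to the loss, so the boundary barely moves
log_acc = (LogisticRegression().fit(X, t).predict(X) == t).mean()
print("OLS accuracy:", ols_acc, "logistic accuracy:", log_acc)
```

The expected observation is that the logistic loss is far less sensitive to such outliers than the squared loss.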