 Tutorial

Yet another ensemble learning helper

Image by Brian Hawkes

What does ensemble learning mean?

Ensemble learning is a method that uses multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone (usually, though not in every case).

The most common techniques are:

• boosting
• bagging
• stacking

We will look at several meta-algorithms for building ensembles; they can improve a metric, speed up experimentation, and simplify code.
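The core idea behind majority voting, the simplest of these techniques, can be shown in a few lines. This is a toy sketch with hard-coded predictions, not a real model:

```python
import numpy as np

# Hypothetical predictions of three imperfect classifiers on 5 samples.
preds = np.array([
    [0, 1, 1, 0, 1],   # classifier A
    [0, 1, 0, 0, 1],   # classifier B
    [1, 1, 1, 0, 0],   # classifier C
])

# Hard (majority) voting: each sample gets the label most classifiers chose.
majority = (preds.sum(axis=0) >= 2).astype(int)
print(majority)  # [0 1 1 0 1]
```

Even though each individual classifier makes mistakes, the ensemble can correct errors that are not shared by a majority of its members.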

I would like to show several Python libraries for ensembling: Mlxtend, mlens, and DESlib.

We will explore the use of each library on a simple example and plot decision boundaries to visualize the differences.

Installation

In [ ]:
!pip install mlxtend
!pip install mlens
!pip install deslib

Prepare our notebook for further experiments:

• import all libraries
• load example data and split it
• make classifiers for comparison
• create utility function

We'll use the Iris dataset as an example.

Features

• Sepal length
• Sepal width
• Petal length
• Petal width

Number of samples: 150.

Target variable (discrete): {50x Setosa, 50x Versicolor, 50x Virginica}

In [ ]:
import itertools
import warnings

import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
# common libraries
import numpy as np
from deslib.dcs import MCB
from deslib.des.knora_e import KNORAE
from deslib.static import StaticSelection
from mlens.ensemble import (BlendEnsemble, SequentialEnsemble, Subsemble,
                            SuperLearner)
from mlxtend.classifier import (EnsembleVoteClassifier, StackingClassifier,
                                StackingCVClassifier)
from mlxtend.data import iris_data
from mlxtend.plotting import plot_decision_regions
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

warnings.filterwarnings("ignore")
In [ ]:
# random seed
seed = 10

X, y = iris_data()
X = X[:, [0, 2]]

# split the data into training and test data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=seed
)
In [ ]:
# Initializing several classifiers
clf1 = LogisticRegression(random_state=seed)
clf2 = RandomForestClassifier(random_state=seed)
clf3 = SVC(random_state=seed, probability=True)
In [ ]:
def compare(classifier, X_train=X_train, y_train=y_train, X_test=X_test, y_test=y_test):
    # Plotting decision regions
    gs = gridspec.GridSpec(2, 2)
    fig = plt.figure(figsize=(10, 8))

    # Labels for our classifiers
    labels = ["Logistic Regression", "Random Forest", "RBF kernel SVM", "Ensemble"]

    classifiers = [clf1, clf2, clf3, classifier]
    for clf, label, grid in zip(
        classifiers, labels, itertools.product([0, 1], repeat=2)
    ):
        # Fit on the training split only, so the test report below is honest
        clf.fit(X_train, y_train)
        ax = plt.subplot(gs[grid])
        fig = plot_decision_regions(X=X_train, y=y_train, clf=clf, legend=2)
        plt.title(label)

    plt.show()

    for clf, label in zip(classifiers, labels):
        print(label)
        print(classification_report(y_test, clf.predict(X_test)))

Mlxtend

Mlxtend classes:

• EnsembleVoteClassifier - a majority voting helper for classification
• StackingClassifier - an ensemble-learning meta-classifier for stacking
• StackingCVClassifier - an ensemble-learning meta-classifier for stacking using cross-validation to prepare the inputs for the level-2 classifier to prevent overfitting

Let's explore how to use them through examples.

EnsembleVoteClassifier

In [ ]:
eclf = EnsembleVoteClassifier(clfs=[clf1, clf2, clf3], weights=[2, 1, 1], voting="soft")
compare(eclf)
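With voting="soft" and weights=[2, 1, 1], the ensemble averages the class probabilities of the base models with those weights and predicts the argmax. Roughly what that computes can be sketched with plain scikit-learn; the seed, models, and use of the full dataset here are assumptions for illustration only:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X = X[:, [0, 2]]  # same two features as in the tutorial

weights = np.array([2.0, 1.0, 1.0])
models = [
    LogisticRegression(random_state=10, max_iter=1000).fit(X, y),
    RandomForestClassifier(random_state=10).fit(X, y),
    SVC(random_state=10, probability=True).fit(X, y),
]

# Weighted average of each model's class probabilities, then argmax.
avg = sum(w * m.predict_proba(X) for w, m in zip(weights, models)) / weights.sum()
manual_pred = avg.argmax(axis=1)
print(manual_pred[:5])
```

Hard voting would instead count the discrete votes of each model; soft voting tends to be smoother because well-calibrated probabilities carry more information than labels.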

StackingClassifier

In [ ]:
sclf = StackingClassifier(
classifiers=[clf1, clf2, clf3], meta_classifier=LogisticRegression()
)
compare(sclf)

StackingCVClassifier

In [ ]:
scvclf = StackingCVClassifier(
classifiers=[clf1, clf2, clf3], meta_classifier=LogisticRegression()
)
compare(scvclf)
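The "CV" part means the level-2 classifier is trained on out-of-fold predictions of the base models, so no base model predicts rows it was fitted on. That idea can be sketched with scikit-learn's cross_val_predict; the two base models here are chosen just for illustration:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X, y = load_iris(return_X_y=True)

base = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=10)]

# Out-of-fold probabilities: each row is predicted by a model that never
# saw that row during fitting, which limits leakage into the meta-learner.
meta_features = np.hstack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba") for m in base
])

# The level-2 classifier is trained on these out-of-fold features.
meta = LogisticRegression(max_iter=1000).fit(meta_features, y)
print(meta_features.shape)  # (150, 6): 2 models x 3 class probabilities
```

Plain StackingClassifier builds the meta-features from in-sample predictions instead, which is faster but more prone to overfitting.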

mlens

Mlens has several useful classes:

• SuperLearner - a stacking ensemble
• Subsemble - a supervised ensemble algorithm that uses subsets of the full data to fit a layer
• BlendEnsemble - a supervised ensemble that fits the meta-learner on base-learner predictions made over a held-out subset
• SequentialEnsemble - a multi-layer ensemble that can mix different layer types

SuperLearner

In [ ]:
sl = SuperLearner(folds=5, random_state=seed, verbose=2)

# Build the first layer
sl.add([clf1, clf2, clf3])

# Attach the final meta-estimator
sl.add_meta(LogisticRegression())

compare(sl)

Subsemble

In [ ]:
sub = Subsemble(partitions=3, random_state=seed, verbose=2, shuffle=True)

# Build the first layer
sub.add([clf1, clf2, clf3])

# Attach a meta-estimator so predict returns a single label per sample
sub.add_meta(LogisticRegression())

compare(sub)

BlendEnsemble

In [ ]:
be = BlendEnsemble(test_size=0.7, random_state=seed, verbose=2, shuffle=True)

# Build the first layer
be.add([clf1, clf2, clf3])

# Attach a meta-estimator so predict returns a single label per sample
be.add_meta(LogisticRegression())

compare(be)

SequentialEnsemble

In [ ]:
se = SequentialEnsemble(random_state=seed, shuffle=True)

# The initial layer is a blended layer, same as a layer in the BlendEnsemble
se.add("blend", [clf1, clf2, clf3])

# The second layer is a stacked layer, same as a layer of the SuperLearner
se.add("stack", [clf1, clf2, clf3])

# The meta estimator is added as in any other ensemble
se.add_meta(LogisticRegression())

compare(se)

DESlib

DESlib offers 23 different ensemble techniques, split into three groups:

• Dynamic Ensemble Selection (DES)
• Dynamic Classifier Selection (DCS)
• Baseline methods (static)

Let's try some of them, taking examples from different groups:

• KNORAE - a Dynamic Ensemble Selection (DES) algorithm based on k-Nearest Oracles Eliminate (KNORA-E)
• MCB - a Dynamic Classifier Selection (DCS) algorithm based on Multiple Classifier Behaviour (MCB)
• StaticSelection - a baseline (static) method for an ensemble model that selects the N classifiers with the best performance
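Before using DESlib itself, the core idea of dynamic selection (let the classifier that performs best near each query point predict alone) can be sketched with plain scikit-learn. The pool, k=7, and the local-accuracy rule below are illustrative assumptions, not DESlib's exact algorithms:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=10)

pool = [LogisticRegression(max_iter=1000),
        RandomForestClassifier(random_state=10),
        SVC(random_state=10)]
for m in pool:
    m.fit(X_tr, y_tr)

# For each test point, find its 7 nearest training neighbors and let the
# pool member with the best accuracy on that local region predict alone.
nn = NearestNeighbors(n_neighbors=7).fit(X_tr)
_, idx = nn.kneighbors(X_te)

preds = np.empty(len(X_te), dtype=int)
for i, neigh in enumerate(idx):
    local_acc = [(m.predict(X_tr[neigh]) == y_tr[neigh]).mean() for m in pool]
    best = pool[int(np.argmax(local_acc))]
    preds[i] = best.predict(X_te[i:i + 1])[0]
print((preds == y_te).mean())
```

DES methods generalize this by selecting a subset of competent classifiers per query instead of a single one, and DESlib's implementations add refinements on top of the plain local-accuracy rule.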

KNORAE

In [ ]:
kne = KNORAE([clf1, clf2, clf3])
compare(kne)

MCB

In [ ]:
mcb = MCB([clf1, clf2, clf3])
compare(mcb)
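StaticSelection was listed above but not demonstrated. Its idea, keeping only the best-performing fraction of the pool chosen once on validation data, can be sketched without DESlib; the pool and the 50% cut-off below are illustrative assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=10)

pool = [LogisticRegression(max_iter=1000),
        RandomForestClassifier(random_state=10),
        SVC(random_state=10)]
for m in pool:
    m.fit(X_tr, y_tr)

# Static selection: rank the pool by validation accuracy once,
# then keep the top half (at least one classifier) for all future queries.
ranked = sorted(pool, key=lambda m: m.score(X_val, y_val), reverse=True)
selected = ranked[: max(1, len(pool) // 2)]
print([type(m).__name__ for m in selected])
```

Unlike the dynamic methods above, the selection here happens once at fit time, which makes prediction cheap but ignores the local competence of each classifier.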