This notebook demonstrates the classifiers provided by the Reproducible Experiment Platform (REP) package: wrappers for scikit-learn, TMVA and XGBoost. Classifiers from hep_ml (or any other sklearn-compatible classifiers) may be used as well.
import numpy, pandas
from rep.utils import train_test_split
from sklearn.metrics import roc_auc_score
sig_data = pandas.read_csv('toy_datasets/toyMC_sig_mass.csv', sep='\t')
bck_data = pandas.read_csv('toy_datasets/toyMC_bck_mass.csv', sep='\t')
labels = numpy.array([1] * len(sig_data) + [0] * len(bck_data))
data = pandas.concat([sig_data, bck_data])
data[:5]
 | CDF1 | CDF2 | CDF3 | DOCAone | DOCAthree | DOCAtwo | FlightDistance | FlightDistanceError | Hlt1Dec | Hlt2Dec | ... | p1_IP | p1_IPSig | p1_Laura_IsoBDT | p1_pt | p2_IP | p2_IPSig | p2_Laura_IsoBDT | p2_pt | peakingbkg | pt |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1.000000 | 1.000000 | 1.000000 | 0.111337 | 0.012695 | 0.123426 | 162.650955 | 0.870942 | 0 | 0 | ... | 11.314665 | 83.196968 | -0.223668 | 699.066467 | 9.799975 | 64.790207 | -0.121159 | 521.628174 | NaN | 220.742111 |
1 | 0.759755 | 0.597375 | 0.389256 | 0.021781 | 0.094551 | 0.088421 | 4.193265 | 1.262280 | 0 | 0 | ... | 0.720070 | 7.237868 | -0.256142 | 587.628935 | 0.882111 | 8.834325 | -0.203220 | 532.679950 | NaN | 661.208843 |
2 | 1.000000 | 0.796142 | 0.566286 | 0.011852 | 0.004400 | 0.009153 | 1.580610 | 0.261697 | 0 | 0 | ... | 0.362181 | 4.173097 | -0.252788 | 802.746495 | 0.427290 | 5.008959 | -0.409469 | 674.122342 | NaN | 1290.963982 |
3 | 0.716397 | 0.524712 | 0.279033 | 0.015171 | 0.083900 | 0.069127 | 7.884569 | 1.310151 | 0 | 0 | ... | 0.753449 | 6.615949 | -0.253550 | 564.203857 | 0.917409 | 8.695459 | -0.192284 | 537.791687 | NaN | 692.654175 |
4 | 1.000000 | 0.996479 | 0.888159 | 0.005547 | 0.070438 | 0.064689 | -2.267649 | 0.139555 | 0 | 0 | ... | 0.589455 | 21.869143 | -0.254778 | 746.624928 | 0.388996 | 8.465344 | -0.217319 | 988.539221 | NaN | 1328.837840 |
5 rows × 40 columns
# Get train and test data
train_data, test_data, train_labels, test_labels = train_test_split(data, labels, train_size=0.5)
All classifiers inherit from sklearn.BaseEstimator and have the following methods:

* classifier.fit(X, y, sample_weight=None) - train the classifier
* classifier.predict_proba(X) - return the matrix of class probabilities
* classifier.predict(X) - return predicted labels
* classifier.staged_predict_proba(X) - return probabilities after each iteration (not supported by TMVA; see the sketch after this block)
* classifier.get_feature_importances() - return the importance of each feature used in training
Here X denotes the data matrix of shape [n_samples, n_features], y is the vector of labels (0 or 1) of shape [n_samples], and sample_weight is the vector of sample weights.

X should be* a pandas.DataFrame, not a numpy.array. Provided this, you'll be able to choose the features used in training by setting e.g. features=['FlightTime', 'p'] in the constructor.

* it works fine with a numpy.array as well, but in that case all the features will be used.
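Since staged_predict_proba is only described here and not exercised in the cells below, here is a minimal sketch (our own addition, not from the original notebook) of how it could be used to follow classification quality after each boosting iteration. The helper name staged_roc_auc is hypothetical; it assumes an already trained classifier that supports staging, such as the sk gradient boosting wrapper trained further below.

```python
from sklearn.metrics import roc_auc_score

def staged_roc_auc(classifier, X, y):
    # staged_predict_proba yields the class-probability matrix after each iteration;
    # TMVA-based classifiers do not provide this method.
    for iteration, staged_proba in enumerate(classifier.staged_predict_proba(X)):
        print('iteration %d: ROC AUC = %f' % (iteration, roc_auc_score(y, staged_proba[:, 1])))

# usage (after training a classifier that supports staging, e.g. `sk` below):
# staged_roc_auc(sk, test_data, test_labels)
```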
variables = ["FlightDistance", "FlightDistanceError", "IP", "VertexChi2", "pt", "p0_pt", "p1_pt", "p2_pt", 'LifeTime','dira']
SklearnClassifier is a wrapper for scikit-learn classifiers. In this example we use GradientBoostingClassifier with default settings.
from rep.estimators import SklearnClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Using gradient boosting with default settings
sk = SklearnClassifier(GradientBoostingClassifier(), features=variables)
# Training classifier
sk.fit(train_data, train_labels)
print('training complete')
training complete
# predict probabilities for each class
prob = sk.predict_proba(test_data)
print prob
[[ 0.02570074 0.97429926]
 [ 0.4970443 0.5029557 ]
 [ 0.47570811 0.52429189]
 ...,
 [ 0.02284076 0.97715924]
 [ 0.01029867 0.98970133]
 [ 0.07697665 0.92302335]]
print 'ROC AUC', roc_auc_score(test_labels, prob[:, 1])
ROC AUC 0.911885293509
sk.predict(test_data)
array([1, 1, 1, ..., 1, 1, 1])
sk.get_feature_importances()
 | effect |
---|---|
FlightDistance | 0.016774 |
FlightDistanceError | 0.056593 |
IP | 0.141333 |
VertexChi2 | 0.116290 |
pt | 0.094450 |
p0_pt | 0.062632 |
p1_pt | 0.102965 |
p2_pt | 0.090848 |
LifeTime | 0.137255 |
dira | 0.180859 |
from rep.estimators import TMVAClassifier
print TMVAClassifier.__doc__
TMVAClassifier wraps estimators from TMVA (CERN library for machine learning)

Parameters:
-----------
:param str method: algorithm method (default='kBDT')
:param features: features used in training
:type features: list[str] or None
:param str factory_options: options, for example::

    "!V:!Silent:Color:Transformations=I;D;P;G,D"

:param str sigmoid_function: function which is used to convert TMVA output to probabilities;

    * *identity* (svm, mlp) --- the same output, use this for methods returning class probabilities
    * *sigmoid* --- sigmoid transformation, use it if output varies in range [-infinity, +infinity]
    * *bdt* (for bdt algorithms output varies in range [-1, 1])
    * *sig_eff=0.4* --- for rectangular cut optimization methods, and 0.4 will be used as signal efficiency to evaluate MVA, (put any float number from [0, 1])

:param dict method_parameters: estimator options, example: NTrees=100, BoostType='Grad'

.. warning:: TMVA doesn't support *staged_predict_proba()* and *feature_importances__*

.. warning:: TMVA doesn't support multiclassification, only two-classes classification
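As a hedged illustration of the sigmoid_function options listed in the docstring (this cell is our addition and is not run in this notebook): TMVA's rectangular cut optimization method, kCuts, would be paired with the sig_eff setting, while tree-based methods keep the default bdt conversion.

```python
# Illustrative sketch only: 'kCuts' is TMVA's rectangular cut optimization method;
# sig_eff=0.6 means the MVA response is evaluated at 60% signal efficiency,
# as described in the docstring above.
tmva_cuts = TMVAClassifier(method='kCuts', sigmoid_function='sig_eff=0.6', features=variables)
# tmva_cuts.fit(train_data, train_labels)  # would train just like the BDT example below
```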
tmva = TMVAClassifier(method='kBDT', NTrees=50, Shrinkage=0.05, features=variables)
tmva.fit(train_data, train_labels)
print('training complete')
training complete
# predict probabilities for each class
prob = tmva.predict_proba(test_data)
print prob
[[ 0.2967351 0.7032649 ]
 [ 0.77204245 0.22795755]
 [ 0.79106683 0.20893317]
 ...,
 [ 0.29994913 0.70005087]
 [ 0.12615543 0.87384457]
 [ 0.48732808 0.51267192]]
print 'ROC AUC', roc_auc_score(test_labels, prob[:, 1])
ROC AUC 0.90249291305
# predict labels
tmva.predict(test_data)
array([1, 0, 0, ..., 1, 1, 1])
from rep.estimators import XGBoostClassifier
print XGBoostClassifier.__doc__
Implements classification (multiclassification) from XGBoost library.

Parameters:
-----------
:param features: list of features to train model
:type features: None or list(str)
:param int n_estimators: the number of rounds for boosting.
:param int nthreads: number of parallel threads used to run xgboost.
:param num_feature: feature dimension used in boosting, set to maximum dimension of the feature (set automatically by xgboost, no need to be set by user).
:type num_feature: None or int
:param float gamma: minimum loss reduction required to make a further partition on a leaf node of the tree. The larger, the more conservative the algorithm will be.
:type gamma: None or float
:param float eta: step size shrinkage used in update to prevent overfitting. After each boosting step, we can directly get the weights of new features, and eta shrinks the feature weights to make the boosting process more conservative.
:param int max_depth: maximum depth of a tree.
:param float scale_pos_weight: ratio of the weights of class 1 to the weights of class 0.
:param float min_child_weight: minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning.

    .. note:: weights are normalized so that mean=1 before fitting. Roughly min_child_weight is equal to the number of events.

:param float subsample: subsample ratio of the training instances. Setting it to 0.5 means that XGBoost randomly collects half of the data instances to grow trees and this will prevent overfitting.
:param float colsample: subsample ratio of columns when constructing each tree.
:param float base_score: the initial prediction score of all instances, global bias.
:param int random_state: random number seed.
:param bool verbose: if 1, will print messages during training
:param float missing: the number considered by xgboost as missing value.
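The parameters documented above map directly onto constructor arguments. Below is a hedged sketch with illustrative (not tuned) values; only the default configuration is actually trained in this notebook.

```python
# Illustrative sketch only: a slower-learning, deeper ensemble configured via the
# documented parameters (n_estimators, eta, max_depth, subsample).
xgb_tuned = XGBoostClassifier(features=variables, n_estimators=200, eta=0.1,
                              max_depth=6, subsample=0.7)
# xgb_tuned.fit(train_data, train_labels)
```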
# XGBoost with default parameters
xgb = XGBoostClassifier(features=variables)
xgb.fit(train_data, train_labels, sample_weight=numpy.ones(len(train_labels)))
print('training complete')
training complete
prob = xgb.predict_proba(test_data)
print 'ROC AUC:', roc_auc_score(test_labels, prob[:, 1])
ROC AUC: 0.926181519399
xgb.predict(test_data)
array([1, 0, 1, ..., 1, 1, 1])
xgb.get_feature_importances()
 | effect |
---|---|
FlightDistance | 994 |
FlightDistanceError | 1028 |
IP | 994 |
VertexChi2 | 904 |
pt | 1036 |
p0_pt | 786 |
p1_pt | 1140 |
p2_pt | 1094 |
LifeTime | 420 |
dira | 264 |
As one can see above, all the classifiers implement the same interface. This simplifies the work and the comparison of different classifiers, but it is not the only benefit.

Sklearn provides various tools to combine different classifiers and transformers. One of these tools is AdaBoost, a meta-classifier built on top of some other classifier (usually a decision tree).

Let's show that you can now run AdaBoost over classifiers from other libraries! (Isn't boosting over a neural network what you have been dreaming of all your life?)
from sklearn.ensemble import AdaBoostClassifier
# Construct AdaBoost with TMVA as base estimator
base_tmva = TMVAClassifier(method='kBDT', NTrees=15, Shrinkage=0.05)
ada_tmva = SklearnClassifier(AdaBoostClassifier(base_estimator=base_tmva, n_estimators=5), features=variables)
ada_tmva.fit(train_data, train_labels)
print('training complete')
training complete
prob = ada_tmva.predict_proba(test_data)
print 'AUC', roc_auc_score(test_labels, prob[:, 1])
AUC 0.876710239759
# Construct AdaBoost with xgboost base estimator
base_xgb = XGBoostClassifier(n_estimators=50)
# ada_xgb = SklearnClassifier(AdaBoostClassifier(base_estimator=base_xgb, n_estimators=1), features=variables)
ada_xgb = AdaBoostClassifier(base_estimator=base_xgb, n_estimators=1)
ada_xgb.fit(train_data[variables], train_labels)
print('training complete!')
training complete!
# predict probabilities for each class
prob = ada_xgb.predict_proba(test_data[variables])
print 'AUC', roc_auc_score(test_labels, prob[:, 1])
AUC 0.921637688032
# predict probabilities for each class
prob = ada_xgb.predict_proba(train_data[variables])
print 'AUC', roc_auc_score(train_labels, prob[:, 1])
AUC 0.948215398032
There are many things you can do with classifiers now: for example, use them in pipelines (sklearn.pipeline), as in the sketch below. And you can replace classifiers at any moment.
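As a hedged sketch of the pipeline idea (our addition; plain sklearn is used here): a preprocessing step is chained with a classifier, and the final step can be swapped for any other sklearn-compatible classifier, including the REP wrappers shown above, without touching the rest of the code.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# A scaler followed by a classifier; replacing the 'clf' step with another
# sklearn-compatible classifier leaves the surrounding code unchanged.
pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('clf', GradientBoostingClassifier()),
])
pipeline.fit(train_data[variables], train_labels)
pipeline_proba = pipeline.predict_proba(test_data[variables])
print('pipeline ROC AUC: %f' % roc_auc_score(test_labels, pipeline_proba[:, 1]))
```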