In this tutorial, we show how to build a well-tuned H2O GBM model for a supervised classification task. We specifically don't focus on feature engineering and use a small dataset to allow you to reproduce these results in a few minutes on a laptop. This script can be directly transferred to datasets that are hundreds of GBs large and H2O clusters with dozens of compute nodes.
You can download the source from H2O's GitHub repository.
Ports to R Markdown and Flow UI (now part of Example Flows) are available as well.
Either download H2O from H2O.ai's website or install the latest version of H2O into Python with the following set of commands:
Install dependencies from the command line (prepending sudo if needed):
[sudo] pip install -U requests
[sudo] pip install -U tabulate
[sudo] pip install -U future
[sudo] pip install -U six
The following command removes the H2O module for Python.
[sudo] pip uninstall h2o
Next, use pip to install this version of the H2O Python module.
[sudo] pip install http://h2o-release.s3.amazonaws.com/h2o/rel-zahradnik/3/Python/h2o-3.30.0.3-py2.py3-none-any.whl
import h2o
import numpy as np
import math
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.grid.grid_search import H2OGridSearch
h2o.init(nthreads=-1, strict_version_check=True)
## optional: connect to a running H2O cluster
#h2o.init(ip="mycluster", port=55555)
Checking whether there is an H2O instance running at http://localhost:54321 ..... not found.
Attempting to start a local H2O server...
  Java Version: java version "1.8.0_231"; Java(TM) SE Runtime Environment (build 1.8.0_231-b11); Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)
  Starting server from /Users/nmashayekhi/anaconda3/envs/py_36_new/lib/python3.6/site-packages/h2o/backend/bin/h2o.jar
  Ice root: /var/folders/pf/w6ctt7r5639fbfclslj7nw2c0000gp/T/tmp4c3rdmax
  JVM stdout: /var/folders/pf/w6ctt7r5639fbfclslj7nw2c0000gp/T/tmp4c3rdmax/h2o_nmashayekhi_started_from_python.out
  JVM stderr: /var/folders/pf/w6ctt7r5639fbfclslj7nw2c0000gp/T/tmp4c3rdmax/h2o_nmashayekhi_started_from_python.err
  Server is running at http://127.0.0.1:54321
Connecting to H2O server at http://127.0.0.1:54321 ... successful.
H2O_cluster_uptime:         01 secs
H2O_cluster_timezone:       America/Los_Angeles
H2O_data_parsing_timezone:  UTC
H2O_cluster_version:        3.30.0.3
H2O_cluster_version_age:    8 days
H2O_cluster_name:           H2O_from_python_nmashayekhi_sfscj0
H2O_cluster_total_nodes:    1
H2O_cluster_free_memory:    3.556 Gb
H2O_cluster_total_cores:    16
H2O_cluster_allowed_cores:  16
H2O_cluster_status:         accepting new members, healthy
H2O_connection_url:         http://127.0.0.1:54321
H2O_connection_proxy:       {"http": null, "https": null}
H2O_internal_security:      False
H2O_API_Extensions:         Amazon S3, XGBoost, Algos, AutoML, Core V3, TargetEncoder, Core V4
Python_version:             3.6.9 final
Everything is scalable and distributed from now on. All processing is done on the fully multi-threaded and distributed H2O Java-based backend and can be scaled to large datasets on large compute clusters. Here, we use a small public dataset (Titanic), but you can use datasets that are hundreds of GBs large.
## 'path' can point to a local file, hdfs, s3, nfs, Hive, directories, etc.
df = h2o.import_file(path = "http://s3.amazonaws.com/h2o-public-test-data/smalldata/gbm_test/titanic.csv")
print(df.dim)
print(df.head())      ## first 10 rows
print(df.tail())      ## last 10 rows
df.describe()         ## per-column summary statistics
## pick a response for the supervised problem
response = "survived"
## the response variable is an integer; we turn it into a categorical/factor for binary classification
df[response] = df[response].asfactor()
## use all other columns (except for the name & the response column ("survived")) as predictors
predictors = df.columns
del predictors[1:3]
print(predictors)
Parse progress: |█████████████████████████████████████████████████████████| 100%
[1309, 14]
pclass | survived | name | sex | age | sibsp | parch | ticket | fare | cabin | embarked | boat | body | home.dest |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1 | Allen Miss. Elisabeth Walton | female | 29 | 0 | 0 | 24160 | 211.338 | B5 | S | 2 | nan | St Louis MO |
1 | 1 | Allison Master. Hudson Trevor | male | 0.9167 | 1 | 2 | 113781 | 151.55 | C22 C26 | S | 11 | nan | Montreal PQ / Chesterville ON |
1 | 0 | Allison Miss. Helen Loraine | female | 2 | 1 | 2 | 113781 | 151.55 | C22 C26 | S | nan | nan | Montreal PQ / Chesterville ON |
1 | 0 | Allison Mr. Hudson Joshua Creighton | male | 30 | 1 | 2 | 113781 | 151.55 | C22 C26 | S | nan | 135 | Montreal PQ / Chesterville ON |
1 | 0 | Allison Mrs. Hudson J C (Bessie Waldo Daniels) | female | 25 | 1 | 2 | 113781 | 151.55 | C22 C26 | S | nan | nan | Montreal PQ / Chesterville ON |
1 | 1 | Anderson Mr. Harry | male | 48 | 0 | 0 | 19952 | 26.55 | E12 | S | 3 | nan | New York NY |
1 | 1 | Andrews Miss. Kornelia Theodosia | female | 63 | 1 | 0 | 13502 | 77.9583 | D7 | S | 10 | nan | Hudson NY |
1 | 0 | Andrews Mr. Thomas Jr | male | 39 | 0 | 0 | 112050 | 0 | A36 | S | nan | nan | Belfast NI |
1 | 1 | Appleton Mrs. Edward Dale (Charlotte Lamson) | female | 53 | 2 | 0 | 11769 | 51.4792 | C101 | S | nan | nan | Bayside Queens NY |
1 | 0 | Artagaveytia Mr. Ramon | male | 71 | 0 | 0 | nan | 49.5042 | | C | nan | 22 | Montevideo Uruguay |
['pclass', 'sex', 'age', 'sibsp', 'parch', 'ticket', 'fare', 'cabin', 'embarked', 'boat', 'body', 'home.dest']
From now on, everything is generic and directly applies to most datasets. We assume that all feature engineering is done at this stage and focus on model tuning. For multi-class problems, you can use h2o.logloss() or h2o.confusion_matrix() instead of h2o.auc(), and for regression problems, you can use h2o.mean_residual_deviance() or h2o.mse().
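All of these metrics hang off the performance object returned by model_performance(); a minimal sketch (the name model below is a placeholder for any trained estimator, such as the ones we build later):

perf = model.model_performance(valid)   ## 'model' is hypothetical here
perf.auc()                  ## binary classification
perf.logloss()              ## binary or multi-class
perf.confusion_matrix()     ## per-class error breakdown
## for regression models: perf.mean_residual_deviance(), perf.mse()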
We split the data into three pieces: 60% for training, 20% for validation, 20% for final testing. Here, we use random splitting, but this assumes i.i.d. data. If this is not the case (e.g., when events span across multiple rows or data has a time structure), you'll have to sample your data non-randomly.
train, valid, test = df.split_frame(
ratios=[0.6,0.2],
seed=1234,
destination_frames=['train.hex','valid.hex','test.hex']
)
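As a quick sanity check, we can verify that the requested ratios roughly hold (counts vary slightly from an exact 60/20/20 split because split_frame assigns rows probabilistically):

print(train.nrow, valid.nrow, test.nrow)
print([round(f.nrow / df.nrow, 3) for f in (train, valid, test)])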
As the first step, we'll build some default models to see what accuracy we can expect. Let's use the AUC metric for this demo, but you can use h2o.logloss() and stopping_metric="logloss" as well. AUC ranges from 0.5 for random models to 1 for perfect models.
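For reference, the logloss-based variant of the early-stopped models built below would only swap the stopping metric; a minimal sketch (the tolerance value here is an illustrative assumption, not tuned):

gbm_ll = H2OGradientBoostingEstimator(
    ntrees = 10000,                ## rely on early stopping, not a fixed tree count
    stopping_metric = "logloss",   ## minimize validation logloss instead of maximizing AUC
    stopping_rounds = 5,
    stopping_tolerance = 1e-3,
    seed = 1234)
#gbm_ll.train(x=predictors, y=response, training_frame=train, validation_frame=valid)
#print(gbm_ll.model_performance(valid).logloss())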
The first model is a default GBM, trained on the 60% training split.
#We only provide the required parameters, everything else is default
gbm = H2OGradientBoostingEstimator()
gbm.train(x=predictors, y=response, training_frame=train)
## Show a detailed model summary
print(gbm)
gbm Model Build progress: |███████████████████████████████████████████████| 100%

Model Details
=============
H2OGradientBoostingEstimator : Gradient Boosting Machine
Model Key: GBM_model_python_1590166894817_1

Model Summary:
number_of_trees | number_of_internal_trees | model_size_in_bytes | min_depth | max_depth | mean_depth | min_leaves | max_leaves | mean_leaves | ||
---|---|---|---|---|---|---|---|---|---|---|
0 | 50.0 | 50.0 | 22644.0 | 2.0 | 5.0 | 4.94 | 3.0 | 21.0 | 13.02 |
ModelMetricsBinomial: gbm
** Reported on train data. **

MSE: 0.020967191978133064
RMSE: 0.1448005247854201
LogLoss: 0.0878847344331042
Mean Per-Class Error: 0.025960784857711583
AUC: 0.9960535168089666
AUCPR: 0.9948602636749849
Gini: 0.9921070336179332

Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.49928839180236295:
0 | 1 | Error | Rate | ||
---|---|---|---|---|---|
0 | 0 | 478.0 | 1.0 | 0.0021 | (1.0/479.0) |
1 | 1 | 15.0 | 286.0 | 0.0498 | (15.0/301.0) |
2 | Total | 493.0 | 287.0 | 0.0205 | (16.0/780.0) |
Maximum Metrics: Maximum metrics at their respective thresholds
metric | threshold | value | idx | |
---|---|---|---|---|
0 | max f1 | 0.499288 | 0.972789 | 164.0 |
1 | max f2 | 0.140574 | 0.970684 | 190.0 |
2 | max f0point5 | 0.499288 | 0.986888 | 164.0 |
3 | max accuracy | 0.499288 | 0.979487 | 164.0 |
4 | max precision | 0.996316 | 1.000000 | 0.0 |
5 | max recall | 0.056272 | 1.000000 | 234.0 |
6 | max specificity | 0.996316 | 1.000000 | 0.0 |
7 | max absolute_mcc | 0.499288 | 0.957042 | 164.0 |
8 | max min_per_class_accuracy | 0.275850 | 0.966777 | 173.0 |
9 | max mean_per_class_accuracy | 0.499288 | 0.974039 | 164.0 |
10 | max tns | 0.996316 | 479.000000 | 0.0 |
11 | max fns | 0.996316 | 300.000000 | 0.0 |
12 | max fps | 0.009568 | 479.000000 | 399.0 |
13 | max tps | 0.056272 | 301.000000 | 234.0 |
14 | max tnr | 0.996316 | 1.000000 | 0.0 |
15 | max fnr | 0.996316 | 0.996678 | 0.0 |
16 | max fpr | 0.009568 | 1.000000 | 399.0 |
17 | max tpr | 0.056272 | 1.000000 | 234.0 |
Gains/Lift Table: Avg response rate: 38.59 %, avg score: 38.61 %
group | cumulative_data_fraction | lower_threshold | lift | cumulative_lift | response_rate | score | cumulative_response_rate | cumulative_score | capture_rate | cumulative_capture_rate | gain | cumulative_gain | ||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0.010256 | 0.993452 | 2.591362 | 2.591362 | 1.000000 | 0.994604 | 1.000000 | 0.994604 | 0.026578 | 0.026578 | 159.136213 | 159.136213 | |
1 | 2 | 0.020513 | 0.993000 | 2.591362 | 2.591362 | 1.000000 | 0.993156 | 1.000000 | 0.993880 | 0.026578 | 0.053156 | 159.136213 | 159.136213 | |
2 | 3 | 0.032051 | 0.992791 | 2.591362 | 2.591362 | 1.000000 | 0.992868 | 1.000000 | 0.993516 | 0.029900 | 0.083056 | 159.136213 | 159.136213 | |
3 | 4 | 0.041026 | 0.992701 | 2.591362 | 2.591362 | 1.000000 | 0.992748 | 1.000000 | 0.993348 | 0.023256 | 0.106312 | 159.136213 | 159.136213 | |
4 | 5 | 0.050000 | 0.992637 | 2.591362 | 2.591362 | 1.000000 | 0.992662 | 1.000000 | 0.993225 | 0.023256 | 0.129568 | 159.136213 | 159.136213 | |
5 | 6 | 0.100000 | 0.992117 | 2.591362 | 2.591362 | 1.000000 | 0.992382 | 1.000000 | 0.992803 | 0.129568 | 0.259136 | 159.136213 | 159.136213 | |
6 | 7 | 0.150000 | 0.991556 | 2.591362 | 2.591362 | 1.000000 | 0.991763 | 1.000000 | 0.992457 | 0.129568 | 0.388704 | 159.136213 | 159.136213 | |
7 | 8 | 0.200000 | 0.988665 | 2.591362 | 2.591362 | 1.000000 | 0.990535 | 1.000000 | 0.991976 | 0.129568 | 0.518272 | 159.136213 | 159.136213 | |
8 | 9 | 0.300000 | 0.966197 | 2.591362 | 2.591362 | 1.000000 | 0.984540 | 1.000000 | 0.989498 | 0.259136 | 0.777409 | 159.136213 | 159.136213 | |
9 | 10 | 0.400000 | 0.196833 | 1.893688 | 2.416944 | 0.730769 | 0.639667 | 0.932692 | 0.902040 | 0.189369 | 0.966777 | 89.368771 | 141.694352 | |
10 | 11 | 0.502564 | 0.074133 | 0.226744 | 1.969964 | 0.087500 | 0.113804 | 0.760204 | 0.741175 | 0.023256 | 0.990033 | -77.325581 | 96.996407 | |
11 | 12 | 0.605128 | 0.043622 | 0.097176 | 1.652542 | 0.037500 | 0.051864 | 0.637712 | 0.624343 | 0.009967 | 1.000000 | -90.282392 | 65.254237 | |
12 | 13 | 0.700000 | 0.030071 | 0.000000 | 1.428571 | 0.000000 | 0.037125 | 0.551282 | 0.544757 | 0.000000 | 1.000000 | -100.000000 | 42.857143 | |
13 | 14 | 0.800000 | 0.017463 | 0.000000 | 1.250000 | 0.000000 | 0.021712 | 0.482372 | 0.479376 | 0.000000 | 1.000000 | -100.000000 | 25.000000 | |
14 | 15 | 0.919231 | 0.012569 | 0.000000 | 1.087866 | 0.000000 | 0.014086 | 0.419805 | 0.419025 | 0.000000 | 1.000000 | -100.000000 | 8.786611 | |
15 | 16 | 1.000000 | 0.009568 | 0.000000 | 1.000000 | 0.000000 | 0.011730 | 0.385897 | 0.386128 | 0.000000 | 1.000000 | -100.000000 | 0.000000 |
Scoring History:
timestamp | duration | number_of_trees | training_rmse | training_logloss | training_auc | training_pr_auc | training_lift | training_classification_error | ||
---|---|---|---|---|---|---|---|---|---|---|
0 | 2020-05-22 10:01:40 | 0.014 sec | 0.0 | 0.486807 | 0.666878 | 0.500000 | 0.385897 | 1.000000 | 0.614103 | |
1 | 2020-05-22 10:01:40 | 0.132 sec | 1.0 | 0.454407 | 0.603361 | 0.885112 | 0.902207 | 2.591362 | 0.089744 | |
2 | 2020-05-22 10:01:40 | 0.158 sec | 2.0 | 0.426777 | 0.553098 | 0.885143 | 0.902272 | 2.591362 | 0.088462 | |
3 | 2020-05-22 10:01:40 | 0.175 sec | 3.0 | 0.403201 | 0.512260 | 0.885143 | 0.902272 | 2.591362 | 0.088462 | |
4 | 2020-05-22 10:01:40 | 0.195 sec | 4.0 | 0.383119 | 0.478502 | 0.885143 | 0.902272 | 2.591362 | 0.088462 | |
5 | 2020-05-22 10:01:40 | 0.216 sec | 5.0 | 0.366062 | 0.450252 | 0.885143 | 0.902272 | 2.591362 | 0.088462 | |
6 | 2020-05-22 10:01:40 | 0.234 sec | 6.0 | 0.351626 | 0.426397 | 0.885143 | 0.902272 | 2.591362 | 0.088462 | |
7 | 2020-05-22 10:01:40 | 0.250 sec | 7.0 | 0.339453 | 0.406113 | 0.885143 | 0.902272 | 2.591362 | 0.088462 | |
8 | 2020-05-22 10:01:40 | 0.266 sec | 8.0 | 0.329226 | 0.388770 | 0.885143 | 0.902272 | 2.591362 | 0.088462 | |
9 | 2020-05-22 10:01:40 | 0.282 sec | 9.0 | 0.320665 | 0.373875 | 0.885143 | 0.902272 | 2.591362 | 0.088462 | |
10 | 2020-05-22 10:01:40 | 0.297 sec | 10.0 | 0.313521 | 0.361032 | 0.885143 | 0.902272 | 2.591362 | 0.088462 | |
11 | 2020-05-22 10:01:40 | 0.321 sec | 11.0 | 0.298956 | 0.335652 | 0.936329 | 0.946288 | 2.591362 | 0.057692 | |
12 | 2020-05-22 10:01:40 | 0.335 sec | 12.0 | 0.281305 | 0.306328 | 0.985674 | 0.982561 | 2.591362 | 0.044872 | |
13 | 2020-05-22 10:01:40 | 0.349 sec | 13.0 | 0.266289 | 0.282418 | 0.986430 | 0.983468 | 2.591362 | 0.044872 | |
14 | 2020-05-22 10:01:40 | 0.362 sec | 14.0 | 0.253452 | 0.262377 | 0.987068 | 0.984323 | 2.591362 | 0.042308 | |
15 | 2020-05-22 10:01:40 | 0.373 sec | 15.0 | 0.250758 | 0.255755 | 0.987068 | 0.984323 | 2.591362 | 0.042308 | |
16 | 2020-05-22 10:01:40 | 0.387 sec | 16.0 | 0.240112 | 0.239150 | 0.987262 | 0.984595 | 2.591362 | 0.041026 | |
17 | 2020-05-22 10:01:40 | 0.399 sec | 17.0 | 0.230945 | 0.224730 | 0.987262 | 0.984595 | 2.591362 | 0.041026 | |
18 | 2020-05-22 10:01:40 | 0.411 sec | 18.0 | 0.223221 | 0.212365 | 0.987522 | 0.984743 | 2.591362 | 0.041026 | |
19 | 2020-05-22 10:01:40 | 0.422 sec | 19.0 | 0.216215 | 0.201602 | 0.988053 | 0.985439 | 2.591362 | 0.038462 |
See the whole table with table.as_data_frame()

Variable Importances:
variable | relative_importance | scaled_importance | percentage | |
---|---|---|---|---|
0 | boat | 630.076111 | 1.000000 | 0.722770 |
1 | home.dest | 118.952690 | 0.188791 | 0.136452 |
2 | sex | 64.176628 | 0.101855 | 0.073618 |
3 | ticket | 16.090433 | 0.025537 | 0.018458 |
4 | fare | 12.728808 | 0.020202 | 0.014601 |
5 | age | 11.578969 | 0.018377 | 0.013282 |
6 | cabin | 5.559652 | 0.008824 | 0.006378 |
7 | embarked | 3.775484 | 0.005992 | 0.004331 |
8 | parch | 3.281273 | 0.005208 | 0.003764 |
9 | body | 3.274645 | 0.005197 | 0.003756 |
10 | sibsp | 1.725591 | 0.002739 | 0.001979 |
11 | pclass | 0.531737 | 0.000844 | 0.000610 |
## Get the AUC on the validation set
perf = gbm.model_performance(valid)
print(perf.auc())
0.950014088475627
The validation AUC is about 0.95, so this default model is already highly predictive!
The second model is another default GBM, but trained on 80% of the data (here, we combine the training and validation splits to get more training data), and cross-validated using 4 folds. Note that cross-validation takes longer and is not usually done for really large datasets.
## rbind() makes a copy here, so it's better to use split_frame with ratios=[0.8] instead above
cv_gbm = H2OGradientBoostingEstimator(nfolds = 4, seed = 0xDECAF)
cv_gbm.train(x = predictors, y = response, training_frame = train.rbind(valid))
gbm Model Build progress: |███████████████████████████████████████████████| 100%
We see that the cross-validated performance is similar to the validation set performance:
## Show a detailed summary of the cross validation metrics
## This gives you an idea of the variance between the folds
cv_summary = cv_gbm.cross_validation_metrics_summary().as_data_frame()
#print(cv_summary) ## Full summary of all metrics
#print(cv_summary.iloc[4]) ## get the row with just the AUCs
## Get the cross-validated AUC by scoring the combined holdout predictions.
## (Instead of taking the average of the metrics across the folds)
perf_cv = cv_gbm.model_performance(xval=True)
print(perf_cv.auc())
0.9493705528188287
Next, we train a GBM with "I feel lucky" parameters. We'll use early stopping to automatically tune the number of trees using the validation AUC. We'll use a lower learning rate (lower is always better, just takes more trees to converge). We'll also use stochastic sampling of rows and columns to (hopefully) improve generalization.
gbm_lucky = H2OGradientBoostingEstimator(
## more trees is better if the learning rate is small enough
## here, use "more than enough" trees - we have early stopping
ntrees = 10000,
## smaller learning rate is better (this is a good value for most datasets, but see below for annealing)
learn_rate = 0.01,
## early stopping once the validation AUC doesn't improve by at least 0.01% for 5 consecutive scoring events
stopping_rounds = 5, stopping_tolerance = 1e-4, stopping_metric = "AUC",
## sample 80% of rows per tree
sample_rate = 0.8,
## sample 80% of columns per split
col_sample_rate = 0.8,
## fix a random number generator seed for reproducibility
seed = 1234,
## score every 10 trees to make early stopping reproducible (it depends on the scoring interval)
score_tree_interval = 10)
gbm_lucky.train(x=predictors, y=response, training_frame=train, validation_frame=valid)
gbm Model Build progress: |███████████████████████████████████████████████| 100%
This model doesn't seem to be better than the previous models:
perf_lucky = gbm_lucky.model_performance(valid)
print(perf_lucky.auc())
0.9424908424908425
For this small dataset, dropping 20% of observations per tree seems to be too aggressive a form of regularization. For larger datasets, it is usually not a bad idea. In any case, we'll tune this parameter properly below, so no harm done.
Next, we'll do real hyper-parameter optimization to see if we can beat the best AUC so far (around 95%).
The key here is to start tuning the most impactful parameters first (i.e., those we expect to have the biggest effect on the results). From experience with gradient boosted trees across many datasets, we can state the following "rules":

- Build as many trees (ntrees) as it takes until the validation set error starts increasing.
- A lower learning rate (learn_rate) is generally better, but will require more trees. Using learn_rate=0.02 and learn_rate_annealing=0.995 (reduction of the learning rate with each additional tree) can help speed up convergence without sacrificing accuracy too much, and is great for hyper-parameter searches. For faster scans, use values of 0.05 and 0.99 instead.
- The optimum maximum allowed tree depth (max_depth) is data dependent; deeper trees take longer to train, especially at depths greater than 10.
- Row and column sampling (sample_rate and col_sample_rate) can improve generalization and lead to lower validation and test set errors. Good general values for large datasets are around 0.7 to 0.8 (sampling 70-80 percent of the data) for both parameters. Column sampling per tree (col_sample_rate_per_tree) can also be tuned. Note that it is multiplicative with col_sample_rate, so setting both parameters to 0.8 results in 64% of columns being considered at any given node to split.
- For highly imbalanced classification datasets, you can also try stratified row sampling with sample_rate_per_class (an array of ratios, one per response class in lexicographic order).

First, we want to know what value of max_depth to use, because it has a big impact on model training time and its optimal value depends strongly on the dataset. We'll do a quick Cartesian grid search to get a rough idea of good candidate max_depth values. Each model in the grid search will use early stopping to tune the number of trees using the validation set AUC, as before.
We'll use learning rate annealing to speed up convergence without sacrificing too much accuracy.
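To get a feel for what annealing does, here is a tiny back-of-the-envelope sketch (assuming, as the comments in the code below state, that the learning rate shrinks geometrically by the annealing factor with every tree):

## effective learning rate after t trees: learn_rate * learn_rate_annealing**t
learn_rate, annealing = 0.05, 0.99
for t in (0, 69, 100, 500, 1000):
    print(t, round(learn_rate * annealing ** t, 6))
## after ~69 trees the step size has halved; by ~1000 trees it is nearly zero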
## Depth 10 is usually plenty of depth for most datasets, but you never know
hyper_params = {'max_depth' : list(range(1,30,2))}
#hyper_params = {'max_depth': [4,6,8,12,16,20]} ##faster for larger datasets
#Build initial GBM Model
gbm_grid = H2OGradientBoostingEstimator(
## more trees is better if the learning rate is small enough
## here, use "more than enough" trees - we have early stopping
ntrees=10000,
## smaller learning rate is better
## since we have learning_rate_annealing, we can afford to start with a
#bigger learning rate
learn_rate=0.05,
## learning rate annealing: learning_rate shrinks by 1% after every tree
## (use 1.00 to disable, but then lower the learning_rate)
learn_rate_annealing = 0.99,
## sample 80% of rows per tree
sample_rate = 0.8,
## sample 80% of columns per split
col_sample_rate = 0.8,
## fix a random number generator seed for reproducibility
seed = 1234,
## score every 10 trees to make early stopping reproducible
#(it depends on the scoring interval)
score_tree_interval = 10,
## early stopping once the validation AUC doesn't improve by at least 0.01% for
#5 consecutive scoring events
stopping_rounds = 5,
stopping_metric = "AUC",
stopping_tolerance = 1e-4)
#Build grid search with previously made GBM and hyper parameters
grid = H2OGridSearch(gbm_grid,hyper_params,
grid_id = 'depth_grid',
search_criteria = {'strategy': "Cartesian"})
#Train grid search
grid.train(x=predictors,
y=response,
training_frame = train,
validation_frame = valid)
gbm Grid Build progress: |████████████████████████████████████████████████| 100%
## by default, display the grid search results sorted by increasing logloss (since this is a classification task)
print(grid)
    max_depth             model_ids              logloss
0          13    depth_grid_model_7  0.20109637892392757
1           9    depth_grid_model_5  0.20160720998146248
2           7    depth_grid_model_4  0.20246242267462608
3           5    depth_grid_model_3  0.20290080343982356
4          11    depth_grid_model_6   0.2034349464898852
5          19   depth_grid_model_10  0.20446595941168919
6          21   depth_grid_model_11  0.20446595941168919
7          23   depth_grid_model_12  0.20446595941168919
8          25   depth_grid_model_13  0.20446595941168919
9          27   depth_grid_model_14  0.20446595941168919
10         29   depth_grid_model_15  0.20446595941168919
11         17    depth_grid_model_9  0.20446595968647824
12         15    depth_grid_model_8  0.20463752833415866
13          3    depth_grid_model_2  0.20971798928576332
14          1    depth_grid_model_1  0.23401163708609643
## sort the grid models by decreasing AUC
sorted_grid = grid.get_grid(sort_by='auc',decreasing=True)
print(sorted_grid)
    max_depth             model_ids                 auc
0          13    depth_grid_model_7  0.9525218371372218
1           9    depth_grid_model_5  0.9519019442096365
2          11    depth_grid_model_6  0.9512820512820513
3           7    depth_grid_model_4  0.9512256973795435
4           5    depth_grid_model_3  0.9511411665257818
5          19   depth_grid_model_10  0.9505494505494505
6          21   depth_grid_model_11  0.9505494505494505
7          23   depth_grid_model_12  0.9505494505494505
8          25   depth_grid_model_13  0.9505494505494505
9          27   depth_grid_model_14  0.9505494505494505
10         29   depth_grid_model_15  0.9505494505494505
11         17    depth_grid_model_9  0.9505494505494505
12         15    depth_grid_model_8  0.9503240349394196
13          1    depth_grid_model_1  0.9462383770076077
14          3    depth_grid_model_2  0.9458157227387998
It appears that max_depth values of 5 to 13 are best suited for this dataset, which is unusually deep!
max_depths = sorted_grid.sorted_metric_table()['max_depth'][0:5]
new_max = int(max(max_depths, key=int))
new_min = int(min(max_depths, key=int))
print("MaxDepth", new_max)
print("MinDepth", new_min)
MaxDepth 13
MinDepth 5
Now that we know a good range for max_depth, we can tune all other parameters in more detail. Since we don't know what combinations of hyper-parameters will result in the best model, we'll use random hyper-parameter search to "let the machine get luckier than a best guess of any human".
# create hyper-parameter and search criteria lists (ranges are inclusive..exclusive)
hyper_params_tune = {'max_depth' : list(range(new_min,new_max+1,1)),
'sample_rate': [x/100. for x in range(20,101)],
'col_sample_rate' : [x/100. for x in range(20,101)],
'col_sample_rate_per_tree': [x/100. for x in range(20,101)],
'col_sample_rate_change_per_level': [x/100. for x in range(90,111)],
'min_rows': [2**x for x in range(0,int(math.log(train.nrow,2)-1)+1)],
'nbins': [2**x for x in range(4,11)],
'nbins_cats': [2**x for x in range(4,13)],
'min_split_improvement': [0,1e-8,1e-6,1e-4],
'histogram_type': ["UniformAdaptive","QuantilesGlobal","RoundRobin"]}
search_criteria_tune = {'strategy': "RandomDiscrete",
'max_runtime_secs': 3600, ## limit the runtime to 60 minutes
'max_models': 100, ## build no more than 100 models
'seed' : 1234,
'stopping_rounds' : 5,
'stopping_metric' : "AUC",
'stopping_tolerance': 1e-3
}
gbm_final_grid = H2OGradientBoostingEstimator(distribution='bernoulli',
## more trees is better if the learning rate is small enough
## here, use "more than enough" trees - we have early stopping
ntrees=10000,
## smaller learning rate is better
## since we have learning_rate_annealing, we can afford to start with a
#bigger learning rate
learn_rate=0.05,
## learning rate annealing: learning_rate shrinks by 1% after every tree
## (use 1.00 to disable, but then lower the learning_rate)
learn_rate_annealing = 0.99,
## score every 10 trees to make early stopping reproducible
#(it depends on the scoring interval)
score_tree_interval = 10,
## fix a random number generator seed for reproducibility
seed = 1234,
## early stopping once the validation AUC doesn't improve by at least 0.01% for
#5 consecutive scoring events
stopping_rounds = 5,
stopping_metric = "AUC",
stopping_tolerance = 1e-4)
#Build grid search with previously made GBM and hyper parameters
final_grid = H2OGridSearch(gbm_final_grid, hyper_params = hyper_params_tune,
grid_id = 'final_grid',
search_criteria = search_criteria_tune)
#Train grid search
final_grid.train(x=predictors,
y=response,
## early stopping based on timeout (no model should take more than 1 hour - modify as needed)
max_runtime_secs = 3600,
training_frame = train,
validation_frame = valid)
print(final_grid)
gbm Grid Build progress: |████████████████████████████████████████████████| 100%

    col_sample_rate  col_sample_rate_change_per_level  \
0              0.49                              1.04
1              0.92                              0.93
2              0.35                              1.09
3               0.5                              0.94
4              0.97                              0.96
..              ...                               ...
95              0.5                              1.03
96             0.96                              0.94
97             0.61                              0.97
98             0.87                               1.0
99             0.24                              1.08

    col_sample_rate_per_tree   histogram_type  max_depth  min_rows  \
0                       0.94  QuantilesGlobal          9       2.0
1                       0.56  QuantilesGlobal          6       4.0
2                       0.83  QuantilesGlobal          5       4.0
3                       0.92       RoundRobin         13       2.0
4                       0.96  QuantilesGlobal          6       1.0
..                       ...              ...        ...       ...
95                      0.45       RoundRobin         13     256.0
96                      0.62  QuantilesGlobal          8     256.0
97                      0.36  QuantilesGlobal          8     256.0
98                       0.2       RoundRobin         12     256.0
99                       0.3  UniformAdaptive          5     256.0

    min_split_improvement  nbins  nbins_cats  sample_rate            model_ids  \
0                     0.0     32         256         0.86  final_grid_model_69
1                     0.0    128         128         0.93  final_grid_model_97
2                  1.0E-8     64         128         0.69  final_grid_model_39
3                     0.0    128        2048         0.61  final_grid_model_15
4                  1.0E-4   1024          64         0.32  final_grid_model_76
..                    ...    ...         ...          ...                  ...
95                 1.0E-8    512          16         0.28  final_grid_model_59
96                 1.0E-6     64        4096         0.57  final_grid_model_96
97                 1.0E-6    128        1024         0.65  final_grid_model_99
98                 1.0E-6    512        1024         0.97  final_grid_model_52
99                 1.0E-4     32          64         0.97  final_grid_model_45

                logloss
0   0.17067246483917042
1   0.17808872698061212
2   0.18137723622439125
3   0.18761536132107057
4    0.1888167753055619
..                  ...
95   0.5440442492091072
96   0.5450334515467662
97   0.5488192692893163
98   0.5501161246099107
99   0.5827120934746953

[100 rows x 13 columns]
We can see that the best models have even better validation AUCs than our previous best models, so the random grid search was successful!
## Sort the grid models by AUC
sorted_final_grid = final_grid.get_grid(sort_by='auc',decreasing=True)
print(sorted_final_grid)
    col_sample_rate  col_sample_rate_change_per_level  \
0              0.92                              0.93
1              0.49                              1.04
2              0.35                              1.09
3              0.61                              1.04
4              0.81                              0.94
..              ...                               ...
95              0.5                              1.03
96             0.87                               1.0
97             0.24                              1.08
98             0.57                               1.1
99             0.96                              0.94

    col_sample_rate_per_tree   histogram_type  max_depth  min_rows  \
0                       0.56  QuantilesGlobal          6       4.0
1                       0.94  QuantilesGlobal          9       2.0
2                       0.83  QuantilesGlobal          5       4.0
3                       0.61  UniformAdaptive         11       1.0
4                       0.89  QuantilesGlobal          8      16.0
..                       ...              ...        ...       ...
95                      0.45       RoundRobin         13     256.0
96                       0.2       RoundRobin         12     256.0
97                       0.3  UniformAdaptive          5     256.0
98                      0.68       RoundRobin         12     256.0
99                      0.62  QuantilesGlobal          8     256.0

    min_split_improvement  nbins  nbins_cats  sample_rate            model_ids  \
0                     0.0    128         128         0.93  final_grid_model_97
1                     0.0     32         256         0.86  final_grid_model_69
2                  1.0E-8     64         128         0.69  final_grid_model_39
3                  1.0E-4     64          16         0.69  final_grid_model_82
4                  1.0E-8   1024          32         0.71  final_grid_model_70
..                    ...    ...         ...          ...                  ...
95                 1.0E-8    512          16         0.28  final_grid_model_59
96                 1.0E-6    512        1024         0.97  final_grid_model_52
97                 1.0E-4     32          64         0.97  final_grid_model_45
98                    0.0     16        4096         0.58   final_grid_model_9
99                 1.0E-6     64        4096         0.57  final_grid_model_96

                   auc
0    0.974218089602705
1   0.9738799661876585
2   0.9698224852071006
3   0.9691462383770075
4   0.9684699915469147
..                 ...
95  0.7997464074387151
96  0.7965624119470274
97  0.7854888701042547
98  0.7836573682727528
99  0.7608058608058608

[100 rows x 13 columns]
You can also see the results of the grid search in Flow.
Let's see how well the best model of the grid search (as judged by validation set AUC) does on the held out test set:
#Get the best model from the list (the model name listed at the top of the table)
best_model = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][0])
performance_best_model = best_model.model_performance(test)
print(performance_best_model.auc())
0.9824897581604334
Good news: it does as well on the test set as on the validation set, so our best GBM model seems to generalize well to the unseen test set.
We can inspect the winning model's parameters:
params_list = []
for key, value in best_model.params.items():
    params_list.append(str(key) + " = " + str(value['actual']))
params_list
["model_id = {'__meta': {'schema_version': 3, 'schema_name': 'ModelKeyV3', 'schema_type': 'Key<Model>'}, 'name': 'final_grid_model_97', 'type': 'Key<Model>', 'URL': '/3/Models/final_grid_model_97'}", "training_frame = {'__meta': {'schema_version': 3, 'schema_name': 'FrameKeyV3', 'schema_type': 'Key<Frame>'}, 'name': 'train.hex', 'type': 'Key<Frame>', 'URL': '/3/Frames/train.hex'}", "validation_frame = {'__meta': {'schema_version': 3, 'schema_name': 'FrameKeyV3', 'schema_type': 'Key<Frame>'}, 'name': 'valid.hex', 'type': 'Key<Frame>', 'URL': '/3/Frames/valid.hex'}", 'nfolds = 0', 'keep_cross_validation_models = True', 'keep_cross_validation_predictions = False', 'keep_cross_validation_fold_assignment = False', 'score_each_iteration = False', 'score_tree_interval = 10', 'fold_assignment = AUTO', 'fold_column = None', "response_column = {'__meta': {'schema_version': 3, 'schema_name': 'ColSpecifierV3', 'schema_type': 'VecSpecifier'}, 'column_name': 'survived', 'is_member_of_frames': None}", "ignored_columns = ['name']", 'ignore_const_cols = True', 'offset_column = None', 'weights_column = None', 'balance_classes = False', 'class_sampling_factors = None', 'max_after_balance_size = 5.0', 'max_confusion_matrix_size = 20', 'ntrees = 10000', 'max_depth = 6', 'min_rows = 4.0', 'nbins = 128', 'nbins_top_level = 1024', 'nbins_cats = 128', 'r2_stopping = 1.7976931348623157e+308', 'stopping_rounds = 5', 'stopping_metric = AUC', 'stopping_tolerance = 0.0001', 'max_runtime_secs = 3542.137', 'seed = 1234', 'build_tree_one_node = False', 'learn_rate = 0.05', 'learn_rate_annealing = 0.99', 'distribution = bernoulli', 'quantile_alpha = 0.5', 'tweedie_power = 1.5', 'huber_alpha = 0.9', 'checkpoint = None', 'sample_rate = 0.93', 'sample_rate_per_class = None', 'col_sample_rate = 0.92', 'col_sample_rate_change_per_level = 0.93', 'col_sample_rate_per_tree = 0.56', 'min_split_improvement = 0.0', 'histogram_type = QuantilesGlobal', 'max_abs_leafnode_pred = 1.7976931348623157e+308', 'pred_noise_bandwidth = 0.0', 'categorical_encoding = AUTO', 'calibrate_model = False', 'calibration_frame = None', 'custom_metric_func = None', 'custom_distribution_func = None', 'export_checkpoints_dir = None', 'monotone_constraints = None', 'check_constant_response = True']
Now we can confirm that these parameters are generally sound by building a GBM model on the whole dataset (instead of just the 60% training split) and using internal 5-fold cross-validation (re-using all other parameters, including the seed):
gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][0])
#get the parameters from the Random grid search model and modify them slightly
params = gbm.params
new_params = {"nfolds":5, "model_id":None, "training_frame":None, "validation_frame":None,
"response_column":None, "ignored_columns":None}
for key in new_params.keys():
params[key]['actual'] = new_params[key]
gbm_best = H2OGradientBoostingEstimator()
for key in params.keys():
if key in dir(gbm_best) and getattr(gbm_best,key) != params[key]['actual']:
setattr(gbm_best,key,params[key]['actual'])
gbm_best.train(x=predictors, y=response, training_frame=df)
gbm Model Build progress: |███████████████████████████████████████████████| 100%
print(gbm_best.cross_validation_metrics_summary())
Cross-Validation Metrics Summary:
mean | sd | cv_1_valid | cv_2_valid | cv_3_valid | cv_4_valid | cv_5_valid | ||
---|---|---|---|---|---|---|---|---|
0 | accuracy | 0.94809973 | 0.0063140313 | 0.9400749 | 0.94833946 | 0.9457364 | 0.9488189 | 0.95752895 |
1 | auc | 0.9743477 | 0.009550297 | 0.9674539 | 0.9610417 | 0.9794005 | 0.9819927 | 0.98184973 |
2 | aucpr | 0.97158337 | 0.008698236 | 0.96870947 | 0.9577326 | 0.9746778 | 0.9785839 | 0.978213 |
3 | err | 0.051900264 | 0.0063140313 | 0.059925094 | 0.051660515 | 0.054263566 | 0.051181104 | 0.042471044 |
4 | err_count | 13.6 | 1.8165902 | 16.0 | 14.0 | 14.0 | 13.0 | 11.0 |
5 | f0point5 | 0.95091534 | 0.017722148 | 0.9623016 | 0.95454544 | 0.944206 | 0.92402464 | 0.96949893 |
6 | f1 | 0.9295287 | 0.007824828 | 0.9238095 | 0.9230769 | 0.9263158 | 0.93264246 | 0.9417989 |
7 | f2 | 0.9096094 | 0.02097295 | 0.88827837 | 0.89361703 | 0.90909094 | 0.9414226 | 0.91563785 |
8 | lift_top_group | 2.6258688 | 0.15794739 | 2.3839285 | 2.8229167 | 2.632653 | 2.6736841 | 2.6161616 |
9 | logloss | 0.19542515 | 0.024004849 | 0.20480314 | 0.23214972 | 0.19031271 | 0.17594479 | 0.17391542 |
10 | max_per_class_error | 0.102922216 | 0.031553145 | 0.13392857 | 0.125 | 0.10204082 | 0.05263158 | 0.1010101 |
11 | mcc | 0.89094704 | 0.011998705 | 0.8800855 | 0.8873967 | 0.8845431 | 0.89166886 | 0.911041 |
12 | mean_per_class_accuracy | 0.9385944 | 0.008472207 | 0.9298099 | 0.9317857 | 0.93647957 | 0.948527 | 0.94636995 |
13 | mean_per_class_error | 0.061405573 | 0.008472207 | 0.070190094 | 0.06821428 | 0.06352041 | 0.05147302 | 0.05363005 |
14 | mse | 0.051655047 | 0.006927098 | 0.05615356 | 0.061237488 | 0.049908444 | 0.04610445 | 0.0448713 |
15 | pr_auc | 0.97158337 | 0.008698236 | 0.96870947 | 0.9577326 | 0.9746778 | 0.9785839 | 0.978213 |
16 | precision | 0.9660636 | 0.029850759 | 0.9897959 | 0.9767442 | 0.95652175 | 0.9183673 | 0.98888886 |
17 | r2 | 0.7805782 | 0.031156946 | 0.7694049 | 0.73230106 | 0.788131 | 0.80308014 | 0.809974 |
18 | recall | 0.8970778 | 0.031553145 | 0.8660714 | 0.875 | 0.8979592 | 0.94736844 | 0.8989899 |
19 | rmse | 0.22687589 | 0.0150988605 | 0.23696741 | 0.2474621 | 0.22340198 | 0.21471947 | 0.21182847 |
See the whole table with table.as_data_frame()
It looks like the winning model performs slightly better on the validation and test sets than during cross-validation on the training set: the mean AUC across the 5 folds is estimated to be only 97.4%, with a fairly large standard deviation of 0.9%. For small datasets, such a large variance is not unusual. To get a better estimate of model performance, the random hyper-parameter search could have used nfolds=5 (or 10, or similar) in combination with 80% of the data for training (i.e., not holding out a validation set, only the final test set). However, this would take more time, as nfolds+1 models are built for every set of parameters.
Instead, to save time, let's just scan through the top 5 models and cross-validate their parameters with nfolds=5 on the entire dataset:
for i in range(5):
    gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][i])
    # get the parameters from the Random grid search model and modify them slightly
    params = gbm.params
    new_params = {"nfolds":5, "model_id":None, "training_frame":None, "validation_frame":None,
                  "response_column":None, "ignored_columns":None}
    for key in new_params.keys():
        params[key]['actual'] = new_params[key]
    new_model = H2OGradientBoostingEstimator()
    for key in params.keys():
        if key in dir(new_model) and getattr(new_model, key) != params[key]['actual']:
            setattr(new_model, key, params[key]['actual'])
    new_model.train(x=predictors, y=response, training_frame=df)
    cv_summary = new_model.cross_validation_metrics_summary().as_data_frame()
    print(gbm.model_id)
    print(cv_summary.iloc[1]) ## AUC
gbm Model Build progress: |███████████████████████████████████████████████| 100%
final_grid_model_97
              auc
mean          0.9743477
sd            0.009550297
cv_1_valid    0.9674539
cv_2_valid    0.9610417
cv_3_valid    0.9794005
cv_4_valid    0.9819927
cv_5_valid    0.98184973
Name: 1, dtype: object

gbm Model Build progress: |███████████████████████████████████████████████| 100%
final_grid_model_69
              auc
mean          0.9741264
sd            0.009261287
cv_1_valid    0.96854836
cv_2_valid    0.9610417
cv_3_valid    0.97665817
cv_4_valid    0.9807349
cv_5_valid    0.983649
Name: 1, dtype: object

gbm Model Build progress: |███████████████████████████████████████████████| 100%
final_grid_model_39
              auc
mean          0.9724971
sd            0.009157102
cv_1_valid    0.9625576
cv_2_valid    0.9624107
cv_3_valid    0.97927296
cv_4_valid    0.97835153
cv_5_valid    0.9798927
Name: 1, dtype: object

gbm Model Build progress: |███████████████████████████████████████████████| 100%
final_grid_model_82
              auc
mean          0.9690046
sd            0.010956372
cv_1_valid    0.96209675
cv_2_valid    0.9530357
cv_3_valid    0.97793365
cv_4_valid    0.9755048
cv_5_valid    0.976452
Name: 1, dtype: object

gbm Model Build progress: |███████████████████████████████████████████████| 100%
final_grid_model_70
              auc
mean          0.97103506
sd            0.008409648
cv_1_valid    0.96313363
cv_2_valid    0.96068454
cv_3_valid    0.97589284
cv_4_valid    0.9776233
cv_5_valid    0.9778409
Name: 1, dtype: object
The avid reader might have noticed that we just implicitly did further parameter tuning using the "final" test set (which is part of the entire dataset df), which is not good practice - one is not supposed to use the "final" test set more than once. Hence, we're not going to pick a different "best" model; we're just learning about the variance in AUCs. It turns out that, for this tiny dataset, the variance is rather large, which is not surprising.
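To put a rough number on that variance, here is a quick back-of-the-envelope over the five cross-validated mean AUCs printed above (values copied from the output, purely for illustration):

import numpy as np
cv_auc_means = [0.9743477, 0.9741264, 0.9724971, 0.9690046, 0.97103506]
print(np.mean(cv_auc_means))   ## about 0.972
print(np.std(cv_auc_means))    ## about 0.002 across the top 5 models
## and each model's own fold-to-fold standard deviation is around 0.01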
Keeping the same "best" model, we can make test set predictions as follows:
preds = best_model.predict(test)
preds.head()
gbm prediction progress: |████████████████████████████████████████████████| 100%
predict | p0 | p1 |
---|---|---|
0 | 0.942511 | 0.0574889 |
0 | 0.965239 | 0.0347607 |
0 | 0.837052 | 0.162948 |
1 | 0.0144778 | 0.985522 |
1 | 0.0111483 | 0.988852 |
0 | 0.818008 | 0.181992 |
1 | 0.0470225 | 0.952977 |
1 | 0.0242329 | 0.975767 |
1 | 0.0406579 | 0.959342 |
0 | 0.893662 | 0.106338 |
Note that the label (survived or not) is predicted as well (in the first predict column), using the threshold with the highest F1 score (here: 0.528098) to turn the survival probabilities (p1) into labels. The probability of death (p0) is given for convenience, as it is just 1-p1.
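To see where that threshold comes from, you can pull it from the validation metrics and apply it by hand; a sketch (find_threshold_by_max_metric is available on binomial metrics objects in recent h2o-py versions - verify against yours):

perf_valid = best_model.model_performance(valid)
thr = perf_valid.find_threshold_by_max_metric("f1")   ## the max-F1 threshold predict() uses
manual_labels = (preds["p1"] > thr).asfactor()        ## reproduce the predict column from p1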
best_model.model_performance(valid)
ModelMetricsBinomial: gbm
** Reported on test data. **

MSE: 0.045961929072573966
RMSE: 0.21438733421677217
LogLoss: 0.17808872698061212
Mean Per-Class Error: 0.06486334178641873
AUC: 0.974218089602705
AUCPR: 0.9723275034473811
Gini: 0.9484361792054099

Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.41524673411844065:
0 | 1 | Error | Rate | ||
---|---|---|---|---|---|
0 | 0 | 168.0 | 1.0 | 0.0059 | (1.0/169.0) |
1 | 1 | 13.0 | 92.0 | 0.1238 | (13.0/105.0) |
2 | Total | 181.0 | 93.0 | 0.0511 | (14.0/274.0) |
Maximum Metrics: Maximum metrics at their respective thresholds
metric | threshold | value | idx | |
---|---|---|---|---|
0 | max f1 | 0.415247 | 0.929293 | 92.0 |
1 | max f2 | 0.207864 | 0.924528 | 109.0 |
2 | max f0point5 | 0.523349 | 0.970149 | 90.0 |
3 | max accuracy | 0.523349 | 0.948905 | 90.0 |
4 | max precision | 0.990276 | 1.000000 | 0.0 |
5 | max recall | 0.057998 | 1.000000 | 205.0 |
6 | max specificity | 0.990276 | 1.000000 | 0.0 |
7 | max absolute_mcc | 0.523349 | 0.894631 | 90.0 |
8 | max min_per_class_accuracy | 0.207864 | 0.928994 | 109.0 |
9 | max mean_per_class_accuracy | 0.415247 | 0.935137 | 92.0 |
10 | max tns | 0.990276 | 169.000000 | 0.0 |
11 | max fns | 0.990276 | 104.000000 | 0.0 |
12 | max fps | 0.023439 | 169.000000 | 267.0 |
13 | max tps | 0.057998 | 105.000000 | 205.0 |
14 | max tnr | 0.990276 | 1.000000 | 0.0 |
15 | max fnr | 0.990276 | 0.990476 | 0.0 |
16 | max fpr | 0.023439 | 1.000000 | 267.0 |
17 | max tpr | 0.057998 | 1.000000 | 205.0 |
Gains/Lift Table: Avg response rate: 38.32 %, avg score: 38.27 %
group | cumulative_data_fraction | lower_threshold | lift | cumulative_lift | response_rate | score | cumulative_response_rate | cumulative_score | capture_rate | cumulative_capture_rate | gain | cumulative_gain | ||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0.010949 | 0.988119 | 2.609524 | 2.609524 | 1.000000 | 0.989972 | 1.000000 | 0.989972 | 0.028571 | 0.028571 | 160.952381 | 160.952381 | |
1 | 2 | 0.021898 | 0.986938 | 2.609524 | 2.609524 | 1.000000 | 0.987364 | 1.000000 | 0.988668 | 0.028571 | 0.057143 | 160.952381 | 160.952381 | |
2 | 3 | 0.032847 | 0.986034 | 2.609524 | 2.609524 | 1.000000 | 0.986492 | 1.000000 | 0.987943 | 0.028571 | 0.085714 | 160.952381 | 160.952381 | |
3 | 4 | 0.040146 | 0.985732 | 2.609524 | 2.609524 | 1.000000 | 0.985849 | 1.000000 | 0.987562 | 0.019048 | 0.104762 | 160.952381 | 160.952381 | |
4 | 5 | 0.051095 | 0.984548 | 2.609524 | 2.609524 | 1.000000 | 0.985250 | 1.000000 | 0.987067 | 0.028571 | 0.133333 | 160.952381 | 160.952381 | |
5 | 6 | 0.102190 | 0.979817 | 2.609524 | 2.609524 | 1.000000 | 0.982113 | 1.000000 | 0.984590 | 0.133333 | 0.266667 | 160.952381 | 160.952381 | |
6 | 7 | 0.149635 | 0.973183 | 2.609524 | 2.609524 | 1.000000 | 0.976112 | 1.000000 | 0.981902 | 0.123810 | 0.390476 | 160.952381 | 160.952381 | |
7 | 8 | 0.200730 | 0.958581 | 2.609524 | 2.609524 | 1.000000 | 0.967963 | 1.000000 | 0.978354 | 0.133333 | 0.523810 | 160.952381 | 160.952381 | |
8 | 9 | 0.299270 | 0.882939 | 2.609524 | 2.609524 | 1.000000 | 0.929247 | 1.000000 | 0.962185 | 0.257143 | 0.780952 | 160.952381 | 160.952381 | |
9 | 10 | 0.401460 | 0.207375 | 1.491156 | 2.324848 | 0.571429 | 0.461806 | 0.890909 | 0.834816 | 0.152381 | 0.933333 | 49.115646 | 132.484848 | |
10 | 11 | 0.500000 | 0.120609 | 0.289947 | 1.923810 | 0.111111 | 0.165084 | 0.737226 | 0.702825 | 0.028571 | 0.961905 | -71.005291 | 92.380952 | |
11 | 12 | 0.598540 | 0.078934 | 0.000000 | 1.607085 | 0.000000 | 0.095367 | 0.615854 | 0.602816 | 0.000000 | 0.961905 | -100.000000 | 60.708479 | |
12 | 13 | 0.700730 | 0.061823 | 0.279592 | 1.413492 | 0.107143 | 0.069434 | 0.541667 | 0.525032 | 0.028571 | 0.990476 | -72.040816 | 41.349206 | |
13 | 14 | 0.806569 | 0.055147 | 0.089984 | 1.239819 | 0.034483 | 0.058027 | 0.475113 | 0.463750 | 0.009524 | 1.000000 | -91.001642 | 23.981900 | |
14 | 15 | 0.897810 | 0.047655 | 0.000000 | 1.113821 | 0.000000 | 0.050896 | 0.426829 | 0.421794 | 0.000000 | 1.000000 | -100.000000 | 11.382114 | |
15 | 16 | 1.000000 | 0.023439 | 0.000000 | 1.000000 | 0.000000 | 0.039518 | 0.383212 | 0.382729 | 0.000000 | 1.000000 | -100.000000 | 0.000000 |
# Key of best model:
best_model.key
'final_grid_model_97'
You can also see the "best" model in more detail in Flow.
The model and the predictions can be saved to file as follows:
# uncomment if you want to export the best model
# h2o.save_model(best_model, "/tmp/bestModel.csv", force=True)
# h2o.export_file(preds, "/tmp/bestPreds.csv", force=True)
# print pojo to screen, or provide path to download location
# h2o.download_pojo(best_model)
The model can also be exported as a plain old Java object (POJO) for H2O-independent (standalone/Storm/Kafka/UDF) scoring in any Java environment.
/*
Licensed under the Apache License, Version 2.0
http://www.apache.org/licenses/LICENSE-2.0.html
AUTOGENERATED BY H2O at 2016-07-17T18:38:50.337-07:00
3.8.3.3
Standalone prediction code with sample test data for GBMModel named final_grid_model_45
How to download, compile and execute:
mkdir tmpdir
cd tmpdir
curl http://127.0.0.1:54321/3/h2o-genmodel.jar > h2o-genmodel.jar
curl http://127.0.0.1:54321/3/Models.java/final_grid_model_45 > final_grid_model_45.java
javac -cp h2o-genmodel.jar -J-Xmx2g -J-XX:MaxPermSize=128m final_grid_model_45.java
(Note: Try java argument -XX:+PrintCompilation to show runtime JIT compiler behavior.)
*/
import java.util.Map;
import hex.genmodel.GenModel;
import hex.genmodel.annotations.ModelPojo;
...
class final_grid_model_45_Tree_0_class_0 {
static final double score0(double[] data) {
double pred = (Double.isNaN(data[1]) || !GenModel.bitSetContains(GRPSPLIT0, 0, data[1 /* sex */]) ?
(Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT1, 13, data[7 /* cabin */]) ?
(Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT2, 9, data[7 /* cabin */]) ?
(Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT3, 9, data[7 /* cabin */]) ?
(data[2 /* age */] <1.4174492f ?
0.13087687f :
(Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT4, 9, data[7 /* cabin */]) ?
(Double.isNaN(data[3]) || data[3 /* sibsp */] <1.000313f ?
(data[6 /* fare */] <7.91251f ?
(Double.isNaN(data[5]) || data[5 /* ticket */] <368744.5f ?
-0.08224204f :
(Double.isNaN(data[2]) || data[2 /* age */] <13.0f ?
-0.028962314f :
-0.08224204f)) :
(Double.isNaN(data[7]) || !GenModel.bitSetContains(GRPSPLIT5, 9, data[7 /* cabin */]) ?
(data[6 /* fare */] <7.989957f ?
(Double.isNaN(data[3]) || data[3 /* sibsp */] <0.0017434144f ?
0.07759714f :
0.13087687f) :
(data[6 /* fare */] <12.546303f ?
-0.07371729f :
(Double.isNaN(data[4]) || data[4 /* parch */] <1.0020853f ?
-0.037374903f :
-0.08224204f))) :
0.0f)) :
-0.08224204f) :
0.0f)) :
0.0f) :
-0.08224204f) :
-0.08224204f) :
...
After learning above that the variance of the test set AUC of the top few models is rather large, we might be able to turn this to our advantage with ensembling techniques. The simplest is to average the predictions (survival probabilities) of the top k grid search models (here, we use k=10):
prob = None
k=10
for i in range(0,k):
    gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][i])
    if prob is None:
        prob = gbm.predict(test)["p1"]
    else:
        prob = prob + gbm.predict(test)["p1"]
prob = prob/k
gbm prediction progress: |████████████████████████████████████████████████| 100%
(the progress bar repeats ten times, once per model)
We now have a blended probability of survival for each person on the Titanic.
prob.head()
p1 |
---|
0.0555282 |
0.0382219 |
0.143723 |
0.978605 |
0.982394 |
0.230839 |
0.937021 |
0.978544 |
0.939877 |
0.138475 |
We can bring those ensemble predictions to our Python session's memory space and use other Python packages.
from sklearn.metrics import roc_auc_score
# convert the prob and test[response] H2OFrames to pandas frames, then to numpy arrays
prob_np = prob.as_data_frame().values
label_np = test[response].as_data_frame().values

# compare true labels (test[response]) to probability scores (prob)
roc_auc_score(label_np, prob_np)
0.9827540636976345
This simple blended ensemble test set prediction has an even higher AUC than the best single model, but we need to do more validation studies, ideally using cross-validation. We leave this as an exercise for the reader - take the parameters of the top 10 models, retrain them with nfolds=5 on the full dataset, set keep_cross_validation_predictions=True, sum up their holdout predicted probabilities, and score that with sklearn's roc_auc_score as shown above.
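A hedged outline of that exercise, reusing the parameter-transfer pattern from earlier (untested sketch; it assumes cross_validation_holdout_predictions() is available on models trained with keep_cross_validation_predictions=True, as in recent h2o-py versions):

from sklearn.metrics import roc_auc_score
blend = None
for i in range(10):
    g = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][i])
    params = g.params
    overrides = {"nfolds":5, "keep_cross_validation_predictions":True, "model_id":None,
                 "training_frame":None, "validation_frame":None,
                 "response_column":None, "ignored_columns":None}
    for key in overrides.keys():
        params[key]['actual'] = overrides[key]
    m = H2OGradientBoostingEstimator()
    for key in params.keys():
        if key in dir(m) and getattr(m, key) != params[key]['actual']:
            setattr(m, key, params[key]['actual'])
    m.train(x=predictors, y=response, training_frame=df)
    ## holdout (out-of-fold) probabilities line up row-for-row with df
    p1 = m.cross_validation_holdout_predictions()["p1"]
    blend = p1 if blend is None else blend + p1
blend = blend / 10
print(roc_auc_score(df[response].as_data_frame().values,
                    blend.as_data_frame().values))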
For more sophisticated ensembling approaches, such as stacking via a superlearner, we refer to the H2O Ensemble github page.
We learned how to build H2O GBM models for a binary classification task on a small but realistic dataset with numerical and categorical variables, with the goal of maximizing the AUC (which ranges from 0.5 to 1). We first established a baseline with default models, then carefully tuned the remaining hyper-parameters without "too much" human guesswork. We used both Cartesian and random hyper-parameter searches to find good models. We were able to lift the AUC on a holdout test set from the 95% range with the default model to the 97% range after tuning, and to above 98% with a simple ensembling technique known as blending. A simple cross-validation variance analysis showed that these results were slightly "lucky" due to the specific train/valid/test split, so we settled on expecting AUCs around 97% instead.
Note that this script and the findings therein are directly transferable to large datasets on distributed clusters, including Spark/Hadoop environments.
More information can be found at http://www.h2o.ai/docs/.