PiML (Python Interpretable Machine Learning) is a new Python toolbox for IML model development and validation. Through low-code automation and high-code programming, PiML supports various machine learning models in the following two categories:
- Inherently interpretable models
- Arbitrary black-box models
This example notebook demonstrates how to use PiML in its low-code mode to develop, interpret, and test machine learning models. The toolbox ships with the following built-in datasets for demo purposes:
- sklearn.datasets.make_circles(n_samples=2000, noise=0.1); see details.
- sklearn.datasets.make_friedman1(n_samples=2000, n_features=10, noise=0.1); see details.
- sklearn.datasets; see details. There are a raw version, a trim1 version (trimming only AveOccup) and a trim2 version (trimming AveRooms, AveBedrms, Population and AveOccup).

Run !pip install piml to install the latest version of PiML.
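The first two demo datasets are ordinary scikit-learn synthetic generators, so they can also be created outside PiML. A minimal sketch (the generator names and sample sizes come from the list above):

```python
from sklearn.datasets import make_circles, make_friedman1

# Two interleaving circles: a 2D binary classification toy problem
X_c, y_c = make_circles(n_samples=2000, noise=0.1)
print(X_c.shape, y_c.shape)   # (2000, 2) (2000,)

# Friedman #1 regression problem: 10 features, 5 of them informative
X_f, y_f = make_friedman1(n_samples=2000, n_features=10, noise=0.1)
print(X_f.shape, y_f.shape)   # (2000, 10) (2000,)
```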
from piml import Experiment

# Create a PiML experiment; platform="colab" renders the widgets in Google Colab
exp = Experiment(platform="colab")

# Load a built-in demo dataset or upload your own
exp.data_loader()

# Summarize feature types and basic statistics
exp.data_summary()

# Configure preprocessing and the train/test split
exp.data_prepare()
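Among other things, exp.data_prepare() holds out a test set. A plain scikit-learn sketch of that step (the toy data and the 80/20 ratio here are illustrative assumptions, not PiML's fixed defaults):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy feature matrix and target, standing in for a loaded dataset
X = np.random.rand(100, 5)
y = np.random.rand(100)

# Hold out 20% for testing, with a fixed seed for reproducibility
train_x, test_x, train_y, test_y = train_test_split(
    X, y, test_size=0.2, random_state=0)
print(train_x.shape, test_x.shape)   # (80, 5) (20, 5)
```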
# Exploratory data analysis plots
exp.eda()

# Train and register models on the prepared data
exp.model_train()

# Post-hoc explanation for arbitrary models
exp.model_explain()

# Inherent interpretation of interpretable models
exp.model_interpret()

# Outcome testing and model diagnostics
exp.model_diagnose()

# Compare registered models side by side
exp.model_compare()
# Retrieve the processed data if you want to work with it directly:
# train_x, train_y, test_x, test_y, Xnames, yname = exp.get_processed_data()

from lightgbm import LGBMRegressor

# Wrap an external LightGBM model in a PiML pipeline
pipeline = exp.make_pipeline(LGBMRegressor(max_depth=7))
pipeline.fit()  # trains on the prepared data (train_x, train_y)

# Register the fitted pipeline so it appears in model comparison
exp.register(pipeline=pipeline, name='LGBM')
exp.model_compare()