Last updated: 16 Feb 2023
PyCaret is an open-source, low-code machine learning library in Python that automates machine learning workflows. It is an end-to-end machine learning and model management tool that exponentially speeds up the experiment cycle and makes you more productive.
Compared with other open-source machine learning libraries, PyCaret is an alternative low-code library that can replace hundreds of lines of code with only a few. This makes experiments exponentially fast and efficient. PyCaret is essentially a Python wrapper around several machine learning libraries and frameworks, such as scikit-learn, XGBoost, LightGBM, CatBoost, spaCy, Optuna, Hyperopt, Ray, and a few more.
The design and simplicity of PyCaret are inspired by the emerging role of citizen data scientists, a term first used by Gartner. Citizen Data Scientists are power users who can perform both simple and moderately sophisticated analytical tasks that would previously have required more technical expertise.
PyCaret is tested and supported on 64-bit systems.
You can install PyCaret with Python's pip package manager:
pip install pycaret
PyCaret's default installation does not install all the optional dependencies automatically. For those, you will have to install the full version:
pip install pycaret[full]
or, depending on your use case, you may install one of the following variants:
pip install pycaret[analysis]
pip install pycaret[models]
pip install pycaret[tuner]
pip install pycaret[mlops]
pip install pycaret[parallel]
pip install pycaret[test]
# check installed version
import pycaret
pycaret.__version__
'3.0.0'
PyCaret’s Classification Module is a supervised machine learning module that is used for classifying elements into groups. The goal is to predict the categorical class labels which are discrete and unordered.
Some common use cases include predicting customer default (yes or no), predicting customer churn (the customer will leave or stay), and disease detection (positive or negative).
This module can be used for binary or multiclass problems. It provides several pre-processing features that prepare the data for modeling through the setup function. It has over 18 ready-to-use algorithms and several plots to analyze the performance of trained models.
A typical workflow in PyCaret consists of the following 5 steps in this order: Setup → Compare Models → Analyze Model → Prediction → Save Model.
# loading sample dataset from pycaret dataset module
from pycaret.datasets import get_data
data = get_data('diabetes')
| | Number of times pregnant | Plasma glucose concentration a 2 hours in an oral glucose tolerance test | Diastolic blood pressure (mm Hg) | Triceps skin fold thickness (mm) | 2-Hour serum insulin (mu U/ml) | Body mass index (weight in kg/(height in m)^2) | Diabetes pedigree function | Age (years) | Class variable |
---|---|---|---|---|---|---|---|---|---|
0 | 6 | 148 | 72 | 35 | 0 | 33.6 | 0.627 | 50 | 1 |
1 | 1 | 85 | 66 | 29 | 0 | 26.6 | 0.351 | 31 | 0 |
2 | 8 | 183 | 64 | 0 | 0 | 23.3 | 0.672 | 32 | 1 |
3 | 1 | 89 | 66 | 23 | 94 | 28.1 | 0.167 | 21 | 0 |
4 | 0 | 137 | 40 | 35 | 168 | 43.1 | 2.288 | 33 | 1 |
This function initializes the training environment and creates the transformation pipeline. The setup function must be called before executing any other function in PyCaret. It has only two required parameters, i.e. data and target. All the other parameters are optional.
# import pycaret classification and init setup
from pycaret.classification import *
s = setup(data, target = 'Class variable', session_id = 123)
| | Description | Value |
---|---|---|
0 | Session id | 123 |
1 | Target | Class variable |
2 | Target type | Binary |
3 | Original data shape | (768, 9) |
4 | Transformed data shape | (768, 9) |
5 | Transformed train set shape | (537, 9) |
6 | Transformed test set shape | (231, 9) |
7 | Numeric features | 8 |
8 | Preprocess | True |
9 | Imputation type | simple |
10 | Numeric imputation | mean |
11 | Categorical imputation | mode |
12 | Fold Generator | StratifiedKFold |
13 | Fold Number | 10 |
14 | CPU Jobs | -1 |
15 | Use GPU | False |
16 | Log Experiment | False |
17 | Experiment Name | clf-default-name |
18 | USI | 52db |
Once the setup has been successfully executed, it shows the information grid containing experiment-level information.
- Session id: A pseudo-random number distributed as a seed in all functions for later reproducibility. If no session_id is passed, a random number is automatically generated and distributed to all functions.
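For instance, re-running setup with the same session_id reproduces the same train/test split. A minimal illustrative sketch (not part of the original notebook; get_config is covered in more detail later):
# same session_id -> same seed -> identical train/test split across runs
s_a = setup(data, target = 'Class variable', session_id = 123, verbose = False)
rows_a = get_config('X_train').index.tolist()
s_b = setup(data, target = 'Class variable', session_id = 123, verbose = False)
rows_b = get_config('X_train').index.tolist()
assert rows_a == rows_b  # reproducible split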
PyCaret has two sets of APIs that you can work with: (1) Functional (as seen above) and (2) Object Oriented.
With the Object Oriented API, instead of executing functions directly, you import a class and execute its methods.
# import ClassificationExperiment and init the class
from pycaret.classification import ClassificationExperiment
exp = ClassificationExperiment()
# check the type of exp
type(exp)
pycaret.classification.oop.ClassificationExperiment
# init setup on exp
exp.setup(data, target = 'Class variable', session_id = 123)
| | Description | Value |
---|---|---|
0 | Session id | 123 |
1 | Target | Class variable |
2 | Target type | Binary |
3 | Original data shape | (768, 9) |
4 | Transformed data shape | (768, 9) |
5 | Transformed train set shape | (537, 9) |
6 | Transformed test set shape | (231, 9) |
7 | Numeric features | 8 |
8 | Preprocess | True |
9 | Imputation type | simple |
10 | Numeric imputation | mean |
11 | Categorical imputation | mode |
12 | Fold Generator | StratifiedKFold |
13 | Fold Number | 10 |
14 | CPU Jobs | -1 |
15 | Use GPU | False |
16 | Log Experiment | False |
17 | Experiment Name | clf-default-name |
18 | USI | 0071 |
<pycaret.classification.oop.ClassificationExperiment at 0x2e24286edc0>
You can use either of the two methods, i.e. Functional or OOP, and even switch back and forth between the two sets of APIs. The choice of method does not impact the results and has been tested for consistency.
This function trains and evaluates the performance of all the estimators available in the model library using cross-validation. The output of this function is a scoring grid with average cross-validated scores. Metrics evaluated during CV can be accessed using the get_metrics function. Custom metrics can be added or removed using the add_metric and remove_metric functions.
# compare baseline models
best = compare_models()
| | Model | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC | TT (Sec) |
---|---|---|---|---|---|---|---|---|---|
lr | Logistic Regression | 0.7689 | 0.8047 | 0.5602 | 0.7208 | 0.6279 | 0.4641 | 0.4736 | 1.3810 |
ridge | Ridge Classifier | 0.7670 | 0.0000 | 0.5497 | 0.7235 | 0.6221 | 0.4581 | 0.4690 | 0.0370 |
lda | Linear Discriminant Analysis | 0.7670 | 0.8055 | 0.5550 | 0.7202 | 0.6243 | 0.4594 | 0.4695 | 0.0500 |
rf | Random Forest Classifier | 0.7485 | 0.7911 | 0.5284 | 0.6811 | 0.5924 | 0.4150 | 0.4238 | 0.1940 |
nb | Naive Bayes | 0.7427 | 0.7955 | 0.5702 | 0.6543 | 0.6043 | 0.4156 | 0.4215 | 0.0400 |
catboost | CatBoost Classifier | 0.7410 | 0.7993 | 0.5278 | 0.6630 | 0.5851 | 0.4005 | 0.4078 | 0.0890 |
gbc | Gradient Boosting Classifier | 0.7373 | 0.7918 | 0.5550 | 0.6445 | 0.5931 | 0.4013 | 0.4059 | 0.0770 |
ada | Ada Boost Classifier | 0.7372 | 0.7799 | 0.5275 | 0.6585 | 0.5796 | 0.3926 | 0.4017 | 0.0870 |
et | Extra Trees Classifier | 0.7299 | 0.7788 | 0.4965 | 0.6516 | 0.5596 | 0.3706 | 0.3802 | 0.1280 |
qda | Quadratic Discriminant Analysis | 0.7282 | 0.7894 | 0.5281 | 0.6558 | 0.5736 | 0.3785 | 0.3910 | 0.0510 |
lightgbm | Light Gradient Boosting Machine | 0.7133 | 0.7645 | 0.5398 | 0.6036 | 0.5650 | 0.3534 | 0.3580 | 0.2440 |
knn | K Neighbors Classifier | 0.7001 | 0.7164 | 0.5020 | 0.5982 | 0.5413 | 0.3209 | 0.3271 | 0.0570 |
dt | Decision Tree Classifier | 0.6928 | 0.6512 | 0.5137 | 0.5636 | 0.5328 | 0.3070 | 0.3098 | 0.0460 |
xgboost | Extreme Gradient Boosting | 0.6853 | 0.7516 | 0.4912 | 0.5620 | 0.5216 | 0.2887 | 0.2922 | 0.0520 |
dummy | Dummy Classifier | 0.6518 | 0.5000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0380 |
svm | SVM - Linear Kernel | 0.5954 | 0.0000 | 0.3395 | 0.4090 | 0.2671 | 0.0720 | 0.0912 | 0.0410 |
# compare models using OOP
exp.compare_models()
| | Model | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC | TT (Sec) |
---|---|---|---|---|---|---|---|---|---|
lr | Logistic Regression | 0.7689 | 0.8047 | 0.5602 | 0.7208 | 0.6279 | 0.4641 | 0.4736 | 0.0450 |
ridge | Ridge Classifier | 0.7670 | 0.0000 | 0.5497 | 0.7235 | 0.6221 | 0.4581 | 0.4690 | 0.0330 |
lda | Linear Discriminant Analysis | 0.7670 | 0.8055 | 0.5550 | 0.7202 | 0.6243 | 0.4594 | 0.4695 | 0.0370 |
rf | Random Forest Classifier | 0.7485 | 0.7911 | 0.5284 | 0.6811 | 0.5924 | 0.4150 | 0.4238 | 0.1320 |
nb | Naive Bayes | 0.7427 | 0.7955 | 0.5702 | 0.6543 | 0.6043 | 0.4156 | 0.4215 | 0.0360 |
catboost | CatBoost Classifier | 0.7410 | 0.7993 | 0.5278 | 0.6630 | 0.5851 | 0.4005 | 0.4078 | 0.0340 |
gbc | Gradient Boosting Classifier | 0.7373 | 0.7918 | 0.5550 | 0.6445 | 0.5931 | 0.4013 | 0.4059 | 0.0730 |
ada | Ada Boost Classifier | 0.7372 | 0.7799 | 0.5275 | 0.6585 | 0.5796 | 0.3926 | 0.4017 | 0.0750 |
et | Extra Trees Classifier | 0.7299 | 0.7788 | 0.4965 | 0.6516 | 0.5596 | 0.3706 | 0.3802 | 0.1320 |
qda | Quadratic Discriminant Analysis | 0.7282 | 0.7894 | 0.5281 | 0.6558 | 0.5736 | 0.3785 | 0.3910 | 0.0380 |
lightgbm | Light Gradient Boosting Machine | 0.7133 | 0.7645 | 0.5398 | 0.6036 | 0.5650 | 0.3534 | 0.3580 | 0.0390 |
knn | K Neighbors Classifier | 0.7001 | 0.7164 | 0.5020 | 0.5982 | 0.5413 | 0.3209 | 0.3271 | 0.0490 |
dt | Decision Tree Classifier | 0.6928 | 0.6512 | 0.5137 | 0.5636 | 0.5328 | 0.3070 | 0.3098 | 0.0390 |
xgboost | Extreme Gradient Boosting | 0.6853 | 0.7516 | 0.4912 | 0.5620 | 0.5216 | 0.2887 | 0.2922 | 0.0440 |
dummy | Dummy Classifier | 0.6518 | 0.5000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0330 |
svm | SVM - Linear Kernel | 0.5954 | 0.0000 | 0.3395 | 0.4090 | 0.2671 | 0.0720 | 0.0912 | 0.0310 |
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, l1_ratio=None, max_iter=1000, multi_class='auto', n_jobs=None, penalty='l2', random_state=123, solver='lbfgs', tol=0.0001, verbose=0, warm_start=False)
Notice that the output between the Functional and OOP APIs is consistent. The rest of the functions in this notebook will be shown using the Functional API only.
You can use the plot_model function to analyze the performance of a trained model on the test set. It may require re-training the model in certain cases.
# plot confusion matrix
plot_model(best, plot = 'confusion_matrix')
# plot AUC
plot_model(best, plot = 'auc')
# plot feature importance
plot_model(best, plot = 'feature')
# check docstring to see available plots
# help(plot_model)
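You can also save any plot to disk instead of displaying it by passing save=True, which writes a .png file to the current working directory and returns its path (a small sketch, not part of the original notebook):
# save the AUC plot to a file instead of rendering it
plot_model(best, plot = 'auc', save = True)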
An alternative to the plot_model function is evaluate_model. It can only be used in a Notebook since it uses ipywidgets.
evaluate_model(best)
The predict_model function returns prediction_label and prediction_score (the probability of the predicted class) as new columns in the dataframe. When data is None (the default), it uses the test set (created during the setup function) for scoring.
# predict on test set
holdout_pred = predict_model(best)
| | Model | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
---|---|---|---|---|---|---|---|---|
0 | Logistic Regression | 0.7576 | 0.8568 | 0.5309 | 0.7049 | 0.6056 | 0.4356 | 0.4447 |
# show predictions df
holdout_pred.head()
| | Number of times pregnant | Plasma glucose concentration a 2 hours in an oral glucose tolerance test | Diastolic blood pressure (mm Hg) | Triceps skin fold thickness (mm) | 2-Hour serum insulin (mu U/ml) | Body mass index (weight in kg/(height in m)^2) | Diabetes pedigree function | Age (years) | Class variable | prediction_label | prediction_score |
---|---|---|---|---|---|---|---|---|---|---|---|
537 | 6 | 114 | 88 | 0 | 0 | 27.799999 | 0.247 | 66 | 0 | 0 | 0.8037 |
538 | 1 | 97 | 70 | 15 | 0 | 18.200001 | 0.147 | 21 | 0 | 0 | 0.9648 |
539 | 2 | 90 | 70 | 17 | 0 | 27.299999 | 0.085 | 22 | 0 | 0 | 0.9393 |
540 | 2 | 105 | 58 | 40 | 94 | 34.900002 | 0.225 | 25 | 0 | 0 | 0.7998 |
541 | 11 | 138 | 76 | 0 | 0 | 33.200001 | 0.420 | 35 | 0 | 1 | 0.6391 |
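If you need the probability of every class rather than only the predicted one, predict_model also accepts a raw_score parameter, which adds one score column per class (a brief sketch, not part of the original notebook):
# return per-class probabilities on the hold-out set
holdout_raw = predict_model(best, raw_score = True)
holdout_raw.head()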
The same function works for predicting labels on an unseen dataset. Let's create a copy of the original data and drop the Class variable column. We can then use the new data frame without labels for scoring.
# copy data and drop Class variable
new_data = data.copy()
new_data.drop('Class variable', axis=1, inplace=True)
new_data.head()
| | Number of times pregnant | Plasma glucose concentration a 2 hours in an oral glucose tolerance test | Diastolic blood pressure (mm Hg) | Triceps skin fold thickness (mm) | 2-Hour serum insulin (mu U/ml) | Body mass index (weight in kg/(height in m)^2) | Diabetes pedigree function | Age (years) |
---|---|---|---|---|---|---|---|---|
0 | 6 | 148 | 72 | 35 | 0 | 33.6 | 0.627 | 50 |
1 | 1 | 85 | 66 | 29 | 0 | 26.6 | 0.351 | 31 |
2 | 8 | 183 | 64 | 0 | 0 | 23.3 | 0.672 | 32 |
3 | 1 | 89 | 66 | 23 | 94 | 28.1 | 0.167 | 21 |
4 | 0 | 137 | 40 | 35 | 168 | 43.1 | 2.288 | 33 |
# predict model on new_data
predictions = predict_model(best, data = new_data)
predictions.head()
| | Number of times pregnant | Plasma glucose concentration a 2 hours in an oral glucose tolerance test | Diastolic blood pressure (mm Hg) | Triceps skin fold thickness (mm) | 2-Hour serum insulin (mu U/ml) | Body mass index (weight in kg/(height in m)^2) | Diabetes pedigree function | Age (years) | prediction_label | prediction_score |
---|---|---|---|---|---|---|---|---|---|---|
0 | 6 | 148 | 72 | 35 | 0 | 33.599998 | 0.627 | 50 | 1 | 0.6939 |
1 | 1 | 85 | 66 | 29 | 0 | 26.600000 | 0.351 | 31 | 0 | 0.9419 |
2 | 8 | 183 | 64 | 0 | 0 | 23.299999 | 0.672 | 32 | 1 | 0.7975 |
3 | 1 | 89 | 66 | 23 | 94 | 28.100000 | 0.167 | 21 | 0 | 0.9453 |
4 | 0 | 137 | 40 | 35 | 168 | 43.099998 | 2.288 | 33 | 1 | 0.8393 |
Finally, you can save the entire pipeline on disk for later use, using PyCaret's save_model function.
# save pipeline
save_model(best, 'my_first_pipeline')
Transformation Pipeline and Model Successfully Saved
(Pipeline(memory=FastMemory(location=C:\Users\owner\AppData\Local\Temp\joblib), steps=[('clean_column_names', TransformerWrapper(exclude=None, include=None, transformer=CleanColumnNames(match='[\\]\\[\\,\\{\\}\\"\\:]+'))), ('numerical_imputer', TransformerWrapper(exclude=None, include=['Number of times pregnant', 'Plasma glucose concentration a 2 ' 'hours in an oral glu... fill_value=None, missing_values=nan, strategy='most_frequent', verbose='deprecated'))), ('trained_model', LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, l1_ratio=None, max_iter=1000, multi_class='auto', n_jobs=None, penalty='l2', random_state=123, solver='lbfgs', tol=0.0001, verbose=0, warm_start=False))], verbose=False), 'my_first_pipeline.pkl')
# load pipeline
loaded_best_pipeline = load_model('my_first_pipeline')
loaded_best_pipeline
Transformation Pipeline and Model Successfully Loaded
Pipeline(memory=FastMemory(location=C:\Users\owner\AppData\Local\Temp\joblib), steps=[('clean_column_names', TransformerWrapper(exclude=None, include=None, transformer=CleanColumnNames(match='[\\]\\[\\,\\{\\}\\"\\:]+'))), ('numerical_imputer', TransformerWrapper(exclude=None, include=['Number of times pregnant', 'Plasma glucose concentration a 2 ' 'hours in an oral glu... fill_value=None, missing_values=nan, strategy='most_frequent', verbose='deprecated'))), ('trained_model', LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, l1_ratio=None, max_iter=1000, multi_class='auto', n_jobs=None, penalty='l2', random_state=123, solver='lbfgs', tol=0.0001, verbose=0, warm_start=False))], verbose=False)
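The loaded pipeline works like any trained model. For example, you can score the unlabeled new_data frame created earlier with it (a short sketch, not part of the original notebook):
# use the loaded pipeline for inference on unseen data
loaded_predictions = predict_model(loaded_best_pipeline, data = new_data)
loaded_predictions.head()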
This function initializes the experiment in PyCaret and creates the transformation pipeline based on all the parameters passed to it. The setup function must be called before executing any other function. It takes two required parameters: data and target. All the other parameters are optional and are used to configure the data preprocessing pipeline.
# init setup function
s = setup(data, target = 'Class variable', session_id = 123)
| | Description | Value |
---|---|---|
0 | Session id | 123 |
1 | Target | Class variable |
2 | Target type | Binary |
3 | Original data shape | (768, 9) |
4 | Transformed data shape | (768, 9) |
5 | Transformed train set shape | (537, 9) |
6 | Transformed test set shape | (231, 9) |
7 | Numeric features | 8 |
8 | Preprocess | True |
9 | Imputation type | simple |
10 | Numeric imputation | mean |
11 | Categorical imputation | mode |
12 | Fold Generator | StratifiedKFold |
13 | Fold Number | 10 |
14 | CPU Jobs | -1 |
15 | Use GPU | False |
16 | Log Experiment | False |
17 | Experiment Name | clf-default-name |
18 | USI | 038a |
To access all the variables created by the setup function, such as the transformed dataset, random_state, etc., you can use the get_config method.
# check all available config
get_config()
{'USI', 'X', 'X_test', 'X_test_transformed', 'X_train', 'X_train_transformed', 'X_transformed', '_available_plots', '_ml_usecase', 'data', 'dataset', 'dataset_transformed', 'exp_id', 'exp_name_log', 'fix_imbalance', 'fold_generator', 'fold_groups_param', 'fold_shuffle_param', 'gpu_n_jobs_param', 'gpu_param', 'html_param', 'idx', 'is_multiclass', 'log_plots_param', 'logging_param', 'memory', 'n_jobs_param', 'pipeline', 'seed', 'target_param', 'test', 'test_transformed', 'train', 'train_transformed', 'variable_and_property_keys', 'variables', 'y', 'y_test', 'y_test_transformed', 'y_train', 'y_train_transformed', 'y_transformed'}
# lets access X_train_transformed
get_config('X_train_transformed')
| | Number of times pregnant | Plasma glucose concentration a 2 hours in an oral glucose tolerance test | Diastolic blood pressure (mm Hg) | Triceps skin fold thickness (mm) | 2-Hour serum insulin (mu U/ml) | Body mass index (weight in kg/(height in m)^2) | Diabetes pedigree function | Age (years) |
---|---|---|---|---|---|---|---|---|
0 | 13.0 | 152.0 | 90.0 | 33.0 | 29.0 | 26.799999 | 0.731 | 43.0 |
1 | 0.0 | 104.0 | 64.0 | 37.0 | 64.0 | 33.599998 | 0.510 | 22.0 |
2 | 5.0 | 137.0 | 108.0 | 0.0 | 0.0 | 48.799999 | 0.227 | 37.0 |
3 | 0.0 | 111.0 | 65.0 | 0.0 | 0.0 | 24.600000 | 0.660 | 31.0 |
4 | 6.0 | 105.0 | 70.0 | 32.0 | 68.0 | 30.799999 | 0.122 | 37.0 |
... | ... | ... | ... | ... | ... | ... | ... | ... |
532 | 10.0 | 179.0 | 70.0 | 0.0 | 0.0 | 35.099998 | 0.200 | 37.0 |
533 | 0.0 | 100.0 | 88.0 | 60.0 | 110.0 | 46.799999 | 0.962 | 31.0 |
534 | 1.0 | 89.0 | 76.0 | 34.0 | 37.0 | 31.200001 | 0.192 | 23.0 |
535 | 1.0 | 121.0 | 78.0 | 39.0 | 74.0 | 39.000000 | 0.261 | 28.0 |
536 | 0.0 | 140.0 | 65.0 | 26.0 | 130.0 | 42.599998 | 0.431 | 24.0 |
537 rows × 8 columns
# another example: let's access seed
print("The current seed is: {}".format(get_config('seed')))
# now lets change it using set_config
set_config('seed', 786)
print("The new seed is: {}".format(get_config('seed')))
The current seed is: 123
The new seed is: 786
All the preprocessing configurations and experiment settings/parameters are passed into the setup function. To see all available parameters, check the docstring:
# help(setup)
# init setup with normalize = True
s = setup(data, target = 'Class variable', session_id = 123,
normalize = True, normalize_method = 'minmax')
| | Description | Value |
---|---|---|
0 | Session id | 123 |
1 | Target | Class variable |
2 | Target type | Binary |
3 | Original data shape | (768, 9) |
4 | Transformed data shape | (768, 9) |
5 | Transformed train set shape | (537, 9) |
6 | Transformed test set shape | (231, 9) |
7 | Numeric features | 8 |
8 | Preprocess | True |
9 | Imputation type | simple |
10 | Numeric imputation | mean |
11 | Categorical imputation | mode |
12 | Normalize | True |
13 | Normalize method | minmax |
14 | Fold Generator | StratifiedKFold |
15 | Fold Number | 10 |
16 | CPU Jobs | -1 |
17 | Use GPU | False |
18 | Log Experiment | False |
19 | Experiment Name | clf-default-name |
20 | USI | f18d |
# lets check the X_train_transformed to see effect of params passed
get_config('X_train_transformed')['Number of times pregnant'].hist()
<AxesSubplot:>
Notice that all the values are between 0 and 1 - that is because we passed normalize=True in the setup function. If you don't remember how it compares to the actual data, no problem - we can also access the non-transformed values using get_config and then compare. See below, and notice the range of values on the x-axis compared with the histogram above.
get_config('X_train')['Number of times pregnant'].hist()
<AxesSubplot:>
This function trains and evaluates the performance of all estimators available in the model library using cross-validation. The output of this function is a scoring grid with average cross-validated scores. Metrics evaluated during CV can be accessed using the get_metrics function. Custom metrics can be added or removed using the add_metric and remove_metric functions.
best = compare_models()
| | Model | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC | TT (Sec) |
---|---|---|---|---|---|---|---|---|---|
ridge | Ridge Classifier | 0.7708 | 0.0000 | 0.5392 | 0.7353 | 0.6203 | 0.4618 | 0.4744 | 0.0340 |
lr | Logistic Regression | 0.7689 | 0.8068 | 0.4959 | 0.7614 | 0.5968 | 0.4453 | 0.4673 | 0.0360 |
lda | Linear Discriminant Analysis | 0.7670 | 0.8055 | 0.5550 | 0.7202 | 0.6243 | 0.4594 | 0.4695 | 0.0340 |
svm | SVM - Linear Kernel | 0.7521 | 0.0000 | 0.5070 | 0.7363 | 0.5796 | 0.4154 | 0.4398 | 0.0340 |
rf | Random Forest Classifier | 0.7485 | 0.7917 | 0.5336 | 0.6784 | 0.5946 | 0.4164 | 0.4245 | 0.1340 |
nb | Naive Bayes | 0.7427 | 0.7957 | 0.5702 | 0.6543 | 0.6043 | 0.4156 | 0.4215 | 0.0390 |
catboost | CatBoost Classifier | 0.7410 | 0.7994 | 0.5278 | 0.6630 | 0.5851 | 0.4005 | 0.4078 | 0.0430 |
gbc | Gradient Boosting Classifier | 0.7373 | 0.7920 | 0.5550 | 0.6445 | 0.5931 | 0.4013 | 0.4059 | 0.0730 |
ada | Ada Boost Classifier | 0.7372 | 0.7799 | 0.5275 | 0.6585 | 0.5796 | 0.3926 | 0.4017 | 0.0690 |
et | Extra Trees Classifier | 0.7299 | 0.7788 | 0.4965 | 0.6516 | 0.5596 | 0.3706 | 0.3802 | 0.1330 |
qda | Quadratic Discriminant Analysis | 0.7282 | 0.7894 | 0.5281 | 0.6558 | 0.5736 | 0.3785 | 0.3910 | 0.0360 |
lightgbm | Light Gradient Boosting Machine | 0.7113 | 0.7653 | 0.5181 | 0.6036 | 0.5533 | 0.3427 | 0.3479 | 0.0480 |
knn | K Neighbors Classifier | 0.7002 | 0.7433 | 0.4860 | 0.5965 | 0.5311 | 0.3142 | 0.3210 | 0.0570 |
dt | Decision Tree Classifier | 0.6947 | 0.6526 | 0.5137 | 0.5665 | 0.5343 | 0.3103 | 0.3130 | 0.0380 |
xgboost | Extreme Gradient Boosting | 0.6853 | 0.7522 | 0.4912 | 0.5620 | 0.5216 | 0.2887 | 0.2922 | 0.0390 |
dummy | Dummy Classifier | 0.6518 | 0.5000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0380 |
compare_models by default uses all the estimators in the model library (all except models with Turbo=False). To see all the available models you can use the models() function:
# check available models
models()
ID | Name | Reference | Turbo |
---|---|---|---
lr | Logistic Regression | sklearn.linear_model._logistic.LogisticRegression | True |
knn | K Neighbors Classifier | sklearn.neighbors._classification.KNeighborsCl... | True |
nb | Naive Bayes | sklearn.naive_bayes.GaussianNB | True |
dt | Decision Tree Classifier | sklearn.tree._classes.DecisionTreeClassifier | True |
svm | SVM - Linear Kernel | sklearn.linear_model._stochastic_gradient.SGDC... | True |
rbfsvm | SVM - Radial Kernel | sklearn.svm._classes.SVC | False |
gpc | Gaussian Process Classifier | sklearn.gaussian_process._gpc.GaussianProcessC... | False |
mlp | MLP Classifier | sklearn.neural_network._multilayer_perceptron.... | False |
ridge | Ridge Classifier | sklearn.linear_model._ridge.RidgeClassifier | True |
rf | Random Forest Classifier | sklearn.ensemble._forest.RandomForestClassifier | True |
qda | Quadratic Discriminant Analysis | sklearn.discriminant_analysis.QuadraticDiscrim... | True |
ada | Ada Boost Classifier | sklearn.ensemble._weight_boosting.AdaBoostClas... | True |
gbc | Gradient Boosting Classifier | sklearn.ensemble._gb.GradientBoostingClassifier | True |
lda | Linear Discriminant Analysis | sklearn.discriminant_analysis.LinearDiscrimina... | True |
et | Extra Trees Classifier | sklearn.ensemble._forest.ExtraTreesClassifier | True |
xgboost | Extreme Gradient Boosting | xgboost.sklearn.XGBClassifier | True |
lightgbm | Light Gradient Boosting Machine | lightgbm.sklearn.LGBMClassifier | True |
catboost | CatBoost Classifier | catboost.core.CatBoostClassifier | True |
dummy | Dummy Classifier | sklearn.dummy.DummyClassifier | True |
You can use the include and exclude parameters in compare_models to train only selected models, or to exclude specific models from training, by passing the model IDs.
compare_tree_models = compare_models(include = ['dt', 'rf', 'et', 'gbc', 'xgboost', 'lightgbm', 'catboost'])
| | Model | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC | TT (Sec) |
---|---|---|---|---|---|---|---|---|---|
rf | Random Forest Classifier | 0.7485 | 0.7917 | 0.5336 | 0.6784 | 0.5946 | 0.4164 | 0.4245 | 0.1200 |
catboost | CatBoost Classifier | 0.7410 | 0.7994 | 0.5278 | 0.6630 | 0.5851 | 0.4005 | 0.4078 | 0.0410 |
gbc | Gradient Boosting Classifier | 0.7373 | 0.7920 | 0.5550 | 0.6445 | 0.5931 | 0.4013 | 0.4059 | 0.0780 |
et | Extra Trees Classifier | 0.7299 | 0.7788 | 0.4965 | 0.6516 | 0.5596 | 0.3706 | 0.3802 | 0.1300 |
lightgbm | Light Gradient Boosting Machine | 0.7113 | 0.7653 | 0.5181 | 0.6036 | 0.5533 | 0.3427 | 0.3479 | 0.0460 |
dt | Decision Tree Classifier | 0.6947 | 0.6526 | 0.5137 | 0.5665 | 0.5343 | 0.3103 | 0.3130 | 0.0360 |
xgboost | Extreme Gradient Boosting | 0.6853 | 0.7522 | 0.4912 | 0.5620 | 0.5216 | 0.2887 | 0.2922 | 0.0420 |
compare_tree_models
RandomForestClassifier(bootstrap=True, ccp_alpha=0.0, class_weight=None, criterion='gini', max_depth=None, max_features='sqrt', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=-1, oob_score=False, random_state=123, verbose=0, warm_start=False)
The function above returns the trained model object as its output. The scoring grid is only displayed and not returned. If you need access to the scoring grid, you can use the pull function to access it as a dataframe.
compare_tree_models_results = pull()
compare_tree_models_results
| | Model | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC | TT (Sec) |
---|---|---|---|---|---|---|---|---|---|
rf | Random Forest Classifier | 0.7485 | 0.7917 | 0.5336 | 0.6784 | 0.5946 | 0.4164 | 0.4245 | 0.120 |
catboost | CatBoost Classifier | 0.7410 | 0.7994 | 0.5278 | 0.6630 | 0.5851 | 0.4005 | 0.4078 | 0.041 |
gbc | Gradient Boosting Classifier | 0.7373 | 0.7920 | 0.5550 | 0.6445 | 0.5931 | 0.4013 | 0.4059 | 0.078 |
et | Extra Trees Classifier | 0.7299 | 0.7788 | 0.4965 | 0.6516 | 0.5596 | 0.3706 | 0.3802 | 0.130 |
lightgbm | Light Gradient Boosting Machine | 0.7113 | 0.7653 | 0.5181 | 0.6036 | 0.5533 | 0.3427 | 0.3479 | 0.046 |
dt | Decision Tree Classifier | 0.6947 | 0.6526 | 0.5137 | 0.5665 | 0.5343 | 0.3103 | 0.3130 | 0.036 |
xgboost | Extreme Gradient Boosting | 0.6853 | 0.7522 | 0.4912 | 0.5620 | 0.5216 | 0.2887 | 0.2922 | 0.042 |
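Conversely, the exclude parameter drops specific models from the comparison (a short sketch mirroring the include example above):
# compare all models except the boosting ones
compare_no_boosting = compare_models(exclude = ['xgboost', 'lightgbm', 'catboost'])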
By default, compare_models returns the single best-performing model based on the metric defined in the sort parameter. Let's change our code to return the top 3 models based on Recall.
best_recall_models_top3 = compare_models(sort = 'Recall', n_select = 3)
| | Model | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC | TT (Sec) |
---|---|---|---|---|---|---|---|---|---|
nb | Naive Bayes | 0.7427 | 0.7957 | 0.5702 | 0.6543 | 0.6043 | 0.4156 | 0.4215 | 0.0430 |
gbc | Gradient Boosting Classifier | 0.7373 | 0.7920 | 0.5550 | 0.6445 | 0.5931 | 0.4013 | 0.4059 | 0.0710 |
lda | Linear Discriminant Analysis | 0.7670 | 0.8055 | 0.5550 | 0.7202 | 0.6243 | 0.4594 | 0.4695 | 0.0330 |
ridge | Ridge Classifier | 0.7708 | 0.0000 | 0.5392 | 0.7353 | 0.6203 | 0.4618 | 0.4744 | 0.0340 |
rf | Random Forest Classifier | 0.7485 | 0.7917 | 0.5336 | 0.6784 | 0.5946 | 0.4164 | 0.4245 | 0.1190 |
qda | Quadratic Discriminant Analysis | 0.7282 | 0.7894 | 0.5281 | 0.6558 | 0.5736 | 0.3785 | 0.3910 | 0.0370 |
catboost | CatBoost Classifier | 0.7410 | 0.7994 | 0.5278 | 0.6630 | 0.5851 | 0.4005 | 0.4078 | 0.0400 |
ada | Ada Boost Classifier | 0.7372 | 0.7799 | 0.5275 | 0.6585 | 0.5796 | 0.3926 | 0.4017 | 0.0670 |
lightgbm | Light Gradient Boosting Machine | 0.7113 | 0.7653 | 0.5181 | 0.6036 | 0.5533 | 0.3427 | 0.3479 | 0.0450 |
dt | Decision Tree Classifier | 0.6947 | 0.6526 | 0.5137 | 0.5665 | 0.5343 | 0.3103 | 0.3130 | 0.0340 |
svm | SVM - Linear Kernel | 0.7521 | 0.0000 | 0.5070 | 0.7363 | 0.5796 | 0.4154 | 0.4398 | 0.0340 |
et | Extra Trees Classifier | 0.7299 | 0.7788 | 0.4965 | 0.6516 | 0.5596 | 0.3706 | 0.3802 | 0.1290 |
lr | Logistic Regression | 0.7689 | 0.8068 | 0.4959 | 0.7614 | 0.5968 | 0.4453 | 0.4673 | 0.0410 |
xgboost | Extreme Gradient Boosting | 0.6853 | 0.7522 | 0.4912 | 0.5620 | 0.5216 | 0.2887 | 0.2922 | 0.0390 |
knn | K Neighbors Classifier | 0.7002 | 0.7433 | 0.4860 | 0.5965 | 0.5311 | 0.3142 | 0.3210 | 0.0570 |
dummy | Dummy Classifier | 0.6518 | 0.5000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0550 |
# list of top 3 models by Recall
best_recall_models_top3
[GaussianNB(priors=None, var_smoothing=1e-09), GradientBoostingClassifier(ccp_alpha=0.0, criterion='friedman_mse', init=None, learning_rate=0.1, loss='log_loss', max_depth=3, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_iter_no_change=None, random_state=123, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False), LinearDiscriminantAnalysis(covariance_estimator=None, n_components=None, priors=None, shrinkage=None, solver='svd', store_covariance=False, tol=0.0001)]
There are some other parameters in compare_models that you might find very useful. You can check the docstring of the function for more info.
# help(compare_models)
# check available metrics used in CV
get_metrics()
ID | Name | Display Name | Score Function | Scorer | Target | Args | Greater is Better | Multiclass | Custom |
---|---|---|---|---|---|---|---|---|---
acc | Accuracy | Accuracy | <function accuracy_score at 0x000002E242711280> | accuracy | pred | {} | True | True | False |
auc | AUC | AUC | <function roc_auc_score at 0x000002E24270B0D0> | make_scorer(roc_auc_score, needs_proba=True, e... | pred_proba | {'average': 'weighted', 'multi_class': 'ovr'} | True | True | False |
recall | Recall | Recall | <pycaret.internal.metrics.BinaryMulticlassScor... | make_scorer(recall_score, average=weighted) | pred | {'average': 'weighted'} | True | True | False |
precision | Precision | Prec. | <pycaret.internal.metrics.BinaryMulticlassScor... | make_scorer(precision_score, average=weighted) | pred | {'average': 'weighted'} | True | True | False |
f1 | F1 | F1 | <pycaret.internal.metrics.BinaryMulticlassScor... | make_scorer(f1_score, average=weighted) | pred | {'average': 'weighted'} | True | True | False |
kappa | Kappa | Kappa | <function cohen_kappa_score at 0x000002E242711... | make_scorer(cohen_kappa_score) | pred | {} | True | True | False |
mcc | MCC | MCC | <function matthews_corrcoef at 0x000002E242711... | make_scorer(matthews_corrcoef) | pred | {} | True | True | False |
# create a custom function
import numpy as np

# a simple profit-style metric: +100 for every true positive, -5 for every false positive
def custom_metric(y, y_pred):
    tp = np.where((y_pred==1) & (y==1), 100, 0)
    fp = np.where((y_pred==1) & (y==0), -5, 0)
    return np.sum([tp, fp])
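Before registering the metric, you can sanity-check it on toy labels: two true positives and one false positive should score 2*100 - 5 = 195 (an illustrative check, not part of the original notebook).
# quick sanity check of custom_metric on toy labels
y_true = np.array([1, 1, 0, 0])
y_hat = np.array([1, 1, 1, 0])
print(custom_metric(y_true, y_hat))  # 195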
# add metric to PyCaret
add_metric('custom_metric', 'Custom Metric', custom_metric)
Name                                                 Custom Metric
Display Name                                         Custom Metric
Score Function      <function custom_metric at 0x000002E24B0EA430>
Scorer                                  make_scorer(custom_metric)
Target                                                        pred
Args                                                            {}
Greater is Better                                             True
Multiclass                                                    True
Custom                                                        True
Name: custom_metric, dtype: object
# now let's run compare_models again
compare_models()
| | Model | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC | Custom Metric | TT (Sec) |
---|---|---|---|---|---|---|---|---|---|---|
ridge | Ridge Classifier | 0.7708 | 0.0000 | 0.5392 | 0.7353 | 0.6203 | 0.4618 | 0.4744 | 991.5000 | 0.0340 |
lr | Logistic Regression | 0.7689 | 0.8068 | 0.4959 | 0.7614 | 0.5968 | 0.4453 | 0.4673 | 915.0000 | 0.0390 |
lda | Linear Discriminant Analysis | 0.7670 | 0.8055 | 0.5550 | 0.7202 | 0.6243 | 0.4594 | 0.4695 | 1019.0000 | 0.0470 |
svm | SVM - Linear Kernel | 0.7521 | 0.0000 | 0.5070 | 0.7363 | 0.5796 | 0.4154 | 0.4398 | 929.5000 | 0.0330 |
rf | Random Forest Classifier | 0.7485 | 0.7917 | 0.5336 | 0.6784 | 0.5946 | 0.4164 | 0.4245 | 976.0000 | 0.1230 |
nb | Naive Bayes | 0.7427 | 0.7957 | 0.5702 | 0.6543 | 0.6043 | 0.4156 | 0.4215 | 1041.0000 | 0.0410 |
catboost | CatBoost Classifier | 0.7410 | 0.7994 | 0.5278 | 0.6630 | 0.5851 | 0.4005 | 0.4078 | 964.5000 | 0.0400 |
gbc | Gradient Boosting Classifier | 0.7373 | 0.7920 | 0.5550 | 0.6445 | 0.5931 | 0.4013 | 0.4059 | 1011.0000 | 0.0720 |
ada | Ada Boost Classifier | 0.7372 | 0.7799 | 0.5275 | 0.6585 | 0.5796 | 0.3926 | 0.4017 | 963.5000 | 0.0690 |
et | Extra Trees Classifier | 0.7299 | 0.7788 | 0.4965 | 0.6516 | 0.5596 | 0.3706 | 0.3802 | 904.5000 | 0.1370 |
qda | Quadratic Discriminant Analysis | 0.7282 | 0.7894 | 0.5281 | 0.6558 | 0.5736 | 0.3785 | 0.3910 | 961.0000 | 0.0360 |
lightgbm | Light Gradient Boosting Machine | 0.7113 | 0.7653 | 0.5181 | 0.6036 | 0.5533 | 0.3427 | 0.3479 | 937.5000 | 0.0450 |
knn | K Neighbors Classifier | 0.7002 | 0.7433 | 0.4860 | 0.5965 | 0.5311 | 0.3142 | 0.3210 | 877.5000 | 0.0530 |
dt | Decision Tree Classifier | 0.6947 | 0.6526 | 0.5137 | 0.5665 | 0.5343 | 0.3103 | 0.3130 | 923.5000 | 0.0330 |
xgboost | Extreme Gradient Boosting | 0.6853 | 0.7522 | 0.4912 | 0.5620 | 0.5216 | 0.2887 | 0.2922 | 883.0000 | 0.0400 |
dummy | Dummy Classifier | 0.6518 | 0.5000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0410 |
RidgeClassifier(alpha=1.0, class_weight=None, copy_X=True, fit_intercept=True, max_iter=None, normalize='deprecated', positive=False, random_state=123, solver='auto', tol=0.001)
# remove custom metric
remove_metric('custom_metric')
PyCaret integrates with many different types of experiment loggers (default = 'mlflow'). To turn on experiment tracking in PyCaret, you can set the log_experiment and experiment_name parameters. It will automatically track all the metrics, hyperparameters, and artifacts based on the defined logger.
# from pycaret.classification import *
# s = setup(data, target = 'Class variable', log_experiment='mlflow', experiment_name='diabetes_experiment')
# compare models
# best = compare_models()
# start mlflow server on localhost:5000
# !mlflow ui
By default PyCaret uses the MLflow logger, which can be changed using the log_experiment parameter. The following loggers are available (see the sketch after the list):
- mlflow
- wandb
- comet_ml
- dagshub
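For example, switching the logger only requires changing the log_experiment value (a sketch, assuming the wandb package is installed and you are logged in):
# s = setup(data, target = 'Class variable', log_experiment = 'wandb', experiment_name = 'diabetes_experiment')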
There are other logging-related parameters that you may find useful. For more information, check out the docstring of the setup function.
# help(setup)
This function trains and evaluates the performance of a given estimator using cross-validation. The output of this function is a scoring grid with CV scores by fold. Metrics evaluated during CV can be accessed using the get_metrics function. Custom metrics can be added or removed using the add_metric and remove_metric functions. All the available models can be accessed using the models function.
# check all the available models
models()
ID | Name | Reference | Turbo |
---|---|---|---
lr | Logistic Regression | sklearn.linear_model._logistic.LogisticRegression | True |
knn | K Neighbors Classifier | sklearn.neighbors._classification.KNeighborsCl... | True |
nb | Naive Bayes | sklearn.naive_bayes.GaussianNB | True |
dt | Decision Tree Classifier | sklearn.tree._classes.DecisionTreeClassifier | True |
svm | SVM - Linear Kernel | sklearn.linear_model._stochastic_gradient.SGDC... | True |
rbfsvm | SVM - Radial Kernel | sklearn.svm._classes.SVC | False |
gpc | Gaussian Process Classifier | sklearn.gaussian_process._gpc.GaussianProcessC... | False |
mlp | MLP Classifier | sklearn.neural_network._multilayer_perceptron.... | False |
ridge | Ridge Classifier | sklearn.linear_model._ridge.RidgeClassifier | True |
rf | Random Forest Classifier | sklearn.ensemble._forest.RandomForestClassifier | True |
qda | Quadratic Discriminant Analysis | sklearn.discriminant_analysis.QuadraticDiscrim... | True |
ada | Ada Boost Classifier | sklearn.ensemble._weight_boosting.AdaBoostClas... | True |
gbc | Gradient Boosting Classifier | sklearn.ensemble._gb.GradientBoostingClassifier | True |
lda | Linear Discriminant Analysis | sklearn.discriminant_analysis.LinearDiscrimina... | True |
et | Extra Trees Classifier | sklearn.ensemble._forest.ExtraTreesClassifier | True |
xgboost | Extreme Gradient Boosting | xgboost.sklearn.XGBClassifier | True |
lightgbm | Light Gradient Boosting Machine | lightgbm.sklearn.LGBMClassifier | True |
catboost | CatBoost Classifier | catboost.core.CatBoostClassifier | True |
dummy | Dummy Classifier | sklearn.dummy.DummyClassifier | True |
# train logistic regression with default fold=10
lr = create_model('lr')
Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
---|---|---|---|---|---|---|---
0 | 0.8148 | 0.9023 | 0.5789 | 0.8462 | 0.6875 | 0.5624 | 0.5828 |
1 | 0.8333 | 0.7970 | 0.6316 | 0.8571 | 0.7273 | 0.6112 | 0.6260 |
2 | 0.8519 | 0.9383 | 0.6316 | 0.9231 | 0.7500 | 0.6499 | 0.6736 |
3 | 0.7222 | 0.7759 | 0.4211 | 0.6667 | 0.5161 | 0.3350 | 0.3524 |
4 | 0.8333 | 0.9083 | 0.5789 | 0.9167 | 0.7097 | 0.6010 | 0.6322 |
5 | 0.6852 | 0.6737 | 0.4211 | 0.5714 | 0.4848 | 0.2656 | 0.2720 |
6 | 0.7222 | 0.7820 | 0.4737 | 0.6429 | 0.5455 | 0.3520 | 0.3605 |
7 | 0.7547 | 0.8460 | 0.3333 | 0.8571 | 0.4800 | 0.3579 | 0.4263 |
8 | 0.7358 | 0.6952 | 0.4444 | 0.6667 | 0.5333 | 0.3592 | 0.3736 |
9 | 0.7358 | 0.7492 | 0.4444 | 0.6667 | 0.5333 | 0.3592 | 0.3736 |
Mean | 0.7689 | 0.8068 | 0.4959 | 0.7614 | 0.5968 | 0.4453 | 0.4673 |
Std | 0.0557 | 0.0857 | 0.0970 | 0.1236 | 0.1024 | 0.1353 | 0.1379 |
The function above returns the trained model object as its output. The scoring grid is only displayed and not returned. If you need access to the scoring grid, you can use the pull function to access it as a dataframe.
lr_results = pull()
print(type(lr_results))
lr_results
<class 'pandas.core.frame.DataFrame'>
Fold | Accuracy | AUC | Recall | Prec. | F1 | Kappa | MCC |
---|---|---|---|---|---|---|---
0 | 0.8148 | 0.9023 | 0.5789 | 0.8462 | 0.6875 | 0.5624 | 0.5828 |
1 | 0.8333 | 0.7970 | 0.6316 | 0.8571 | 0.7273 | 0.6112 | 0.6260 |
2 | 0.8519 | 0.9383 | 0.6316 | 0.9231 | 0.7500 | 0.6499 | 0.6736 |
3 | 0.7222 | 0.7759 | 0.4211 | 0.6667 | 0.5161 | 0.3350 | 0.3524 |
4 | 0.8333 | 0.9083 | 0.5789 | 0.9167 | 0.7097 | 0.6010 | 0.6322 |
5 | 0.6852 | 0.6737 | 0.4211 | 0.5714 | 0.4848 | 0.2656 | 0.2720 |
6 | 0.7222 | 0.7820 | 0.4737 | 0.6429 | 0.5455 | 0.3520 | 0.3605 |
7 | 0.7547 | 0.8460 | 0.3333 | 0.8571 | 0.4800 | 0.3579 | 0.4263 |
8 | 0.7358 | 0.6952 | 0.4444 | 0.6667 | 0.5333 | 0.3592 | 0.3736 |
9 | 0.7358 | 0.7492 | 0.4444 | 0.6667 | 0.5333 | 0.3592 | 0.3736 |
Mean | 0.7689 | 0.8068 | 0.4959 | 0.7614 | 0.5968 | 0.4453 | 0.4673 |
Std | 0.0557 | 0.0857 | 0.0970 | 0.1236 | 0.1024 | 0.1353 | 0.1379 |
# train logistic regression with fold=3
lr = create_model('lr', fold=3)
Accuracy | AUC |
---|