When AI models contribute to high-impact decisions, such as whether or not someone gets a loan, we want them to be fair. Unfortunately, in current practice, AI models are often optimized primarily for accuracy, with little consideration for fairness. This blog post gives a hands-on example of how AI Automation can help build AI models that are both accurate and fair. It is written for data scientists who have some familiarity with Python. No prior knowledge of AI Automation or AI Fairness is required; we will introduce the relevant concepts as we get to them.
Bias in data leads to bias in models. AI models are increasingly consulted for consequential decisions about people, in domains including credit loans, hiring and retention, criminal justice, medicine, and more. Often, the model is trained from past decisions made by humans. If the decisions used for training were discriminatory, then your trained model will be too, unless you are careful. Being careful about bias is part of your responsibility as a data scientist. Fortunately, you do not have to grapple with this issue alone. You can consult others about ethics. You can also ask yourself how your AI model may affect your (or your institution's) reputation. And ultimately, you must follow applicable laws and regulations.
AI Fairness can be measured via several metrics, and you need to select the appropriate metrics based on the circumstances. For illustration purposes, this blog post uses one particular fairness metric called disparate impact. Disparate impact is defined as the ratio of the rate of favorable outcome for the unprivileged group to that of the privileged group. To make this definition more concrete, consider the case where a favorable outcome means getting a loan, the unprivileged group is women, and the privileged group is men. Then if your AI model were to let women get a loan in 30% of the cases and men in 60% of the cases, the disparate impact would be 30% / 60% = 0.5, indicating a gender bias towards men. The ideal value for disparate impact is 1, and you could define fairness for this metric as a band around 1, e.g., from 0.8 to 1.2.
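To make the arithmetic tangible, here is a minimal sketch of the disparate impact computation; the outcome arrays are made up so as to reproduce the 30% and 60% rates from the example above.
import numpy as np
# Made-up favorable outcomes (1 = got the loan) for each group.
unprivileged = np.array([1, 0, 1, 0, 0, 0, 0, 1, 0, 0])  # 30% favorable
privileged = np.array([1, 1, 0, 1, 0, 1, 0, 1, 0, 1])    # 60% favorable
# Disparate impact: rate of favorable outcomes for the unprivileged group
# divided by the rate for the privileged group.
disparate_impact = unprivileged.mean() / privileged.mean()
print(disparate_impact)  # 0.3 / 0.6 = 0.5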
To get the best performance out of your AI model, you must experiment with its configuration. This means searching a high-dimensional space where some options are categorical, some are continuous, and some are even conditional. No configuration is optimal for all domains, let alone all metrics, and searching them all by hand is impossible. In fact, in a high-dimensional space, even exhaustively enumerating all the valid combinations soon becomes impractical. Fortunately, you can use tools to automate the search, thus making you more productive at finding good models quickly. These productivity and quality improvements compound when you have to repeat the search later, for example, after the data has changed.
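To make these kinds of search dimensions concrete, here is a minimal sketch using the hyperopt library (introduced below); the classifier names and hyperparameter ranges are purely illustrative, not the search space we will build later in this post.
from hyperopt import hp
# A categorical choice between two classifiers; each branch has its own
# hyperparameters, so 'C' and 'max_depth' are conditional on that choice.
illustrative_space = hp.choice('classifier', [
    {'algorithm': 'logistic_regression',
     'C': hp.loguniform('lr_C', -4, 4)},        # continuous
    {'algorithm': 'gradient_boosting',
     'max_depth': hp.randint('gb_depth', 10)},  # discrete, conditional
])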
AI Automation is a technology that assists data scientists in building AI models by automating some of the tedious steps. One AI automation technique is algorithm selection, which automatically chooses among alternative algorithms for a particular task. Another AI automation technique is hyperparameter tuning, which automatically configures the arguments of AI algorithms. You can use AI automation to optimize for a variety of metrics. This blog post shows you how to use AI automation to optimize for both accuracy and fairness as measured by disparate impact.
This blog post is generated from a Jupyter notebook that uses the following open-source Python libraries. AIF360 is a collection of fairness metrics and bias mitigation algorithms. The pandas, scikit-learn, and XGBoost libraries support data analysis and machine learning with data structures and a comprehensive collection of AI algorithms. The hyperopt library implements both algorithm selection and hyperparameter tuning for AI automation. And Lale is a library for semi-automated data science; this blog post uses Lale as the backbone for putting the other libraries together.
Our starting point is a dataset and a task. For illustration purposes, we picked credit-g, also known as the German Credit dataset. Each row describes a person using several features that may help evaluate them as a potential loan applicant. The task is to classify people into either good or bad credit risks. We will use AIF360 to load the dataset.
import aif360.datasets
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
creditg = aif360.datasets.GermanDataset()
print(f'labels: {creditg.label_names}, '
      f'protected attributes: {creditg.protected_attribute_names}')
labels: ['credit'], protected attributes: ['sex', 'age']
AIF360 datasets carry some fairness-related metadata. The credit-g dataset has a single label, credit, to be predicted as the outcome. A protected attribute is a feature that partitions the population into groups whose outcome should have parity. The credit-g dataset has two protected attributes, sex and age.
Before we look at how to train a classifier that is optimized for both accuracy and disparate impact, we will set a baseline, by training a classifier that is only optimized for accuracy. For a realistic assessment of how well the model generalizes, we first split the data into a training set and a test set. Then, to be able to use algorithms from the popular scikit-learn library, we convert the data from the AIF360 representation to a Pandas representation.
from lale.lib.aif360 import dataset_to_pandas
train_ds, test_ds = creditg.split([0.7], shuffle=True, seed=42)
train_X, train_y = dataset_to_pandas(train_ds)
test_X, test_y = dataset_to_pandas(test_ds)
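As a quick sanity check of the converted data, we can look at how the protected attribute sex is distributed in the training set (the exact counts will depend on the random split):
# 1.0 encodes the privileged group (male), 0.0 the unprivileged group (female).
print(train_X['sex'].value_counts())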
Next, we will import a few algorithms from scikit-learn and XGBoost: a dimensionality reduction transformer (PCA) and three classifiers (logistic regression, gradient boosting, and a support vector machine).
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression as LR
from xgboost import XGBClassifier as XGBoost
from sklearn.svm import LinearSVC as SVM
To use AI Automation, we need to define a search space, which is a set of possible machine learning pipelines and their associated hyperparameters. The following code uses Lale to define a search space.
from lale.lib.lale import NoOp
import lale
lale.wrap_imported_operators()
planned_orig = (PCA | NoOp) >> (LR | XGBoost | SVM)
planned_orig.visualize()
The call to wrap_imported_operators augments the algorithms that were imported from scikit-learn and XGBoost with metadata about their hyperparameters. The Lale combinator | indicates algorithmic choice. For example, (PCA | NoOp) indicates that it is up to the AI Automation to decide whether to apply a PCA transformer or a no-op transformer that leaves the data unchanged. Note that PCA itself is not configured with concrete hyperparameters; those are left for the AI automation to choose. Finally, the Lale combinator >> pipes the output of the transformer into the input of the classifier, which is itself a choice among (LR | XGBoost | SVM). The search space is encapsulated in the object planned_orig.
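Since wrap_imported_operators attached hyperparameter schemas to the imported operators, we can inspect them directly. For instance, the following illustrative query shows the schema for one of PCA's hyperparameters; the exact schema contents may vary across library versions.
# Inspect the metadata that Lale attached to one of PCA's hyperparameters.
PCA.hyperparam_schema('n_components')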
We will use hyperopt to select the algorithms and to tune their hyperparameters. Lale provides a Hyperopt operator that turns a search space such as the one specified above into an optimization problem for hyperopt. After 10 trials, we get back the model that performed best for the default optimization objective, which is accuracy.
from lale.lib.lale import Hyperopt
best_estimator = planned_orig.auto_configure(
    train_X, train_y, optimizer=Hyperopt, cv=3, max_evals=10)
best_estimator.visualize()
100%|███████| 10/10 [00:35<00:00, 3.67s/trial, best loss: -0.7371030654292458]
As shown by the visualization, the search found a pipeline that skips PCA in favor of the no-op transformer and uses an SVM classifier. Inspecting the hyperparameters reveals which values worked best across the 10 trials on the dataset at hand.
best_estimator.pretty_print(ipython_display=True, show_imports=False)
svm = SVM(C=22043.695919060654, dual=False, fit_intercept=False, penalty='l1', tol=0.002984103823071835)
pipeline = NoOp() >> svm
We can use the accuracy score metric from scikit-learn to measure how well the pipeline accomplishes the objective for which it was trained.
import sklearn.metrics
accuracy_scorer = sklearn.metrics.make_scorer(sklearn.metrics.accuracy_score)
print(f'accuracy {accuracy_scorer(best_estimator, test_X, test_y):.1%}')
accuracy 75.7%
The accuracy is close to the state of the art for this dataset. However, we would like our model to be not just accurate but also fair. As discussed before, we will use disparate impact as the fairness metric. To configure the metric, we need some fairness-related metadata.
import lale.lib.aif360
fairness_info = lale.lib.aif360.dataset_fairness_info(creditg)
fairness_info
{'favorable_label': 1.0, 'unfavorable_label': 2.0, 'protected_attribute_names': ['sex', 'age'], 'unprivileged_groups': [{'sex': 0.0, 'age': 0.0}], 'privileged_groups': [{'sex': 1.0, 'age': 1.0}]}
For illustrative purposes, we will only look at one of the protected attributes, sex, which in this dataset is encoded as 0 for female and 1 for male.
fairness_info['protected_attribute_names'] = ['sex']
fairness_info['unprivileged_groups'] = [{'sex': 0.0}]
fairness_info['privileged_groups'] = [{'sex': 1.0}]
We will use the disparate impact metric from AIF360, wrapped for compatibility with scikit-learn.
disparate_impact_scorer = lale.lib.aif360.disparate_impact(**fairness_info)
print(f'disparate impact {disparate_impact_scorer(best_estimator, test_X, test_y):.2f}')
disparate impact 0.72
The disparate impact for this model is 0.72, which is far from the ideal value of 1.0 for this metric. We would prefer a model that is much more fair. The AIF360 toolkit provides several algorithms for mitigating fairness problems. One of them is DisparateImpactRemover, which modifies the features that are not the protected attribute in such a way that it becomes hard to predict the protected attribute from them. We use a Lale version of DisparateImpactRemover that wraps the corresponding AIF360 algorithm for AI Automation. This algorithm has a hyperparameter repair_level that we will tune with hyperparameter optimization.
from lale.lib.aif360 import DisparateImpactRemover
DisparateImpactRemover.hyperparam_schema('repair_level')
{'description': 'Repair amount from 0 = none to 1 = full.', 'type': 'number', 'minimum': 0, 'maximum': 1, 'default': 1}
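Before letting the automation tune it, we can also run the mitigation step stand-alone. The following sketch applies the remover with an arbitrary, hand-picked repair level of 0.8; below, the optimizer will choose this value for us.
# Sketch: fit the remover with a fixed repair_level and transform the
# training features; 0.8 is an arbitrary illustration, not a recommendation.
dimr_fixed = DisparateImpactRemover(sensitive_attribute='sex', repair_level=0.8)
repaired_train_X = dimr_fixed.fit(train_X, train_y).transform(train_X)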
We compose the bias mitigation algorithm in a pipeline with a projection operator that strips out the protected attribute, followed by a choice of classifiers as before. In the visualization, light blue indicates trainable operators and dark blue indicates that automation must make a choice before the operators can be trained. Compared to the earlier pipeline, we omit the PCA, to make it easier for the estimator to disregard features that cause poor disparate impact.
from lale.lib.lale import Project
dimr = DisparateImpactRemover(sensitive_attribute='sex')
proj = Project(columns=[i for i, name in enumerate(creditg.feature_names)
                        if name != 'sex'])
planned_fairer = dimr >> proj >> (LR | XGBoost | SVM)
planned_fairer.visualize()
Unlike accuracy, which is a metric that can be computed from predicted labels alone, fairness metrics such as disparate impact need to look not just at labels but also at features. For instance, disparate impact is defined by comparing outcomes between a privileged group and an unprivileged group, so it needs to check the protected attribute to determine group membership for the sample at hand. To use Lale's Hyperopt in this setting, we have to define a custom scorer that takes disparate impact into account along with accuracy. Hyperopt minimizes (best_score - score_returned_by_the_scorer), where best_score is an argument to Hyperopt and score_returned_by_the_scorer is the value the scorer returns at each evaluation point. In the custom scorer defined below, if the disparate impact falls outside a margin of 10% around its ideal value of 1, the score is -99, a low value; otherwise, the score is the accuracy, where higher is better.
def combined_scorer(estimator, X, y):
    accuracy = accuracy_scorer(estimator, X, y)
    disparate_impact = disparate_impact_scorer(estimator, X, y)
    if disparate_impact < 0.9 or 1.1 < disparate_impact:
        return -99
    else:
        return accuracy
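As a quick sanity check, we can apply the combined scorer to the baseline model from before: its disparate impact of 0.72 on the test set falls outside the [0.9, 1.1] band, so the scorer returns the penalty value rather than the accuracy.
# The baseline model is too unfair, so this prints the penalty value -99.
print(combined_scorer(best_estimator, test_X, test_y))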
Now we have all the pieces in place to use AI Automation on our planned_fairer pipeline, optimizing for both accuracy and disparate impact.
import logging
logging.getLogger('lale.lib.lale.hyperopt').setLevel(logging.WARNING)
trained_fairer = planned_fairer.auto_configure(
    train_X, train_y, optimizer=Hyperopt, cv=3,
    max_evals=10, scoring=combined_scorer, best_score=1.0)
100%|████████| 10/10 [02:58<00:00, 17.01s/trial, best loss: 0.2600418179817322]
As with any trained model, we can evaluate and visualize the result.
print(f'accuracy {accuracy_scorer(trained_fairer, test_X, test_y):.1%}')
print(f'disparate impact {disparate_impact_scorer(trained_fairer, test_X, test_y):.2f}')
trained_fairer.visualize()
accuracy 76.0%
disparate impact 0.99
As the result demonstrates, the best model found by AI Automation has accuracy similar to, and disparate impact better than, the one we saw before. Also, the automation has tuned the repair level and has picked and tuned a classifier.
trained_fairer.pretty_print(ipython_display=True, show_imports=False)
disparate_impact_remover = DisparateImpactRemover(sensitive_attribute='sex', repair_level=0.34159574210263643)
project = Project(columns=[0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57])
xg_boost = XGBoost(colsample_bylevel=0.900172508121399, colsample_bytree=0.4645879165883061, learning_rate=0.3450371893786195, max_depth=10, min_child_weight=3, n_estimators=77, reg_alpha=0.9807177171317099, reg_lambda=0.5306129972505745, subsample=0.7957033383112613)
pipeline = disparate_impact_remover >> project >> xg_boost
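Since the trained pipeline is compatible with scikit-learn, we can use it like any other trained estimator, for example to predict labels for the held-out test set:
# Predict credit risk with the fairness-mitigated pipeline.
predictions = trained_fairer.predict(test_X)
print(predictions[:10])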
These results may vary by dataset and search space.
In summary, this blog post showed you how to use AI Automation from Lale, while incorporating a fairness mitigation technique into the pipeline and a fairness metric into the objective. Of course, this blog post only scratches the surface of what can be done with AI Automation and AI Fairness. We encourage you to check out the open-source projects Lale and AIF360 and use them to build your own fair and accurate models!
The following notebook showcases how Lale exposes a few more operators and metrics from AIF360: https://nbviewer.jupyter.org/github/IBM/lale/blob/master/examples/demo_aif360_more.ipynb