This tutorial illustrates the use of several methods in the AI Explainability 360 Toolkit to provide different kinds of explanations suited to different users in the context of a credit approval process enabled by machine learning. We use data from the FICO Explainable Machine Learning Challenge as described below. The three types of users (a.k.a. consumers) that we consider are a data scientist, who evaluates the machine learning model before deployment, a loan officer, who makes the final decision based on the model's output, and a bank customer, who wants to understand the reasons for their application result.
For the data scientist, we present two directly interpretable rule-based models that provide global understanding of their behavior. These models are produced by the Boolean Rule Column Generation (BRCG, class BooleanRuleCG
) and Logistic Rule Regression (LogRR, class LogisticRuleRegression
) algorithms in AIX360. The former yields very simple OR-of-ANDs classification rules while the latter gives weighted combinations of rules that are more accurate and still interpretable.
For the loan officer, we demonstrate a different way of explaining machine learning predictions by showing examples, specifically prototypes or representatives in the training data that are similar to a given loan applicant and receive the same class label. We use the ProtoDash method (class ProtodashExplainer
) to find these prototypes.
For the bank customer, we consider the Contrastive Explanations Method (CEM, class CEMExplainer
) for explaining the predictions of black box models to end users. CEM builds upon the popular approach of highlighting features present in the input instance that are responsible for the model's classification. In addition to these, CEM also identifies features that are (minimally) absent in the input instance, but whose presence would have altered the classification.
The tutorial is organized around these three types of consumers, following an introduction to the dataset.
The FICO HELOC dataset contains anonymized information about home equity line of credit (HELOC) applications made by real homeowners. A HELOC is a line of credit typically offered by a US bank as a percentage of home equity (the difference between the current market value of a home and the outstanding balance of all liens, e.g. mortgages). The customers in this dataset have requested a credit line in the range of USD 5,000 - 150,000. The machine learning task we are considering is to use the information about the applicant in their credit report to predict whether they will make timely payments over a two year period. The machine learning prediction can then be used to decide whether the homeowner qualifies for a line of credit and, if so, how much credit should be extended.
The HELOC dataset and more information about it, including instructions to download, can be found here.
The table below reproduces part of the data dictionary that comes with the HELOC dataset, explaining the predictor variables and target variable. For example, NumSatisfactoryTrades is a predictor variable that counts the number of the applicant's past credit agreements that resulted in on-time payments. The target variable to predict is a binary variable called RiskPerformance. The value “Bad” indicates that an applicant was 90 days past due or worse at least once over a period of 24 months from when the credit account was opened. The value “Good” indicates that they made their payments without ever being more than 90 days overdue. The relationship between a predictor variable and the target is indicated in the last column of the table. If a predictor variable is monotonically decreasing with respect to probability of bad = 1, it means that as the value of the variable increases, the probability of the loan application being "Bad" decreases, i.e. the application looks more "Good". For example, ExternalRiskEstimate and NumSatisfactoryTrades are shown as monotonically decreasing. Monotonically increasing has the opposite meaning.
Field | Meaning | Monotonicity Constraint (with respect to probability of bad = 1) |
---|---|---|
ExternalRiskEstimate | Consolidated version of risk markers | Monotonically Decreasing |
MSinceOldestTradeOpen | Months Since Oldest Trade Open | Monotonically Decreasing |
MSinceMostRecentTradeOpen | Months Since Most Recent Trade Open | Monotonically Decreasing |
AverageMInFile | Average Months in File | Monotonically Decreasing |
NumSatisfactoryTrades | Number Satisfactory Trades | Monotonically Decreasing |
NumTrades60Ever2DerogPubRec | Number Trades 60+ Ever | Monotonically Decreasing |
NumTrades90Ever2DerogPubRec | Number Trades 90+ Ever | Monotonically Decreasing |
PercentTradesNeverDelq | Percent Trades Never Delinquent | Monotonically Decreasing |
MSinceMostRecentDelq | Months Since Most Recent Delinquency | Monotonically Decreasing |
MaxDelq2PublicRecLast12M | Max Delq/Public Records Last 12 Months. See tab "MaxDelq" for each category | Values 0-7 are monotonically decreasing |
MaxDelqEver | Max Delinquency Ever. See tab "MaxDelq" for each category | Values 2-8 are monotonically decreasing |
NumTotalTrades | Number of Total Trades (total number of credit accounts) | No constraint |
NumTradesOpeninLast12M | Number of Trades Open in Last 12 Months | Monotonically Increasing |
PercentInstallTrades | Percent Installment Trades | No constraint |
MSinceMostRecentInqexcl7days | Months Since Most Recent Inq excl 7days | Monotonically Decreasing |
NumInqLast6M | Number of Inq Last 6 Months | Monotonically Increasing |
NumInqLast6Mexcl7days | Number of Inq Last 6 Months excl 7days. Excluding the last 7 days removes inquiries that are likely due to price comparison shopping. | Monotonically Increasing
NetFractionRevolvingBurden | Net Fraction Revolving Burden. This is revolving balance divided by credit limit | Monotonically Increasing |
NetFractionInstallBurden | Net Fraction Installment Burden. This is installment balance divided by original loan amount | Monotonically Increasing |
NumRevolvingTradesWBalance | Number Revolving Trades with Balance | No constraint |
NumInstallTradesWBalance | Number Installment Trades with Balance | No constraint |
NumBank2NatlTradesWHighUtilization | Number Bank/Natl Trades w high utilization ratio | Monotonically Increasing |
PercentTradesWBalance | Percent Trades with Balance | No constraint |
RiskPerformance | Paid as negotiated flag (12-36 Months). String of Good and Bad | Target |
After downloading the dataset, place the file at ./aix360/data/heloc_data/heloc_dataset.csv, where "." is the root directory of the Git repository (i.e., if you are working from a local clone of AIX360). If you instead installed the library with pip, place the file at path-to-your-virtual-env/lib/python3.6/site-packages/aix360/data/heloc_data/heloc_dataset.csv.
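As a quick, illustrative sanity check of one of these monotonicity constraints (our own aside, not part of the original tutorial; it assumes the dataset file is already in place as described above), the snippet below bins ExternalRiskEstimate and reports the empirical fraction of "Bad" outcomes per bin, which should decrease as the feature value increases.
import pandas as pd
from aix360.datasets.heloc_dataset import HELOCDataset
# Load the raw HELOC data; special values are coded as negative integers, so drop them here
df_raw = HELOCDataset().dataframe()
valid = df_raw[df_raw['ExternalRiskEstimate'] >= 0]
# Empirical P(Bad) per quintile of ExternalRiskEstimate; expected to decrease across bins
bins = pd.qcut(valid['ExternalRiskEstimate'], 5, duplicates='drop')
print(valid.groupby(bins)['RiskPerformance'].apply(lambda s: (s == 'Bad').mean()))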
In evaluating a machine learning model for deployment, a data scientist would ideally like to understand the behavior of the model as a whole, not just in specific instances (e.g. specific loan applicants). This is especially true in regulated industries such as banking where higher standards of explainability may be required. For example, the data scientist may have to present the model to: 1) technical and business managers for review before deployment, 2) a lending expert to compare the model to the expert's knowledge, or 3) a regulator to check for compliance. Furthermore, it is common for a model to be deployed in a different geography than the one it was trained on. A global view of the model may uncover problems with overfitting and poor generalization to other geographies before deployment.
Directly interpretable models can provide such global understanding because they have a sufficiently simple form for their workings to be transparent. Below we present two directly interpretable models in the form of a Boolean rule (BR) and a logistic rule regression (LogRR) model. The former is produced by the Boolean Rule Column Generation (BRCG) algorithm while the latter is a generalized linear rule model (GLRM), both implemented in AIX360. While both models are interpretable, they provide different trade-offs between model simplicity and accuracy in predicting loan repayment. BRCG yields a very simple set of rules that has reasonable accuracy. LogRR achieves higher accuracy, higher even than some uninterpretable models, while retaining the form of a linear model. Its interpretation is enhanced by plots as demonstrated below.
We use the HELOCDataset
class in AIX360 to load the FICO HELOC data as a DataFrame. The setting custom_preprocessing=nan_preprocessing
converts special values in the data (coded as negative integers) to np.nan
, which can be handled properly by BRCG and LogRR, as opposed to replacing them with zeros or mean values. The data is then split into training and test sets using a fixed random seed.
import warnings
warnings.filterwarnings('ignore')
# Load FICO HELOC data with special values converted to np.nan
from aix360.datasets.heloc_dataset import HELOCDataset, nan_preprocessing
data = HELOCDataset(custom_preprocessing=nan_preprocessing).data()
# Separate target variable
y = data.pop('RiskPerformance')
# Split data into training and test sets using fixed random seed
from sklearn.model_selection import train_test_split
dfTrain, dfTest, yTrain, yTest = train_test_split(data, y, random_state=0, stratify=y)
dfTrain.head().transpose()
Using Heloc dataset: /Users/vijay/AIX360-TEST/AIX360/aix360/datasets/../data/heloc_data/heloc_dataset.csv
8960 | 8403 | 1949 | 4886 | 4998 | |
---|---|---|---|---|---|
ExternalRiskEstimate | 64.0 | 57.0 | 59.0 | 65.0 | 65.0 |
MSinceOldestTradeOpen | 175.0 | 47.0 | 168.0 | 228.0 | 117.0 |
MSinceMostRecentTradeOpen | 6.0 | 9.0 | 3.0 | 5.0 | 7.0 |
AverageMInFile | 97.0 | 35.0 | 38.0 | 69.0 | 48.0 |
NumSatisfactoryTrades | 29.0 | 5.0 | 21.0 | 24.0 | 7.0 |
NumTrades60Ever2DerogPubRec | 9.0 | 1.0 | 0.0 | 3.0 | 1.0 |
NumTrades90Ever2DerogPubRec | 9.0 | 0.0 | 0.0 | 2.0 | 1.0 |
PercentTradesNeverDelq | 63.0 | 50.0 | 100.0 | 85.0 | 78.0 |
MSinceMostRecentDelq | 2.0 | 16.0 | NaN | 3.0 | 36.0 |
MaxDelq2PublicRecLast12M | 4.0 | 6.0 | 7.0 | 0.0 | 6.0 |
MaxDelqEver | 4.0 | 5.0 | 8.0 | 2.0 | 4.0 |
NumTotalTrades | 41.0 | 10.0 | 21.0 | 27.0 | 9.0 |
NumTradesOpeninLast12M | 1.0 | 1.0 | 12.0 | 1.0 | 2.0 |
PercentInstallTrades | 63.0 | 30.0 | 38.0 | 31.0 | 56.0 |
MSinceMostRecentInqexcl7days | 0.0 | 0.0 | 0.0 | 7.0 | 7.0 |
NumInqLast6M | 1.0 | 2.0 | 1.0 | 0.0 | 0.0 |
NumInqLast6Mexcl7days | 1.0 | 2.0 | 1.0 | 0.0 | 0.0 |
NetFractionRevolvingBurden | 16.0 | 66.0 | 85.0 | 13.0 | 54.0 |
NetFractionInstallBurden | 94.0 | 70.0 | 90.0 | 66.0 | 69.0 |
NumRevolvingTradesWBalance | 1.0 | 2.0 | 10.0 | 3.0 | 2.0 |
NumInstallTradesWBalance | 1.0 | 2.0 | 5.0 | 2.0 | 3.0 |
NumBank2NatlTradesWHighUtilization | NaN | 0.0 | 4.0 | 0.0 | 1.0 |
PercentTradesWBalance | 50.0 | 57.0 | 94.0 | 46.0 | 83.0 |
BRCG and LogRR require non-binary features to be binarized using the provided FeatureBinarizer
class. We use the default of nine quantile thresholds (i.e. 10 bins) to binarize ordinal (including continuous-valued) features, include all negations (e.g. '>' comparisons as well as '<='), and also return standardized versions of the original unbinarized ordinal features, which are used by LogRR but not BRCG. Below is the result of binarizing the first 'ExternalRiskEstimate' feature.
# Binarize data and also return standardized ordinal features
from aix360.algorithms.rbm import FeatureBinarizer
fb = FeatureBinarizer(negations=True, returnOrd=True)
dfTrain, dfTrainStd = fb.fit_transform(dfTrain)
dfTest, dfTestStd = fb.transform(dfTest)
dfTrain['ExternalRiskEstimate'].head()
operation | <= | <= | <= | <= | <= | <= | <= | <= | <= | > | > | > | > | > | > | > | > | > | == | !=
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
value | 59.0 | 63.0 | 66.0 | 69.0 | 72.0 | 75.0 | 78.0 | 82.0 | 86.0 | 59.0 | 63.0 | 66.0 | 69.0 | 72.0 | 75.0 | 78.0 | 82.0 | 86.0 | NaN | NaN |
8960 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8403 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1949 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4886 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4998 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
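As a small optional check (an illustration, not required by the tutorial), the '<=' thresholds shown above for ExternalRiskEstimate should lie close to that feature's deciles, since FeatureBinarizer uses quantile thresholds; the snippet below computes the deciles of the unbinarized data loaded earlier, ignoring missing values (the match is only approximate because the thresholds were fit on the training split).
import numpy as np
# Deciles of the unbinarized ExternalRiskEstimate column (NaNs ignored); these should
# roughly match the nine '<=' thresholds in the binarized table above
print(np.nanpercentile(data['ExternalRiskEstimate'], np.arange(10, 100, 10)))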
First we consider BRCG, which is designed to produce a very simple OR-of-ANDs rule (known more formally as disjunctive normal form, DNF) or alternatively an AND-of-ORs rule (conjunctive normal form, CNF) to predict whether an applicant will repay the loan on time (Y = 1). For a binary classification problem such as we have here, a DNF rule is equivalent to a rule set, where AND clauses in the DNF correspond to individual rules in the rule set. Furthermore, it can be shown that a CNF rule for Y = 1 is equivalent to a DNF rule for Y = 0 [1]. BRCG is distinguished by its use of the optimization technique of column generation to search the space of possible clauses, which is exponential in size. To learn more about column generation, please see our NeurIPS paper [2].
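As a tiny aside (our own illustration), the snippet below verifies the duality mentioned above on four placeholder binary conditions A, B, C, D: negating a CNF rule for Y = 1 yields a DNF rule for Y = 0 by De Morgan's laws.
from itertools import product
# Exhaustively check the CNF/DNF duality on all assignments of four placeholder conditions
for A, B, C, D in product([False, True], repeat=4):
    cnf_for_y1 = (A or B) and (C or D)                    # CNF predicting Y = 1
    dnf_for_y0 = (not A and not B) or (not C and not D)   # DNF predicting Y = 0
    assert cnf_for_y1 == (not dnf_for_y0)
print('A CNF rule for Y = 1 is the negation of a DNF rule for Y = 0')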
For this dataset, we find that a CNF rule for Y = 1 (i.e. a DNF for Y = 0, enabled by setting CNF=True
) is slightly better than a DNF rule for Y = 1. The model complexity parameters lambda0
and lambda1
penalize the number of clauses in the rule and the number of conditions in each clause. We use the default values of 1e-3 for lambda0
and lambda1
(decreasing them did not increase accuracy here) and leave other parameters at their defaults as well. The model is then trained, evaluated, and printed.
# Instantiate BRCG with small complexity penalty and large beam search width
from aix360.algorithms.rbm import BooleanRuleCG
br = BooleanRuleCG(lambda0=1e-3, lambda1=1e-3, CNF=True)
# Train, print, and evaluate model
br.fit(dfTrain, yTrain)
from sklearn.metrics import accuracy_score
print('Training accuracy:', accuracy_score(yTrain, br.predict(dfTrain)))
print('Test accuracy:', accuracy_score(yTest, br.predict(dfTest)))
print('Predict Y=0 if ANY of the following rules are satisfied, otherwise Y=1:')
print(br.explain()['rules'])
Learning CNF rule with complexity parameters lambda0=0.001, lambda1=0.001
Initial LP solved
Iteration: 1, Objective: 0.2895
Iteration: 2, Objective: 0.2895
Iteration: 3, Objective: 0.2895
Iteration: 4, Objective: 0.2895
Iteration: 5, Objective: 0.2864
Iteration: 6, Objective: 0.2864
Iteration: 7, Objective: 0.2864
Training accuracy: 0.719573146021883
Test accuracy: 0.696515397082658
Predict Y=0 if ANY of the following rules are satisfied, otherwise Y=1:
['ExternalRiskEstimate <= 75.00 AND NumSatisfactoryTrades <= 17.00',
 'ExternalRiskEstimate <= 72.00 AND NumSatisfactoryTrades > 17.00']
The returned DNF rule for Y = 0 is indeed very simple with only two clauses, each involving the same two features. It is interesting to see that such a rule can already achieve 69.7% accuracy. 'ExternalRiskEstimate' is a consolidated version of some risk markers (higher is better), while 'NumSatisfactoryTrades' is the number of satisfactory credit accounts. It makes sense therefore that for applicants with more than 17 satisfactory accounts, the ExternalRiskEstimate threshold dividing good (Y = 1) and bad (Y = 0) credit risk is slightly lower (more lenient) than for applicants with fewer satisfactory accounts.
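To make the printed rule set concrete, here is a hedged sketch (our own re-implementation for illustration, not the library's prediction path) that applies the two clauses directly to the unbinarized features; a missing value compares as False, i.e. the condition is treated as not satisfied.
import numpy as np
import pandas as pd
def brcg_rules_predict(df):
    # Predict 0 ('Bad') if ANY of the two printed clauses fires, otherwise 1 ('Good')
    clause1 = (df['ExternalRiskEstimate'] <= 75) & (df['NumSatisfactoryTrades'] <= 17)
    clause2 = (df['ExternalRiskEstimate'] <= 72) & (df['NumSatisfactoryTrades'] > 17)
    return np.where(clause1 | clause2, 0, 1)
# Example: class counts produced by the manual rules on the full unbinarized data
manual_pred = brcg_rules_predict(data)
print('Predicted Bad:', int((manual_pred == 0).sum()), 'Predicted Good:', int((manual_pred == 1).sum()))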
We note that AIX360 includes only a heuristic beam search version of BRCG. The published version of BRCG [2] (not implemented in AIX360) uses integer programming to yield slightly more complex rules that are also more accurate (close to 72% test accuracy).
Next we consider a LogRR model, which can improve accuracy at the cost of a more complex but still interpretable model. Specifically, LogRR fits a logistic regression model using rule-based features, where column generation is again used to generate promising candidates from the space of all possible rules. Here we are also including unbinarized ordinal features (useOrd=True
) in addition to rules. Similar to BRCG, the complexity parameters lambda0
, lambda1
penalize the number of rules included in the model and the number of conditions in each rule. The values for lambda0
, lambda1
below strike a good balance between accuracy and model complexity, based on our published experience with the FICO HELOC dataset [3].
# Instantiate LRR with good complexity penalties and numerical features
from aix360.algorithms.rbm import LogisticRuleRegression
lrr = LogisticRuleRegression(lambda0=0.005, lambda1=0.001, useOrd=True)
# Train, print, and evaluate model
lrr.fit(dfTrain, yTrain, dfTrainStd)
print('Training accuracy:', accuracy_score(yTrain, lrr.predict(dfTrain, dfTrainStd)))
print('Test accuracy:', accuracy_score(yTest, lrr.predict(dfTest, dfTestStd)))
print('Probability of Y=1 is predicted as logistic(z) = 1 / (1 + exp(-z))')
print('where z is a linear combination of the following rules/numerical features:')
lrr.explain()
Training accuracy: 0.742536809401594
Test accuracy: 0.7260940032414911
Probability of Y=1 is predicted as logistic(z) = 1 / (1 + exp(-z))
where z is a linear combination of the following rules/numerical features:
rule/numerical feature | coefficient | |
---|---|---|
0 | (intercept) | -0.129822 |
1 | MSinceMostRecentInqexcl7days > 0.00 | 0.680258 |
2 | ExternalRiskEstimate | 0.654165 |
3 | NetFractionRevolvingBurden | -0.554117 |
4 | NumSatisfactoryTrades | 0.551641 |
5 | NumInqLast6M | -0.463191 |
6 | NumBank2NatlTradesWHighUtilization | -0.448356 |
7 | AverageMInFile <= 52.00 | -0.434369 |
8 | NumRevolvingTradesWBalance <= 5.00 | 0.421528 |
9 | MaxDelq2PublicRecLast12M <= 5.00 | -0.418162 |
10 | PercentInstallTrades > 50.00 | -0.317591 |
11 | NumSatisfactoryTrades <= 12.00 | -0.31249 |
12 | MSinceMostRecentDelq <= 21.00 | -0.301577 |
13 | PercentTradesNeverDelq <= 95.00 | -0.273943 |
14 | ExternalRiskEstimate > 75.00 | 0.263449 |
15 | AverageMInFile <= 84.00 | -0.182149 |
16 | PercentInstallTrades > 42.00 | -0.174293 |
17 | PercentTradesNeverDelq | 0.16652 |
18 | AverageMInFile | 0.150668 |
19 | NumBank2NatlTradesWHighUtilization <= 0.00 | 0.135378 |
20 | MSinceOldestTradeOpen <= 122.00 | -0.132573 |
21 | PercentTradesNeverDelq <= 91.00 | -0.117714 |
22 | NumSatisfactoryTrades <= 17.00 | -0.110231 |
23 | ExternalRiskEstimate > 72.00 | 0.107622 |
24 | NumInqLast6M <= 0.00 | 0.0994023 |
25 | MSinceOldestTradeOpen <= 146.00 | -0.0967138 |
26 | MSinceMostRecentInqexcl7days <= 0.00 | -0.0900502 |
27 | AverageMInFile <= 61.00 | -0.0794766 |
28 | AverageMInFile <= 76.00 | -0.0722786 |
29 | PercentInstallTrades <= 42.00 | 0.0661076 |
30 | NetFractionRevolvingBurden <= 39.00 | 0.0627442 |
31 | MSinceOldestTradeOpen > 122.00 | 0.0602986 |
32 | NetFractionRevolvingBurden <= 50.00 | 0.0455399 |
33 | MSinceOldestTradeOpen | 0.0421244 |
34 | ExternalRiskEstimate > 69.00 | 0.035422 |
35 | PercentTradesWBalance <= 73.00 | -0.0345516 |
36 | MSinceOldestTradeOpen > 146.00 | 0.0244397 |
The test accuracy of LogRR is significantly better than that of BRCG and even better than the neural network used in the Loan Officer and Customer sections below. The LogRR model remains directly interpretable as it is a logistic regression model that uses the 36 rule-based and ordinal features shown above (in addition to an intercept term). Rules are distinguished by having one or more conditions on feature values (e.g. AverageMInFile <= 52.0), while ordinal features are marked by just the feature name without conditions (e.g. ExternalRiskEstimate). Since this is a linear model, feature importance is naturally given by the model coefficients, and the list is therefore sorted in order of decreasing coefficient magnitude. The list can be truncated if the user wishes to display fewer features.
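For completeness, here is a minimal sketch (our own illustration) of the logistic link stated in the output above, together with one way to truncate the displayed list to the largest-magnitude terms.
import numpy as np
def logistic(z):
    # Maps the linear combination z of the rules/numerical features to a probability of Y = 1
    return 1.0 / (1.0 + np.exp(-z))
print(logistic(0.0))             # z = 0 corresponds to a predicted probability of 0.5
print(lrr.explain().head(10))    # show only the ten largest-magnitude terms (the list is sorted)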
Since the rules in this LogRR model happen to all be single conditions on individual features, the model contains no interactions between features. It is therefore a kind of generalized additive model (GAM), i.e. a sum of functions of individual features, where these functions are themselves sums of step function components from rules and linear components from unbinarized ordinal features. Thus a better way to visualize the model is by plotting the univariate functions that make up the GAM, as we do next.
We use the visualize()
method of LogisticRuleRegression
to plot the functions in the GAM that corresponds to the LogRR model (more generally, visualize()
plots the GAM part of a LogRR model, excluding higher-degree rules). The plots show the sizes and shapes of the model's dependences on individual features. These can then be compared to a lending expert's knowledge. In the present case, all plots indicate that the model behaves as we would expect with some interesting nuances.
The 36 features shown above involve only 14 of the original features in the data (not including the intercept), as verified below. For example, ExternalRiskEstimate appears in its unbinarized form in row 2 above and also in 3 rules (rows 14, 23, 34).
dfx = lrr.explain()
# Separate 1st-degree rules into (feature, operation, value) to count unique features
dfx2 = dfx['rule/numerical feature'].str.split(' ', expand=True)
dfx2.columns = ['feature','operation','value']
dfx2['feature'].nunique() # includes intercept
15
It follows that there are 14 functions to plot, which we organize into semantic groups below to ease interpretation.
As expected from the BRCG Boolean rule above, 'ExternalRiskEstimate' is an important feature positively correlated with good credit risk. The jumps in the plot indicate that applicants with above average 'ExternalRiskEstimate' (the mean is 72) get an additional boost.
lrr.visualize(data, fb, ['ExternalRiskEstimate']);
The next two plots illustrate the dependence on the applicant's credit inquiries. The first plot shows a significant penalty for having less than one month since the most recent inquiry ('MSinceMostRecentInqexcl7days' = 0).
lrr.visualize(data, fb, ['MSinceMostRecentInqexcl7days']);
The second shows that predicted risk increases with the number of inquiries in the last six months ('NumInqLast6M').
lrr.visualize(data, fb, ['NumInqLast6M']);
The following four plots relate to the applicant's debt level. 'NetFractionRevolvingBurden' is the ratio of revolving debt (e.g. credit card) balance to credit limit, expressed as a percentage, and has a large negative impact on the probability of good credit. A small fraction of applicants (less than 1%) actually have NetFractionRevolvingBurden greater than 100%, i.e. more revolving debt than their credit limit. This might be investigated further by the data scientist.
lrr.visualize(data, fb, ['NetFractionRevolvingBurden']);
The second 'NumBank2NatlTradesWHighUtilization' plot shows that the number of accounts ("trades") with high utilization (high balance relative to credit limit for each account) also has a large impact, with a drop as soon as one account has high utilization.
lrr.visualize(data, fb, ['NumBank2NatlTradesWHighUtilization']);
The third plot shows that the model gives a bonus to applicants who carry balances on no more than five revolving debt accounts.
lrr.visualize(data, fb, ['NumRevolvingTradesWBalance']);
The fourth shows that the effect of the percentage of accounts with a balance ('PercentTradesWBalance') is much smaller than the effects of the other features.
lrr.visualize(data, fb, ['PercentTradesWBalance']);
The number of "satisfactory" accounts ("trades") has a significant positive effect on the predicted probability of good credit, with jumps at 12 and 17 accounts.
lrr.visualize(data, fb, ['NumSatisfactoryTrades']);
However, having more than 40% as installment debt accounts (e.g. car loans) is seen as a negative.
lrr.visualize(data, fb, ['PercentInstallTrades']);
The 'AverageMInFile' plot shows that most of the benefit of having a longer average credit history accrues between average ages of 52 and 84 months (four to seven years).
lrr.visualize(data, fb, ['AverageMInFile']);
Similar but smaller gains come when the age of the oldest account ('MSinceOldestTradeOpen') exceeds 122 and 146 months (10-12 years).
lrr.visualize(data, fb, ['MSinceOldestTradeOpen']);
The last set of plots looks at the effect of delinquencies. The first plot shows that much of the change due to the percentage of accounts that were never delinquent ('PercentTradesNeverDelq') occurs between 90% and 100%.
lrr.visualize(data, fb, ['PercentTradesNeverDelq']);
'MaxDelq2PublicRecLast12M' measures the severity of the applicant's worst delinquency from the last 12 months of the public record. A value of 5 or below indicates that some delinquency has occurred, whether of unknown duration, 30/60/90/120 days delinquent, or a derogatory comment.
lrr.visualize(data, fb, ['MaxDelq2PublicRecLast12M']);
According to the last 'MSinceMostRecentDelq' plot, the effect of the most recent delinquency wears off after 21 months.
lrr.visualize(data, fb, ['MSinceMostRecentDelq']);
We now show how to generate explanations in the form of prototypical or similar user profiles for an applicant in question, which a bank employee such as a loan officer may be interested in. This may help the employee understand the decision to accept or reject an applicant's HELOC application in the context of other similar applications. Note that the selected prototypical applications are profiles from the training set used to train an AI model that predicts good or bad (i.e. approved or rejected) for these applications. In fact, the method used in this notebook works even if we are given not just one but a set of user profiles for which we want to find similar profiles from a training dataset. Additionally, the method computes a weight for each prototype indicating its similarity to the user(s) in question.
The prototypical explanations in AIX360 are obtained using the Protodash algorithm developed in the following work: ProtoDash: Fast Interpretable Prototype Selection
We now provide a brief overview of the method. It takes as input a datapoint (or group of datapoints) that we want to explain with respect to instances in a training set belonging to the same feature space. The method then minimizes the maximum mean discrepancy (MMD) between the datapoints we want to explain and a prespecified number of instances that it selects from the training set. In other words, it tries to select training instances whose distribution matches that of the datapoints we want to explain. The selection is greedy and comes with quality guarantees, and the method also returns importance weights for the chosen prototypical training instances, indicating how similar or representative they are.
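To make the objective concrete, below is a small, hedged sketch (our own illustration with an RBF kernel; not the ProtodashExplainer internals) of the squared MMD between the point(s) to be explained X and a weighted set of prototypes P, which this kind of selection tries to keep small.
import numpy as np
def rbf_kernel(A, B, gamma=1.0):
    # k(a, b) = exp(-gamma * ||a - b||^2)
    sq_dists = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq_dists)
def weighted_mmd_sq(X, P, w, gamma=1.0):
    # Squared MMD between the empirical distribution of X and the weighted prototypes (P, w)
    K_pp = rbf_kernel(P, P, gamma)
    K_px = rbf_kernel(P, X, gamma)
    K_xx = rbf_kernel(X, X, gamma)
    return w @ K_pp @ w - 2.0 * w @ K_px.mean(axis=1) + K_xx.mean()
# Toy usage: one point to explain and two candidate prototypes with weights summing to 1
X_toy = np.array([[0.0, 1.0]])
P_toy = np.array([[0.1, 0.9], [2.0, -1.0]])
print(weighted_mmd_sq(X_toy, P_toy, np.array([0.9, 0.1])))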
In this tutorial, we will see two examples of obtaining prototypes, one for a user whose HELOC application was approved and another for a user whose HELOC application was rejected. In each case, we showcase the top five prototypes from the training data along with how similar the feature values were for these prototypes.
Example 1. Obtaining similar samples as explanations for a HELOC applicant predicted as "Good"
Example 2. Obtaining similar samples as explanations for a HELOC applicant predicted as "Bad"
Before we showcase the two examples, we provide some motivation for using this method. The method selects applications from the training set that are similar to the user application we want to explain, but in different ways. For example, a user's loan may be rejected justifiably because their number of satisfactory trades was low, similar to one rejected user, or because their debts were too high, similar to a different rejected user. Either of these reasons in isolation may be sufficient for rejection, and the method is able to surface a variety of such reasons through the selected prototypes. This is not the case with standard nearest-neighbor techniques based on metrics such as Euclidean distance or cosine similarity, where one might get only a single type of explanation (i.e. applications with only a low number of satisfactory trades). Protodash is thus able to provide a much more well-rounded and comprehensive view of why the decision for the applicant may be justifiable.
Another benefit of the method is that, since it performs distribution matching between the user(s) in question and the profiles available in the training set, it could in principle also be applied in non-i.i.d. settings such as time series data. Other approaches that find similar profiles using standard distance measures (viz. Euclidean, cosine) do not have this property. Additionally, we can also highlight the important features of the different prototypes that made them similar to the user(s) in question.
Import necessary libraries, frameworks and algorithms.
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import tensorflow as tf
from keras.models import Sequential, Model, load_model, model_from_json
from keras.layers import Dense
import matplotlib.pyplot as plt
from IPython.core.display import display, HTML
from aix360.algorithms.contrastive import CEMExplainer, KerasClassifier
from aix360.algorithms.protodash import ProtodashExplainer
from aix360.datasets.heloc_dataset import HELOCDataset
heloc = HELOCDataset()
df = heloc.dataframe()
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 24)
pd.set_option('display.width', 1000)
print("Size of HELOC dataset:", df.shape)
print("Number of \"Good\" applicants:", np.sum(df['RiskPerformance']=='Good'))
print("Number of \"Bad\" applicants:", np.sum(df['RiskPerformance']=='Bad'))
print("Sample Applicants:")
df.head(10).transpose()
Using Heloc dataset: /Users/vijay/AIX360-TEST/AIX360/aix360/datasets/../data/heloc_data/heloc_dataset.csv
Size of HELOC dataset: (10459, 24)
Number of "Good" applicants: 5000
Number of "Bad" applicants: 5459
Sample Applicants:
0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | |
---|---|---|---|---|---|---|---|---|---|---|
ExternalRiskEstimate | 55 | 61 | 67 | 66 | 81 | 59 | 54 | 68 | 59 | 61 |
MSinceOldestTradeOpen | 144 | 58 | 66 | 169 | 333 | 137 | 88 | 148 | 324 | 79 |
MSinceMostRecentTradeOpen | 4 | 15 | 5 | 1 | 27 | 11 | 7 | 7 | 2 | 4 |
AverageMInFile | 84 | 41 | 24 | 73 | 132 | 78 | 37 | 65 | 138 | 36 |
NumSatisfactoryTrades | 20 | 2 | 9 | 28 | 12 | 31 | 25 | 17 | 24 | 19 |
NumTrades60Ever2DerogPubRec | 3 | 4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
NumTrades90Ever2DerogPubRec | 0 | 4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
PercentTradesNeverDelq | 83 | 100 | 100 | 93 | 100 | 91 | 92 | 83 | 85 | 95 |
MSinceMostRecentDelq | 2 | -7 | -7 | 76 | -7 | 1 | 9 | 31 | 5 | 5 |
MaxDelq2PublicRecLast12M | 3 | 0 | 7 | 6 | 7 | 4 | 4 | 6 | 4 | 4 |
MaxDelqEver | 5 | 8 | 8 | 6 | 8 | 6 | 6 | 6 | 6 | 6 |
NumTotalTrades | 23 | 7 | 9 | 30 | 12 | 32 | 26 | 18 | 27 | 19 |
NumTradesOpeninLast12M | 1 | 0 | 4 | 3 | 0 | 1 | 3 | 1 | 1 | 3 |
PercentInstallTrades | 43 | 67 | 44 | 57 | 25 | 47 | 58 | 44 | 26 | 26 |
MSinceMostRecentInqexcl7days | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
NumInqLast6M | 0 | 0 | 4 | 5 | 1 | 0 | 4 | 0 | 1 | 6 |
NumInqLast6Mexcl7days | 0 | 0 | 4 | 4 | 1 | 0 | 4 | 0 | 1 | 6 |
NetFractionRevolvingBurden | 33 | 0 | 53 | 72 | 51 | 62 | 89 | 28 | 68 | 31 |
NetFractionInstallBurden | -8 | -8 | 66 | 83 | 89 | 93 | 76 | 48 | -8 | 86 |
NumRevolvingTradesWBalance | 8 | 0 | 4 | 6 | 3 | 12 | 7 | 2 | 7 | 5 |
NumInstallTradesWBalance | 1 | -8 | 2 | 4 | 1 | 4 | 7 | 2 | 1 | 3 |
NumBank2NatlTradesWHighUtilization | 1 | -8 | 1 | 3 | 0 | 3 | 2 | 2 | 3 | 1 |
PercentTradesWBalance | 69 | 0 | 86 | 91 | 80 | 94 | 100 | 40 | 90 | 62 |
RiskPerformance | Bad | Bad | Bad | Bad | Bad | Bad | Good | Good | Bad | Bad |
# Plot (example) distributions for two features
print("Distribution of ExternalRiskEstimate and NumSatisfactoryTrades columns:")
hist = df.hist(column=['ExternalRiskEstimate', 'NumSatisfactoryTrades'], bins=10)
Distribution of ExternalRiskEstimate and NumSatisfactoryTrades columns:
We will first process the HELOC dataset before using it to train an NN model that can predict the target variable RiskPerformance. The HELOC dataset is a tabular dataset with numerical values. However, some of the values are negative and need to be filtered. The processed data is stored in the file heloc.npz for easy access. The dataset is also normalized for training.
The data processing and the type of model built in this case is different from the Data Scientist persona described above where rule based methods are showcased. This is the reason for going through these steps again for the Loan Officer persona.
# Clean data and split dataset into train/test
(Data, x_train, x_test, y_train_b, y_test_b) = heloc.split()
Z = np.vstack((x_train, x_test))
Zmax = np.max(Z, axis=0)
Zmin = np.min(Z, axis=0)
#normalize an array of samples to range [-0.5, 0.5]
def normalize(V):
VN = (V - Zmin)/(Zmax - Zmin)
VN = VN - 0.5
return(VN)
# rescale a sample to recover original values for normalized values.
def rescale(X):
return(np.multiply ( X + 0.5, (Zmax - Zmin) ) + Zmin)
N = normalize(Z)
xn_train = N[0:x_train.shape[0], :]
xn_test = N[x_train.shape[0]:, :]
# nn with no softmax
def nn_small():
model = Sequential()
model.add(Dense(10, input_dim=23, kernel_initializer='normal', activation='relu'))
model.add(Dense(2, kernel_initializer='normal'))
return model
# Set random seeds for repeatability
np.random.seed(1)
tf.set_random_seed(2)
class_names = ['Bad', 'Good']
# loss function
def fn(correct, predicted):
return tf.nn.softmax_cross_entropy_with_logits(labels=correct, logits=predicted)
# compile and print model summary
nn = nn_small()
nn.compile(loss=fn, optimizer='adam', metrics=['accuracy'])
nn.summary()
# train model or load a trained model
TRAIN_MODEL = False
if (TRAIN_MODEL):
nn.fit(xn_train, y_train_b, batch_size=128, epochs=500, verbose=1, shuffle=False)
nn.save_weights("heloc_nnsmall.h5")
else:
nn.load_weights("heloc_nnsmall.h5")
# evaluate model accuracy
score = nn.evaluate(xn_train, y_train_b, verbose=0) #Compute training set accuracy
#print('Train loss:', score[0])
print('Train accuracy:', score[1])
score = nn.evaluate(xn_test, y_test_b, verbose=0) #Compute test set accuracy
#print('Test loss:', score[0])
print('Test accuracy:', score[1])
WARNING:tensorflow:From <ipython-input-27-96ecdbbb04d6>:9: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version. Instructions for updating: Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default. See `tf.nn.softmax_cross_entropy_with_logits_v2`.
Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_1 (Dense) (None, 10) 240 _________________________________________________________________ dense_2 (Dense) (None, 2) 22 ================================================================= Total params: 262 Trainable params: 262 Non-trainable params: 0 _________________________________________________________________ WARNING:tensorflow:From /Users/vijay/opt/anaconda3/envs/aix360/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
WARNING:tensorflow:From /Users/vijay/opt/anaconda3/envs/aix360/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
Train accuracy: 0.7387545704841614
Test accuracy: 0.7224473357200623
p_train = nn.predict_classes(xn_train) # Use trained neural network to predict train points
p_train = p_train.reshape((p_train.shape[0],1))
z_train = np.hstack((xn_train, p_train)) # Store (normalized) instances that were predicted as Good
z_train_good = z_train[z_train[:,-1]==1, :]
zun_train = np.hstack((x_train, p_train)) # Store (unnormalized) instances that were predicted as Good
zun_train_good = zun_train[zun_train[:,-1]==1, :]
Let us now consider applicant 8, whose loan was approved. Note that this applicant is also considered in the contrastive explanations section; however, here we justify the approved status in a different manner using prototypical examples, which is arguably a better explanation for a bank employee.
idx = 8
X = xn_test[idx].reshape((1,) + xn_test[idx].shape)
print("Chosen Sample:", idx)
print("Prediction made by the model:", class_names[np.argmax(nn.predict_proba(X))])
print("Prediction probabilities:", nn.predict_proba(X))
print("")
# attach the prediction made by the model to X
X = np.hstack((X, nn.predict_classes(X).reshape((1,1))))
Xun = x_test[idx].reshape((1,) + x_test[idx].shape)
dfx = pd.DataFrame.from_records(Xun.astype('double')) # Create dataframe with original feature values
dfx[23] = class_names[int(X[0, -1])]
dfx.columns = df.columns
dfx.transpose()
Chosen Sample: 8
Prediction made by the model: Good
Prediction probabilities: [[-0.1889221 0.29527372]]
0 | |
---|---|
ExternalRiskEstimate | 82 |
MSinceOldestTradeOpen | 280 |
MSinceMostRecentTradeOpen | 13 |
AverageMInFile | 102 |
NumSatisfactoryTrades | 22 |
NumTrades60Ever2DerogPubRec | 0 |
NumTrades90Ever2DerogPubRec | 0 |
PercentTradesNeverDelq | 91 |
MSinceMostRecentDelq | 26 |
MaxDelq2PublicRecLast12M | 6 |
MaxDelqEver | 6 |
NumTotalTrades | 23 |
NumTradesOpeninLast12M | 0 |
PercentInstallTrades | 9 |
MSinceMostRecentInqexcl7days | 0 |
NumInqLast6M | 0 |
NumInqLast6Mexcl7days | 0 |
NetFractionRevolvingBurden | 3 |
NetFractionInstallBurden | 0 |
NumRevolvingTradesWBalance | 4 |
NumInstallTradesWBalance | 1 |
NumBank2NatlTradesWHighUtilization | 1 |
PercentTradesWBalance | 42 |
RiskPerformance | Good |
explainer = ProtodashExplainer()
(W, S, setValues) = explainer.explain(X, z_train_good, m=5) # Return weights W, Prototypes S and objective function values
dfs = pd.DataFrame.from_records(zun_train_good[S, 0:-1].astype('double'))
RP=[]
for i in range(S.shape[0]):
RP.append(class_names[int(z_train_good[S[i], -1])]) # Append class names
dfs[23] = RP
dfs.columns = df.columns
dfs["Weight"] = np.around(W, 5)/np.sum(np.around(W, 5)) # Calculate normalized importance weights
dfs.transpose()
0 | 1 | 2 | 3 | 4 | |
---|---|---|---|---|---|
ExternalRiskEstimate | 85 | 89 | 77 | 83 | 73 |
MSinceOldestTradeOpen | 223 | 379 | 338 | 789 | 230 |
MSinceMostRecentTradeOpen | 13 | 156 | 2 | 6 | 5 |
AverageMInFile | 87 | 257 | 109 | 102 | 89 |
NumSatisfactoryTrades | 23 | 3 | 16 | 41 | 61 |
NumTrades60Ever2DerogPubRec | 0 | 0 | 2 | 0 | 0 |
NumTrades90Ever2DerogPubRec | 0 | 0 | 2 | 0 | 0 |
PercentTradesNeverDelq | 91 | 100 | 90 | 100 | 100 |
MSinceMostRecentDelq | 26 | 0 | 65 | 0 | 0 |
MaxDelq2PublicRecLast12M | 6 | 7 | 6 | 7 | 6 |
MaxDelqEver | 6 | 8 | 2 | 8 | 7 |
NumTotalTrades | 26 | 3 | 21 | 41 | 37 |
NumTradesOpeninLast12M | 0 | 0 | 1 | 1 | 3 |
PercentInstallTrades | 9 | 33 | 14 | 17 | 18 |
MSinceMostRecentInqexcl7days | 1 | 0 | 0 | 0 | 0 |
NumInqLast6M | 1 | 0 | 1 | 1 | 2 |
NumInqLast6Mexcl7days | 1 | 0 | 1 | 0 | 2 |
NetFractionRevolvingBurden | 4 | 0 | 2 | 1 | 59 |
NetFractionInstallBurden | 0 | 0 | 0 | 0 | 72 |
NumRevolvingTradesWBalance | 4 | 0 | 1 | 3 | 9 |
NumInstallTradesWBalance | 1 | 0 | 1 | 0 | 1 |
NumBank2NatlTradesWHighUtilization | 0 | 0 | 0 | 1 | 7 |
PercentTradesWBalance | 50 | 0 | 22 | 23 | 53 |
RiskPerformance | Good | Good | Good | Good | Good |
Weight | 0.730229 | 0.0690569 | 0.0978603 | 0.0498052 | 0.0530484 |
The closer a prototype's feature value is to the applicant's, the closer the corresponding similarity score is to 1. We can see below that several features of the prototypes are quite similar to those of the chosen applicant. A human-friendly explanation is provided thereafter.
z = z_train_good[S, 0:-1] # Store chosen prototypes
eps = 1e-10 # Small constant defined to eliminate divide-by-zero errors
fwt = np.zeros(z.shape)
for i in range (z.shape[0]):
for j in range(z.shape[1]):
fwt[i, j] = np.exp(-1 * abs(X[0, j] - z[i,j])/(np.std(z[:, j])+eps)) # Compute feature similarity in [0,1]
# move wts to a dataframe to display
dfw = pd.DataFrame.from_records(np.around(fwt.astype('double'), 2))
dfw.columns = df.columns[:-1]
dfw.transpose()
0 | 1 | 2 | 3 | 4 | |
---|---|---|---|---|---|
ExternalRiskEstimate | 0.59 | 0.29 | 0.42 | 0.84 | 0.21 |
MSinceOldestTradeOpen | 0.76 | 0.62 | 0.76 | 0.09 | 0.79 |
MSinceMostRecentTradeOpen | 1.00 | 0.09 | 0.83 | 0.89 | 0.87 |
AverageMInFile | 0.79 | 0.09 | 0.90 | 1.00 | 0.82 |
NumSatisfactoryTrades | 0.95 | 0.39 | 0.74 | 0.39 | 0.15 |
NumTrades60Ever2DerogPubRec | 1.00 | 1.00 | 0.08 | 1.00 | 1.00 |
NumTrades90Ever2DerogPubRec | 1.00 | 1.00 | 0.08 | 1.00 | 1.00 |
PercentTradesNeverDelq | 1.00 | 0.15 | 0.81 | 0.15 | 0.15 |
MSinceMostRecentDelq | 1.00 | 0.36 | 0.22 | 0.36 | 0.36 |
MaxDelq2PublicRecLast12M | 1.00 | 0.13 | 1.00 | 0.13 | 1.00 |
MaxDelqEver | 1.00 | 0.41 | 0.17 | 0.41 | 0.64 |
NumTotalTrades | 0.80 | 0.23 | 0.86 | 0.26 | 0.35 |
NumTradesOpeninLast12M | 1.00 | 1.00 | 0.40 | 0.40 | 0.06 |
PercentInstallTrades | 1.00 | 0.05 | 0.54 | 0.37 | 0.33 |
MSinceMostRecentInqexcl7days | 0.08 | 1.00 | 1.00 | 1.00 | 1.00 |
NumInqLast6M | 0.21 | 1.00 | 0.21 | 0.21 | 0.04 |
NumInqLast6Mexcl7days | 0.26 | 1.00 | 0.26 | 1.00 | 0.07 |
NetFractionRevolvingBurden | 0.96 | 0.88 | 0.96 | 0.92 | 0.09 |
NetFractionInstallBurden | 1.00 | 1.00 | 1.00 | 1.00 | 0.08 |
NumRevolvingTradesWBalance | 1.00 | 0.28 | 0.38 | 0.73 | 0.20 |
NumInstallTradesWBalance | 1.00 | 0.13 | 1.00 | 0.13 | 1.00 |
NumBank2NatlTradesWHighUtilization | 0.69 | 0.69 | 0.69 | 1.00 | 0.11 |
PercentTradesWBalance | 0.67 | 0.12 | 0.36 | 0.38 | 0.57 |
The above table depicts the five user profiles closest to the chosen applicant. Based on the importance weights output by the method, we see that the prototype in column zero is by far the most representative user profile. This is (intuitively) confirmed by the feature similarity table above, where more than 50% of the features (12 out of 23) of this prototype are identical to those of the chosen user whose prediction we want to explain. Also, a bank employee looking at the prototypical users and their features can surmise that the approved applicant belongs to a group of approved users that have practically no debt (NetFractionInstallBurden). This justification gives the employee more confidence in approving the user's application.
We now consider user 1272, whose loan was denied. This user is also examined with the contrastive explainer in the Bank Customer section. Similar to user 8, we now obtain exemplar-based explanations for this user to help the bank employee understand the reasons for the rejection. We follow steps similar to example 1: we first process the data, then obtain the prototypes and their importance weights, and finally showcase how similar the features of these prototypes are to those of the user we want to explain.
z_train_bad = z_train[z_train[:,-1]==0, :]
zun_train_bad = zun_train[zun_train[:,-1]==0, :]
idx = 1272 #another user to try 2385
X = xn_test[idx].reshape((1,) + xn_test[idx].shape)
print("Chosen Sample:", idx)
print("Prediction made by the model:", class_names[np.argmax(nn.predict_proba(X))])
print("Prediction probabilities:", nn.predict_proba(X))
print("")
X = np.hstack((X, nn.predict_classes(X).reshape((1,1))))
# move samples to a dataframe to display
Xun = x_test[idx].reshape((1,) + x_test[idx].shape)
dfx = pd.DataFrame.from_records(Xun.astype('double'))
dfx[23] = class_names[int(X[0, -1])]
dfx.columns = df.columns
dfx.transpose()
Chosen Sample: 1272
Prediction made by the model: Bad
Prediction probabilities: [[ 0.40682057 -0.391679 ]]
0 | |
---|---|
ExternalRiskEstimate | 65 |
MSinceOldestTradeOpen | 256 |
MSinceMostRecentTradeOpen | 15 |
AverageMInFile | 52 |
NumSatisfactoryTrades | 17 |
NumTrades60Ever2DerogPubRec | 0 |
NumTrades90Ever2DerogPubRec | 0 |
PercentTradesNeverDelq | 100 |
MSinceMostRecentDelq | 0 |
MaxDelq2PublicRecLast12M | 7 |
MaxDelqEver | 8 |
NumTotalTrades | 19 |
NumTradesOpeninLast12M | 0 |
PercentInstallTrades | 29 |
MSinceMostRecentInqexcl7days | 2 |
NumInqLast6M | 5 |
NumInqLast6Mexcl7days | 5 |
NetFractionRevolvingBurden | 57 |
NetFractionInstallBurden | 79 |
NumRevolvingTradesWBalance | 2 |
NumInstallTradesWBalance | 4 |
NumBank2NatlTradesWHighUtilization | 2 |
PercentTradesWBalance | 60 |
RiskPerformance | Bad |
(W, S, setValues) = explainer.explain(X, z_train_bad, m=5) # Return weights W, Prototypes S and objective function values
# move samples to a dataframe to display
dfs = pd.DataFrame.from_records(zun_train_bad[S, 0:-1].astype('double'))
RP=[]
for i in range(S.shape[0]):
RP.append(class_names[int(z_train_bad[S[i], -1])]) # Append class names
dfs[23] = RP
dfs.columns = df.columns
dfs["Weight"] = np.around(W, 5)/np.sum(np.around(W, 5)) # Compute normalized importance weights for prototypes
dfs.transpose()
0 | 1 | 2 | 3 | 4 | |
---|---|---|---|---|---|
ExternalRiskEstimate | 73 | 61 | 64 | 55 | 0 |
MSinceOldestTradeOpen | 191 | 125 | 85 | 194 | 383 |
MSinceMostRecentTradeOpen | 17 | 7 | 0 | 26 | 383 |
AverageMInFile | 53 | 32 | 13 | 100 | 383 |
NumSatisfactoryTrades | 19 | 5 | 2 | 18 | 1 |
NumTrades60Ever2DerogPubRec | 0 | 1 | 0 | 0 | 1 |
NumTrades90Ever2DerogPubRec | 0 | 1 | 0 | 0 | 1 |
PercentTradesNeverDelq | 100 | 100 | 100 | 84 | 100 |
MSinceMostRecentDelq | 0 | 0 | 0 | 1 | 0 |
MaxDelq2PublicRecLast12M | 7 | 7 | 7 | 4 | 6 |
MaxDelqEver | 8 | 8 | 8 | 6 | 8 |
NumTotalTrades | 20 | 6 | 9 | 11 | 1 |
NumTradesOpeninLast12M | 0 | 3 | 8 | 0 | 0 |
PercentInstallTrades | 25 | 60 | 33 | 42 | 100 |
MSinceMostRecentInqexcl7days | 0 | 0 | 0 | 23 | 0 |
NumInqLast6M | 0 | 1 | 66 | 0 | 1 |
NumInqLast6Mexcl7days | 0 | 1 | 66 | 0 | 1 |
NetFractionRevolvingBurden | 31 | 232 | 65 | 84 | 0 |
NetFractionInstallBurden | 78 | 83 | 0 | 48 | 0 |
NumRevolvingTradesWBalance | 4 | 1 | 2 | 5 | 0 |
NumInstallTradesWBalance | 3 | 3 | 3 | 3 | 0 |
NumBank2NatlTradesWHighUtilization | 1 | 1 | 1 | 3 | 0 |
PercentTradesWBalance | 54 | 100 | 71 | 100 | 0 |
RiskPerformance | Bad | Bad | Bad | Bad | Bad |
Weight | 0.781773 | 0.0822525 | 0.0573946 | 0.0642844 | 0.0142955 |
Again, the closer a prototype's feature value is to the applicant's, the closer the corresponding similarity score is to 1. We can see below that several features of the prototypes are quite similar to those of the chosen applicant. Following this table, we provide a human-friendly explanation based on it.
z = z_train_bad[S, 0:-1] # Store the prototypes
eps = 1e-10 # Small constant to guard against divide by zero errors
fwt = np.zeros(z.shape)
for i in range (z.shape[0]): # Compute feature similarity for each prototype
for j in range(z.shape[1]):
fwt[i, j] = np.exp(-1 * abs(X[0, j] - z[i,j])/(np.std(z[:, j])+eps))
# move wts to a dataframe to display
dfw = pd.DataFrame.from_records(np.around(fwt.astype('double'), 2))
dfw.columns = df.columns[:-1]
dfw.transpose()
0 | 1 | 2 | 3 | 4 | |
---|---|---|---|---|---|
ExternalRiskEstimate | 0.73 | 0.86 | 0.96 | 0.68 | 0.08 |
MSinceOldestTradeOpen | 0.53 | 0.28 | 0.19 | 0.55 | 0.29 |
MSinceMostRecentTradeOpen | 0.99 | 0.95 | 0.90 | 0.93 | 0.08 |
AverageMInFile | 0.99 | 0.86 | 0.75 | 0.70 | 0.09 |
NumSatisfactoryTrades | 0.78 | 0.22 | 0.15 | 0.88 | 0.13 |
NumTrades60Ever2DerogPubRec | 1.00 | 0.13 | 1.00 | 1.00 | 0.13 |
NumTrades90Ever2DerogPubRec | 1.00 | 0.13 | 1.00 | 1.00 | 0.13 |
PercentTradesNeverDelq | 1.00 | 1.00 | 1.00 | 0.08 | 1.00 |
MSinceMostRecentDelq | 1.00 | 1.00 | 1.00 | 0.08 | 1.00 |
MaxDelq2PublicRecLast12M | 1.00 | 1.00 | 1.00 | 0.08 | 0.42 |
MaxDelqEver | 1.00 | 1.00 | 1.00 | 0.08 | 1.00 |
NumTotalTrades | 0.85 | 0.13 | 0.20 | 0.28 | 0.06 |
NumTradesOpeninLast12M | 1.00 | 0.38 | 0.08 | 1.00 | 1.00 |
PercentInstallTrades | 0.86 | 0.31 | 0.86 | 0.61 | 0.07 |
MSinceMostRecentInqexcl7days | 0.80 | 0.80 | 0.80 | 0.10 | 0.80 |
NumInqLast6M | 0.83 | 0.86 | 0.10 | 0.83 | 0.86 |
NumInqLast6Mexcl7days | 0.83 | 0.86 | 0.10 | 0.83 | 0.86 |
NetFractionRevolvingBurden | 0.72 | 0.11 | 0.91 | 0.71 | 0.49 |
NetFractionInstallBurden | 0.97 | 0.90 | 0.11 | 0.42 | 0.11 |
NumRevolvingTradesWBalance | 0.34 | 0.58 | 1.00 | 0.20 | 0.34 |
NumInstallTradesWBalance | 0.43 | 0.43 | 0.43 | 0.43 | 0.04 |
NumBank2NatlTradesWHighUtilization | 0.36 | 0.36 | 0.36 | 0.36 | 0.13 |
PercentTradesWBalance | 0.85 | 0.34 | 0.74 | 0.34 | 0.20 |
Here again, the above table depicts the five user profiles closest to the chosen applicant. Based on the importance weights output by the method, we see that the prototype in column zero is by far the most representative user profile. This is (intuitively) confirmed by the feature similarity table above, where 10 of the 23 features of this prototype are highly similar (>0.9) to those of the user we want to explain. The bank employee can also see that the applicant belongs to a group of rejected applicants with similar delinquency behavior. Realizing that this user poses a similar risk to these other applicants whose loans were rejected, the employee takes the more conservative decision of rejecting this user's application as well.
We now demonstrate how to compute contrastive explanations using AIX360 and how such explanations can help home owners understand the decisions made by AI models that approve or reject their HELOC applications.
Typically, home owners would like to understand why they do not qualify for a line of credit and if so what changes in their application would qualify them. On the other hand, if they qualified, they might want to know what factors led to the approval of their application.
In this context, contrastive explanations provide information to applicants about what minimal changes to their profile would have changed the decision of the AI model from reject to accept or vice-versa (pertinent negatives). For example, increasing the number of satisfactory trades to a certain value may have led to the acceptance of the application everything else being the same.
The method presented here also highlights a minimal set of features and their values that would still maintain the original decision (pertinent positives). For example, for an applicant whose HELOC application was approved, the explanation may say that even if the number of satisfactory trades was reduced to a lower number, the loan would have still gotten through.
Additionally, organizations (banks, financial institutions, etc.) would like to understand trends in the behavior of their AI models in approving loan applications, which could be done by studying contrastive explanations for individuals whose loans were either accepted or rejected. By looking at the aggregate statistics of pertinent positives for approved applicants, the organization can gain insight into which minimal sets of features and values play an important role in acceptances. By studying the aggregate statistics of pertinent negatives, the organization can gain insight into the features that could change the status of rejected applicants, and potentially uncover ways in which an applicant may game the system by changing unimportant features that nonetheless alter the model's outcome.
The contrastive explanations in AIX360 are implemented using the algorithm developed in the following work: Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives.
We now provide a brief overview of the method. As mentioned above, the algorithm outputs a contrastive explanation which consists of two parts: a) pertinent negatives (PNs) and b) pertinent positives (PPs). PNs identify a minimal set of features which, if altered, would change the classification of the original input. For example, in the loan case, if a person's credit score were increased, their loan application status might change from reject to accept. The method accomplishes this by optimizing a change in the prediction probability loss while enforcing an elastic-net constraint that results in minimal changes to features and their values. Optionally, an auto-encoder may also be used to force these minimal changes to produce realistic PNs. PPs, on the other hand, identify a minimal set of features and their values that are sufficient to yield the original input's classification. For example, an individual's loan may still be accepted if their salary were 50K as opposed to 100K. Here again there is an elastic-net term so that the amount of information needed is minimal; however, the first loss term in this case tries to make the original input's class the winning class. For a more in-depth discussion, please refer to the above work.
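The following is a minimal, hedged sketch of the pertinent-negative objective described above for a generic two-class scorer (our own illustration; model_logits, c, beta, and kappa are placeholder names, and this is not the CEMExplainer implementation): a hinge term that becomes zero once the class flips, plus an elastic-net penalty that keeps the perturbation delta small and sparse.
import numpy as np
def pn_objective(model_logits, x, delta, orig_class, c=10.0, beta=0.1, kappa=0.0):
    # model_logits: placeholder callable returning a 1-D array of class scores for an input vector
    logits = model_logits(x + delta)
    best_other = np.max(np.delete(logits, orig_class))
    # Hinge term: positive while the original class still wins by margin kappa, zero once it flips
    attack_loss = max(logits[orig_class] - best_other + kappa, 0.0)
    # Elastic-net terms (L1 + L2) keep the change to the input minimal and sparse
    return c * attack_loss + beta * np.sum(np.abs(delta)) + np.sum(delta ** 2)
A pertinent-positive objective has the same elastic-net structure but instead rewards keeping the original class as the winner, and, as noted above, an optional auto-encoder reconstruction term can be added to keep the perturbed input realistic.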
The three main steps to obtain a contrastive explanation are shown below. The first two steps are more about processing the data and building an AI model while the third step computes the actual explanation.
Step 1. Process and Normalize HELOC dataset for training
Step 2. Define and train a NN classifier
Step 3. Compute contrastive explanations for a few applicants
import warnings
warnings.filterwarnings('ignore')
heloc = HELOCDataset()
df = heloc.dataframe()
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 24)
pd.set_option('display.width', 1000)
print("Size of HELOC dataset:", df.shape)
print("Number of \"Good\" applicants:", np.sum(df['RiskPerformance']=='Good'))
print("Number of \"Bad\" applicants:", np.sum(df['RiskPerformance']=='Bad'))
print("Sample Applicants:")
df.head(10).transpose()
Using Heloc dataset: /Users/vijay/AIX360-TEST/AIX360/aix360/datasets/../data/heloc_data/heloc_dataset.csv
Size of HELOC dataset: (10459, 24)
Number of "Good" applicants: 5000
Number of "Bad" applicants: 5459
Sample Applicants:
0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | |
---|---|---|---|---|---|---|---|---|---|---|
ExternalRiskEstimate | 55 | 61 | 67 | 66 | 81 | 59 | 54 | 68 | 59 | 61 |
MSinceOldestTradeOpen | 144 | 58 | 66 | 169 | 333 | 137 | 88 | 148 | 324 | 79 |
MSinceMostRecentTradeOpen | 4 | 15 | 5 | 1 | 27 | 11 | 7 | 7 | 2 | 4 |
AverageMInFile | 84 | 41 | 24 | 73 | 132 | 78 | 37 | 65 | 138 | 36 |
NumSatisfactoryTrades | 20 | 2 | 9 | 28 | 12 | 31 | 25 | 17 | 24 | 19 |
NumTrades60Ever2DerogPubRec | 3 | 4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
NumTrades90Ever2DerogPubRec | 0 | 4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
PercentTradesNeverDelq | 83 | 100 | 100 | 93 | 100 | 91 | 92 | 83 | 85 | 95 |
MSinceMostRecentDelq | 2 | -7 | -7 | 76 | -7 | 1 | 9 | 31 | 5 | 5 |
MaxDelq2PublicRecLast12M | 3 | 0 | 7 | 6 | 7 | 4 | 4 | 6 | 4 | 4 |
MaxDelqEver | 5 | 8 | 8 | 6 | 8 | 6 | 6 | 6 | 6 | 6 |
NumTotalTrades | 23 | 7 | 9 | 30 | 12 | 32 | 26 | 18 | 27 | 19 |
NumTradesOpeninLast12M | 1 | 0 | 4 | 3 | 0 | 1 | 3 | 1 | 1 | 3 |
PercentInstallTrades | 43 | 67 | 44 | 57 | 25 | 47 | 58 | 44 | 26 | 26 |
MSinceMostRecentInqexcl7days | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
NumInqLast6M | 0 | 0 | 4 | 5 | 1 | 0 | 4 | 0 | 1 | 6 |
NumInqLast6Mexcl7days | 0 | 0 | 4 | 4 | 1 | 0 | 4 | 0 | 1 | 6 |
NetFractionRevolvingBurden | 33 | 0 | 53 | 72 | 51 | 62 | 89 | 28 | 68 | 31 |
NetFractionInstallBurden | -8 | -8 | 66 | 83 | 89 | 93 | 76 | 48 | -8 | 86 |
NumRevolvingTradesWBalance | 8 | 0 | 4 | 6 | 3 | 12 | 7 | 2 | 7 | 5 |
NumInstallTradesWBalance | 1 | -8 | 2 | 4 | 1 | 4 | 7 | 2 | 1 | 3 |
NumBank2NatlTradesWHighUtilization | 1 | -8 | 1 | 3 | 0 | 3 | 2 | 2 | 3 | 1 |
PercentTradesWBalance | 69 | 0 | 86 | 91 | 80 | 94 | 100 | 40 | 90 | 62 |
RiskPerformance | Bad | Bad | Bad | Bad | Bad | Bad | Good | Good | Bad | Bad |
# Plot (example) distributions for two features
print("Distribution of ExternalRiskEstimate and NumSatisfactoryTrades columns:")
hist = df.hist(column=['ExternalRiskEstimate', 'NumSatisfactoryTrades'], bins=10)
Distribution of ExternalRiskEstimate and NumSatisfactoryTrades columns:
We will first process the HELOC dataset before using it to train an NN model that can predict the target variable RiskPerformance. The HELOC dataset is a tabular dataset with numerical values. However, some of the values are negative and need to be filtered. The processed data is stored in the file heloc.npz for easy access. The dataset is also normalized for training.
The data processing and model building is very similar to the Loan Officer persona above, where ProtoDash was the method of choice. We repeat these steps here so that both the use cases can be run independently.
# Clean data and split dataset into train/test
PROCESS_DATA = False

if (PROCESS_DATA):
    (Data, x_train, x_test, y_train_b, y_test_b) = heloc.split()
    np.savez('heloc.npz', Data=Data, x_train=x_train, x_test=x_test, y_train_b=y_train_b, y_test_b=y_test_b)
else:
    heloc = np.load('heloc.npz', allow_pickle=True)
    Data = heloc['Data']
    x_train = heloc['x_train']
    x_test = heloc['x_test']
    y_train_b = heloc['y_train_b']
    y_test_b = heloc['y_test_b']
Z = np.vstack((x_train, x_test))
Zmax = np.max(Z, axis=0)
Zmin = np.min(Z, axis=0)

# Normalize an array of samples to the range [-0.5, 0.5]
def normalize(V):
    VN = (V - Zmin) / (Zmax - Zmin)
    VN = VN - 0.5
    return VN

# Rescale a normalized sample back to the original feature scale
def rescale(X):
    return np.multiply(X + 0.5, Zmax - Zmin) + Zmin

N = normalize(Z)
xn_train = N[0:x_train.shape[0], :]
xn_test = N[x_train.shape[0]:, :]
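As an optional sanity check (not part of the original notebook, and assuming no feature column is constant, which would make the normalization divide by zero), rescale should invert normalize up to floating-point error:
# Round-trip check: rescaling the normalized data should recover the original values
assert np.allclose(rescale(N), Z, atol=1e-6)
print("Normalized value range: [%.2f, %.2f]" % (N.min(), N.max()))  # should lie within [-0.5, 0.5]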
# NN with no softmax on the output layer: the network outputs raw logits,
# and the softmax is applied inside the custom loss function defined below
def nn_small():
    model = Sequential()
    model.add(Dense(10, input_dim=23, kernel_initializer='normal', activation='relu'))
    model.add(Dense(2, kernel_initializer='normal'))
    return model
# Set random seeds for repeatability
np.random.seed(1)
tf.set_random_seed(2)

class_names = ['Bad', 'Good']

# loss function
def fn(correct, predicted):
    return tf.nn.softmax_cross_entropy_with_logits(labels=correct, logits=predicted)

# compile and print model summary
nn = nn_small()
nn.compile(loss=fn, optimizer='adam', metrics=['accuracy'])
nn.summary()
# train model or load a trained model
TRAIN_MODEL = False

if (TRAIN_MODEL):
    nn.fit(xn_train, y_train_b, batch_size=128, epochs=500, verbose=1, shuffle=False)
    nn.save_weights("heloc_nnsmall.h5")
else:
    nn.load_weights("heloc_nnsmall.h5")

# evaluate model accuracy
score = nn.evaluate(xn_train, y_train_b, verbose=0)  # Compute training set accuracy
#print('Train loss:', score[0])
print('Train accuracy:', score[1])
score = nn.evaluate(xn_test, y_test_b, verbose=0)  # Compute test set accuracy
#print('Test loss:', score[0])
print('Test accuracy:', score[1])
Model: "sequential_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_3 (Dense) (None, 10) 240 _________________________________________________________________ dense_4 (Dense) (None, 2) 22 ================================================================= Total params: 262 Trainable params: 262 Non-trainable params: 0 _________________________________________________________________ Train accuracy: 0.7387545704841614 Test accuracy: 0.7224473357200623
Given the trained NN model for deciding on loan approvals, let us first examine an applicant whose application was denied and determine what minimal changes to their application would lead to approval (i.e., find pertinent negatives). We will then look at another applicant whose loan was approved and identify the minimal set of feature values that would, by themselves, still lead to the positive outcome (i.e., find pertinent positives).
To compute a pertinent negative, the CEM explainer constructs a profile that is close to the original applicant's but for which the HELOC decision is different, altering a minimal set of features by a minimal (positive) amount. This helps a user whose loan application was initially rejected understand what changes would get it accepted.
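The optimization itself is implemented inside CEMExplainer. As a rough sketch only (hypothetical names, not the AIX360 API), the kind of objective minimized over a perturbation delta added to the normalized input x looks like the following, which is why the cells below set a confidence margin (arg_kappa), an L1 weight (arg_beta), an initial attack coefficient (arg_init_const), and an auto-encoder weight (arg_gamma):
import numpy as np

# Illustrative sketch of a pertinent-negative objective (not the AIX360 implementation).
# predict_logits is assumed to map a (1, 23) array to a (1, 2) array of class logits.
def pn_loss_sketch(predict_logits, x, delta, orig_class, c, kappa, beta):
    logits = predict_logits(x + delta)[0]
    target_score = logits[orig_class]                    # logit of the original ("Bad") class
    max_other = np.max(np.delete(logits, orig_class))    # best competing class logit
    attack = max(0.0, target_score - max_other + kappa)  # hinge: the competing class should win by at least kappa
    l1 = np.abs(delta).sum()                             # favors changing few features
    l2 = float((delta ** 2).sum())                       # keeps each change small
    # In the actual method the changes are further constrained (e.g., to be non-negative),
    # and an optional auto-encoder term weighted by gamma keeps x + delta realistic.
    return c * attack + beta * l1 + l2
The attack coefficient c is adjusted over arg_b rounds, visible further below in the log output as const:[10.], const:[5.], const:[2.5], ..., searching for the smallest weight that still flips the predicted class.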
# Some interesting user samples to try: 2344 449 1168 1272
idx = 1272
X = xn_test[idx].reshape((1,) + xn_test[idx].shape)
print("Computing PN for Sample:", idx)
print("Prediction made by the model:", nn.predict_proba(X))
print("Prediction probabilities:", class_names[np.argmax(nn.predict_proba(X))])
print("")
mymodel = KerasClassifier(nn)
explainer = CEMExplainer(mymodel)
arg_mode = 'PN' # Find pertinent negatives
arg_max_iter = 1000 # Maximum number of iterations to search for the optimal PN for given parameter settings
arg_init_const = 10.0 # Initial coefficient value for main loss term that encourages class change
arg_b = 9 # No. of updates to the coefficient of the main loss term
arg_kappa = 0.2 # Minimum confidence gap between the PN's (changed) class probability and the original class's probability
arg_beta = 1e-1 # Controls sparsity of the solution (L1 loss)
arg_gamma = 100 # Controls how much to adhere to a (optionally trained) auto-encoder
my_AE_model = None # Pointer to an auto-encoder
arg_alpha = 0.01 # Penalizes L2 norm of the solution
arg_threshold = 1. # Automatically turn off features <= arg_threshold if arg_threshold < 1
arg_offset = 0.5 # The explainer assumes the classifier was trained on data normalized
                 # to the [-arg_offset, arg_offset] range, where arg_offset is 0 or 0.5
# Find PN for applicant 1272
(adv_pn, delta_pn, info_pn) = explainer.explain_instance(X, arg_mode, my_AE_model, arg_kappa, arg_b,
                                                         arg_max_iter, arg_init_const, arg_beta, arg_gamma,
                                                         arg_alpha, arg_threshold, arg_offset)
Computing PN for Sample: 1272
Prediction made by the model: [[ 0.40682057 -0.391679 ]]
Predicted class: Bad
iter:0 const:[10.] Loss_Overall:0.2935, Loss_Attack:0.0000 Loss_L2Dist:0.2065, Loss_L1Dist:0.8703, AE_loss:0.0 target_lab_score:-1.1559, max_nontarget_lab_score:1.3184
iter:500 const:[10.] Loss_Overall:0.1706, Loss_Attack:0.0000 Loss_L2Dist:0.1153, Loss_L1Dist:0.5534, AE_loss:0.0 target_lab_score:-0.7383, max_nontarget_lab_score:0.8658
iter:0 const:[5.] Loss_Overall:0.0668, Loss_Attack:0.0000 Loss_L2Dist:0.0368, Loss_L1Dist:0.3000, AE_loss:0.0 target_lab_score:-0.2295, max_nontarget_lab_score:0.3076
iter:500 const:[5.] Loss_Overall:1.0819, Loss_Attack:1.0453 Loss_L2Dist:0.0219, Loss_L1Dist:0.1478, AE_loss:0.0 target_lab_score:0.0316, max_nontarget_lab_score:0.0226
iter:0 const:[2.5] Loss_Overall:2.0533, Loss_Attack:2.0489 Loss_L2Dist:0.0011, Loss_L1Dist:0.0335, AE_loss:0.0 target_lab_score:0.3218, max_nontarget_lab_score:-0.2978
iter:500 const:[2.5] Loss_Overall:2.4962, Loss_Attack:2.4962 Loss_L2Dist:0.0000, Loss_L1Dist:0.0000, AE_loss:0.0 target_lab_score:0.4068, max_nontarget_lab_score:-0.3917
iter:0 const:[3.75] Loss_Overall:1.1392, Loss_Attack:1.1129 Loss_L2Dist:0.0113, Loss_L1Dist:0.1500, AE_loss:0.0 target_lab_score:0.0727, max_nontarget_lab_score:-0.0241
iter:500 const:[3.75] Loss_Overall:0.2901, Loss_Attack:0.2420 Loss_L2Dist:0.0306, Loss_L1Dist:0.1749, AE_loss:0.0 target_lab_score:-0.0370, max_nontarget_lab_score:0.0984
iter:0 const:[3.125] Loss_Overall:1.9299, Loss_Attack:1.9179 Loss_L2Dist:0.0045, Loss_L1Dist:0.0750, AE_loss:0.0 target_lab_score:0.2238, max_nontarget_lab_score:-0.1899
iter:500 const:[3.125] Loss_Overall:2.4110, Loss_Attack:2.4049 Loss_L2Dist:0.0018, Loss_L1Dist:0.0429, AE_loss:0.0 target_lab_score:0.2980, max_nontarget_lab_score:-0.2715
iter:0 const:[2.8125] Loss_Overall:2.0618, Loss_Attack:2.0543 Loss_L2Dist:0.0025, Loss_L1Dist:0.0502, AE_loss:0.0 target_lab_score:0.2794, max_nontarget_lab_score:-0.2510
iter:500 const:[2.8125] Loss_Overall:2.7157, Loss_Attack:2.7151 Loss_L2Dist:0.0000, Loss_L1Dist:0.0062, AE_loss:0.0 target_lab_score:0.3911, max_nontarget_lab_score:-0.3743
iter:0 const:[2.65625] Loss_Overall:2.0645, Loss_Attack:2.0585 Loss_L2Dist:0.0018, Loss_L1Dist:0.0419, AE_loss:0.0 target_lab_score:0.3006, max_nontarget_lab_score:-0.2744
iter:500 const:[2.65625] Loss_Overall:0.3235, Loss_Attack:0.2788 Loss_L2Dist:0.0280, Loss_L1Dist:0.1673, AE_loss:0.0 target_lab_score:-0.0178, max_nontarget_lab_score:0.0772
iter:0 const:[2.734375] Loss_Overall:2.0649, Loss_Attack:2.0582 Loss_L2Dist:0.0021, Loss_L1Dist:0.0460, AE_loss:0.0 target_lab_score:0.2900, max_nontarget_lab_score:-0.2627
iter:500 const:[2.734375] Loss_Overall:1.6931, Loss_Attack:1.6808 Loss_L2Dist:0.0052, Loss_L1Dist:0.0719, AE_loss:0.0 target_lab_score:0.2244, max_nontarget_lab_score:-0.1903
iter:0 const:[2.7734375] Loss_Overall:2.0638, Loss_Attack:2.0567 Loss_L2Dist:0.0023, Loss_L1Dist:0.0481, AE_loss:0.0 target_lab_score:0.2847, max_nontarget_lab_score:-0.2568
iter:500 const:[2.7734375] Loss_Overall:2.2875, Loss_Attack:2.2832 Loss_L2Dist:0.0011, Loss_L1Dist:0.0328, AE_loss:0.0 target_lab_score:0.3235, max_nontarget_lab_score:-0.2997
Let us start by examining the denied application of applicant 1272. The pertinent negative computed above shows how the decision could have been different through minimal changes to the applicant's profile, and we also indicate how important the different features are in producing this change in application status. The column (X_PN - X) in the table below gives the deviation required in each feature to produce the change. For instance, the first row shows that increasing ExternalRiskEstimate from 65 to roughly 81 (a change of +15.84) is part of the minimal change that flips the prediction to "Good". A human-friendly explanation based on these deviations follows the feature importance plot.
Xpn = adv_pn
classes = [ class_names[np.argmax(nn.predict_proba(X))], class_names[np.argmax(nn.predict_proba(Xpn))], 'NIL' ]
print("Sample:", idx)
print("prediction(X)", nn.predict_proba(X), class_names[np.argmax(nn.predict_proba(X))])
print("prediction(Xpn)", nn.predict_proba(Xpn), class_names[np.argmax(nn.predict_proba(Xpn))] )
X_re = rescale(X) # Convert values back to original scale from normalized
Xpn_re = rescale(Xpn)
Xpn_re = np.around(Xpn_re.astype(np.double), 2)
delta_re = Xpn_re - X_re
delta_re = np.around(delta_re.astype(np.double), 2)
delta_re[np.absolute(delta_re) < 1e-4] = 0
X3 = np.vstack((X_re, Xpn_re, delta_re))
dfre = pd.DataFrame.from_records(X3) # Create dataframe to display original point, PN and difference (delta)
dfre[23] = classes
dfre.columns = df.columns
dfre.rename(index={0:'X',1:'X_PN', 2:'(X_PN - X)'}, inplace=True)
dfret = dfre.transpose()
# Highlight the features that change in the pertinent negative
def highlight_ce(s, col, ncols):
    if (type(s[col]) != str):
        if (s[col] > 0):
            return ['background-color: yellow'] * ncols
    return ['background-color: white'] * ncols

dfret.style.apply(highlight_ce, col='(X_PN - X)', ncols=3, axis=1)
Sample: 1272
prediction(X) [[ 0.40682057 -0.391679 ]] Bad
prediction(Xpn) [[-0.16797018 0.24030855]] Good
 | X | X_PN | (X_PN - X) |
---|---|---|---|
ExternalRiskEstimate | 65.000000 | 80.840000 | 15.840000 |
MSinceOldestTradeOpen | 256.000000 | 256.000000 | 0.000000 |
MSinceMostRecentTradeOpen | 15.000000 | 15.000000 | 0.000000 |
AverageMInFile | 52.000000 | 65.620000 | 13.620000 |
NumSatisfactoryTrades | 17.000000 | 21.400000 | 4.400000 |
NumTrades60Ever2DerogPubRec | 0.000000 | 0.000000 | 0.000000 |
NumTrades90Ever2DerogPubRec | 0.000000 | 0.000000 | 0.000000 |
PercentTradesNeverDelq | 100.000000 | 100.000000 | 0.000000 |
MSinceMostRecentDelq | 0.000000 | 0.000000 | 0.000000 |
MaxDelq2PublicRecLast12M | 7.000000 | 7.000000 | 0.000000 |
MaxDelqEver | 8.000000 | 8.000000 | 0.000000 |
NumTotalTrades | 19.000000 | 19.000000 | 0.000000 |
NumTradesOpeninLast12M | 0.000 |