%reload_ext autoreload
%autoreload 2
%matplotlib inline
As of v0.27.x, ktrain supports causal inference using metalearners. We will use the well-studied Adult Census dataset from the UCI Machine Learning Repository, which contains U.S. census data from the early-to-mid 1990s. The objective is to estimate how much earning a PhD increases the probability of making over $50K in salary. The dataset serves purely as a demonstration here. Unlike conventional supervised machine learning, causal inference models typically have no ground truth unless you're using a simulated dataset. So, for illustration purposes, we will simply check whether our estimates agree with intuition, in addition to inspecting their robustness.
Let's begin by loading the dataset and creating a binary treatment (1 for PhD and 0 for no PhD).
!wget https://raw.githubusercontent.com/amaiya/ktrain/master/ktrain/tests/tabular_data/adults.csv -O /tmp/adults.csv
--2021-07-20 14:17:32--  https://raw.githubusercontent.com/amaiya/ktrain/master/ktrain/tests/tabular_data/adults.csv
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4573758 (4.4M) [text/plain]
Saving to: ‘/tmp/adults.csv’

/tmp/adults.csv     100%[===================>]   4.36M  26.3MB/s    in 0.2s

2021-07-20 14:17:32 (26.3 MB/s) - ‘/tmp/adults.csv’ saved [4573758/4573758]
import pandas as pd

df = pd.read_csv('/tmp/adults.csv')
df = df.rename(columns=lambda x: x.strip())
df = df.applymap(lambda x: x.strip() if isinstance(x, str) else x)
filter_set = ['Doctorate']  # a list avoids accidental substring matches
df['treatment'] = df['education'].apply(lambda x: 1 if x in filter_set else 0)
df.head()
Next, let's invoke the `causal_inference_model` function to create a `CausalInferenceModel` instance and call `fit` to estimate the individualized treatment effect for each row in this dataset. By default, a T-Learner metalearner is used with LightGBM models as base learners; these can be adjusted via parameters such as `method` and `learner`. Since this example is for illustration purposes only, we will ignore the `fnlwgt` column, which represents the number of people the census believes the entry represents. In practice, one might incorporate domain knowledge when choosing which variables to include or ignore. For instance, variables thought to be common effects of both the treatment and the outcome might be excluded as colliders. Finally, we also exclude the education-related columns, as they are already captured in the treatment.
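As background, a T-Learner simply fits two separate outcome models, one on the treated rows and one on the control rows, and takes the difference of their predictions as each row's individualized treatment effect. Below is a minimal sketch of this idea on synthetic data, using plain least-squares fits as stand-ins for the LightGBM base learners (the data, coefficients, and true effect of 2.0 are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))                # covariates
t = rng.integers(0, 2, size=n)             # binary treatment assignment
# synthetic outcome with a true, constant treatment effect of 2.0
y = X @ np.array([1.0, -0.5]) + 2.0 * t + rng.normal(scale=0.1, size=n)

def fit_ols(X, y):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    Xb = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

# T-Learner: one outcome model per treatment arm
w1 = fit_ols(X[t == 1], y[t == 1])         # model for the treated
w0 = fit_ols(X[t == 0], y[t == 0])         # model for the controls

Xb = np.column_stack([np.ones(n), X])
ite = Xb @ w1 - Xb @ w0                    # individualized treatment effects
ate = ite.mean()                           # close to the true effect of 2.0
```

Averaging the individualized effects recovers the average treatment effect, which is what `fit` computes under the hood for each row of the DataFrame.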
from ktrain.tabular.causalinference import causal_inference_model

cm = causal_inference_model(df,
                            treatment_col='treatment',
                            outcome_col='class',
                            ignore_cols=['fnlwgt', 'education', 'education-num']).fit()
replaced ['<=50K', '>50K'] in column "class" with [0, 1]
outcome column (categorical): class
treatment column: treatment
numerical/categorical covariates: ['age', 'workclass', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country']
preprocess time: 0.5897183418273926 sec
start fitting causal inference model
time to fit causal inference model: 0.9125957489013672 sec
As shown above, the dataset is automatically preprocessed and fitted very quickly.
The overall average treatment effect for all examples is 0.20. That is, having a PhD increases your probability of making over $50K by 20 percentage points.
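Conceptually, the overall average treatment effect is just the mean of the per-row individualized effects, and conditioning restricts that mean to a subgroup. A minimal sketch with hypothetical effect values (the numbers and the condition mask are made up):

```python
import numpy as np

# hypothetical individualized treatment effects, one per row
ite = np.array([0.15, 0.25, 0.30, 0.10, 0.20])
mask = np.array([True, False, True, False, True])  # some condition on attributes

ate = ite.mean()          # overall average treatment effect
cate = ite[mask].mean()   # effect conditioned on the subgroup
print(round(ate, 2))      # 0.2
```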
We can also compute treatment effects after conditioning on attributes. For those with Master's degrees, we find that the effect is lower than for the overall population, as expected, but still positive (qualitatively consistent with studies by the Census Bureau):
cm.estimate_ate(cm.df['education'] == 'Masters')
For those who dropped out of school, we find that the effect is higher (also as expected):
cm.estimate_ate(cm.df['education'].isin(['Preschool', '1st-4th', '5th-6th', '7th-8th', '9th', '10th', '12th']))
The CATEs above illustrate how causal effects vary across different subpopulations in the dataset. In fact, `CausalInferenceModel.df` stores a DataFrame representation of your dataset that has been augmented with a column called `treatment_effect`, which holds the individualized treatment effect for each row.
For instance, the individuals below are predicted to benefit the most from a PhD, with an increase of nearly 100 percentage points in probability (see the `treatment_effect` column).
drop_cols = ['fnlwgt', 'education-num', 'capital-gain', 'capital-loss']  # omitted for readability
cm.df.sort_values('treatment_effect', ascending=False).drop(drop_cols, axis=1).head()
Examining how the treatment effect varies across units in the population can be useful for a variety of applications. Uplift modeling, for example, is often used by companies for targeted promotions: they identify the individuals with the highest estimated treatment effects and target only them. Assessing the impact after such campaigns is yet another way to evaluate the model.
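A targeted campaign of this kind might, for instance, select the top decile by estimated uplift. A minimal sketch with hypothetical per-customer uplift scores (the scores and the 10% cutoff are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
uplift = rng.uniform(-0.1, 0.4, size=1000)  # hypothetical per-customer uplift estimates

k = len(uplift) // 10                  # target the top 10%
targets = np.argsort(uplift)[::-1][:k] # indices with the highest estimated uplift
print(len(targets))  # 100
```

By construction, the targeted group's average estimated uplift exceeds the population average, which is the whole point of uplift-based targeting.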
Finally, we can predict treatment effects on new examples, as long as they are formatted like the original DataFrame. For instance, let's make a prediction for one of the rows we already examined:
df_example = cm.df.sort_values('treatment_effect', ascending=False).iloc[[0]]  # top row from the table above
df_example
As mentioned above, there is no ground truth for this problem to validate our estimates. In the cells above, we simply inspected the estimates to see if they correspond to our intuition on what should happen. Another approach to validating causal estimates is to evaluate robustness to various data manipulations (i.e., sensitivity analysis). For instance, the Placebo Treatment test replaces the treatment with a random covariate. We see below that this causes our estimate to drop to near zero, which is expected and exactly what we want. Such tests might be used to compare different models.
| | Method | ATE | New ATE | New ATE LB | New ATE UB | Distance from Desired (should be near 0) |
|---|---|---|---|---|---|---|
| 0 | Subset Data (sample size @0.8) | 0.203406 | 0.194687 | 0.173465 | 0.215908 | -0.00871967 |
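The placebo idea itself can be sketched outside of ktrain: shuffle the treatment so it cannot have caused the outcome, re-estimate with the same two-model approach, and check that the estimate collapses toward zero. Below is a sketch on synthetic data with least-squares base models (the data-generating process and the true effect of 2.0 are made up; this illustrates the test, not ktrain's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
X = rng.normal(size=(n, 2))
t = rng.integers(0, 2, size=n)
y = X @ np.array([1.0, -0.5]) + 2.0 * t + rng.normal(scale=0.1, size=n)

def ate_t_learner(X, t, y):
    """Two least-squares outcome models; returns the mean difference of predictions."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w1 = np.linalg.lstsq(Xb[t == 1], y[t == 1], rcond=None)[0]
    w0 = np.linalg.lstsq(Xb[t == 0], y[t == 0], rcond=None)[0]
    return (Xb @ w1 - Xb @ w0).mean()

real_ate = ate_t_learner(X, t, y)             # recovers an effect close to 2.0

placebo_t = rng.permutation(t)                # placebo: shuffled treatment
placebo_ate = ate_t_learner(X, placebo_t, y)  # should collapse toward zero
```

Because the shuffled treatment is independent of the outcome, a large placebo estimate would signal that the model is picking up spurious structure rather than a causal effect.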
ktrain uses the CausalNLP package for inferring causality. For more information, see the CausalNLP documentation.