Inverse probability weighting is a basic model for obtaining average effect estimates.
It calculates the probability of each sample belonging to its treatment group,
and uses the inverse of that probability as the sample's weight:
$$
w_i = \frac{1}{\Pr[A=a_i | X_i]}
$$
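For concreteness, here is a minimal sketch of this computation outside causallib, assuming X is a covariate DataFrame and a is a treatment Series; manual_ipw_weights is an illustrative helper, not part of the library:
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def manual_ipw_weights(X, a):
    """Sketch: w_i = 1 / Pr[A = a_i | X_i] from a probabilistic classifier."""
    model = LogisticRegression(solver="liblinear").fit(X, a)
    proba = model.predict_proba(X)              # one column per class
    cols = np.searchsorted(model.classes_, a)   # column of each sample's own class
    prob_of_own_class = proba[np.arange(len(a)), cols]
    return pd.Series(1.0 / prob_of_own_class, index=X.index)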
%matplotlib inline
from causallib.datasets import load_nhefs
from causallib.estimation import IPW
from causallib.evaluation import PropensityEvaluator
from sklearn.linear_model import LogisticRegression
The effect of quitting smoking on weight gain.
The data example is taken from Hernán and Robins' Causal Inference book.
data = load_nhefs()
data.X.join(data.a).join(data.y).head()
The causal model has a machine learning model at its core, provided as the learner
parameter.
This ML model will be used to predict the probability of quitting smoking given the covariates.
These probabilities will then be used to obtain $w_i$.
Then, we'll estimate the average balanced outcome using the Horvitz–Thompson estimator:
$$
\hat{E}[Y^a] = \frac{1}{\sum_{i:A_i=a}w_i} \cdot \sum_{i:A_i=a}w_i y_i
$$
Lastly, we'll use these average counterfactual outcome estimates to compute the effect: $$ \hat{E}[Y^1] - \hat{E}[Y^0] $$
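To make the estimator concrete, here is a hand-rolled version of the same formula (an illustrative sketch, not causallib's API; y, a, and w stand for outcome, treatment, and weight Series aligned by index):
def weighted_mean_outcome(y, a, w, treatment_value):
    """Weight-normalized mean of y within the group where A == treatment_value."""
    mask = a == treatment_value
    return (w[mask] * y[mask]).sum() / w[mask].sum()

# effect_estimate = weighted_mean_outcome(y, a, w, 1) - weighted_mean_outcome(y, a, w, 0)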
# Train:
learner = LogisticRegression(solver="liblinear")
ipw = IPW(learner)
ipw.fit(data.X, data.a)
# We can now predict the weight of each individual:
ipw.compute_weights(data.X, data.a).head()
# Estimate average outcome
outcomes = ipw.estimate_population_outcome(data.X, data.a, data.y)
outcomes
# Estimate the effect:
effect = ipw.estimate_effect(outcomes[1], outcomes[0])
effect
We can see that, on average, individuals who quit smoking gained 3.5 lbs
over the course of 11 years.
We just saw a simple example that hid many of the model's parameters.
We now dig a bit deeper into each stage.
Machine learning model:
Any scikit-learn model can be specified (even pipelines)
learner = LogisticRegression(penalty="l1", C=0.01, max_iter=500, solver='liblinear')
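For example, a learner wrapped in a pipeline works just as well, since IPW only requires a scikit-learn-compatible classifier (the scaling step here is an arbitrary illustration, not a requirement):
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A pipeline is a valid learner too; preprocessing happens inside the causal model:
pipeline_learner = make_pipeline(StandardScaler(), LogisticRegression(solver="liblinear"))
IPW(pipeline_learner).fit(data.X, data.a)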
The IPW model has two additional parameters:
clip_min, clip_max: caliper values used to trim very small or very large probabilities.
use_stabilized: whether to scale the weights by the treatment prevalence.
clip_min = 0.2
clip_max = 0.8
ipw = IPW(learner, clip_min=clip_min, clip_max=clip_max, use_stabilized=False)
ipw.fit(data.X, data.a);
Now we can predict the probability of quitting smoking (treatment_values=1)
and validate that our truncation worked:
probs = ipw.compute_propensity(data.X, data.a, treatment_values=1)
probs.between(clip_min, clip_max).all()
During the "predict" phase (i.e. computing weights or probabilities),
We can alter the parameters we placed during initiation:
probs = ipw.compute_propensity(data.X, data.a, treatment_values=1, clip_min=0.0, clip_max=1.0)
probs.between(clip_min, clip_max).all()
We can even predict stabilized weights.
However, we will get a warning.
This is because treatment prevalence should be estimated from the training data.
During fit, when the model had its initial values, use_stabilized was False (the default).
So when calling compute_weights now, the model will use the prevalence of the provided data
to estimate treatment prevalence.
This is not a big deal here, since we compute on the same data we trained on,
but that need not be the case in general.
(This warning would not exist if we redefined the model with use_stabilized=True
and re-trained it.)
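For reference, stabilization scales each inverse-propensity weight by the marginal treatment prevalence (this is the standard definition of stabilized IPW weights):
$$
w_i^{\text{stab}} = \frac{\Pr[A=a_i]}{\Pr[A=a_i \mid X_i]}
$$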
stabilized_weights = ipw.compute_weights(data.X, data.a, treatment_values=1,
clip_min=0.0, clip_max=1.0, use_stabilized=True)
weights = ipw.compute_weights(data.X, data.a, treatment_values=1,
clip_min=0.0, clip_max=1.0)
stabilized_weights.eq(weights).all()
Since IPW utilizes probabilities, for each sample we can get a probability (or weight) for each treatment value:
# ipw.compute_weight_matrix(data.X, data.a).head()
ipw.compute_propensity_matrix(data.X, data.a).head()
We can choose whether we wish for an additive (diff) or a multiplicative (ratio) effect.
(If the outcome y were probabilities, we could also ask for an odds ratio (or).)
Providing the weights w is optional; if not provided, the weights are simply
calculated again using the provided X.
outcomes = ipw.estimate_population_outcome(data.X, data.a, data.y, w=weights)
effects = ipw.estimate_effect(outcomes[1], outcomes[0], effect_types=["diff", "ratio"])
effects
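As a sanity check, these effect types reduce to simple arithmetic on the two potential-outcome estimates:
# Equivalent manual computation of the two effect types:
diff_effect = outcomes[1] - outcomes[0]   # additive effect
ratio_effect = outcomes[1] / outcomes[0]  # multiplicative effect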
We can also evaluate the performance of the IPW model.
evaluate_simple evaluates a fitted model on the provided dataset:
evaluator = PropensityEvaluator(ipw)
results = evaluator.evaluate_simple(data.X, data.a, data.y, plots=None)
results
results contains models, plots, and scores,
but since we did not ask for plots and did not refit the model,
our main interest is the scores.
We have both the prediction performance scores and
a Table-1-style summary of the covariates' standardized mean differences, with and without balancing:
results.scores.prediction_scores
results.scores.covariate_balance.head()
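For intuition, the standardized mean difference of a single covariate can be sketched by hand as below (a rough illustration; causallib's exact weighting and standard-deviation pooling may differ):
import numpy as np

def standardized_mean_difference(x, a, w):
    """(Weighted) difference in group means of covariate x, scaled by pooled std."""
    m1 = np.average(x[a == 1], weights=w[a == 1])
    m0 = np.average(x[a == 0], weights=w[a == 0])
    pooled_std = np.sqrt((x[a == 1].var() + x[a == 0].var()) / 2)
    return abs(m1 - m0) / pooled_std

# e.g., for a (hypothetical) covariate column "age":
# standardized_mean_difference(data.X["age"], data.a, weights)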
We can also check the general model specification, since evaluate_cv evaluates using cross-validation, refitting the model on each fold:
from sklearn.metrics import roc_auc_score, average_precision_score
plots = ["roc_curve", "pr_curve", "weight_distribution",
         "calibration", "covariate_balance_love", "covariate_balance_slope"]
metrics = {"roc_auc": roc_auc_score,
           "avg_precision": average_precision_score}
ipw = IPW(LogisticRegression(solver="liblinear"))
evaluator = PropensityEvaluator(ipw)
results = evaluator.evaluate_cv(data.X, data.a, data.y,
plots=plots, metrics_to_evaluate=metrics)
results.scores.prediction_scores
print(len(results.models))
results.models[2]