econml.dml._rlearner

The R Learner is an approach for estimating flexible non-parametric models of conditional average treatment effects in the setting with no unobserved confounders. The method is based on the idea of Neyman orthogonality and estimates a CATE whose mean squared error is robust to the estimation errors of auxiliary submodels that also need to be estimated from data:

  1. the outcome or regression model

  2. the treatment, propensity, policy, or logging policy model

References

Xinkun Nie, Stefan Wager (2017). Quasi-Oracle Estimation of Heterogeneous Treatment Effects.

https://arxiv.org/abs/1712.04912

Dylan Foster, Vasilis Syrgkanis (2019). Orthogonal Statistical Learning.

ACM Conference on Learning Theory. https://arxiv.org/abs/1901.09036

Chernozhukov et al. (2017). Double/debiased machine learning for treatment and structural parameters.

The Econometrics Journal. https://arxiv.org/abs/1608.00060

Classes

_RLearner(*, discrete_outcome, ...[, ...])

Base class for CATE learners that residualize treatment and outcome and run residual on residual regression.

class econml.dml._rlearner._RLearner(*, discrete_outcome, discrete_treatment, treatment_featurizer, categories, cv, random_state, mc_iters=None, mc_agg='mean', allow_missing=False, use_ray=False, ray_remote_func_options=None)[source]

Bases: econml._ortho_learner._OrthoLearner

Base class for CATE learners that residualize treatment and outcome and run residual on residual regression. The estimator is a special case of an _OrthoLearner estimator, so it follows the two stage process, where a set of nuisance functions are estimated in the first stage in a crossfitting manner and a final stage estimates the CATE model. See the documentation of _OrthoLearner for a description of this two stage process.

In this estimator, the CATE is estimated by using the following estimating equations:

\[Y - \E[Y | X, W] = \Theta(X) \cdot (T - \E[T | X, W]) + \epsilon\]

Thus if we estimate the nuisance functions \(q(X, W) = \E[Y | X, W]\) and \(f(X, W)=\E[T | X, W]\) in the first stage, we can estimate the final stage CATE for each treatment t by running a regression that minimizes the residual-on-residual square loss:

\[\hat{\theta} = \arg\min_{\Theta} \E_n\left[ (\tilde{Y} - \Theta(X) \cdot \tilde{T})^2 \right]\]

where \(\tilde{Y}=Y - \E[Y | X, W]\) and \(\tilde{T}=T-\E[T | X, W]\) denote the residual outcome and residual treatment.
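For intuition only, the following minimal sketch (plain numpy/scikit-learn on synthetic data, not part of the EconML API, and omitting cross-fitting) carries out the same two stages by hand: residualize Y and T on (X, W), then regress the outcome residual on the treatment residual interacted with features of X.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 1))                   # effect modifiers
W = rng.normal(size=(n, 2))                   # controls
T = 0.5 * W[:, 0] + rng.normal(size=n)        # continuous treatment
theta = 1.0 + 2.0 * X[:, 0]                   # true CATE: Theta(X) = 1 + 2*X
Y = theta * T + W[:, 1] + rng.normal(size=n)

XW = np.hstack([X, W])
# First stage (no cross-fitting here, for brevity): q(X, W) = E[Y|X, W] and f(X, W) = E[T|X, W]
Y_res = Y - LinearRegression().fit(XW, Y).predict(XW)
T_res = T - LinearRegression().fit(XW, T).predict(XW)

# Final stage: model Theta(X) as linear in [1, X], so the regressors are
# [T_res, X * T_res]; the fitted coefficients estimate the CATE parameters.
final = LinearRegression(fit_intercept=False).fit(
    np.column_stack([T_res, X[:, 0] * T_res]), Y_res)
print(final.coef_)   # should be close to [1., 2.] for this synthetic DGP

The _RLearner automates exactly this pattern, but with user-supplied first-stage and final-stage models and with the residuals computed out-of-fold via cross-fitting.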

Parameters
  • discrete_outcome (bool) – Whether the outcome should be treated as binary

  • discrete_treatment (bool) – Whether the treatment values should be treated as categorical, rather than continuous, quantities

  • treatment_featurizer (transformer or None) – Must support fit_transform and transform. Used to create composite treatment in the final CATE regression. The final CATE will be trained on the outcome of featurizer.fit_transform(T). If featurizer=None, then CATE is trained on T.

  • categories (‘auto’ or list) – The categories to use when encoding discrete treatments (or ‘auto’ to use the unique sorted values). The first category will be treated as the control treatment.

  • cv (int, cross-validation generator or an iterable) – Determines the cross-validation splitting strategy. Possible inputs for cv are:

    • None, to use the default 3-fold cross-validation,

    • integer, to specify the number of folds.

    • CV splitter

    • An iterable yielding (train, test) splits as arrays of indices.

    For integer/None inputs, if the treatment is discrete, StratifiedKFold is used, else KFold is used (with a random shuffle in either case).

    Unless an iterable is used, we call split(concat[W, X], T) to generate the splits. If all W, X are None, then we call split(ones((T.shape[0], 1)), T).

  • random_state (int, RandomState instance or None) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

  • mc_iters (int, optional) – The number of times to rerun the first stage models to reduce the variance of the nuisances.

  • mc_agg ({‘mean’, ‘median’}, default ‘mean’) – How to aggregate the nuisance value for each sample across the mc_iters monte carlo iterations of cross-fitting.

  • allow_missing (bool) – Whether to allow missing values in X, W. If True, will need to supply nuisance models that can handle missing values.

  • use_ray (bool, default False) – Whether to use Ray to speed up the cross-fitting step.

  • ray_remote_func_options (dict, optional) – Options to pass to ray.remote function decorator. see more at https://docs.ray.io/en/latest/ray-core/api/doc/ray.remote.html

Examples

The example code below implements a very simple version of the double machine learning method on top of the _RLearner class, for expository purposes. For a more elaborate implementation of a Double Machine Learning child class of this class, check out DML and its child classes:

import numpy as np
from sklearn.linear_model import LinearRegression
from econml.dml._rlearner import _RLearner
from econml.sklearn_extensions.model_selection import SingleModelSelector
from sklearn.base import clone
class ModelFirst:
    def __init__(self, model):
        self._model = clone(model, safe=False)
    def fit(self, X, W, Y, sample_weight=None):
        self._model.fit(np.hstack([X, W]), Y)
        return self
    def predict(self, X, W):
        return self._model.predict(np.hstack([X, W]))
class ModelSelector(SingleModelSelector):
    def __init__(self, model):
        self._model = ModelFirst(model)
    def train(self, is_selecting, folds, X, W, Y, sample_weight=None):
        self._model.fit(X, W, Y, sample_weight=sample_weight)
        return self
    @property
    def best_model(self):
        return self._model
    @property
    def best_score(self):
        return 0
class ModelFinal:
    def fit(self, X, T, T_res, Y_res, sample_weight=None, freq_weight=None, sample_var=None):
        self.model = LinearRegression(fit_intercept=False).fit(X * T_res.reshape(-1, 1),
                                                               Y_res)
        return self
    def predict(self, X):
        return self.model.predict(X)
class RLearner(_RLearner):
    def _gen_model_y(self):
        return ModelSelector(LinearRegression())
    def _gen_model_t(self):
        return ModelSelector(LinearRegression())
    def _gen_rlearner_model_final(self):
        return ModelFinal()
np.random.seed(123)
X = np.random.normal(size=(1000, 3))
y = X[:, 0] + X[:, 1] + np.random.normal(0, 0.01, size=(1000,))
est = RLearner(cv=2, discrete_outcome=False, discrete_treatment=False,
               treatment_featurizer=None, categories='auto', random_state=None)
est.fit(y, X[:, 0], X=np.ones((X.shape[0], 1)), W=X[:, 1:])
>>> est.const_marginal_effect(np.ones((1,1)))
array([0.999631...])
>>> est.effect(np.ones((1,1)), T0=0, T1=10)
array([9.996314...])
>>> est.score(y, X[:, 0], X=np.ones((X.shape[0], 1)), W=X[:, 1:])
9.73638006...e-05
>>> est.rlearner_model_final_.model
LinearRegression(fit_intercept=False)
>>> est.rlearner_model_final_.model.coef_
array([0.999631...])
>>> est.score_
9.82623204...e-05
>>> [mdl._model for mdls in est.models_y for mdl in mdls]
[LinearRegression(), LinearRegression()]
>>> [mdl._model for mdls in est.models_t for mdl in mdls]
[LinearRegression(), LinearRegression()]
models_y

A nested list of instances of the model_y object. The number of sublists equals the number of monte carlo iterations; each element in a sublist corresponds to a cross-fitting fold and is the model instance that was fitted for that training fold.

Type

nested list of objects of type(model_y)

models_t

A nested list of instances of the model_t object. The number of sublists equals the number of monte carlo iterations; each element in a sublist corresponds to a cross-fitting fold and is the model instance that was fitted for that training fold.

Type

nested list of objects of type(model_t)

rlearner_model_final_

An instance of the model_final object that was fitted after calling fit.

Type

object of type(model_final)

score_

The MSE in the final residual on residual regression:

\[\frac{1}{n} \sum_{i=1}^n (Y_i - \hat{E}[Y|X_i, W_i] - \hat{\theta}(X_i)\cdot (T_i - \hat{E}[T|X_i, W_i]))^2\]

If sample_weight is not None at fit time, then a weighted average is returned. If the outcome Y is multidimensional, then the average of the MSEs for each dimension of Y is returned.

Type

float

nuisance_scores_y

The out-of-sample scores for each outcome model

Type

nested list of float

nuisance_scores_t

The out-of-sample scores for each treatment model

Type

nested list of float

ate(X=None, *, T0=0, T1=1)

Calculate the average treatment effect \(E_X[\tau(X, T0, T1)]\).

The effect is calculated between the two treatment points and is averaged over the population of X variables.

Parameters
  • T0 ((m, d_t) matrix or vector of length m) – Base treatments for each sample

  • T1 ((m, d_t) matrix or vector of length m) – Target treatments for each sample

  • X ((m, d_x) matrix, optional) – Features for each sample

Returns

τ – Average treatment effects on each outcome. Note that when Y is a vector rather than a 2-dimensional array, the result will be a scalar

Return type

float or (d_y,) array
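A usage sketch, continuing the toy RLearner fitted in the Examples section above (the printed value is approximate, not an exact doctest output):

# Average effect of moving T from 0 to 1, averaged over these feature rows;
# since Y is a vector here, the result is a scalar.
X_test = np.ones((5, 1))
print(est.ate(X_test, T0=0, T1=1))   # roughly 1.0 for the toy data above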

ate_inference(X=None, *, T0=0, T1=1)

Inference results for the quantity \(E_X[\tau(X, T0, T1)]\) produced by the model. Available only when inference is not None, when calling the fit method.

Parameters
  • X ((m, d_x) matrix, optional) – Features for each sample

  • T0 ((m, d_t) matrix or vector of length m, default 0) – Base treatments for each sample

  • T1 ((m, d_t) matrix or vector of length m, default 1) – Target treatments for each sample

Returns

PopulationSummaryResults – The inference results instance contains prediction and prediction standard error and can on demand calculate confidence interval, z statistic and p value. It can also output a dataframe summary of these inference results.

Return type

object

ate_interval(X=None, *, T0=0, T1=1, alpha=0.05)

Confidence intervals for the quantity \(E_X[\tau(X, T0, T1)]\) produced by the model. Available only when inference is not None, when calling the fit method.

Parameters
  • X ((m, d_x) matrix, optional) – Features for each sample

  • T0 ((m, d_t) matrix or vector of length m, default 0) – Base treatments for each sample

  • T1 ((m, d_t) matrix or vector of length m, default 1) – Target treatments for each sample

  • alpha (float in [0, 1], default 0.05) – The overall level of confidence of the reported interval. The alpha/2, 1-alpha/2 confidence interval is reported.

Returns

lower, upper – The lower and the upper bounds of the confidence interval for each quantity.

Return type

tuple(type of ate(X, T0, T1), type of ate(X, T0, T1))

cate_feature_names(feature_names=None)

Public interface for getting feature names.

To be overridden by estimators that apply transformations to the input features.

Parameters

feature_names (list of str of length X.shape[1] or None) – The names of the input features. If None and X is a dataframe, it defaults to the column names from the dataframe.

Returns

out_feature_names – Returns feature names.

Return type

list of str or None

cate_output_names(output_names=None)

Public interface for getting output names.

To be overridden by estimators that apply transformations to the outputs.

Parameters

output_names (list of str of length Y.shape[1] or None) – The names of the outcomes. If None and the Y passed to fit was a dataframe, it defaults to the column names from the dataframe.

Returns

output_names – Returns output names.

Return type

list of str

cate_treatment_names(treatment_names=None)

Get treatment names.

If the treatment is discrete or featurized, it will return expanded treatment names.

Parameters

treatment_names (list of str of length T.shape[1], optional) – The names of the treatments. If None and the T passed to fit was a dataframe, it defaults to the column names from the dataframe.

Returns

out_treatment_names – Returns (possibly expanded) treatment names.

Return type

list of str

const_marginal_ate(X=None)

Calculate the average constant marginal CATE \(E_X[\theta(X)]\).

Parameters

X ((m, d_x) matrix, optional) – Features for each sample.

Returns

theta – Average constant marginal CATE of each treatment on each outcome. Note that when Y or featurized-T (or T if treatment_featurizer is None) is a vector rather than a 2-dimensional array, the corresponding singleton dimensions in the output will be collapsed (e.g. if both are vectors, then the output of this method will also be a scalar)

Return type

(d_y, d_f_t) matrix where d_f_t is the dimension of the featurized treatment. If treatment_featurizer is None, d_f_t = d_t.

const_marginal_ate_inference(X=None)

Inference results for the quantities \(E_X[\theta(X)]\) produced by the model. Available only when inference is not None, when calling the fit method.

Parameters

X ((m, d_x) matrix, optional) – Features for each sample

Returns

PopulationSummaryResults – The inference results instance contains prediction and prediction standard error and can on demand calculate confidence interval, z statistic and p value. It can also output a dataframe summary of these inference results.

Return type

object

const_marginal_ate_interval(X=None, *, alpha=0.05)

Confidence intervals for the quantities \(E_X[\theta(X)]\) produced by the model. Available only when inference is not None, when calling the fit method.

Parameters
  • X ((m, d_x) matrix, optional) – Features for each sample

  • alpha (float in [0, 1], default 0.05) – The overall level of confidence of the reported interval. The alpha/2, 1-alpha/2 confidence interval is reported.

Returns

lower, upper – The lower and the upper bounds of the confidence interval for each quantity.

Return type

tuple(type of const_marginal_ate(X) , type of const_marginal_ate(X) )

const_marginal_effect(X=None)

Calculate the constant marginal CATE \(\theta(\cdot)\).

The marginal effect is conditional on a vector of features on a set of m test samples X[i].

Parameters

X ((m, d_x) matrix, optional) – Features for each sample.

Returns

theta – Constant marginal CATE of each featurized treatment on each outcome for each sample X[i]. Note that when Y or featurized-T (or T if treatment_featurizer is None) is a vector rather than a 2-dimensional array, the corresponding singleton dimensions in the output will be collapsed (e.g. if both are vectors, then the output of this method will also be a vector)

Return type

(m, d_y, d_f_t) matrix or (d_y, d_f_t) matrix if X is None where d_f_t is the dimension of the featurized treatment. If treatment_featurizer is None, d_f_t = d_t.

const_marginal_effect_inference(X=None)

Inference results for the quantities \(\theta(X)\) produced by the model. Available only when inference is not None, when calling the fit method.

Parameters

X ((m, d_x) matrix, optional) – Features for each sample

Returns

InferenceResults – The inference results instance contains prediction and prediction standard error and can on demand calculate confidence interval, z statistic and p value. It can also output a dataframe summary of these inference results.

Return type

object

const_marginal_effect_interval(X=None, *, alpha=0.05)

Confidence intervals for the quantities \(\theta(X)\) produced by the model. Available only when inference is not None, when calling the fit method.

Parameters
  • X ((m, d_x) matrix, optional) – Features for each sample

  • alpha (float in [0, 1], default 0.05) – The overall level of confidence of the reported interval. The alpha/2, 1-alpha/2 confidence interval is reported.

Returns

lower, upper – The lower and the upper bounds of the confidence interval for each quantity.

Return type

tuple(type of const_marginal_effect(X) , type of const_marginal_effect(X) )

effect(X=None, *, T0=0, T1=1)

Calculate the heterogeneous treatment effect \(\tau(X, T0, T1)\).

The effect is calculated between the two treatment points conditional on a vector of features on a set of m test samples \(\{T0_i, T1_i, X_i\}\).

Parameters
  • T0 ((m, d_t) matrix or vector of length m) – Base treatments for each sample

  • T1 ((m, d_t) matrix or vector of length m) – Target treatments for each sample

  • X ((m, d_x) matrix, optional) – Features for each sample

Returns

τ – Heterogeneous treatment effects on each outcome for each sample. Note that when Y is a vector rather than a 2-dimensional array, the corresponding singleton dimension will be collapsed (so this method will return a vector)

Return type

(m, d_y) matrix

effect_inference(X=None, *, T0=0, T1=1)

Inference results for the quantities \(\tau(X, T0, T1)\) produced by the model. Available only when inference is not None, when calling the fit method.

Parameters
  • X ((m, d_x) matrix, optional) – Features for each sample

  • T0 ((m, d_t) matrix or vector of length m, default 0) – Base treatments for each sample

  • T1 ((m, d_t) matrix or vector of length m, default 1) – Target treatments for each sample

Returns

InferenceResults – The inference results instance contains prediction and prediction standard error and can on demand calculate confidence interval, z statistic and p value. It can also output a dataframe summary of these inference results.

Return type

object

effect_interval(X=None, *, T0=0, T1=1, alpha=0.05)

Confidence intervals for the quantities \(\tau(X, T0, T1)\) produced by the model. Available only when inference is not None, when calling the fit method.

Parameters
  • X ((m, d_x) matrix, optional) – Features for each sample

  • T0 ((m, d_t) matrix or vector of length m, default 0) – Base treatments for each sample

  • T1 ((m, d_t) matrix or vector of length m, default 1) – Target treatments for each sample

  • alpha (float in [0, 1], default 0.05) – The overall level of confidence of the reported interval. The alpha/2, 1-alpha/2 confidence interval is reported.

Returns

lower, upper – The lower and the upper bounds of the confidence interval for each quantity.

Return type

tuple(type of effect(X, T0, T1), type of effect(X, T0, T1))

fit(Y, T, *, X=None, W=None, sample_weight=None, freq_weight=None, sample_var=None, groups=None, cache_values=False, inference=None)[source]

Estimate the counterfactual model from data, i.e. estimates function \(\theta(\cdot)\).

Parameters
  • Y ((n, d_y) matrix or vector of length n) – Outcomes for each sample

  • T ((n, d_t) matrix or vector of length n) – Treatments for each sample

  • X ((n, d_x) matrix, optional) – Features for each sample

  • W ((n, d_w) matrix, optional) – Controls for each sample

  • sample_weight ((n,) array_like, optional) – Individual weights for each sample. If None, it assumes equal weight.

  • freq_weight ((n, ) array_like of int, optional) – Weight for the observation. Observation i is treated as the mean outcome of freq_weight[i] independent observations. When sample_var is not None, this should be provided.

  • sample_var ({(n,), (n, d_y)} nd array_like, optional) – Variance of the outcome(s) of the original freq_weight[i] observations that were used to compute the mean outcome represented by observation i.

  • groups ((n,) vector, optional) – All rows corresponding to the same group will be kept together during splitting. If groups is not None, the cv argument passed to this class’s initializer must support a ‘groups’ argument to its split method.

  • cache_values (bool, default False) – Whether to cache inputs and first stage results, which will allow refitting a different final model

  • inference (str, Inference instance, or None) – Method for performing inference. This estimator supports ‘bootstrap’ (or an instance of BootstrapInference).

Returns

self

Return type

_RLearner instance
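As a hedged sketch of the groups argument, reusing the toy RLearner class and data from the Examples section above: a group-aware splitter such as scikit-learn's GroupKFold (whose split method accepts groups) can be passed as cv, so that rows sharing a group label are never split across cross-fitting folds.

from sklearn.model_selection import GroupKFold

# Hypothetical grouping: every pair of consecutive rows belongs to one unit.
groups = np.repeat(np.arange(X.shape[0] // 2), 2)
est_grouped = RLearner(cv=GroupKFold(n_splits=2), discrete_outcome=False,
                       discrete_treatment=False, treatment_featurizer=None,
                       categories='auto', random_state=None)
est_grouped.fit(y, X[:, 0], X=np.ones((X.shape[0], 1)), W=X[:, 1:], groups=groups)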

marginal_ate(T, X=None)

Calculate the average marginal effect \(E_{T, X}[\partial\tau(T, X)]\).

The marginal effect is calculated around a base treatment point and averaged over the population of X.

Parameters
  • T ((m, d_t) matrix) – Base treatments for each sample

  • X ((m, d_x) matrix, optional) – Features for each sample

Returns

grad_tau – Average marginal effects on each outcome. Note that when Y or T is a vector rather than a 2-dimensional array, the corresponding singleton dimensions in the output will be collapsed (e.g. if both are vectors, then the output of this method will be a scalar)

Return type

(d_y, d_t) array

marginal_ate_inference(T, X=None)

Inference results for the quantities \(E_{T,X}[\partial \tau(T, X)]\) produced by the model. Available only when inference is not None, when calling the fit method.

Parameters
  • T ((m, d_t) matrix) – Base treatments for each sample

  • X ((m, d_x) matrix, optional) – Features for each sample

Returns

PopulationSummaryResults – The inference results instance contains prediction and prediction standard error and can on demand calculate confidence interval, z statistic and p value. It can also output a dataframe summary of these inference results.

Return type

object

marginal_ate_interval(T, X=None, *, alpha=0.05)

Confidence intervals for the quantities \(E_{T,X}[\partial \tau(T, X)]\) produced by the model. Available only when inference is not None, when calling the fit method.

Parameters
  • T ((m, d_t) matrix) – Base treatments for each sample

  • X ((m, d_x) matrix, optional) – Features for each sample

  • alpha (float in [0, 1], default 0.05) – The overall level of confidence of the reported interval. The alpha/2, 1-alpha/2 confidence interval is reported.

Returns

lower, upper – The lower and the upper bounds of the confidence interval for each quantity.

Return type

tuple(type of marginal_ate(T, X), type of marginal_ate(T, X) )

marginal_effect(T, X=None)

Calculate the heterogeneous marginal effect \(\partial\tau(T, X)\).

The marginal effect is calculated around a base treatment point conditional on a vector of features on a set of m test samples \(\{T_i, X_i\}\). If treatment_featurizer is None, the base treatment is ignored in this calculation and the result is equivalent to const_marginal_effect.

Parameters
  • T ((m, d_t) matrix) – Base treatments for each sample

  • X ((m, d_x) matrix, optional) – Features for each sample

Returns

grad_tau – Heterogeneous marginal effects on each outcome for each sample. Note that when Y or T is a vector rather than a 2-dimensional array, the corresponding singleton dimensions in the output will be collapsed (e.g. if both are vectors, then the output of this method will also be a vector)

Return type

(m, d_y, d_t) array

marginal_effect_inference(T, X=None)

Inference results for the quantities \(\partial \tau(T, X)\) produced by the model. Available only when inference is not None, when calling the fit method.

Parameters
  • T ((m, d_t) matrix) – Base treatments for each sample

  • X ((m, d_x) matrix, optional) – Features for each sample

Returns

InferenceResults – The inference results instance contains prediction and prediction standard error and can on demand calculate confidence interval, z statistic and p value. It can also output a dataframe summary of these inference results.

Return type

object

marginal_effect_interval(T, X=None, *, alpha=0.05)

Confidence intervals for the quantities \(\partial \tau(T, X)\) produced by the model. Available only when inference is not None, when calling the fit method.

Parameters
  • T ((m, d_t) matrix) – Base treatments for each sample

  • X ((m, d_x) matrix, optional) – Features for each sample

  • alpha (float in [0, 1], default 0.05) – The overall level of confidence of the reported interval. The alpha/2, 1-alpha/2 confidence interval is reported.

Returns

lower, upper – The lower and the upper bounds of the confidence interval for each quantity.

Return type

tuple(type of marginal_effect(T, X), type of marginal_effect(T, X) )

refit_final(inference=None)

Estimate the counterfactual model using a new final model specification but with cached first stage results.

In order for this to succeed, fit must have been called with cache_values=True. This call will only refit the final model, using the current setting of any parameters that change the final stage estimation. If any parameters that change how the first stage nuisance estimates are computed have also been changed, they will have no effect; you need to call fit again to change the first stage estimation results.

Parameters

inference (inference method, optional) – The string or object that represents the inference method

Returns

self – This instance

Return type

object
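A minimal sketch, again reusing the toy RLearner from the Examples section above; it assumes that caching the first-stage results is sufficient for this base class, whereas in a real DML child class you would typically change a final-stage setting before refitting.

# Cache inputs and first-stage residuals so that only the final stage is re-run.
est.fit(y, X[:, 0], X=np.ones((X.shape[0], 1)), W=X[:, 1:], cache_values=True)
est.refit_final()   # regenerates and refits the final model on the cached residuals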

score(Y, T, X=None, W=None, sample_weight=None)[source]

Score the fitted CATE model on a new data set. Generates nuisance parameters for the new data set based on the residual nuisance models fitted at fit time, using the mean prediction of the models fitted across the different cross-fitting folds, and then calculates the MSE of the final residual-Y-on-residual-T regression.

If model_final does not have a score method, then an AttributeError is raised.

Parameters
  • Y ((n, d_y) matrix or vector of length n) – Outcomes for each sample

  • T ((n, d_t) matrix or vector of length n) – Treatments for each sample

  • X ((n, d_x) matrix, optional) – Features for each sample

  • W ((n, d_w) matrix, optional) – Controls for each sample

  • sample_weight ((n,) vector, optional) – Weights for each sample

Returns

score – The MSE of the final CATE model on the new data.

Return type

float

shap_values(X, *, feature_names=None, treatment_names=None, output_names=None, background_samples=100)

Shap values for the final stage models (const_marginal_effect).

Parameters
  • X ((m, d_x) matrix) – Features for each sample. Should be in the same shape of fitted X in final stage.

  • feature_names (list of str of length X.shape[1], optional) – The names of input features.

  • treatment_names (list, optional) – The names of the featurized treatments. In the discrete treatment scenario, the names should not include the name of the baseline treatment (i.e. the control treatment, which by default is the alphabetically smallest one)

  • output_names (list, optional) – The names of the outcomes.

  • background_samples (int , default 100) – How many samples to use to compute the baseline effect. If None then all samples are used.

Returns

shap_outs – A nested dictionary that uses each output name (e.g. ‘Y0’, ‘Y1’, … when output_names=None) and each treatment name (e.g. ‘T0’, ‘T1’, … when treatment_names=None) as keys and the shap_values Explanation object as the value. If the input data at fit time also contain metadata (e.g. are pandas DataFrames), then the column metadata for the treatments, outcomes and features are used instead of the above defaults (unless the user overrides this by explicitly passing the corresponding names).

Return type

nested dictionary of Explanation object

property dowhy

Get an instance of DoWhyWrapper to allow other functionality from the dowhy package (e.g. causal graph, refutation tests, etc.).

Returns

DoWhyWrapper – An instance of DoWhyWrapper

Return type

instance

property residuals_

A tuple (y_res, T_res, X, W) of the residuals from the first stage estimation, along with the associated X and W. Samples are not guaranteed to be in the same order as the input order.
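A usage sketch, assuming (as the cached-values machinery suggests) that fit was called with cache_values=True so that the first-stage residuals are retained; it reuses the toy RLearner from the Examples section above.

# Inspect the cached first-stage residuals after fitting with cache_values=True.
est.fit(y, X[:, 0], X=np.ones((X.shape[0], 1)), W=X[:, 1:], cache_values=True)
y_res, T_res, X_cached, W_cached = est.residuals_
print(y_res.shape, T_res.shape)   # one residual per training row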