econml.solutions.causal_analysis.CausalAnalysis

class econml.solutions.causal_analysis.CausalAnalysis(feature_inds, categorical, heterogeneity_inds=None, feature_names=None, classification=False, upper_bound_on_cat_expansion=5, nuisance_models='linear', heterogeneity_model='linear', *, categories='auto', n_jobs=-1, verbose=0, cv=5, mc_iters=3, skip_cat_limit_checks=False, random_state=None)[source]

Bases: object

Note: this class is experimental and the API may evolve over our next few releases.

Gets causal importance of features.

Parameters
  • feature_inds (array_like of int, str, or bool) – The features for which to estimate causal effects, expressed as either column indices, column names, or boolean flags indicating which columns to pick

  • categorical (array_like of int, str, or bool) – The features which are categorical in nature, expressed as either column indices, column names, or boolean flags indicating which columns to pick

  • heterogeneity_inds (array_like of int, str, or bool, or None, or list of such arrays or Nones, default None) – If a 1d array, then whenever a heterogeneous (local) treatment effect model is estimated, only the features in this array will be used for heterogeneity. If a 2d array, then its first dimension should be len(feature_inds), and when estimating a local causal effect for target feature feature_inds[i], only the features in heterogeneity_inds[i] will be used for heterogeneity. If heterogeneity_inds[i] is None, then all features are used for heterogeneity when estimating the local causal effect for feature_inds[i]; likewise, if heterogeneity_inds[i] is [], then no features will be used. If heterogeneity_inds is None, then all features are used for heterogeneity for all target features, and if heterogeneity_inds is [], then no features will be.

  • feature_names (list of str, optional) – The names for all of the features in the data. Not necessary if the input will be a dataframe. If None and the input is a plain numpy array, generated feature names will be [‘X1’, ‘X2’, …].

  • classification (bool, default False) – Whether this is a classification (as opposed to regression) task

  • upper_bound_on_cat_expansion (int, default 5) – The maximum number of categorical values allowed, because they are expanded via one-hot encoding. If a feature has more than this many values, then a causal effect model is not fitted for that target feature and a warning flag is raised. The remainder of the models are fitted.

  • nuisance_models (one of {‘linear’, ‘automl’}, default ‘linear’) – The model class to use for nuisance estimation. Separate nuisance models are trained to predict the outcome and also each individual feature column from all of the other columns in the dataset as a prerequisite step before computing the actual causal effect for that feature column. If ‘linear’, then LassoCV (for regression) or LogisticRegressionCV (for classification) is used for these models. If ‘automl’, then model selection picks the best-performing among several different model classes for each model being trained using k-fold cross-validation, which requires additional computation.

  • heterogeneity_model (one of {‘linear’, ‘forest’}, default ‘linear’) – The type of model to use for the final heterogeneous treatment effect model. ‘linear’ means that the estimated treatment effect for a column will be a linear function of the heterogeneity features for that column, while ‘forest’ means that a forest model will be trained to compute the effect from those heterogeneity features instead.

  • categories (‘auto’ or list of (‘auto’ or list of values), default ‘auto’) – What categories to use for the categorical columns. If ‘auto’, then the categories will be inferred for all categorical columns; otherwise this argument should have as many entries as there are categorical columns, and each entry should be either ‘auto’ to infer the values for that column or the list of values for the column. If explicit values are provided, the first value is treated as the “control” value for that column against which other values are compared.

  • n_jobs (int, default -1) – Degree of parallelism to use when training models via joblib.Parallel

  • verbose (int, default 0) – Controls the verbosity when fitting and predicting.

  • cv (int, cross-validation generator or an iterable, default 5) – Determines the strategy for cross-fitting used when training causal models for each feature. Possible inputs for cv are:

    • integer, to specify the number of folds.

    • CV splitter

    • An iterable yielding (train, test) splits as arrays of indices.

    For integer inputs, StratifiedKFold is used if the treatment is discrete; otherwise, KFold is used (with a random shuffle in either case).

  • mc_iters (int, default 3) – The number of times to rerun the first stage models to reduce the variance of the causal model nuisances.

  • skip_cat_limit_checks (bool, default False) – By default, categorical features need to have several instances of each category in order for a model to be fit robustly. Setting this to True will skip these checks (although at least 2 instances will always be required for linear heterogeneity models, and 4 for forest heterogeneity models even in that case).

  • random_state (int, RandomState instance, or None, default None) – Controls the randomness of the estimator, including the cross-validation splits used for cross-fitting and any randomized components of the trained nuisance and heterogeneity models. To obtain deterministic behaviour across calls to fit, random_state has to be fixed to an integer.

Attributes

  • nuisance_models_ (str) – The nuisance models setting used for the most recent call to fit

  • heterogeneity_model (str) – The heterogeneity model setting used for the most recent call to fit

  • feature_names_ (list of str) – The list of feature names from the data in the most recent call to fit

  • trained_feature_indices_ (list of int) – The list of feature indices for which models were trained successfully

  • untrained_feature_indices_ (list of tuple of (int, str or Exception)) – The list of requested feature indices that could not be trained successfully, along with either a reason or the caught Exception for each
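
A minimal usage sketch (the synthetic DataFrame, column names, and outcome below are illustrative assumptions, not part of the API):

>>> import numpy as np
>>> import pandas as pd
>>> from econml.solutions.causal_analysis import CausalAnalysis
>>> rng = np.random.default_rng(0)
>>> n = 500
>>> # Hypothetical data: two numeric features and one categorical feature
>>> X = pd.DataFrame({"age": rng.uniform(20, 65, n),
...                   "income": rng.normal(50, 10, n),
...                   "group": rng.choice(["A", "B", "C"], n)})
>>> y = 0.5 * X["age"] + (X["group"] == "B") + rng.normal(size=n)
>>> ca = CausalAnalysis(feature_inds=["age", "income", "group"],
...                     categorical=["group"],
...                     nuisance_models="linear",
...                     heterogeneity_model="linear",
...                     random_state=123)
>>> ca.fit(X, y)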

__init__(feature_inds, categorical, heterogeneity_inds=None, feature_names=None, classification=False, upper_bound_on_cat_expansion=5, nuisance_models='linear', heterogeneity_model='linear', *, categories='auto', n_jobs=-1, verbose=0, cv=5, mc_iters=3, skip_cat_limit_checks=False, random_state=None)[source]

Methods

__init__(feature_inds, categorical[, ...])

cohort_causal_effect(Xtest, *[, alpha, ...])

Gets the average causal effects for a particular cohort defined by a population of X's.

fit(X, y[, warm_start])

Fits global and local causal effect models for each feature in feature_inds on the data.

global_causal_effect(*[, alpha, keep_all_levels])

Get the global causal effect for each feature as a pandas DataFrame.

individualized_policy(Xtest, feature_index, *)

Get individualized treatment policy based on the learned model for a feature, sorted by the predicted effect.

local_causal_effect(Xtest, *[, alpha, ...])

Gets the local causal effect for each feature as a pandas DataFrame.

plot_heterogeneity_tree(Xtest, feature_index, *)

Plot an effect heterogeneity tree using matplotlib.

plot_policy_tree(Xtest, feature_index, *[, ...])

Plot a recommended policy tree using matplotlib.

typical_treatment_value(feature_index)

Get the typical treatment value used for the specified feature.

whatif(X, Xnew, feature_index, y, *[, alpha])

Get counterfactual predictions when feature_index is changed to Xnew from its observational counterpart.

cohort_causal_effect(Xtest, *, alpha=0.05, keep_all_levels=False)[source]

Gets the average causal effects for a particular cohort defined by a population of X’s.

Parameters
  • Xtest (array_like) – The cohort samples for which to return the within-cohort average causal effects

  • alpha (float, default 0.05) – The confidence level of the confidence interval

  • keep_all_levels (bool, default False) – Whether to keep all levels of the output dataframe (‘outcome’, ‘feature’, and ‘feature_level’) even if there was only a single value for that level; by default single-valued levels are dropped.

Returns

cohort_effects – DataFrame with the following structure:

Columns

[‘point’, ‘stderr’, ‘zstat’, ‘pvalue’, ‘ci_lower’, ‘ci_upper’]

Index

[‘feature’, ‘feature_value’]

Rows

For each feature that is numerical, we have an entry with index [‘{feature_name}’, ‘num’], where ‘num’ is literally the string ‘num’ and feature_name is the input feature name. For each feature that is categorical, we have an entry with index [‘{feature_name}’, ‘{cat}v{base}’] where cat is the category value and base is the category used as baseline. If all features are numerical then the feature_value index is dropped in the dataframe, but not in the serialized dict.

Return type

DataFrame
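
A sketch of computing average effects within a sub-population, reusing the hypothetical ca and X from the class-level example above (the age cutoff is arbitrary):

>>> cohort = X[X["age"] > 40]
>>> cohort_effects = ca.cohort_causal_effect(cohort, alpha=0.1)
>>> list(cohort_effects.columns)
['point', 'stderr', 'zstat', 'pvalue', 'ci_lower', 'ci_upper']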

fit(X, y, warm_start=False)[source]

Fits global and local causal effect models for each feature in feature_inds on the data.

Parameters
  • X (array_like) – Feature data

  • y (array_like of shape (n,) or (n,1)) – Outcome. If classification=True, then y should take exactly two values; only binary classification is currently implemented, and any other outcome will raise an error.

  • warm_start (bool, default False) – If False, train models for each feature in feature_inds. If True, train only models for those features in feature_inds that had not already been trained by the previous call to fit and for which neither the corresponding heterogeneity_inds nor the automl flag has changed. If heterogeneity_inds has changed, then the final stage model for those features will be refit. If the automl flag has changed, then the whole model is refit regardless of the warm start flag.
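
A sketch of fitting and then checking which feature models were trained, continuing the hypothetical example above (the reported indices are illustrative):

>>> ca.fit(X, y)
>>> ca.trained_feature_indices_
[0, 1, 2]
>>> ca.untrained_feature_indices_
[]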

global_causal_effect(*, alpha=0.05, keep_all_levels=False)[source]

Get the global causal effect for each feature as a pandas DataFrame.

Parameters
  • alpha (float, default 0.05) – The confidence level of the confidence interval

  • keep_all_levels (bool, default False) – Whether to keep all levels of the output dataframe (‘outcome’, ‘feature’, and ‘feature_level’) even if there was only a single value for that level; by default single-valued levels are dropped.

Returns

global_effects – DataFrame with the following structure:

Columns

[‘point’, ‘stderr’, ‘zstat’, ‘pvalue’, ‘ci_lower’, ‘ci_upper’]

Index

[‘feature’, ‘feature_value’]

Rows

For each feature that is numerical, we have an entry with index [‘{feature_name}’, ‘num’], where ‘num’ is literally the string ‘num’ and feature_name is the input feature name. For each feature that is categorical, we have an entry with index [‘{feature_name}’, ‘{cat}v{base}’] where cat is the category value and base is the category used as baseline. If all features are numerical then the feature_value index is dropped in the dataframe, but not in the serialized dict.

Return type

DataFrame
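
A sketch of reading the global effects table, continuing the hypothetical example (index labels such as ('group', 'BvA') depend on the categories present in the data):

>>> global_effects = ca.global_causal_effect(alpha=0.05)
>>> # Numeric features are indexed as ('age', 'num'); categorical levels as ('group', 'BvA'), etc.
>>> global_effects.loc[("age", "num"), ["point", "ci_lower", "ci_upper"]]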

individualized_policy(Xtest, feature_index, *, n_rows=None, treatment_costs=0, alpha=0.05)[source]

Get individualized treatment policy based on the learned model for a feature, sorted by the predicted effect.

Parameters
  • Xtest (array_like) – Features

  • feature_index (int or str) – Index of the feature to be considered as treatment

  • n_rows (int, optional) – How many rows to return (all rows by default)

  • treatment_costs (array_like, default 0) – Cost of treatment, as a scalar value or per-sample. For continuous features this is the marginal cost per unit of treatment; for discrete features, this is the difference in cost between each of the non-default values and the default value (i.e., if non-scalar the array should have shape (n,d_t-1))

  • alpha (float in [0, 1], default 0.05) – Confidence level of the confidence intervals. A (1-alpha)*100% confidence interval is returned.

Returns

output – DataFrame containing the recommended treatment, effect, and confidence interval, sorted by effect

Return type

DataFrame
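
A sketch continuing the hypothetical example (the scalar treatment cost is an arbitrary assumption):

>>> policy = ca.individualized_policy(X, "group", treatment_costs=1.0, alpha=0.05)
>>> policy.head()  # recommended treatment, effect, and confidence interval, sorted by effect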

local_causal_effect(Xtest, *, alpha=0.05, keep_all_levels=False)[source]

Gets the local causal effect for each feature as a pandas DataFrame.

Parameters
  • Xtest (array_like) – The samples for which to return the causal effects

  • alpha (float, default 0.05) – The confidence level of the confidence interval

  • keep_all_levels (bool, default False) – Whether to keep all levels of the output dataframe (‘sample’, ‘outcome’, ‘feature’, and ‘feature_level’) even if there was only a single value for that level; by default single-valued levels are dropped.

Returns

local_effects – DataFrame with the following structure:

Columns

[‘point’, ‘stderr’, ‘zstat’, ‘pvalue’, ‘ci_lower’, ‘ci_upper’]

Index

[‘sample’, ‘feature’, ‘feature_value’]

Rows

For each feature that is numerical, we have an entry with index [‘{sampleid}’, ‘{feature_name}’, ‘num’], where ‘num’ is literally the string ‘num’, feature_name is the input feature name, and sampleid is the index of the sample in Xtest. For each feature that is categorical, we have an entry with index [‘{sampleid}’, ‘{feature_name}’, ‘{cat}v{base}’] where cat is the category value and base is the category used as baseline. If all features are numerical then the feature_value index is dropped in the dataframe, but not in the serialized dict.

Return type

DataFrame
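
A sketch of per-sample effects for a few rows, continuing the hypothetical example:

>>> local_effects = ca.local_causal_effect(X.head(10), alpha=0.05)
>>> # Estimated effect of the 'age' treatment for sample 0
>>> local_effects.loc[(0, "age", "num")]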

plot_heterogeneity_tree(Xtest, feature_index, *, max_depth=3, min_samples_leaf=2, min_impurity_decrease=0.0001, include_model_uncertainty=False, alpha=0.05)[source]

Plot an effect heterogeneity tree using matplotlib.

Parameters
  • Xtest (array_like) – Features

  • feature_index (int or str) – Index of the feature to be considered as treatment

  • max_depth (int, default 3) – maximum depth of the tree

  • min_samples_leaf (int, default 2) – minimum number of samples on each leaf

  • min_impurity_decrease (float, default 1e-4) – The minimum decrease in the impurity/uniformity of the causal effect that a split must achieve in order to be constructed

  • include_model_uncertainty (bool, default False) – Whether to include confidence interval information when building a simplified model of the CATE model.

  • alpha (float in [0, 1], default 0.05) – Confidence level of the confidence intervals displayed in the leaf nodes. A (1-alpha)*100% confidence interval is displayed.
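
A plotting sketch, continuing the hypothetical example:

>>> import matplotlib.pyplot as plt
>>> ca.plot_heterogeneity_tree(X, "age", max_depth=2)
>>> plt.show()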

plot_policy_tree(Xtest, feature_index, *, treatment_costs=0, max_depth=3, min_samples_leaf=2, min_value_increase=0.0001, include_model_uncertainty=False, alpha=0.05)[source]

Plot a recommended policy tree using matplotlib.

Parameters
  • Xtest (array_like) – Features

  • feature_index (int or str) – Index of the feature to be considered as treatment

  • treatment_costs (array_like, default 0) – Cost of treatment, as a scalar value or per-sample. For continuous features this is the marginal cost per unit of treatment; for discrete features, this is the difference in cost between each of the non-default values and the default value (i.e., if non-scalar the array should have shape (n,d_t-1))

  • max_depth (int, default 3) – maximum depth of the tree

  • min_samples_leaf (int, default 2) – minimum number of samples on each leaf

  • min_value_increase (float, default 1e-4) – The minimum increase in the policy value that a split must create in order to be constructed

  • include_model_uncertainty (bool, default False) – Whether to include confidence interval information when building a simplified model of the CATE model.

  • alpha (float in [0, 1], default 0.05) – Confidence level of the confidence intervals displayed in the leaf nodes. A (1-alpha)*100% confidence interval is displayed.
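
A plotting sketch, continuing the hypothetical example (zero treatment cost is the default):

>>> ca.plot_policy_tree(X, "group", treatment_costs=0, max_depth=2)
>>> plt.show()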

typical_treatment_value(feature_index)[source]

Get the typical treatment value used for the specified feature.

Parameters

feature_index (int or str) – The index of the feature to be considered as treatment

Returns

treatment_value – The treatment value considered ‘typical’ for this feature

Return type

float
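
For example, continuing the hypothetical example:

>>> t = ca.typical_treatment_value("age")  # a float whose magnitude depends on the data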

whatif(X, Xnew, feature_index, y, *, alpha=0.05)[source]

Get counterfactual predictions when feature_index is changed to Xnew from its observational counterpart.

Note that this only applies to regression use cases; for classification what-if analysis is not supported.

Parameters
  • X (array_like) – Features

  • Xnew (array_like) – New values of a single column of X

  • feature_index (int or str) – The index of the feature being varied to Xnew, either as a numeric index or the string name if the input is a dataframe

  • y (array_like) – Observed labels or outcome of a predictive model for baseline y values

  • alpha (float in [0, 1], default 0.05) – Confidence level of the confidence intervals. A (1-alpha)*100% confidence interval is returned.

Returns

y_new – The predicted outputs that would have been observed under the counterfactual features

Return type

DataFrame
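
A counterfactual sketch, continuing the hypothetical regression example (the five-unit shift is arbitrary):

>>> Xnew = X["age"] + 5  # what if every sample's 'age' were higher by 5?
>>> y_new = ca.whatif(X, Xnew, "age", y, alpha=0.05)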