econml.policy.PolicyForest
- class econml.policy.PolicyForest(n_estimators=100, *, criterion='neg_welfare', max_depth=None, min_samples_split=10, min_samples_leaf=5, min_weight_fraction_leaf=0.0, max_features='auto', min_impurity_decrease=0.0, max_samples=0.5, min_balancedness_tol=0.45, honest=True, n_jobs=-1, random_state=None, verbose=0, warm_start=False)[source]
Bases: econml._ensemble._ensemble.BaseEnsemble
Welfare maximization policy forest. Trains a forest to maximize the objective \(\frac{1}{n} \sum_i \sum_j a_j(X_i) \cdot y_{ij}\), where \(a(X)\) is constrained to take the value 1 on exactly one coordinate and 0 on all others. This corresponds to a policy optimization problem.
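For orientation, a minimal usage sketch on synthetic rewards; the data-generating choices below are illustrative, not part of the API:

import numpy as np
from econml.policy import PolicyForest

rng = np.random.default_rng(0)
n, n_features, n_treatments = 1000, 5, 3
X = rng.normal(size=(n, n_features))
# y[i, j]: reward of treatment j for sample i (e.g. a doubly robust estimate)
y = rng.normal(size=(n, n_treatments))
y[:, 1] += (X[:, 0] > 0)  # treatment 1 pays off when the first feature is positive

forest = PolicyForest(n_estimators=100, max_depth=3, random_state=0)
forest.fit(X, y)
recommended = forest.predict(X)   # (n,) index of the recommended treatment
values = forest.predict_value(X)  # (n, n_treatments) estimated value per treatment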
- Parameters
n_estimators (int, default 100) – The total number of trees in the forest. The forest consists of sqrt(n_estimators) sub-forests, each containing sqrt(n_estimators) trees.
criterion ({'neg_welfare'}, default 'neg_welfare') – The criterion type.
max_depth (int, default None) – The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
min_samples_split (int or float, default 10) – The minimum number of samples required to split an internal node:
If int, then consider min_samples_split as the minimum number.
If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split.
min_samples_leaf (int or float, default 5) – The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
If int, then consider min_samples_leaf as the minimum number.
If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node.
min_weight_fraction_leaf (float, default 0.0) – The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
max_features (int, float, {"auto", "sqrt", "log2"}, or None, default "auto") – The number of features to consider when looking for the best split:
If int, then consider max_features features at each split.
If float, then max_features is a fraction and int(max_features * n_features) features are considered at each split.
If “auto”, then max_features=n_features.
If “sqrt”, then max_features=sqrt(n_features).
If “log2”, then max_features=log2(n_features).
If None, then max_features=n_features.
Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.
min_impurity_decrease (float, default 0.0) – A node will be split if this split induces a decrease of the impurity greater than or equal to this value. The weighted impurity decrease equation is the following:
N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)
where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. N, N_t, N_t_R and N_t_L all refer to the weighted sum if sample_weight is passed.
max_samples (int or float in (0, 1], default .5) – The number of samples to use for each subsample that is used to train each tree:
If int, then train each tree on max_samples samples, sampled without replacement from all the samples.
If float, then train each tree on ceil(max_samples * n_samples), sampled without replacement from all the samples.
min_balancedness_tol (float in [0, .5], default .45) – How imbalanced a split we can tolerate. This enforces that each split leaves at least a (.5 - min_balancedness_tol) fraction of samples on each side of the split, or that fraction of the total weight of samples when sample_weight is not None. The default value ensures that at least 5% of the parent node weight falls on each side of the split. Set it to 0.0 for no balancedness constraint and to .5 for perfectly balanced splits. For the formal inference theory to be valid, this has to be a positive constant bounded away from zero.
honest (bool, default True) – Whether the data should be split into two equally sized samples, such that one half-sample is used to determine the optimal split at each node and the other is used to determine the value of every node.
n_jobs (int or None, default -1) – The number of jobs to run in parallel for both fit and predict. None means 1 unless in a joblib.parallel_backend() context. -1 means using all processors. See Glossary for more details.
verbose (int, default 0) – Controls the verbosity when fitting and predicting.
random_state (int, RandomState instance, or None, default None) – If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
warm_start (bool, default False) – When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest.
- feature_importances_
The feature importances based on the amount of parameter heterogeneity they create. The higher, the more important the feature.
- Type
ndarray of shape (n_features,)
- __init__(n_estimators=100, *, criterion='neg_welfare', max_depth=None, min_samples_split=10, min_samples_leaf=5, min_weight_fraction_leaf=0.0, max_features='auto', min_impurity_decrease=0.0, max_samples=0.5, min_balancedness_tol=0.45, honest=True, n_jobs=-1, random_state=None, verbose=0, warm_start=False)[source]
Methods
__init__([n_estimators, criterion, ...])
apply(X) – Apply trees in the forest to X, return leaf indices.
decision_path(X) – Return the decision path in the forest.
feature_importances([max_depth, ...]) – The feature importances based on the amount of parameter heterogeneity they create.
fit(X, y, *[, sample_weight]) – Build a forest of trees from the training set (X, y) and any other auxiliary variables.
get_params([deep]) – Get parameters for this estimator.
get_subsample_inds() – Re-generate the same sample indices as those used at fit time, using the same pseudo-randomness.
predict(X) – Predict the best treatment for each sample.
predict_proba(X) – Predict the probability of recommending each treatment.
predict_value(X) – Predict the expected value of each treatment for each sample.
set_params(**params) – Set the parameters of this estimator.
Attributes
- apply(X)[source]
Apply trees in the forest to X, return leaf indices.
- Parameters
X (array_like of shape (n_samples, n_features)) – The input samples. Internally, it will be converted to dtype=np.float64.
- Returns
X_leaves – For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in.
- Return type
ndarray of shape (n_samples, n_estimators)
- decision_path(X)[source]
Return the decision path in the forest.
- Parameters
X (array_like of shape (n_samples, n_features)) – The input samples. Internally, it will be converted to dtype=np.float64.
- Returns
indicator (sparse matrix of shape (n_samples, n_nodes)) – A node indicator matrix where non-zero elements indicate that the sample goes through the corresponding nodes. The matrix is in CSR format.
n_nodes_ptr (ndarray of shape (n_estimators + 1,)) – The columns indicator[:, n_nodes_ptr[i]:n_nodes_ptr[i+1]] give the indicator values for the i-th estimator.
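A sketch of slicing the indicator per estimator, assuming a forest fit as in the earlier example:

indicator, n_nodes_ptr = forest.decision_path(X)
i = 0  # first estimator in the ensemble
tree_i_paths = indicator[:, n_nodes_ptr[i]:n_nodes_ptr[i + 1]]
# tree_i_paths[s, k] is non-zero iff sample s passes through node k of tree i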
- feature_importances(max_depth=4, depth_decay_exponent=2.0)[source]
The feature importances based on the amount of parameter heterogeneity they create. The higher, the more important the feature.
- Parameters
max_depth (int, default 4) – Splits of depth larger than max_depth are not used in this calculation.
depth_decay_exponent (double, default 2.0) – The contribution of each split to the total score is re-weighted by 1 / (1 + depth)**depth_decay_exponent.
- Returns
feature_importances_ – The normalized importance of each feature, based on the total parameter heterogeneity it induces.
- Return type
ndarray of shape (n_features,)
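Continuing the earlier sketch, a brief illustration of the depth discounting described above; the max_depth and depth_decay_exponent values here are arbitrary:

importances = forest.feature_importances(max_depth=3, depth_decay_exponent=2.0)
ranked = np.argsort(importances)[::-1]  # features by induced heterogeneity, descending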
- fit(X, y, *, sample_weight=None, **kwargs)[source]
Build a forest of trees from the training set (X, y) and any other auxiliary variables.
- Parameters
X (array_like of shape (n_samples, n_features)) – The training input samples. Internally, its dtype will be converted to dtype=np.float64.
y (array_like of shape (n_samples,) or (n_samples, n_treatments)) – The outcome values for each sample and for each treatment.
sample_weight (array_like of shape (n_samples,), default None) – Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node.
**kwargs (dictionary of array_like items of shape (n_samples, d_var)) – Auxiliary random variables.
- Returns
self
- Return type
object
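Since fit returns self, a weighted fit chains naturally; the weights below are illustrative, re-weighting each sample's contribution to the welfare objective:

w = rng.uniform(0.5, 1.5, size=n)
forest_w = PolicyForest(n_estimators=100, random_state=0).fit(X, y, sample_weight=w)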
- get_params(deep=True)
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- get_subsample_inds()[source]
Re-generate the same sample indices as those used at fit time, using the same pseudo-randomness.
- predict(X)[source]
Predict the best treatment for each sample.
- Parameters
X (array_like of shape (n_samples, n_features)) – The input samples. Internally, it will be converted to dtype=np.float64.
- Returns
treatment – The recommended treatment, i.e. the treatment index most often predicted to have the highest reward for each sample. Recommended treatments are aggregated from each tree in the ensemble and the treatment that receives the most votes is returned. Use predict_proba to get the fraction of trees in the ensemble that recommend each treatment for each sample.
- Return type
array_like of shape (n_samples)
- predict_proba(X)[source]
Predict the probability of recommending each treatment.
- Parameters
X (array_like of shape (n_samples, n_features)) – The input samples. Internally, it will be converted to dtype=np.float64.
check_input (bool, default True) – Allows bypassing several input checks. Don't use this parameter unless you know what you are doing.
- Returns
treatment_proba – The probability of each treatment recommendation, i.e. the fraction of trees in the ensemble recommending each treatment.
- Return type
array_like of shape (n_samples, n_treatments)
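Because predict aggregates votes across trees and predict_proba exposes the vote shares (continuing the earlier sketch), the argmax of the shares matches predict up to tie-breaking:

proba = forest.predict_proba(X)  # (n, n_treatments); rows sum to 1
assert np.allclose(proba.sum(axis=1), 1.0)
by_votes = proba.argmax(axis=1)  # agrees with forest.predict(X) up to ties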
- predict_value(X)[source]
Predict the expected value of each treatment for each sample.
- Parameters
X (array_like of shape (n_samples, n_features)) – The input samples. Internally, it will be converted to dtype=np.float64.
- Returns
welfare – The conditional average welfare of each treatment for the group that each sample falls into, as defined by the trees.
- Return type
array_like of shape (n_samples, n_treatments)
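One use of these values, sketched under the same synthetic setup as above, is comparing the learned policy's estimated welfare to the best constant (single-treatment) policy:

values = forest.predict_value(X)  # (n, n_treatments)
policy_welfare = values[np.arange(len(X)), forest.predict(X)].mean()
constant_welfare = values.mean(axis=0).max()  # best single-arm benchmark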
- set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
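PolicyForest follows the standard scikit-learn parameter API; a short sketch (the growth behavior follows the warm_start description above):

params = forest.get_params()  # dict of constructor parameters
forest.set_params(n_estimators=200, warm_start=True)
forest.fit(X, y)  # adds trees to the existing ensemble instead of refitting from scratch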