econml.sklearn_extensions.linear_model.MultiOutputDebiasedLasso

class econml.sklearn_extensions.linear_model.MultiOutputDebiasedLasso(alpha='auto', n_alphas=100, alpha_cov='auto', n_alphas_cov=10, fit_intercept=True, precompute=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, random_state=None, selection='cyclic', n_jobs=None)[source]

Bases: MultiOutputRegressor

Debiased MultiOutputLasso model.

The implementation is derived from <https://arxiv.org/abs/1303.0518>. The debiased lasso is applied once per target. If a flat (one-dimensional) target is passed in, the model reverts to the DebiasedLasso algorithm.
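As a quick orientation, here is a minimal usage sketch with synthetic data; the data, variable names, and hyperparameter values below are illustrative assumptions rather than recommendations:

    import numpy as np
    from econml.sklearn_extensions.linear_model import MultiOutputDebiasedLasso

    # Synthetic data: 200 samples, 5 features, 2 targets driven by the first two features
    rng = np.random.RandomState(0)
    X = rng.normal(size=(200, 5))
    Y = X[:, :2] @ np.array([[1.0, 0.5], [0.0, 2.0]]) + rng.normal(scale=0.1, size=(200, 2))

    est = MultiOutputDebiasedLasso(alpha='auto', random_state=0)
    est.fit(X, Y)

    Y_pred = est.predict(X)                              # shape (200, 2)
    lower, upper = est.predict_interval(X, alpha=0.05)   # 95% prediction intervals

The sketches further down this page reuse X, Y, rng, and est from this example.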

Parameters:
  • alpha (str | float, default ‘auto’) – Constant that multiplies the L1 term. alpha = 0 is equivalent to ordinary least squares, as solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised; use the LinearRegression object instead in that case.

  • n_alphas (int, default 100) – How many alphas to try if alpha=’auto’.

  • alpha_cov (str | float, default ‘auto’) – The regularization alpha that is used when constructing the pseudo inverse of the covariance matrix Theta used for correcting the lasso coefficient. Each such regression corresponds to the regression of one feature on the remainder of the features.

  • n_alphas_cov (int, default 10) – How many alpha_cov to try if alpha_cov=’auto’.

  • fit_intercept (bool, default True) – Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. data is expected to be already centered).

  • precompute (True | False | array_like, default False) – Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', the estimator decides. The Gram matrix can also be passed as an argument. For sparse input this option is always True to preserve sparsity.

  • copy_X (bool, default True) – If True, X will be copied; else, it may be overwritten.

  • max_iter (int, optional) – The maximum number of iterations.

  • tol (float, optional) – The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.

  • warm_start (bool, optional) – When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.

  • random_state (int, RandomState instance, or None, default None) – The seed of the pseudo random number generator that selects a random feature to update. If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. Used when selection='random'.

  • selection (str, default ‘cyclic’) – If set to ‘random’, a random coefficient is updated at each iteration rather than looping over features sequentially. This often leads to significantly faster convergence, especially when tol is higher than 1e-4.

  • n_jobs (int, optional) – How many jobs to use whenever parallelism is invoked.

coef_

Parameter vector (w in the cost function formula).

Type:

array, shape (n_targets, n_features) or (n_features,)

intercept_

Independent term in decision function.

Type:

array, shape (n_targets, ) or float

selected_alpha_

Penalty chosen through cross-validation, if alpha=’auto’.

Type:

array, shape (n_targets, ) or float

coef_stderr_

Estimated standard errors for coefficients (see coef_ attribute).

Type:

array, shape (n_targets, n_features) or (n_features, )

intercept_stderr_

Estimated standard error of the intercept (see intercept_ attribute).

Type:

array, shape (n_targets, ) or float
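Continuing the sketch near the top of this page, the fitted attributes can be inspected directly; the shapes in the comments assume that example's two targets and five features:

    print(est.coef_.shape)          # (n_targets, n_features) -> (2, 5)
    print(est.intercept_.shape)     # (n_targets,) -> (2,)
    print(est.coef_stderr_.shape)   # matches coef_: (2, 5)
    print(est.selected_alpha_)      # per-target penalty chosen because alpha='auto'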

__init__(alpha='auto', n_alphas=100, alpha_cov='auto', n_alphas_cov=10, fit_intercept=True, precompute=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, random_state=None, selection='cyclic', n_jobs=None)[source]

Methods

__init__([alpha, n_alphas, alpha_cov, ...])

coef__interval([alpha])

Get a confidence interval bounding the fitted coefficients.

fit(X, y[, sample_weight])

Fit the multi-output debiased lasso model.

get_metadata_routing()

Get metadata routing of this object.

get_params([deep])

Get parameters for this estimator.

intercept__interval([alpha])

Get a confidence interval bounding the fitted intercept.

partial_fit(X, y[, sample_weight])

Incrementally fit the model to data, for each output variable.

predict(X)

Get the prediction using the debiased lasso.

predict_interval(X[, alpha])

Build prediction confidence intervals using the debiased lasso.

prediction_stderr(X)

Get the standard error of the predictions using the debiased lasso.

score(X, y[, sample_weight])

Return the coefficient of determination of the prediction.

set_fit_request(*[, sample_weight])

Request metadata passed to the fit method.

set_params(**params)

Set parameters for this estimator.

set_partial_fit_request(*[, sample_weight])

Request metadata passed to the partial_fit method.

set_score_request(*[, sample_weight])

Request metadata passed to the score method.

coef__interval(alpha=0.05)[source]

Get a confidence interval bounding the fitted coefficients.

Parameters:

alpha (float, default 0.05) – The significance level of the confidence interval. The alpha/2 and (1 - alpha/2) quantiles of the parameter distribution are used as the interval endpoints.

Returns:

(coef_lower, coef_upper) – Returns lower and upper interval endpoints for the coefficients.

Return type:

tuple of array, shape (n_targets, n_coefs) or (n_coefs, )
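A short sketch using the model fitted in the example near the top of this page; the significance check at the end is one common way to read the interval, not part of the method itself:

    # 95% confidence bounds on the debiased lasso coefficients
    coef_lower, coef_upper = est.coef__interval(alpha=0.05)   # each of shape (2, 5) here
    # Coefficients whose interval excludes zero are significant at the 5% level
    significant = (coef_lower > 0) | (coef_upper < 0)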

fit(X, y, sample_weight=None)[source]

Fit the multi-output debiased lasso model.

Parameters:
  • X (ndarray or scipy.sparse matrix, (n_samples, n_features)) – Input data.

  • y (array, shape (n_samples, n_targets) or (n_samples, )) – Target. Will be cast to X’s dtype if necessary.

  • sample_weight (array of shape (n_samples,), optional) – Individual weights for each sample. The weights will be normalized internally.
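A sketch of a weighted fit, reusing X, Y, and rng from the example near the top of this page; the weights are arbitrary and purely illustrative:

    w = rng.uniform(0.5, 1.5, size=X.shape[0])
    est_w = MultiOutputDebiasedLasso(random_state=0)
    est_w.fit(X, Y, sample_weight=w)   # the weights are normalized internally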

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Added in version 1.3.

Returns:

routing – A MetadataRouter encapsulating routing information.

Return type:

MetadataRouter

get_params(deep=True)[source]

Get parameters for this estimator.

intercept__interval(alpha=0.05)[source]

Get a confidence interval bounding the fitted intercept.

Parameters:

alpha (float, default 0.05) – The significance level of the confidence interval. The alpha/2 and (1 - alpha/2) quantiles of the parameter distribution are used as the interval endpoints.

Returns:

(intercept_lower, intercept_upper) – Returns lower and upper interval endpoints for the intercept.

Return type:

tuple of array of size (n_targets, ) or tuple of floats
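The analogous sketch for the intercept of the model fitted near the top of this page:

    intercept_lower, intercept_upper = est.intercept__interval(alpha=0.05)   # each of shape (2,) here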

partial_fit(X, y, sample_weight=None, **partial_fit_params)

Incrementally fit the model to data, for each output variable.

Parameters:
  • X ({array-like, sparse matrix} of shape (n_samples, n_features)) – The input data.

  • y ({array-like, sparse matrix} of shape (n_samples, n_outputs)) – Multi-output targets.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights. If None, then samples are equally weighted. Only supported if the underlying regressor supports sample weights.

  • **partial_fit_params (dict of str -> object) – Parameters passed to the estimator.partial_fit method of each sub-estimator.

    Only available if enable_metadata_routing=True. See the User Guide.

    Added in version 1.3.

Returns:

self – Returns a fitted instance.

Return type:

object

predict(X)[source]

Get the prediction using the debiased lasso.

Parameters:

X (ndarray or scipy.sparse matrix, (n_samples, n_features)) – Samples.

Returns:

prediction – The prediction at each point.

Return type:

array_like, shape (n_samples, ) or (n_samples, n_targets)

predict_interval(X, alpha=0.05)[source]

Build prediction confidence intervals using the debiased lasso.

Parameters:
  • X (ndarray or scipy.sparse matrix, (n_samples, n_features)) – Samples.

  • alpha (float in [0, 1], default 0.05) – The overall significance level of the reported interval. The alpha/2, 1 - alpha/2 confidence interval is reported.

Returns:

(y_lower, y_upper) – Returns lower and upper interval endpoints.

Return type:

tuple of array, shape (n_samples, n_targets) or (n_samples, )
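A sketch using the model fitted in the example near the top of this page; the 0.1 level is arbitrary:

    # 90% prediction intervals; both arrays have the same shape as est.predict(X)
    y_lower, y_upper = est.predict_interval(X, alpha=0.1)   # each of shape (200, 2) here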

prediction_stderr(X)[source]

Get the standard error of the predictions using the debiased lasso.

Parameters:

X (ndarray or scipy.sparse matrix, (n_samples, n_features)) – Samples.

Returns:

prediction_stderr – The standard error of each coordinate of the output at each prediction point.

Return type:

array_like, shape (n_samples, ) or (n_samples, n_targets)
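A one-line sketch for the model fitted near the top of this page:

    stderr = est.prediction_stderr(X)   # same shape as est.predict(X): (200, 2) here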

score(X, y, sample_weight=None)

Return the coefficient of determination of the prediction.

The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – \(R^2\) of self.predict(X) w.r.t. y.

Return type:

float

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
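To make the definition concrete, the sketch below recomputes the uniformly averaged per-output \(R^2\) by hand for the model fitted near the top of this page and compares it with r2_score and with score (the agreement with score assumes uniform averaging over outputs, as described in the note above):

    import numpy as np
    from sklearn.metrics import r2_score

    y_pred = est.predict(X)
    u = ((Y - y_pred) ** 2).sum(axis=0)          # per-output residual sum of squares
    v = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)  # per-output total sum of squares
    print(np.mean(1 - u / v))      # uniform average of per-output R^2
    print(r2_score(Y, y_pred))     # same computation via scikit-learn
    print(est.score(X, Y))         # expected to match the values above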

set_fit_request(*, sample_weight: bool | None | str = '$UNCHANGED$') MultiOutputDebiasedLasso

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in fit.

Returns:

self – The updated object.

Return type:

object
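A sketch of how this request interacts with metadata routing; it assumes scikit-learn >= 1.3, reuses X and Y from the example near the top of this page, and the pipeline and weights are purely illustrative (the same pattern applies to set_partial_fit_request and set_score_request):

    import numpy as np
    import sklearn
    from sklearn.pipeline import Pipeline

    sklearn.set_config(enable_metadata_routing=True)

    # Ask meta-estimators to forward sample_weight to this estimator's fit
    est_routed = MultiOutputDebiasedLasso(random_state=0).set_fit_request(sample_weight=True)
    pipe = Pipeline([("dlasso", est_routed)])

    w = np.ones(X.shape[0])             # illustrative weights
    pipe.fit(X, Y, sample_weight=w)     # sample_weight is routed to the debiased lasso's fit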

set_params(**params)[source]

Set parameters for this estimator.

set_partial_fit_request(*, sample_weight: bool | None | str = '$UNCHANGED$') MultiOutputDebiasedLasso

Request metadata passed to the partial_fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to partial_fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to partial_fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in partial_fit.

Returns:

self – The updated object.

Return type:

object

set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') MultiOutputDebiasedLasso

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:

sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

Returns:

self – The updated object.

Return type:

object